i-manager's Journal on Pattern Recognition (JPR)


Volume 12 Issue 1 January - June 2025

Research Paper

A Feature Extraction Technique for Micro-Sleep Discernment Based on Random Sampling

Reeba Jennifer R.*, Balaji V. R.**
* Department of Electrical Engineering, Sri Krishna College of Engineering and Technology, Coimbatore, Tamil Nadu, India.
** Department of Electronics and Communication Engineering, Sri Krishna College of Engineering and Technology, Coimbatore, Tamil Nadu, India.
Jennifer, R. R., and Balaji, V. R. (2025). A Feature Extraction Technique for Micro-Sleep Discernment Based on Random Sampling. i-manager’s Journal on Pattern Recognition, 12(1), 1-15. https://doi.org/10.26634/jpr.12.1.21913

Abstract

A micro-sleep may involve head-bobbing or a blank stare with eyes closed, which can be deadly behind the wheel. A driver's drowsy state may lead to erratic driving patterns, often causing fatal accidents. Several technologies to alert drowsy drivers have been proposed earlier, but they lag in detection time and accuracy due to poor computational performance. Hence, a novel feature extraction technique is proposed that fuses a packet-based Random Sampling (RS) medium with an Artificial Neural Network (ANN) to detect cognitive features from the brain more accurately and at a faster computation speed. The processing unit decomposes the driver's features into several packets and samples them randomly by means of the bootstrapping methodology; the weights of the generated features are then calculated by the McCulloch-Pitts neuron rule through ANN implementation. The estimated net weights help ascertain the performance scores needed to establish an efficient model, with a classification accuracy of 93% and a computation time of less than 0.5 seconds. The neural network plot is targeted towards extracting exact attribute values for the differentiated states of the driver's drowsiness, thus contributing a promising RS-ANN micro-sleep discernment architecture.
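The two building blocks named in the abstract, bootstrap (random sampling with replacement) of feature packets and the classic McCulloch-Pitts threshold neuron, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the packet values and the weights/threshold are hypothetical placeholders.

```python
import random

def bootstrap_packets(packets, n_samples=None, seed=0):
    """Resample feature packets uniformly with replacement (bootstrapping)."""
    rng = random.Random(seed)
    n = n_samples if n_samples is not None else len(packets)
    return [rng.choice(packets) for _ in range(n)]

def mcculloch_pitts(inputs, weights, threshold):
    """Classic McCulloch-Pitts neuron: output 1 if the weighted sum of
    inputs reaches the threshold, otherwise 0."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# Hypothetical feature packets (e.g., per-channel drowsiness indicators).
packets = [[0.2, 0.8], [0.9, 0.1], [0.5, 0.5], [0.7, 0.3]]
samples = bootstrap_packets(packets, n_samples=3)
state = mcculloch_pitts(samples[0], weights=[0.6, 0.4], threshold=0.5)
```

In a full RS-ANN pipeline, the binary neuron outputs would feed the performance-score computation described in the paper; here the example only shows the sampling and thresholding mechanics.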

Research Paper

Smart Voting System with Face Recognition and OTP Verification using Convolutional Neural Network

Gudipalli Rajani* , Guntapalli Leeladhar**, Garlapati Rupesh Chowdary***, Kanakala Lalitha Kumari****, Kornepati Hema Phaneeshwar*****
*-***** Department of Computer Science and Engineering, Vasireddy Venkatadri Institute of Technology, Guntur, Andhra Pradesh, India.
Rajani, G., Leeladhar, G., Chowdary, G. R., Kumari, K. L., and Phaneeshwar, K. H. (2025). Smart Voting System with Face Recognition and OTP Verification using Convolutional Neural Network. i-manager’s Journal on Pattern Recognition, 12(1), 16-25. https://doi.org/10.26634/jpr.12.1.21945

Abstract

In an era where secure digital solutions are essential for democratic integrity, this paper introduces a Smart Voting System that combines facial recognition and one-time password (OTP) verification to ensure reliable voter authentication. Utilizing Convolutional Neural Networks (CNNs) and the pre-trained MobileNetV2 architecture, the system accurately identifies users through webcam-based facial recognition. A secondary OTP layer, sent through email, ensures that only legitimate voters proceed to vote. Implemented as a web-based platform using Flask, the system offers real-time validation, robust fraud prevention, and a user-friendly interface. Experimental evaluations reveal high classification accuracy and reliability, proving its effectiveness in reducing impersonation and ensuring secure, transparent, and efficient elections. This approach provides a scalable framework adaptable to future electoral enhancements.
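The secondary OTP layer described above can be sketched with the Python standard library alone; this is an illustrative pattern, not the paper's code, and the key handling is deliberately simplified. The server stores only a keyed hash of the OTP and compares in constant time.

```python
import hashlib
import hmac
import secrets

def generate_otp(digits=6):
    """Generate a random numeric one-time password."""
    return "".join(str(secrets.randbelow(10)) for _ in range(digits))

def otp_digest(otp, key):
    """Keep only a keyed hash of the OTP server-side, never the OTP itself."""
    return hmac.new(key, otp.encode(), hashlib.sha256).hexdigest()

def verify_otp(submitted, stored_digest, key):
    """Constant-time comparison to resist timing attacks."""
    return hmac.compare_digest(otp_digest(submitted, key), stored_digest)

key = secrets.token_bytes(32)       # per-deployment secret (hypothetical)
otp = generate_otp()                # this value would be emailed to the voter
stored = otp_digest(otp, key)       # only the digest is retained
ok = verify_otp(otp, stored, key)
```

In the web platform, `verify_otp` would run in the Flask route handling the OTP form, after the facial-recognition step has already matched the voter.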

Research Paper

Swin-Transformer Based Recognition of Diabetic Retinopathy Grade

Sanjay Gandhi Gundabatini*, Sai Sindhu Manne**, Sunkara Likhit Babu***, Vangapandu Bhargava Rao****, Sanka Tejaswi*****
*-***** Department of Computer Science and Engineering, Vasireddy Venkatadri Institute of Technology, Guntur, Andhra Pradesh, India.
Gundabatini, S. G., Manne, S. S., Babu, S. L., Rao, V. B., and Tejaswi, S. (2025). Swin-Transformer Based Recognition of Diabetic Retinopathy Grade. i-manager’s Journal on Pattern Recognition, 12(1), 26-34. https://doi.org/10.26634/jpr.12.1.21927

Abstract

Diabetic Retinopathy (DR), a common diabetes-related disorder, is a leading cause of blindness worldwide. Early detection and precise staging are essential for effective management and vision preservation. This study explores the Swin Transformer, an advanced deep learning framework with a hierarchical, multi-layered design and a distinctive shifted-window attention method, to create an automated tool for DR stage assessment. Utilizing the APTOS 2019 Blindness Detection dataset, the system accurately identifies small retinal signs such as microaneurysms as well as more pronounced features such as hemorrhages, achieving high precision. Improved preprocessing, including image enrichment and calibration, enhances its versatility. Results indicate that this approach outperforms traditional Convolutional Neural Networks (CNNs) in precision, computational efficiency, and scalability, with a test accuracy of 99.57% and a test loss of 0.0220.
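The shifted-window mechanism that distinguishes the Swin Transformer from a plain ViT can be illustrated with NumPy: the feature map is cut into non-overlapping windows for local attention, and the next layer cyclically shifts the map so its windows straddle the previous boundaries. This is a conceptual sketch of the partitioning only, with a hypothetical 8x8 feature map; it omits attention itself and masking at the rolled edges.

```python
import numpy as np

def window_partition(x, window_size):
    """Split an (H, W, C) feature map into non-overlapping windows,
    returning shape (num_windows, window_size, window_size, C)."""
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size, W // window_size, window_size, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size, window_size, C)

def shift_windows(x, shift):
    """Cyclically shift the map so the next layer's windows cross the
    previous layer's window boundaries (the 'shifted window' step)."""
    return np.roll(x, shift=(-shift, -shift), axis=(0, 1))

fmap = np.arange(8 * 8 * 1).reshape(8, 8, 1).astype(np.float32)
windows = window_partition(fmap, 4)                  # 4 windows of 4x4
shifted = window_partition(shift_windows(fmap, 2), 4)  # shifted-window pass
```

Alternating plain and shifted partitions is what lets Swin mix information across windows while keeping attention cost linear in image size.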

Research Paper

Heart Disease Prediction for ECG Images using CNN Models

Chandika Hari Prasad*, Bezawada Spandana**, Ambati Madhavi***, Bontha Jayanth****, Chilaka Sri Deepak*****
*-***** Department of Computer Science and Engineering, Vasireddy Venkatadri Institute of Technology, Guntur, Andhra Pradesh, India.
Prasad, C. H., Spandana, B., Madhavi, A., Jayanth, B., and Deepak, C. S. (2025). Heart Disease Prediction for ECG Images using CNN Models. i-manager’s Journal on Pattern Recognition, 12(1), 35-44. https://doi.org/10.26634/jpr.12.1.21930

Abstract

Heart disease continues to be a leading cause of global mortality, underscoring the urgent need for timely, accurate, and scalable diagnostic solutions. Traditional ECG interpretation is typically manual, time-consuming, and error-prone, creating a demand for intelligent automated systems. This paper presents a deep learning-based framework utilizing Convolutional Neural Networks (CNNs) for the classification of ECG images across multiple cardiac conditions. The proposed system incorporates advanced architectures, DenseNet169 and MobileNet, alongside a baseline CNN to enhance feature extraction and classification precision. A well-curated and preprocessed ECG dataset comprising five diagnostic categories (AHB, HMI, MI, COVID-19, and Normal) is used to train and validate the models. Experimental results demonstrate that DenseNet169 outperforms other architectures, achieving a classification accuracy of 82%, with high sensitivity in detecting both critical and subtle anomalies. Moreover, the system's ability to extend detection beyond conventional heart disease, covering COVID-19-related abnormalities and myocardial infarctions, makes it particularly relevant in current healthcare contexts. The findings highlight the potential of CNNs as non-invasive, efficient, and robust tools to support clinical decision-making and enhance early diagnosis.

Research Paper

Skin Disease Detection using Vision Transformers

Ramya Asalatha Busi*, Velaga Naga Bhavana**, Revathi Navuluri***, Vamsi Donkada****, Shaik Faizaan Ahmed*****
*-***** Department of Computer Science and Engineering, Vasireddy Venkatadri Institute of Technology, Nambur, Guntur, Andhra Pradesh, India.
Busi, R. A., Bhavana, V. N., Navuluri, R., Donkada, V., and Ahmed, S. F. (2025). Skin Disease Detection using Vision Transformers. i-manager’s Journal on Pattern Recognition, 12(1), 45-52. https://doi.org/10.26634/jpr.12.1.21946

Abstract

Skin diseases are prevalent health issues that significantly impact individuals' quality of life. Early and accurate diagnosis is crucial for timely treatment, leading to faster recovery. With advancements in machine learning and computer vision, Vision Transformers (ViTs) have emerged as a powerful alternative to Convolutional Neural Networks (CNNs) for automatic skin disease detection. This study explores the application of Vision Transformers in diagnosing skin diseases, highlighting their potential to support dermatologists and healthcare professionals. The proposed method utilizes the HAM10000 image dataset comprising various skin conditions, including melanoma, benign keratosis, basal cell carcinoma, and other common ailments. Vision Transformers, known for their ability to capture long-range dependencies and global context in images, are employed to extract high-level features from input images. These features are then fed into a classification layer for disease detection. The ViT model learns to identify patterns associated with different skin diseases through training on an extensive dataset of skin images. When presented with a new image, the model extracts relevant features, enabling it to accurately classify the disease. The model achieves a test accuracy of 93.36% and a validation loss of 0.2181. This study demonstrates the effectiveness of Vision Transformers in skin disease detection, offering a promising tool for improving diagnostic accuracy and supporting early intervention in dermatology.
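The first ViT step the abstract alludes to, turning an image into a sequence of patch tokens, can be sketched with NumPy. The image size, patch size, and embedding dimension below are illustrative placeholders (a real HAM10000 pipeline would use larger images and a learned projection), not values from the paper.

```python
import numpy as np

def image_to_patches(img, patch):
    """Split an (H, W, C) image into flattened non-overlapping patches,
    the tokenization step of a Vision Transformer (ViT)."""
    H, W, C = img.shape
    p = img.reshape(H // patch, patch, W // patch, patch, C)
    p = p.transpose(0, 2, 1, 3, 4)
    return p.reshape(-1, patch * patch * C)    # (num_patches, patch_dim)

def embed_patches(patches, proj):
    """Linearly project each flattened patch into the transformer's
    embedding dimension (here the projection is random, not learned)."""
    return patches @ proj                      # (num_patches, embed_dim)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3)).astype(np.float32)  # stand-in for a lesion image
patches = image_to_patches(img, patch=8)          # 16 patches of 8*8*3 = 192 values
tokens = embed_patches(patches, rng.random((192, 64)).astype(np.float32))
```

Self-attention over these tokens is what gives the ViT the global context highlighted in the abstract, since every patch attends to every other patch from the first layer onward.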