A Feature Extraction Technique for Micro-Sleep Discernment Based on Random Sampling
Smart Voting System with Face Recognition and OTP Verification using Convolutional Neural Network
Swin-Transformer Based Recognition of Diabetic Retinopathy Grade
Heart Disease Prediction for ECG Images using CNN Models
Skin Disease Detection using Vision Transformers
Landslide Susceptibility Mapping through Weightages Derived from Statistical Information Value Model
An Efficient Foot Ulcer Determination System for Diabetic Patients
Statistical Wavelet-based Adaptive Noise Filtering Technique for MRI Modality
Real Time Sign Language: A Review
Remote Sensing Schemes Mingled with Information and Communication Technologies (ICTs) for Flood Disaster Management
FPGA Implementation of Shearlet Transform Based Invisible Image Watermarking Algorithm
A Comprehensive Study on Different Pattern Recognition Techniques
User Authentication and Identification Using Neural Network
Flexible Generalized Mixture Model Cluster Analysis with Elliptically-Contoured Distributions
Efficient Detection of Suspected Areas in Mammographic Breast Cancer Images
A micro-sleep may involve head-bobbing, a blank stare, or an eyes-closed gesture, any of which can be deadly behind the wheel. A drowsy driver tends to produce erratic driving patterns that frequently lead to fatal accidents. Several technologies that alert drowsy drivers have been proposed earlier, but they lag in detection time and accuracy owing to poor computational performance. Hence, a novel feature extraction technique is proposed that fuses packet-based Random Sampling (RS) with an Artificial Neural Network (ANN) to detect cognitive features from the brain more accurately and at a faster computation speed. The processing unit decomposes the driver's features into several packets and samples them randomly by means of the bootstrapping methodology; the weights of the generated features are then calculated by the McCulloch-Pitts neuron rule through the ANN implementation. The estimated net weights yield performance scores used to establish an efficient model, achieving a classification accuracy of 93% with a computation time of less than 0.5 seconds. The neural network plot is targeted at extracting exact attribute values for the differentiated states of the driver's drowsiness, thus contributing a promising RS-ANN micro-sleep discernment architecture.
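The pipeline above — packet decomposition, bootstrap resampling, and McCulloch-Pitts weighting — can be sketched minimally. The abstract does not give the actual EEG feature set, packet size, or firing threshold, so the signal, weights, and threshold below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_packets(signal, packet_size, n_packets):
    """Decompose a 1-D signal into fixed-size packets and resample them
    with replacement (the bootstrapping step)."""
    usable = len(signal) // packet_size * packet_size
    packets = signal[:usable].reshape(-1, packet_size)
    idx = rng.integers(0, len(packets), size=n_packets)  # with replacement
    return packets[idx]

def mcculloch_pitts(features, weights, threshold):
    """Classic McCulloch-Pitts neuron: fire (1) when the weighted sum of
    inputs reaches the threshold, otherwise stay silent (0)."""
    return int(np.dot(features, weights) >= threshold)

# Toy EEG-like trace; each resampled packet is summarised by its mean
# amplitude before being fed to the neuron.
signal = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)
sampled = bootstrap_packets(signal, packet_size=20, n_packets=8)
features = sampled.mean(axis=1)
weights = np.ones_like(features)        # illustrative, not learned
alert = mcculloch_pitts(features, weights, threshold=0.0)  # 0 or 1
```

In the paper's architecture the weights would come from ANN training rather than being fixed; the sketch only shows how random sampling and the neuron rule compose.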
In an era where secure digital solutions are essential for democratic integrity, this paper introduces a Smart Voting System that combines facial recognition and one-time password (OTP) verification to ensure reliable voter authentication. Utilizing Convolutional Neural Networks (CNNs) and the pre-trained MobileNetV2 architecture, the system accurately identifies users through webcam-based facial recognition. A secondary OTP layer, sent through email, ensures that only legitimate voters proceed to vote. Implemented as a web-based platform using Flask, the system offers real-time validation, robust fraud prevention, and a user-friendly interface. Experimental evaluations reveal high classification accuracy and reliability, proving its effectiveness in reducing impersonation and ensuring secure, transparent, and efficient elections. This approach provides a scalable framework adaptable to future electoral enhancements.
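The OTP layer described above can be sketched with the Python standard library alone. The paper delivers the code by email and wires verification into a Flask route; both are omitted here, and the digit count and time-to-live are assumed values:

```python
import secrets
import hmac
import time

def issue_otp(n_digits=6, ttl_seconds=300):
    """Generate a cryptographically random numeric OTP and its expiry time."""
    code = "".join(secrets.choice("0123456789") for _ in range(n_digits))
    return code, time.time() + ttl_seconds

def verify_otp(submitted, issued_code, expires_at):
    """Reject expired codes; compare in constant time to avoid timing leaks."""
    if time.time() > expires_at:
        return False
    return hmac.compare_digest(submitted, issued_code)

code, expires_at = issue_otp()
# In the proposed system the code would be emailed to the voter;
# here it is checked directly.
ok = verify_otp(code, code, expires_at)          # True
bad = verify_otp("x" * len(code), code, expires_at)  # False
```

`secrets` (rather than `random`) and `hmac.compare_digest` are the standard choices for this kind of authentication token.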
Diabetic Retinopathy (DR), a common diabetes-related disorder, is a leading driver of blindness worldwide. Quick detection and precise staging are essential for effective management and vision preservation. This study explores the Swin Transformer, an advanced deep learning framework with a multi-layered setup and a distinctive shifted-window attention method, to create an automated tool for DR stage assessment. Utilizing the APTOS 2019 Blindness Detection dataset, the system accurately identifies small retinal signs like microaneurysms and more pronounced features such as hemorrhages, achieving high precision. Improved preprocessing, including image enrichment and calibration, enhances its versatility. Results indicate that this approach outperforms traditional Convolutional Neural Networks (CNNs) in precision, computational efficiency, and scalability, with a test accuracy of 99.57% and a test loss of 0.0220.
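The core of the Swin Transformer is that self-attention is computed inside small non-overlapping windows, which are cyclically shifted between successive blocks so information crosses window borders. A minimal numpy sketch of the window partition and shift (the attention itself, and the actual feature sizes, are omitted; the 8x8x3 map is illustrative):

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) feature map into non-overlapping w x w windows."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, w, w, C)

def cyclic_shift(x, w):
    """Roll the map by w//2 so the next block's windows straddle the
    previous block's window borders (Swin's shifted-window trick)."""
    return np.roll(x, shift=(-(w // 2), -(w // 2)), axis=(0, 1))

feat = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
wins = window_partition(feat, w=4)                      # 4 windows of 4x4
shifted_wins = window_partition(cyclic_shift(feat, 4), w=4)
```

Because attention cost grows with the square of the window size rather than the full image size, this is what gives the architecture its computational advantage over global attention.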
Heart disease continues to be a leading cause of global mortality, underscoring the urgent need for timely, accurate, and scalable diagnostic solutions. Traditional ECG interpretation is typically manual, time-consuming, and error-prone, creating a demand for intelligent automated systems. This paper presents a deep learning-based framework utilizing Convolutional Neural Networks (CNNs) for the classification of ECG images across multiple cardiac conditions. The proposed system incorporates advanced architectures, DenseNet169 and MobileNet, alongside a baseline CNN to enhance feature extraction and classification precision. A well-curated and preprocessed ECG dataset comprising five diagnostic categories (AHB, HMI, MI, COVID-19, and Normal) is used to train and validate the models. Experimental results demonstrate that DenseNet169 outperforms other architectures, achieving a classification accuracy of 82%, with high sensitivity in detecting both critical and subtle anomalies. Moreover, the system's ability to extend detection beyond conventional heart disease, covering COVID-19-related abnormalities and myocardial infarctions, makes it particularly relevant in current healthcare contexts. The findings highlight the potential of CNNs as non-invasive, efficient, and robust tools to support clinical decision-making and enhance early diagnosis.
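DenseNet169's distinguishing property is dense connectivity: each layer receives the concatenation of every earlier layer's feature maps, so channel count grows linearly with a fixed growth rate. A toy numpy sketch of one dense block — a stand-in random channel projection replaces the real BN-ReLU-Conv layers, and the input size is illustrative:

```python
import numpy as np

def dense_block(x, n_layers, growth_rate):
    """DenseNet-style block: every layer sees the concatenation of all
    previous feature maps and contributes `growth_rate` new channels."""
    rng = np.random.default_rng(0)
    for _ in range(n_layers):
        w = rng.standard_normal((x.shape[-1], growth_rate)) * 0.01
        new = np.maximum(x @ w, 0.0)            # toy 1x1 conv + ReLU
        x = np.concatenate([x, new], axis=-1)   # dense connectivity
    return x

x = np.ones((28, 28, 64))                       # e.g. an ECG-image feature map
out = dense_block(x, n_layers=6, growth_rate=32)
# channels grow linearly: 64 + 6 * 32 = 256
```

This feature reuse is a plausible reason the abstract reports DenseNet169 extracting both subtle and critical ECG anomalies better than the baseline CNN, though the abstract itself does not analyse why.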
Skin diseases are prevalent health issues that significantly impact individuals' quality of life. Early and accurate diagnosis is crucial for timely treatment, leading to faster recovery. With advancements in machine learning and computer vision, Vision Transformers (ViTs) have emerged as a powerful alternative to Convolutional Neural Networks (CNNs) for automatic skin disease detection. This study explores the application of Vision Transformers in diagnosing skin diseases, highlighting their potential to support dermatologists and healthcare professionals. The proposed method utilizes the HAM10000 image dataset comprising various skin conditions, including melanoma, benign keratosis, basal cell carcinoma, and other common ailments. Vision Transformers, known for their ability to capture long-range dependencies and global context in images, are employed to extract high-level features from input images. These features are then fed into a classification layer for disease detection. The ViT model learns to identify patterns associated with different skin diseases through training on an extensive dataset of skin images. When presented with a new image, the model extracts relevant features, enabling it to accurately classify the disease. The test accuracy and validation loss are 93.36% and 0.2181, respectively. This study demonstrates the effectiveness of Vision Transformers in skin disease detection, offering a promising tool for improving diagnostic accuracy and supporting early intervention in dermatology.
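A ViT's first step is to cut the image into fixed-size patches and linearly embed each one as a token; the transformer's attention over these tokens is what captures the long-range dependencies mentioned above. A minimal sketch of the patch embedding — the 224x224 input, 16-pixel patch, and 192-dimensional embedding are standard ViT conventions assumed here, not values stated in the abstract:

```python
import numpy as np

def patchify(img, p):
    """Cut an (H, W, C) image into p x p patches, one flattened token each."""
    H, W, C = img.shape
    patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)       # (num_tokens, patch_dim)

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))                 # e.g. a HAM10000 lesion image
tokens = patchify(img, p=16)                    # 196 tokens, each of length 768
proj = rng.standard_normal((768, 192)) * 0.02   # learned in a real model
embedded = tokens @ proj                        # token embeddings for attention
```

In the full model a class token and positional embeddings are prepended before the transformer encoder, and the classification layer reads out the class token.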