A Deep Learning CNN Approach with Unified Feature Extraction for Breast Cancer Detection and Classification
A CNN-LSTM Hybrid Model for Parkinson's Disease Detection from Handwritten Spirals using Transfer Learning
Advanced Medical Image Fusion using Multi-Layer Adaptive Curvature Filtering and Pulse Coupled Neural Network for Enhanced Diagnostic Accuracy
Vehicle Number Plate Detection System using Machine Learning
Hand Gesture Recognition Based on Electromyography Signals using Artificial Neural Network
Identification of Volcano Hotspots by using Resilient Back Propagation (RBP) Algorithm Via Satellite Images
Data Hiding in Encrypted Compressed Videos for Privacy Information Protection
Improved Video Watermarking using Discrete Cosine Transform
Contrast Enhancement based Brain Tumour MRI Image Segmentation and Detection with Low Power Consumption
Denoising of Images by Wavelets and Contourlets using Bi-Shrink Filter
Radiologists often find breast cancer difficult to classify, which leads to unnecessary biopsies performed to rule out suspicious findings and adds considerable expense for an already burdened patient and healthcare system; at the same time, early detection and diagnosis can save the lives of cancer patients. In this paper, a computer-aided diagnosis (CAD) system based on a hybrid intelligence framework using a Gabor wavelet-based deep learning convolutional neural network (GW-DL-CNN) is proposed for the detection and classification of breast cancer in mammographic images. In addition, a machine learning framework with a Gabor wavelet-based support vector machine (GW-SVM) is also implemented. Both the GW-SVM and GW-DL-CNN models are intended to help radiologists detect and classify breast cancer from mammographic images more effectively. Further, Chan-Vese (C-V) level set segmentation is employed to segment objects without clearly defined boundaries in mammographic images. The unified features extracted from C-V and GW are fed into a DL-CNN architecture to classify the breast tissue as malignant, benign, or normal using a fully complex-valued relaxation network (FCRN) classifier. The proposed GW-SVM and GW-DL-CNN frameworks with the FCRN classifier achieve an accuracy of 98.6%, a specificity of 98%, a sensitivity of 98%, and an F1-score of 97.08%.
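The Gabor wavelet front end this abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the kernel size, wavelength, and the mean/standard-deviation feature summary are illustrative assumptions about how a Gabor filter bank typically feeds a downstream classifier.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel (parameters are illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)  # sinusoidal carrier
    return envelope * carrier

def gabor_features(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Filter the image at several orientations; summarise each response
    by its mean and standard deviation (a common hand-crafted descriptor)."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(15, sigma=3.0, theta=theta, lam=8.0)
        # FFT-based convolution, kernel zero-padded to the image size
        resp = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                    np.fft.fft2(k, image.shape)))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

In the paper's pipeline, such Gabor responses are concatenated with the Chan-Vese segmentation features before classification; here the filter bank alone is shown.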
Parkinson's Disease (PD) is a progressive neurological disorder that significantly affects motor skills, typically altering a person's handwriting. This work investigates deep learning-based approaches for classifying Parkinson's Disease from images of handwritten spiral drawings. The study began with a transfer learning approach using EfficientNet, as well as a CNN-LSTM architecture that combines convolutional and recurrent layers for spatial-temporal feature modeling. However, both approaches individually yielded suboptimal results, each achieving only 50% classification accuracy. To overcome these limitations, a hybrid model is proposed that integrates MobileNet, a lightweight and efficient convolutional neural network, with a Long Short-Term Memory (LSTM) layer to capture both spatial and temporal dynamics in handwriting patterns. This MobileNet+LSTM architecture demonstrated significant performance gains, achieving 87% accuracy on the publicly available Parkinson's Drawings Dataset. These results suggest that combining transfer learning with temporal sequence modeling is a highly effective strategy for handwriting-based Parkinson's diagnosis, offering a non-invasive and scalable solution for early detection.
Medical image fusion is an essential method of combining complementary information from multiple imaging modalities, like Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET), to increase diagnostic precision. In this work, a new fusion method is introduced based on Multi-Layer Adaptive Curvature Filtering (MLACF) and Pulse Coupled Neural Network (PCNN). The MLACF algorithm splits images into small-scale, large-scale, and background parts while maintaining structural and edge information. The PCNN-based fusion approach then fuses the decomposed elements to improve feature preservation and visual perception. The method is tested based on several image fusion performance measures, such as entropy, structural similarity index (SSIM), Q_AB/F, and feature mutual information (FMI). Experimental results show that the method performs well in preserving key diagnostic details, and its performance outshines that of traditional fusion methods.
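The PCNN fusion stage can be illustrated with a heavily simplified NumPy model. This is a generic pulse-coupled neural network with a linking field, internal activity, and decaying dynamic threshold; the decomposition into MLACF sub-bands is omitted, and the "pick the source whose neurons fire more" rule is one common fusion heuristic, assumed here for illustration.

```python
import numpy as np

def pcnn_fire_map(img, beta=0.2, alpha_t=0.3, v_t=20.0, n_iter=10):
    """Simplified PCNN: return per-pixel firing counts, usable as an
    activity map when fusing corresponding components of two images."""
    F = img.astype(float)               # feeding input = pixel intensity
    Y = np.zeros_like(F)                # pulse output
    theta = np.ones_like(F)             # dynamic threshold
    fire = np.zeros_like(F)
    kernel = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])
    for _ in range(n_iter):
        # linking input: weighted sum of neighbouring pulses (zero-padded)
        L = np.zeros_like(F)
        P = np.pad(Y, 1)
        for di in range(3):
            for dj in range(3):
                L += kernel[di, dj] * P[di:di + F.shape[0], dj:dj + F.shape[1]]
        U = F * (1 + beta * L)                        # internal activity
        Y = (U > theta).astype(float)                 # pulse generation
        theta = theta * np.exp(-alpha_t) + v_t * Y    # decay + refractory jump
        fire += Y
    return fire

def fuse(a, b):
    """Per-pixel fusion rule: keep the source whose neuron fired more often."""
    fa, fb = pcnn_fire_map(a), pcnn_fire_map(b)
    return np.where(fa >= fb, a, b)
```

In the paper's method this selection would run per MLACF component (small-scale, large-scale, background) rather than on raw pixels, and the fused components would then be recombined.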
The goal of this research is to employ OCR and conventional image processing methods to create a reliable system for detecting and recognizing vehicle number plates, thereby improving vehicle identification and traffic monitoring in metropolitan settings. The system applies a number of image preprocessing techniques, including contrast correction, noise reduction, and grayscale conversion, to enhance the quality of captured number plate images. After preprocessing, it isolates individual characters by performing character segmentation using bounding box extraction and morphological operations. The OCR model is trained on a manually curated dataset of license plate images taken in various settings. The dataset is carefully labeled to ensure accuracy in character identification, while data cleaning procedures handle problems such as missing or ambiguous characters. Character dimensions and contextual information are taken into account during feature selection and engineering, which further improves model performance. Features are extracted from the segmented characters and matched against a set of known characters to precisely identify and reconstruct the vehicle number plate. For convenience and integration with traffic control systems, the final output is saved as a text file. By offering a scalable solution for effective vehicle monitoring, this study not only shows that conventional techniques for number plate recognition are feasible, but also advances intelligent transportation systems.
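The segmentation step described above can be sketched with a classic vertical-projection approach in NumPy: after binarisation, columns containing no ink separate one character's bounding region from the next. The paper uses bounding boxes and morphological operations; this projection variant is a simplified stand-in, assumed for illustration.

```python
import numpy as np

def segment_characters(binary):
    """Split a binarised plate image (0 = background, 1 = ink) into
    per-character slices using the vertical projection profile:
    runs of ink-bearing columns are treated as one character."""
    cols = binary.sum(axis=0) > 0        # which columns contain any ink
    chars, start = [], None
    for j, on in enumerate(cols):
        if on and start is None:
            start = j                     # a character run begins
        elif not on and start is not None:
            chars.append(binary[:, start:j])  # run ends: cut the slice
            start = None
    if start is not None:                 # character touching right edge
        chars.append(binary[:, start:])
    return chars
```

Each returned slice would then go to feature extraction and template matching against the known character set, as the abstract describes.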
Hand gesture recognition plays a crucial role in human-computer interaction (HCI) and assistive technologies, particularly for individuals with motor impairments. In this study, an Artificial Neural Network (ANN) classifier is developed for recognizing hand gestures based on Electromyography (EMG) signals. The EMG dataset, consisting of multiple gesture classes, is preprocessed and normalized before being used to train an ANN model with two hidden layers. The model is trained using the Levenberg–Marquardt (trainlm) algorithm, with a cross-entropy loss function for multi-class classification. To evaluate the efficacy of the proposed ANN model, a comparative analysis was conducted against two traditional classifiers: Support Vector Classifier (SVC) and K-Nearest Neighbor (KNN). The experimental results show that the proposed ANN significantly outperforms the traditional classifiers. Additionally, the ANN demonstrates superior macro-averaged performance in terms of precision, recall, and F1-score, indicating its robustness and reliability across multiple gesture classes. These results demonstrate the effectiveness of ANN-based classification for EMG-based hand gesture recognition and highlight its potential for deployment in real-time prosthetic control, assistive technologies, and HCI systems.
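The two-hidden-layer ANN with a cross-entropy objective can be sketched as a NumPy forward pass. The tanh activations, layer sizes, and weight shapes below are illustrative assumptions (MATLAB's `trainlm` training itself, a Levenberg–Marquardt second-order optimizer, is not reproduced here).

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ann_forward(X, params):
    """Forward pass of a two-hidden-layer network on EMG feature rows X.
    params = (W1, b1, W2, b2, W3, b3); returns per-class probabilities."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = np.tanh(X @ W1 + b1)        # first hidden layer
    h2 = np.tanh(h1 @ W2 + b2)       # second hidden layer
    return softmax(h2 @ W3 + b3)     # output layer over gesture classes

def cross_entropy(probs, y):
    """Mean multi-class cross-entropy for integer gesture labels y."""
    return -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))
```

The normalized EMG features mentioned in the abstract would form the rows of `X`; training would then minimise `cross_entropy` over the labeled gesture classes.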