Speech Feature Extraction and Emotion Recognition using Deep Learning Techniques
Tapping into Emotions: Understanding the Neural Mechanisms of Emotional Responses through EEG using Music
Drowsiness Detection and its Analysis of Brain Waves using Electroencephalogram
Smart Wearable Phonocardiogram for Real Time Heart Sound Analysis and Predictive Cardiac Healthcare
A Review of Non-Invasive Breath-Based Glucose Monitoring System for Diabetic Patients
Blockchain 3.0: Towards a Secure Ballotcoin Democracy through a Digitized Public Ledger in Developing Countries
Fetal ECG Extraction from Maternal ECG using MATLAB
Brief Introduction to Modular Multilevel Converters and Relative Concepts and Functionalities
Detection of Phase to Phase Faults and Identification of Faulty Phases in Series Capacitor Compensated Six Phase Transmission Line using the Norm of Wavelet Transform
A Novel Approach to Reduce Deafness in Classical Earphones: MUEAR
A Novel Mathematical ECG Signal Analysis Approach for Features Extraction Using LabVIEW
Filtering of ECG Signal Using Adaptive and Non Adaptive Filters
Application of Polynomial Approximation Techniques for Smoothing ECG Signals
A Novel Approach to Improve the Wind Profiler Doppler Spectra Using Wavelets
Speech Emotion Recognition (SER) is crucial for human-computer interaction, enabling systems to better understand emotions. Traditional feature extraction methods such as Gamma Tone Cepstral Coefficients (GTCC) are used in SER for their ability to capture auditory features aligned with human hearing, but they often fail to capture emotional nuances effectively. Mel Frequency Cepstral Coefficients (MFCC) have gained prominence for better representing speech signals in emotion recognition. This work introduces an approach combining traditional and modern techniques, comparing GTCC-based extraction with MFCC and utilizing the Ensemble Subspace k-Nearest Neighbors (ES-kNN) classifier to improve accuracy. Additionally, deep learning models such as Long Short-Term Memory (LSTM) and Bidirectional LSTM (Bi-LSTM) are explored for their ability to capture temporal dependencies in speech. Datasets such as CREMA-D and SAVEE are used to train and evaluate the proposed models.
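The ES-kNN classifier named above can be sketched compactly: each base learner is an ordinary k-NN restricted to a random subset of the feature dimensions (e.g. a few MFCC coefficients), and the ensemble takes a majority vote. A minimal illustration, assuming plain Euclidean distance and toy feature vectors (the names and data below are illustrative, not from the paper):

```python
import random
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-NN: majority vote among the k nearest training points."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), y)
        for row, y in zip(train_X, train_y)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

def es_knn_predict(train_X, train_y, x, n_learners=10, subspace=4, k=3, seed=0):
    """Ensemble Subspace k-NN: each learner sees a random subset of the
    feature dimensions; the final label is a majority vote across learners."""
    rng = random.Random(seed)
    n_features = len(train_X[0])
    votes = Counter()
    for _ in range(n_learners):
        dims = rng.sample(range(n_features), min(subspace, n_features))
        sub_X = [[row[d] for d in dims] for row in train_X]
        sub_x = [x[d] for d in dims]
        votes[knn_predict(sub_X, train_y, sub_x, k)] += 1
    return votes.most_common(1)[0][0]

# Toy example: two well-separated "emotion" clusters in feature space.
train_X = [[0, 0, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0],
           [9, 9, 9, 9, 9, 9], [8, 9, 8, 9, 9, 9]]
train_y = ["neutral", "neutral", "angry", "angry"]
print(es_knn_predict(train_X, train_y, [8, 8, 9, 9, 8, 9]))  # → angry
```

Random subspaces decorrelate the base learners, which is what lets the vote outperform a single k-NN on high-dimensional cepstral features.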
Emotion detection by evaluating Electroencephalogram (EEG) signals is an emerging field that provides insights into human emotional states by monitoring brain activity, with applications in music therapy and entertainment. This study aims to establish a connection between brain activity and emotion recognition through music. Further applications include mental health assessments, emotionally intelligent agents, adaptive learning, pain assessment, patient monitoring, security and surveillance, and personalized music recommendations. Traditional EEG-based emotion detection techniques frequently struggle with the complex and noisy nature of the data. Because EEG signals arrive as raw data, this study proposes the use of an actor-critic algorithm, enabling accurate, real-time emotion detection in the presence of musical stimuli. The actor-critic architecture is a framework designed to predict emotional states from EEG features, leveraging the rich, real-time data provided about brain activity. In this setup, the actor network generates predictions about an individual's emotional state based on the EEG signals it processes, making informed guesses about emotional conditions such as happiness, sadness, or stress. The critic network, in turn, evaluates the accuracy of these predictions, assessing how well they align with actual emotional states and thus providing the feedback essential for refining the actor's predictive capabilities.
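The actor/critic division of labor described above can be sketched with a deliberately tiny model: a linear-softmax actor that maps an EEG feature vector to a distribution over emotion labels, and a linear critic whose value estimate serves as a baseline, so the advantage (reward minus baseline) drives both updates. This is a minimal sketch under assumed toy features and a match/no-match reward, not the paper's network:

```python
import math, random

EMOTIONS = ["happy", "sad", "stressed"]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class ActorCritic:
    """Minimal bandit-style actor-critic: the actor outputs a distribution
    over emotion labels from a feature vector; the critic estimates the
    expected reward, and the advantage drives both updates."""
    def __init__(self, n_features, n_actions, lr=0.1, seed=0):
        rng = random.Random(seed)
        self.W = [[rng.uniform(-0.1, 0.1) for _ in range(n_features)]
                  for _ in range(n_actions)]   # actor weights
        self.v = [0.0] * n_features            # critic weights
        self.lr = lr

    def act(self, x):
        logits = [sum(w * xi for w, xi in zip(row, x)) for row in self.W]
        return softmax(logits)

    def update(self, x, action, reward):
        probs = self.act(x)
        value = sum(v * xi for v, xi in zip(self.v, x))  # critic's baseline
        advantage = reward - value
        for a in range(len(self.W)):                     # policy-gradient step
            grad = (1.0 if a == action else 0.0) - probs[a]
            for i, xi in enumerate(x):
                self.W[a][i] += self.lr * advantage * grad * xi
        for i, xi in enumerate(x):                       # critic step
            self.v[i] += self.lr * advantage * xi

# Toy loop: feature [1, 0] should map to "happy", [0, 1] to "sad".
data = [([1.0, 0.0], 0), ([0.0, 1.0], 1)]
ac = ActorCritic(n_features=2, n_actions=3)
rng = random.Random(1)
for _ in range(2000):
    x, label = rng.choice(data)
    action = rng.choices(range(3), weights=ac.act(x))[0]
    ac.update(x, action, 1.0 if action == label else 0.0)
print(EMOTIONS[max(range(3), key=lambda a: ac.act([1.0, 0.0])[a])])
```

The critic never picks emotions itself; it only scores how much better or worse the actor's choice was than expected, which is exactly the feedback loop the abstract describes.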
Innovative possibilities in healthcare and personal well-being monitoring are enabled by a system integrating EEG sensors, Arduino microcontrollers, and Python scripts for real-time drowsiness detection. The device captures brain wave signals, processes them, and analyzes patterns indicating reduced alertness, utilizing EEG sensors to record signals, an Arduino for data processing, and a CP2102 USB-to-UART bridge for data transfer to a computer. Python scripts analyze the EEG signals to detect patterns such as suppressed alpha waves or increased theta waves, which signal drowsiness. The system has diverse applications, including monitoring patients recovering from anesthesia, assessing sleep quality in individuals with sleep disorders, detecting neurological conditions such as narcolepsy, and tracking drowsiness in drivers, pilots, or operators of heavy machinery. It can also help optimize sleep stages for better rest quality and enhance cognitive performance. The system offers advantages such as immediate intervention, noninvasive operation, affordability, portability, and ease of integration with existing systems, while achieving high accuracy in detecting drowsiness-related brain wave patterns. Future opportunities include integration with wearable devices, advanced machine learning for improved pattern recognition, and multi-modal sensing, showcasing the potential to transform healthcare and personal wellness monitoring for safer and healthier lives.
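The alpha-suppression/theta-increase criterion above reduces to a band-power ratio: estimate spectral power in the theta (roughly 4–8 Hz) and alpha (roughly 8–12 Hz) bands and flag drowsiness when theta dominates. A minimal sketch with an FFT periodogram and an assumed threshold (the 1.5 cutoff and 256 Hz sampling rate are illustrative, not from the paper):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` between lo and hi Hz (FFT periodogram)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def is_drowsy(eeg, fs=256, threshold=1.5):
    """Flag drowsiness when theta (4-8 Hz) power dominates alpha (8-12 Hz)."""
    theta = band_power(eeg, fs, 4.0, 8.0)
    alpha = band_power(eeg, fs, 8.0, 12.0)
    return theta / alpha > threshold

# Synthetic check: a theta-dominated trace vs. an alpha-dominated one.
t = np.arange(0, 4, 1 / 256)
drowsy = np.sin(2 * np.pi * 6 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)
alert  = 0.2 * np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 10 * t)
print(is_drowsy(drowsy), is_drowsy(alert))  # → True False
```

In a live setup, the script would run this on short sliding windows of samples streamed from the Arduino over the serial link, with per-user threshold calibration.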
This study presents the design of an innovative, compact, and wearable device for continuous monitoring of phonocardiogram (PCG) and electrocardiogram (ECG) waveforms. Aimed at regular usage, this device enables users to effortlessly monitor their heart health, providing real-time data on cardiac function. By combining PCG and ECG monitoring, the device provides a comprehensive view of the heart's electrical and acoustic activity, which is essential for early detection of potential cardiac issues. Notable attributes of the device include continuous monitoring, wireless data transmission, and noise reduction technology to ensure high-quality signal acquisition. A user-friendly interface makes it accessible to a wide range of users, while built-in data analysis and archiving capabilities allow for long-term tracking and evaluation of heart health. The device can integrate with other health monitoring systems, allowing for a more holistic approach to patient care. Furthermore, this design offers significant opportunities for research and development in the field of predictive cardiac healthcare. By utilizing advanced algorithms and machine learning, the system can analyze heart sound patterns and detect abnormalities that may not be easily identified through conventional methods. Ultimately, this wearable system aims to improve early diagnosis, personalized treatment, and overall management of cardiac health.
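One building block of the real-time analysis described above is extracting heart rate from the ECG channel by locating R-peaks. A deliberately naive sketch, assuming a fixed amplitude threshold and refractory period (a real device would bandpass-filter and adapt the threshold; all values here are illustrative):

```python
import numpy as np

def heart_rate(ecg, fs, refractory=0.25):
    """Estimate heart rate (bpm) by thresholding at 60% of the signal maximum
    and enforcing a refractory period between detected beats."""
    thresh = 0.6 * ecg.max()
    peaks, last = [], -len(ecg)
    for i, v in enumerate(ecg):
        if v > thresh and (i - last) > refractory * fs:
            peaks.append(i)
            last = i
    if len(peaks) < 2:
        return 0.0
    rr = np.diff(peaks).mean() / fs   # mean R-R interval in seconds
    return 60.0 / rr                  # beats per minute

# Synthetic ECG-like trace: an R-peak every 0.8 s (75 bpm) plus baseline noise.
fs = 500
n = 10 * fs
ecg = 0.02 * np.random.RandomState(0).randn(n)
ecg[::400] = 1.0   # one peak every 400 samples = 0.8 s
print(round(heart_rate(ecg, fs)))  # → 75
```

The same windowed peak-picking idea extends to the PCG channel (locating S1/S2 sounds on an amplitude envelope), which is where joint electrical-acoustic analysis becomes possible.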
Diabetes management is critical for a vast population worldwide, and traditional blood glucose monitoring methods typically require invasive blood sampling, leading to patient discomfort and poor adherence. This study proposes the development of a non-invasive breath-based glucose monitoring system that leverages gas sensors to detect specific volatile organic compounds (VOCs) in exhaled breath, particularly acetone, which correlates with blood glucose levels. The system will utilize metal oxide semiconductor (MOS) sensors and machine learning algorithms to provide accurate real-time glucose readings. By eliminating the need for finger pricks, this innovative device aims to enhance the convenience and compliance of glucose monitoring for diabetic patients, ultimately contributing to better disease management and quality of life. The feasibility, accuracy, and usability of the system will be validated through clinical trials, paving the way for future advancements in diabetes care technologies.
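The sensor-to-reading step described above is, at its simplest, a calibration problem: map the MOS sensor's breath-acetone response to reference blood glucose values. A minimal least-squares sketch on hypothetical calibration pairs (the numbers below are synthetic and perfectly linear for illustration; a deployed system would use richer models and per-patient calibration):

```python
import numpy as np

def calibrate(acetone_ppm, glucose_mgdl):
    """Least-squares linear fit mapping breath-acetone readings (ppm) to
    reference blood glucose (mg/dL); returns a predictor function."""
    A = np.vstack([acetone_ppm, np.ones(len(acetone_ppm))]).T
    slope, intercept = np.linalg.lstsq(A, glucose_mgdl, rcond=None)[0]
    return lambda ppm: slope * ppm + intercept

# Hypothetical calibration pairs (sensor ppm vs. lab-measured glucose).
ppm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
glucose = np.array([90.0, 120.0, 150.0, 180.0, 210.0])
predict = calibrate(ppm, glucose)
print(round(predict(1.8)))  # → 168
```

Clinical validation, as the abstract notes, would then compare such predictions against invasive reference measurements across many subjects and conditions.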