i-manager's Journal on Software Engineering (JSE)


Volume 19 Issue 1 July - September 2024

Research Paper

INTERACTIVE VIRTUAL PHYSICS LAB ENVIRONMENT

Vincent Gausi*

Abstract

This paper introduces an innovative approach to virtual physics education by proposing an Interactive Virtual Physics Lab Environment (IVPLE). The IVPLE is designed to provide students with a dynamic and engaging platform for conducting physics experiments in a virtual setting. Leveraging cutting-edge technologies such as virtual reality (VR) and augmented reality (AR), the IVPLE aims to enhance the traditional physics lab experience, making it more accessible, immersive, and adaptable to diverse learning styles. The IVPLE incorporates realistic simulations of classical and modern physics experiments, allowing students to interact with virtual apparatus, collect data, and perform analyses within a lifelike virtual environment. Through intuitive user interfaces and responsive feedback mechanisms, learners can manipulate variables, observe experimental outcomes, and refine their understanding of fundamental physics principles. Key features of the IVPLE include collaborative learning spaces, enabling students to engage in group experiments and share insights in real time. The platform also offers adaptive learning paths, tailoring experiments to individual student progress and providing targeted feedback to support conceptual understanding. Furthermore, the IVPLE integrates gamified elements to foster motivation and reward-based learning, transforming physics education into an enjoyable and interactive experience. This paper outlines the conceptual framework, technological foundations, and potential impact of the Interactive Virtual Physics Lab Environment, highlighting its potential to revolutionize physics education by transcending traditional constraints and fostering a deeper, more intuitive understanding of fundamental physical concepts.
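
The abstract does not include implementation details, but the kind of interactive, variable-driven experiment it describes can be illustrated with a minimal sketch. The class name, parameters, and noise model below are hypothetical and serve only to show how a virtual apparatus might expose adjustable variables and return measurable outcomes for students to analyze.

import math
import random

class VirtualPendulum:
    """Hypothetical virtual apparatus: a simple pendulum whose length and
    gravity the student can adjust before taking a (noisy) period reading."""

    def __init__(self, length_m=1.0, gravity=9.81):
        self.length_m = length_m
        self.gravity = gravity

    def set_variables(self, length_m=None, gravity=None):
        # The learner manipulates experiment variables through the UI.
        if length_m is not None:
            self.length_m = length_m
        if gravity is not None:
            self.gravity = gravity

    def measure_period(self, noise_std=0.01):
        # Small-angle period T = 2*pi*sqrt(L/g), plus measurement noise so
        # students must collect repeated trials and analyze the spread.
        ideal = 2 * math.pi * math.sqrt(self.length_m / self.gravity)
        return ideal + random.gauss(0, noise_std)

# Example "lab session": vary the length and record three trials per setting.
pendulum = VirtualPendulum()
for length in (0.5, 1.0, 2.0):
    pendulum.set_variables(length_m=length)
    trials = [pendulum.measure_period() for _ in range(3)]
    print(f"L = {length} m -> periods {['%.3f' % t for t in trials]} s")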

Research Paper

Development of a Model for Detecting Emotions Using CNN and LSTM

Shashwat Singh*

Abstract

In this paper, we focus on developing a real-time deep learning system for emotion recognition using both speech and facial inputs. For speech emotion recognition, we utilized three major datasets: SAVEE, the Toronto Emotional Speech Set (TESS), and CREMA-D, which collectively contain over 75,000 samples. These datasets cover a wide range of human emotions, such as Anger, Sadness, Fear, Disgust, Calm, Happiness, Neutral, and Surprise, with emotions mapped to numerical labels from 1 to 8. Our primary objective was to build a system capable of detecting emotions from both live speech inputs via a PC microphone and pre-recorded audio files. To achieve this, we employed the Long Short-Term Memory (LSTM) network architecture, a type of Recurrent Neural Network (RNN) that is particularly effective for sequential data such as speech. The LSTM model was trained on the RAVDESS dataset, which contains 7,356 distinct audio files, of which 5,880 were used for training. The model achieved a training accuracy of 83%, marking significant progress in speech-based emotion recognition. For facial emotion recognition, we applied a Convolutional Neural Network (CNN), known for its strength in image processing tasks. We leveraged four well-known facial emotion datasets: FER2013, CK+, AffectNet, and JAFFE. FER2013 includes over 35,000 labeled images representing facial expressions associated with seven key emotions. CK+ provides 593 video sequences that capture the transition from neutral to peak expressions, allowing for precise emotion classification. By combining the LSTM for speech emotion detection and the CNN for facial emotion recognition, our system demonstrated robust capabilities in identifying and classifying emotions across multiple modalities. The integration of these two architectures enabled us to create a comprehensive real-time emotion recognition system capable of processing both audio and visual data.
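
The abstract does not specify the preprocessing or network configuration, so the following is only a minimal sketch of how a speech-emotion LSTM of this kind is commonly assembled: MFCC features extracted with librosa feed a Keras LSTM classifier over eight emotion labels. The layer sizes, sequence length, and file-handling shown here are assumptions for illustration, not the author's settings.

import numpy as np
import librosa
import tensorflow as tf

NUM_EMOTIONS = 8      # Anger ... Surprise, mapped to integer labels 0-7 here
N_MFCC = 40           # assumed number of MFCC coefficients
MAX_FRAMES = 200      # assumed fixed sequence length (pad or trim)

def extract_mfcc(path):
    # Load audio and compute an MFCC sequence of shape (MAX_FRAMES, N_MFCC).
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=N_MFCC).T
    if mfcc.shape[0] < MAX_FRAMES:
        mfcc = np.pad(mfcc, ((0, MAX_FRAMES - mfcc.shape[0]), (0, 0)))
    return mfcc[:MAX_FRAMES]

# A small LSTM classifier over the MFCC sequence.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_FRAMES, N_MFCC)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would use arrays built from the labeled dataset, e.g.:
# X = np.stack([extract_mfcc(p) for p in audio_paths])   # audio_paths: your files
# y = np.array(labels)                                    # integer labels 0-7
# model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2)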

Research Paper

Build Your Own SOC Lab

Monika Sahu*

Abstract

The "Build Your Own SOC Lab" project addresses the critical need for effective cybersecurity measures in today's increasingly digital world. This paper offers a detailed guide for organizations and individuals aiming to establish a functional and efficient Security Operations Center (SOC). Emphasizing cost-effectiveness, adaptability, and scalability, it provides step-by-step instructions for setting up a SOC lab, covering essential components such as hardware, software tools, and network infrastructure. The project also explores various use cases, including threat detection, incident response, and security monitoring, to facilitate hands-on learning and enhance cybersecurity capabilities.

Review Paper

Literature Survey on Development of a Model for Detecting Emotions Using CNN and LSTM

Shashwat Singh*

Abstract

In this paper, our focus revolved around the utilization of three significant datasets: SAVEE, the Toronto Emotional Speech Set (TESS), and CREMA-D, together encompassing a repository of 75,000 samples. These datasets encapsulate a wide spectrum of human emotions, ranging from Anger, Sadness, Fear, and Disgust to Calm, Happiness, Neutral states, and Surprise, which are mapped to numerical labels from 1 to 8, respectively. Our project's central objective was the development of a real-time deep learning system specifically tailored for emotion recognition using speech inputs sourced from a PC microphone. The primary aim was to engineer a robust model capable of not only capturing live speech but also analyzing audio files in detail, thereby enabling the system to discern and classify specific emotional states. To achieve this goal, we opted for the Long Short-Term Memory (LSTM) network architecture, a specialized form of Recurrent Neural Network (RNN). The decision to employ LSTM was driven by its proven track record in delivering heightened accuracy in speech-centric emotion recognition tasks. Our model underwent rigorous training using the RAVDESS dataset, a rich repository housing 7,356 distinct audio files. From this dataset, we selected 5,880 files for training, an approach aimed at bolstering accuracy and ensuring the model's efficacy in detecting and recognizing emotions across a diverse array of speech samples. The culmination of our efforts was a training accuracy of 83%, marking a significant milestone in the advancement of speech-based emotion recognition systems.

Article

VIRTUAL NAVIGATION CONTROL SYSTEM AND OBJECT DETECTION USING COMPUTER VISION

Sangwani Mkandawire*

Abstract

The Virtual Navigation Control and Object Detection System is a project aimed at providing users with virtual navigation tools and object detection capabilities using computer vision technology. By leveraging the device's camera, the system enables users to control their devices without physical mouse peripherals, using hand gestures tracked by the camera. Its object detection capabilities focus on real-time object recognition, information display, and color analysis. The significance of the system lies in an enhanced user experience, empowerment and accessibility, educational and learning opportunities, and support for insight and analysis. The technologies involved include a computer vision library, machine learning algorithms, and interface development. The system also offers ergonomic benefits, hands-free interaction with devices, enhanced mobility and flexibility, and adaptation to modern interfaces.
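
The abstract does not name the libraries used, so the following is only an illustrative sketch of camera-based cursor control in the spirit of the described system, built with MediaPipe Hands and pyautogui (common choices for this task, assumed here rather than taken from the paper). It maps the tracked index-fingertip position in the camera frame to the screen cursor.

import cv2
import mediapipe as mp
import pyautogui

# Hand tracker and screen geometry (libraries assumed, not from the paper).
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
screen_w, screen_h = pyautogui.size()
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)                       # mirror for natural control
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; coordinates are normalized 0-1.
        tip = results.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    cv2.imshow("gesture control", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()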