i-manager's Journal on Software Engineering (JSE)


Volume 19 Issue 1 July - September 2024

Research Paper

Interactive Virtual Physics Lab Environment

Vincent Gausi*, Mtende Mkandawire**
*-** DMI St. John the Baptist University, Lilongwe, Malawi.
Gausi, V., and Mkandawire, M. (2024). Interactive Virtual Physics Lab Environment. i-manager’s Journal on Software Engineering, 19(1), 1-8.

Abstract

This paper introduces an innovative approach to virtual physics education through the development of an Interactive Virtual Physics Lab Environment (IVPLE). Designed to provide students with a dynamic and engaging platform for conducting physics experiments virtually, the IVPLE leverages advanced technologies such as Virtual Reality (VR) and Augmented Reality (AR) to enhance traditional lab experiences, making them more accessible, immersive, and adaptable to diverse learning styles. The IVPLE incorporates realistic simulations of classical and modern physics experiments, allowing students to interact with virtual apparatus, collect data, and perform analyses within a virtual environment. Through intuitive user interfaces and responsive feedback mechanisms, learners can manipulate variables, observe experimental outcomes, and deepen their understanding of fundamental physics principles. Key features include collaborative learning spaces, which enable students to engage in group experiments and share insights in real time. The platform also offers adaptive learning paths, tailoring experiments to individual student progress and providing targeted feedback to reinforce conceptual understanding. Additionally, gamified elements foster motivation and reward-based learning, transforming physics education into an enjoyable and interactive experience. This paper outlines the conceptual framework, technological foundations, and potential impact of the IVPLE, underscoring its capacity to revolutionize physics education by transcending traditional constraints and fostering a more intuitive understanding of core physical concepts.
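
To make the idea of manipulating variables and observing outcomes concrete, the following is a minimal, illustrative sketch in Python of a drag-free projectile-motion experiment; it is not part of the published IVPLE, which uses interactive VR/AR apparatus, and all names and values are illustrative assumptions. A learner varies launch speed and angle and reads off the resulting flight time, range, and peak height.

```python
import math

# Minimal sketch of a "virtual experiment": a student varies launch speed
# and angle, and the environment reports the measurable outcomes.
# Names and values are illustrative; the actual IVPLE uses interactive
# VR/AR apparatus rather than a console script.

G = 9.81  # gravitational acceleration in m/s^2

def projectile_outcomes(speed_mps: float, angle_deg: float) -> dict:
    """Return ideal (drag-free) flight time, range, and peak height."""
    angle = math.radians(angle_deg)
    flight_time = 2 * speed_mps * math.sin(angle) / G
    horizontal_range = speed_mps ** 2 * math.sin(2 * angle) / G
    peak_height = (speed_mps * math.sin(angle)) ** 2 / (2 * G)
    return {
        "flight_time_s": round(flight_time, 2),
        "range_m": round(horizontal_range, 2),
        "peak_height_m": round(peak_height, 2),
    }

# The student "manipulates variables" and observes how the outcome changes.
for angle in (30, 45, 60):
    print(angle, projectile_outcomes(20.0, angle))
```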

Research Paper

Virtual Navigation Controls System and Object Detection using Computer Vision

Sangwani Mkandawire*, Fanny Chatola**
*-** Department of Computer Science, DMI St. John the Baptist University, Lilongwe, Malawi.
Mkandawire, S., and Chatola, F. (2024). Virtual Navigation Controls System and Object Detection using Computer Vision. i-manager’s Journal on Software Engineering, 19(1), 9-16.

Abstract

The Virtual Navigation Control and Object Detection System is a solution designed to provide users with virtual navigation tools and real-time object detection capabilities using computer vision. Leveraging the device's camera, the system enables hands-free control through gesture recognition, eliminating the need for physical peripherals. Advanced object detection offers users immediate information about their surroundings, enhancing user experience, accessibility, and educational opportunities. Key technologies include computer vision libraries, machine learning algorithms, and a user-centered interface. Testing demonstrated over 95% accuracy in gesture recognition and 92% accuracy in object detection under normal lighting (85% in low-light conditions). Integration tests confirmed smooth communication between modules, with real-time operation averaging a 0.2-second response time. The system's scalability allows for handling multiple objects and complex gestures, though further refinement may enhance its performance in diverse environments. These results indicate substantial progress toward an accessible, reliable, and secure alternative to traditional input methods. Potential applications include healthcare, where hands-free control improves hygiene, and accessibility aids for visually impaired users, highlighting the system's broad applicability in enhancing human-computer interaction.
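
The abstract does not specify the implementation stack, but a common way to prototype the camera-based, hands-free control it describes is hand-landmark tracking with OpenCV and MediaPipe. The sketch below is an illustrative assumption (the library choice, gesture mapping, and thresholds are not taken from the paper): it reads webcam frames, tracks one hand, and maps the index fingertip's horizontal position to a coarse left/right navigation command.

```python
import cv2
import mediapipe as mp

# Illustrative sketch only: the paper's actual pipeline, models, and gesture
# vocabulary are not given in the abstract. MediaPipe Hands tracks 21 hand
# landmarks; the index fingertip's x-position (normalized to [0, 1]) is
# turned into a coarse navigation command.
mp_hands = mp.solutions.hands

def fingertip_to_command(x_norm: float) -> str:
    """Map normalized fingertip x-position to a navigation command."""
    if x_norm < 0.35:
        return "NAVIGATE_LEFT"
    if x_norm > 0.65:
        return "NAVIGATE_RIGHT"
    return "IDLE"

cap = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            index_tip = results.multi_hand_landmarks[0].landmark[8]
            print(fingertip_to_command(index_tip.x))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
```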

Research Paper

Development of a Model for Detecting Emotions using CNN and LSTM

Manish Goswami*, Aditya Parate**, Nisarga Kapde***, Shashwat Singh****, Nitiksha Gupta*****, Meena Surjuse******
*-****** Computer Science and Engineering, S. B. Jain Institute of Technology, Management and Research, Nagpur, India.
Goswami, M., Parate, A., Kapde, N., Singh, S., Gupta, N., and Surjuse, M. (2024). Development of a Model for Detecting Emotions using CNN and LSTM. i-manager’s Journal on Software Engineering, 19(1), 17-28.

Abstract

This paper presents the development of a real-time deep learning system for emotion recognition using both speech and facial inputs. For speech emotion recognition, three significant datasets were utilized: SAVEE, the Toronto Emotional Speech Set (TESS), and CREMA-D, together comprising over 75,000 samples that represent a spectrum of emotions (Anger, Sadness, Fear, Disgust, Calm, Happiness, Neutral, and Surprise) mapped to numerical labels from 1 to 8. The system identifies emotions from live speech inputs and pre-recorded audio files using a Long Short-Term Memory (LSTM) network, which is particularly effective for sequential data. The LSTM model, trained on the RAVDESS dataset (7,356 audio files), achieved a training accuracy of 83%. For facial emotion recognition, a Convolutional Neural Network (CNN) architecture was employed, using datasets such as FER2013, CK+, AffectNet, and JAFFE. FER2013 includes over 35,000 labeled images representing seven key emotions, while CK+ provides 593 video sequences for precise emotion classification. By integrating LSTM for speech and CNN for facial emotion recognition, the system shows robust capabilities in identifying and classifying emotions across modalities, enabling comprehensive real-time emotion recognition.
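
The abstract identifies the two model families (an LSTM over speech and a CNN over face images) but not their layer configurations, so the Keras sketch below is a hedged illustration of one plausible shape for each branch: a small CNN for 48x48 grayscale FER2013-style inputs with seven classes, and an LSTM over MFCC-like frame sequences with eight classes. Layer sizes, the 40-coefficient feature dimension, and the padded sequence length are assumptions for illustration, not the authors' architecture.

```python
from tensorflow.keras import layers, models

NUM_FACE_CLASSES = 7    # FER2013-style emotion labels
NUM_SPEECH_CLASSES = 8  # Anger ... Surprise, labels 1-8 in the paper

# Illustrative CNN branch for 48x48 grayscale face crops (FER2013 image size).
# The actual architecture used in the paper is not given in the abstract.
face_cnn = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_FACE_CLASSES, activation="softmax"),
])

# Illustrative LSTM branch over sequences of 40-dimensional speech features
# (e.g., MFCC frames); 174 time steps is an assumed, padded sequence length.
speech_lstm = models.Sequential([
    layers.Input(shape=(174, 40)),
    layers.LSTM(128),
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_SPEECH_CLASSES, activation="softmax"),
])

for model in (face_cnn, speech_lstm):
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
```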

Research Paper

Build Your Own SOC Lab

Kanakamedala Kashish*, Monika Sahu**, Neelam Sharma***, Siddhartha Choubey****
*-**** Shri Shankaracharya Technical Campus, Junwani, Bhilai, Chhattisgarh, India.
Kashish, K., Sahu, M., Sharma, N., and Choubey, S. (2024). Build Your Own SOC Lab. i-manager’s Journal on Software Engineering, 19(1), 29-34.

Abstract

This initiative addresses the critical need for robust cybersecurity in the modern digital landscape. It serves as a comprehensive guide tailored for organizations and individuals seeking practical resources in digital security. Emphasizing cost-effectiveness, adaptability, and scalability, it provides detailed instructions for setting up a functional Security Operations Center (SOC) lab. Covering essential components, including hardware, software tools, and network infrastructure, this guide ensures thorough preparation for tackling cybersecurity challenges. It explores various use cases, such as threat detection, incident response, and security monitoring, enabling hands-on learning in SOC operations. By enhancing stakeholders' capabilities in protecting digital assets and mitigating cyber threats, this initiative contributes to the resilience and security of modern digital ecosystems. Through practical insights and methodologies, it empowers individuals and organizations to navigate the evolving cybersecurity landscape effectively.
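
As a small, hedged illustration of the kind of security-monitoring exercise such a lab supports (the guide itself is tool-agnostic, and the log path, regex, and threshold below are assumptions rather than the authors' setup), the Python sketch below scans an SSH authentication log for repeated failed logins and flags source IPs that exceed a simple threshold.

```python
import re
from collections import Counter

# Illustrative detection rule for a SOC lab exercise: flag possible SSH
# brute-force attempts by counting failed logins per source IP.
# The log path, pattern, and threshold are assumptions for this sketch.
LOG_PATH = "/var/log/auth.log"
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
ALERT_THRESHOLD = 5  # failed attempts before raising an alert

def scan_auth_log(path: str) -> Counter:
    """Count failed SSH login attempts per source IP address."""
    attempts = Counter()
    with open(path, errors="ignore") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                attempts[match.group(1)] += 1
    return attempts

if __name__ == "__main__":
    for ip, count in scan_auth_log(LOG_PATH).most_common():
        if count >= ALERT_THRESHOLD:
            print(f"ALERT: {count} failed SSH logins from {ip}")
```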

Review Paper

Literature Survey on Development of a Model for Detecting Emotions using CNN and LSTM

Manish Goswami*, Aditya Parate**, Nisarga Kapde***, Shashwat Singh****, Nitiksha Gupta*****, Meena Surjuse******
*-****** Department of Computer Science and Engineering, S. B. Jain Institute of Technology, Management and Research, Nagpur, India.
Goswami, M., Parate, A., Kapde, N., Singh, S., Gupta, N., and Surjuse, M. (2024). Literature Survey on Development of a Model for Detecting Emotions using CNN and LSTM. i-manager’s Journal on Software Engineering, 19(1), 35-43.

Abstract

This paper explores the utilization of three major datasets, SAVEE, the Toronto Emotional Speech Set (TESS), and CREMA-D, which together contain a substantial repository of 75,000 samples. These datasets cover a broad spectrum of human emotions, from anger, sadness, fear, and disgust to calm, happiness, neutral states, and surprise, mapped to numerical labels from 1 to 8, respectively. The primary objective is to develop a real-time deep learning system specifically tailored for emotion recognition using speech inputs from a PC microphone. This system aims to create a robust model capable of not only capturing live speech but also analyzing audio files in detail, allowing for the classification of specific emotional states. To achieve this, the Long Short-Term Memory (LSTM) network architecture, a specialized form of Recurrent Neural Network (RNN), was chosen for its proven accuracy in speech-centered emotion recognition tasks. The model was rigorously trained using the RAVDESS dataset, comprising 7,356 distinct audio files, with 5,880 files carefully selected for training to enhance accuracy and improve the model's effectiveness in detecting emotions across diverse speech samples. The resulting model achieved a training dataset accuracy of 83%, marking a substantial milestone in advancing speech-based emotion recognition systems.
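
The survey does not fix a feature-extraction recipe, so the sketch below shows an assumed but common preprocessing step for the pipeline it describes: loading a RAVDESS clip with librosa, extracting an MFCC frame sequence for an LSTM, and reading the emotion label from the filename (RAVDESS encodes the emotion as the third hyphen-separated field of each filename). The 40-coefficient setting, clip trimming, and example filename are illustrative choices, not taken from the paper.

```python
import librosa
import numpy as np

# RAVDESS filenames look like "03-01-05-01-02-01-12.wav"; per the dataset's
# naming convention, the third field is the emotion code (01=neutral ...
# 08=surprised). The MFCC settings below are illustrative assumptions.
RAVDESS_EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised",
}

def emotion_from_filename(filename: str) -> str:
    """Read the emotion label encoded in a RAVDESS filename."""
    code = filename.split("-")[2]
    return RAVDESS_EMOTIONS[code]

def mfcc_sequence(path: str, n_mfcc: int = 40) -> np.ndarray:
    """Return an (n_frames, n_mfcc) MFCC sequence suitable for an LSTM."""
    # Trim to 3 seconds starting 0.5 s in, a common preprocessing choice.
    signal, sample_rate = librosa.load(path, duration=3.0, offset=0.5)
    mfcc = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=n_mfcc)
    return mfcc.T  # time steps first, features second

# Hypothetical usage with one clip from the dataset:
# features = mfcc_sequence("03-01-05-01-02-01-12.wav")
# label = emotion_from_filename("03-01-05-01-02-01-12.wav")
```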