i-manager's Journal on Software Engineering (JSE)


Volume 18 Issue 1 July - September 2023

Research Paper

A Hybrid ANN-CNN Model for Predicting Non-Linear Relationship of Covid-19 Cases Based on Weather Factors

Yahaya Mohammed Sani*, Andrew Gahwera**
*-** Department of Computer Science, School of Computing and Information Technology, Makerere University, Kampala, Uganda.
Sani, Y. M., and Gahwera, A. (2023). A Hybrid ANN-CNN Model for Predicting Non-Linear Relationship of Covid-19 Cases Based on Weather Factors. i-manager’s Journal on Software Engineering, 18(1), 1-18. https://doi.org/10.26634/jse.18.1.20121

Abstract

Viral diseases have been emerging worldwide at an increasing rate, the most recent being Coronavirus Disease 2019 (COVID-19), which devastated the world in 2020-2021 while little was understood about its history or the factors influencing its transmission dynamics. Weather significantly influences the spread of respiratory infectious diseases such as influenza, yet the impact of weather on COVID-19 transmission in Nigeria remains unexamined and requires clarification. This study presents and compares the results of six machine learning models: the developed Hybrid ANN-CNN, ANN, CNN, LSTM, LASSO, and Multiple Linear Regression, each aiming to predict the impact of weather factors on COVID-19 cases. The dataset comprises daily Nigerian COVID-19 case counts and seven weather variables collected from May 1, 2020, to April 30, 2021. The results indicate that the developed Hybrid ANN-CNN outperforms the other five models on Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for all case types: it achieved an MAE of 0.0274 for confirmed cases, 0.0257 for recoveries, and 0.0425 for deaths, with corresponding RMSE values of 0.0469, 0.0813, and 0.0840. It was followed by LASSO with an MAE of 0.1384, and by CNN and LSTM with 0.1384 and 0.1385, respectively.
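
For readers unfamiliar with this kind of hybrid architecture, the sketch below shows one plausible way to fuse a convolutional branch with a fully connected (ANN) branch for weather-driven case prediction and to report MAE and RMSE. The paper does not publish its architecture, so the window length, layer sizes, fusion strategy, and the synthetic stand-in data are all assumptions for illustration only.

```python
# Illustrative sketch only: the paper does not publish its architecture, so the
# window length, layer sizes, and fusion strategy below are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW = 14        # assumed: 14 days of history per sample
N_WEATHER = 7      # seven weather variables, as in the paper's dataset

def build_hybrid_ann_cnn() -> Model:
    inp = layers.Input(shape=(WINDOW, N_WEATHER))
    # CNN branch: learn local temporal patterns in the weather series
    c = layers.Conv1D(32, kernel_size=3, activation="relu")(inp)
    c = layers.MaxPooling1D(2)(c)
    c = layers.Flatten()(c)
    # ANN branch: fully connected view of the same window
    a = layers.Flatten()(inp)
    a = layers.Dense(64, activation="relu")(a)
    # Fuse both branches and regress the (scaled) daily case count
    h = layers.Concatenate()([c, a])
    h = layers.Dense(32, activation="relu")(h)
    out = layers.Dense(1)(h)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.MeanAbsoluteError(name="mae"),
                           tf.keras.metrics.RootMeanSquaredError(name="rmse")])
    return model

# Synthetic stand-in for the scaled, windowed Nigerian dataset (365 daily samples)
X = np.random.rand(365, WINDOW, N_WEATHER).astype("float32")
y = np.random.rand(365, 1).astype("float32")
model = build_hybrid_ann_cnn()
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, mae, rmse]
```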

Research Paper

Artificial Intelligence Disruption and its Impacts on Future Employment in Africa - A Case of the Banking and Financial Sector in Ghana

William Bediako Danso*, Eric Hanson**
*-** BlueCrest University College, Accra, Ghana, West Africa.
Danso, W. B., and Hanson, E. (2023). Artificial Intelligence Disruption and its Impacts on Future Employment in Africa - A Case of the Banking and Financial Sector in Ghana. i-manager’s Journal on Software Engineering, 18(1), 19-35. https://doi.org/10.26634/jse.18.1.20082

Abstract

Artificial Intelligence is a transformative technology with the potential to both displace and create jobs across industries. Experts believe it will greatly benefit humanity by reducing tedious tasks and advancing technology. According to the World Economic Forum's 2023 report, 50% of tasks will be automated by 2025, up from the current 29%; around 75% of companies plan to adopt AI, and 50% expect it to drive job growth. The fastest-growing roles stem from technology, digitalization, and sustainability. By 2025, AI will replace 75 million jobs but generate 133 million new ones, a net increase of 58 million jobs globally. However, certain industries will face significant displacement, and AI's impact on unemployment rates will differ across countries, regions, and industries: AI will likely displace jobs in manufacturing but boost employment in healthcare and education. Experts also warn of AI's risks for the job market in Africa, where automation is replacing repetitive tasks such as data entry and customer service. Adapting skills to the changing job landscape is crucial, but it comes with added costs for both individuals and organizations. AI advancements are set to automate routine jobs, potentially causing employment shifts in Africa similar to global trends. New opportunities in AI, data science, and technology may emerge, but the impact hinges on the speed of AI adoption, infrastructure, and policies. Challenges such as skills gaps, weak data ecosystems, ethics, policies, infrastructure, and user attitudes hinder AI adoption across industries, as seen in Ghana. To boost AI adoption in Africa, building strong ecosystems involving policymakers, universities, companies, startups, and partnerships is crucial; failure to address these challenges will cause Ghana and Africa to fall behind globally. This paper highlights the hurdles faced by Ghana and other African nations in AI adoption, emphasizing the effects of job displacement and unemployment on job seekers. Its aim is to equip policymakers and stakeholders with insights into AI's disruptive nature, aiding the creation of sustainable policies for the industry. The study began with a review of AI disruption's impact on future jobs in Africa using secondary sources on evolving AI technology, and it gathered firsthand data via interviews in Ghana to understand the challenges of AI adoption, especially among industry professionals.

Research Paper

Comparative Analysis for Survival Prediction from Titanic Disaster using Machine Learning

Anjani Suputri Devi D.*, Manjusha D.**, Pujith P.***, G. V. Satyanarayana Ch.****, Sailusha V.*****, Vivekananda Reddy G.******
*-****** Sri Vasavi Engineering College, Tadepalligudem, Andhra Pradesh, India.
Devi, D. A. S., Manjusha, D., Pujith, P., Satyanarayana, Ch. G. V., Sailusha, V., and Reddy, G. V. (2023). Comparative Analysis for Survival Prediction from Titanic Disaster using Machine Learning. i-manager’s Journal on Software Engineering, 18(1), 36-44. https://doi.org/10.26634/jse.18.1.20137

Abstract

The Titanic is among the most notorious shipwrecks in history. Of the 2,224 passengers and crew aboard, 1,502 perished when the ship sank on April 15, 1912, during her maiden voyage, after colliding with an iceberg. This dramatic disaster stunned the world and led to improved ship safety laws. Researchers have since sought to understand why some passengers survived while others perished. A contributing factor in the high death toll was the insufficient number of lifeboats available for passengers and crew, and an intriguing finding from the sinking is that certain groups, such as women and children, had a higher chance of surviving than others. After the accident, new regulations were drafted mandating that lifeboat capacity match the number of people aboard. Numerous machine learning techniques have been used to predict passenger survival, with preprocessing and data cleaning as essential steps to reduce bias. In this paper, two machine learning techniques, decision trees and random forests, are used to estimate the probability of passenger survival. The primary goal of this work is to compare the two algorithms on the accuracy with which they predict passenger survival. The highest accuracy achieved is 81.10%, for Gradient Boost Trees.
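
As an illustration of the comparison the abstract describes, the sketch below trains a decision tree and a random forest on a public copy of the Titanic data and reports test accuracy. The paper's exact preprocessing and feature set are not reproduced; the minimal cleaning here, and the use of seaborn's bundled dataset (downloaded on first call), are assumptions for illustration.

```python
# Hedged sketch: minimal features and cleaning, not the paper's pipeline.
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# seaborn's copy of the Titanic data (fetched from its online repository)
df = sns.load_dataset("titanic")
# Keep a few numeric/encodable columns and drop rows with missing values
df = df[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "female").astype(int)

X = df.drop(columns="survived")
y = df["survived"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

for name, clf in [("Decision tree", DecisionTreeClassifier(random_state=42)),
                  ("Random forest", RandomForestClassifier(random_state=42))]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, clf.predict(X_te)):.4f}")
```

On this reduced feature set the random forest typically edges out the single tree, mirroring the kind of comparison the paper reports.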

Research Paper

Blockchain and Machine Learning for Data Analytics, Privacy Preserving, and Security in Fraud Detection

Anand Dubey*, Siddhartha Choubey**
*-** Department of Computer Science Engineering, Shri Shankaracharya Technical Campus, Junwani, Bhilai, Chhattisgarh, India.
Dubey, A., and Choubey, S. (2023). Blockchain and Machine Learning for Data Analytics, Privacy Preserving, and Security in Fraud Detection. i-manager’s Journal on Software Engineering, 18(1), 45-55. https://doi.org/10.26634/jse.18.1.20091

Abstract

Blockchain technology has emerged as a revolutionary distributed ledger system with the potential to transform various industries, including finance, supply chain, healthcare, and more. However, the decentralized nature of blockchain introduces unique challenges in terms of fraud detection and prevention. This paper provides an overview of the current state of research and technologies related to fraud detection in blockchain-based systems. It begins by discussing the fundamental characteristics of blockchain, highlighting its immutability, transparency, and decentralization. These characteristics provide a promising foundation for ensuring data integrity and security but also pose significant challenges in detecting and mitigating fraudulent activities. Next, the paper explores various types of fraud that can occur in blockchain systems, such as double-spending, Sybil attacks, 51% attacks, smart contract vulnerabilities, and identity theft; each is explained along with its potential impact on the integrity and reliability of blockchain systems. To address these challenges, the paper presents an overview of existing fraud detection techniques in blockchain systems, encompassing anomaly detection, machine learning algorithms, consensus mechanisms, cryptographic techniques, and forensic analysis. The strengths and limitations of each technique are discussed to provide a comprehensive understanding of their applicability in different scenarios. Furthermore, the paper highlights emerging trends in fraud detection research within the blockchain domain, including the integration of artificial intelligence and blockchain technology, the use of decentralized and federated machine learning approaches, the development of privacy-preserving fraud detection mechanisms, and the utilization of data analytics and visualization techniques for improved detection and investigation. The paper concludes by emphasizing the importance of continuous research and development in fraud detection for blockchain-based systems. As blockchain adoption expands across industries, it is crucial to enhance the security and trustworthiness of these systems by effectively detecting and preventing fraud. Future research directions and potential challenges are also discussed, encouraging further exploration of this vital area of study.
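
Of the techniques the paper surveys, anomaly detection is perhaps the easiest to show concretely. The sketch below applies scikit-learn's IsolationForest to synthetic transaction features; the feature names (transfer value, gas price, transactions per hour) and the contamination rate are illustrative assumptions, not taken from the paper.

```python
# Illustrative only: unsupervised anomaly detection over synthetic on-chain
# transaction features; all feature distributions here are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: transfer value, gas price, transactions per hour (assumed features)
normal = rng.normal(loc=[1.0, 50.0, 5.0], scale=[0.5, 10.0, 2.0], size=(1000, 3))
# A handful of suspicious bursts, e.g. rapid high-value transfers
fraud = rng.normal(loc=[50.0, 200.0, 100.0], scale=[5.0, 20.0, 10.0], size=(10, 3))
X = np.vstack([normal, fraud])

# contamination is the assumed share of anomalous transactions
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)           # -1 = flagged anomaly, 1 = normal
print("flagged:", int((labels == -1).sum()), "of", len(X))
```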

Research Paper

Machine Learning for Detecting Substance Use Behaviors from Wearable Biosensor Data Streams: A Survey and Outlook for 5G/6G Connectivity

Dr. Ushaa Eswaran*, Vishal Eswaran**, Keerthna Murali***, Vivek Eswaran****
* Department of Electronics and Communications Engineering, Indira Institute of Technology and Sciences, Idupur, Andhra Pradesh, India.
** CVS Health Centre, Dallas, Texas, United States.
*** Dell Technologies, Austin, Texas, United States.
**** Medallia Inc, Austin, Texas, United States.
Eswaran, U., Eswaran, V., Murali, K., and Eswaran, V. (2023). Machine Learning for Detecting Substance Use Behaviors from Wearable Biosensor Data Streams: A Survey and Outlook for 5G/6G Connectivity. i-manager's Journal on Software Engineering, 18(1), 56-63. https://doi.org/10.26634/jse.18.1.20177

Abstract

Wearable biosensors allow continuous monitoring of physiological and behavioral signals associated with substance use disorders. Nevertheless, generating clinically meaningful insights requires novel machine learning techniques capable of handling high-velocity, noisy data streams. This paper surveys existing algorithms proposed for detecting substance use from wearables, highlighting key trends such as deep learning approaches and personalization. Ongoing challenges related to ground-truth labeling, model interpretation, and computational efficiency are also discussed. The advent of 5G/6G networks is poised to transform this field by facilitating coordinated sensing across multiple wearables and enabling advanced on-device analytics. This work consolidates progress in wearable analytics for substance use monitoring and identifies open problems to steer future research. An experimental evaluation on wearable data from 20 participants shows that a personalized CNN model achieved the best performance in detecting cannabis use events, with 89% precision, 83% recall, and a 0.94 AUROC, outperforming classical machine learning approaches such as random forests.
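
A minimal sketch of the kind of personalized 1-D CNN the abstract reports is shown below. The window length, channel count, and layer sizes are assumptions the paper does not specify, and the model is fit on synthetic stand-in data for a single participant; the sketch also shows how the reported metrics (precision, recall, AUROC) are computed.

```python
# Not the authors' model: architecture and data shapes are assumed for illustration.
import numpy as np
from tensorflow.keras import layers, Sequential
from sklearn.metrics import precision_score, recall_score, roc_auc_score

WINDOW, CHANNELS = 300, 4   # assumed: 300 samples per window, 4 biosensor channels

model = Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(16, 5, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),  # P(use event in this window)
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Synthetic stand-in for one participant's labeled sensor windows;
# fitting per participant is what makes the model "personalized"
X = np.random.rand(200, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, 2, size=200)
model.fit(X, y, epochs=3, batch_size=16, verbose=0)

p = model.predict(X, verbose=0).ravel()
pred = (p >= 0.5).astype(int)
print(f"precision={precision_score(y, pred):.2f} "
      f"recall={recall_score(y, pred):.2f} AUROC={roc_auc_score(y, p):.2f}")
```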