i-manager's Journal on Pattern Recognition (JPR)


Volume 11 Issue 1 January - June 2024

Research Paper

Brain Tumour Detection using Deep Learning Technique

R. Meena*, S. Sharmila**, E. Sheela***, K. Thamizharasi****
*-**** Department of Biomedical Engineering, Gnanamani College of Technology, Namakkal, Tamil Nadu, India.
Meena, R., Sharmila, S., Sheela, E., and Thamizharasi, K. (2024). Brain Tumour Detection using Deep Learning Technique. i-manager’s Journal on Pattern Recognition, 11(1), 1-6. https://doi.org/10.26634/jpr.11.1.20824

Abstract

Recent advancements in medical imaging technology, such as the integration of the InceptionV3 deep learning architecture with MRI scanning, have revolutionized brain tumor detection. These models analyze MRI images rapidly and accurately, aiding in the precise identification of potential tumors. This integration enhances the efficiency of radiologists, enabling timely interventions and improving patient outcomes. The synergy between MRI technology and deep learning algorithms marks a significant step forward in neurology, promising more personalized and effective care for patients with brain tumors. Ongoing innovation in medical imaging and AI holds great potential for further improving diagnostic accuracy and treatment effectiveness.
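The abstract stops short of implementation detail, so the following is a minimal sketch, not the authors' code, of how an InceptionV3-based MRI classifier is commonly assembled with Keras transfer learning; the data paths, the binary tumor/no-tumor labelling, and the training settings are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): InceptionV3 transfer learning for
# binary tumor / no-tumor classification of MRI slices. Assumes images are
# pre-sorted into data/mri/train/<class>/ folders (illustrative paths).
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)  # InceptionV3's native input resolution

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/mri/train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze ImageNet features; fine-tune later if needed

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # InceptionV3 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),     # tumor probability
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```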

Research Paper

Hand Gesture Recognition using Python and OpenCV

Jyothi Lekshmi K. S.*, T. Brindha**
*-** Department of Information Technology, Noorul Islam Centre for Higher Education, Kumaracoil, Tamil Nadu, India.
Lekshmi, K. S. J., and Brindha, T. (2024). Hand Gesture Recognition using Python and OpenCV. i-manager’s Journal on Pattern Recognition, 11(1), 7-12. https://doi.org/10.26634/jpr.11.1.20863

Abstract

Hand gesture recognition has become increasingly popular as a practical means of enhancing human-computer interaction, especially with its widespread adoption in gaming devices such as the Xbox and PS4, as well as in laptops and smartphones. The technology finds applications in various fields, including accessibility support, crisis management, and medicine. Existing systems are typically built with Java and a comprehensive database, and are presented together with the evolutionary methods used, precise descriptions, and the outputs and tests conducted to refine the software artifact. The proposed system sits at the intersection of machine learning and image processing, utilizing different APIs and tools to streamline processes and enhance customization. It is developed using OpenCV and Python, with OpenCV handling the image processing and Python the machine learning. The result is a lighter system with less complex code and a reduced database footprint, which enables it to run efficiently even on mini computers without compromising the user experience, leading to a cost-effective solution capable of handling a variety of tasks.
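As a rough illustration of the kind of lightweight OpenCV pipeline the abstract alludes to (not the authors' system), the sketch below segments the hand by skin color and estimates a simple gesture by counting convexity defects; the HSV range and depth threshold are illustrative and camera-dependent.

```python
# Minimal sketch (not the authors' system): classic OpenCV hand-gesture pipeline.
# Segments the hand by skin color and counts extended fingers from convexity defects.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))   # rough skin-tone range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)            # assume largest blob is the hand
        hull = cv2.convexHull(hand, returnPoints=False)
        defects = cv2.convexityDefects(hand, hull)
        fingers = 0
        if defects is not None:
            for s, e, f, depth in defects[:, 0]:
                if depth > 10000:                            # deep valleys ~ gaps between fingers
                    fingers += 1
        cv2.putText(frame, f"fingers: {fingers + 1 if fingers else 0}",
                    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

    cv2.imshow("gesture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```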

Research Paper

Medical Image Fusion using Multi Scale Guided Filtering

Srikanth M. V.*, Suneel Kumar A.**, Nagasirisha B.***, Venkata Lakshmi T.****
*-** Usha Rama College of Engineering and Technology, Andhra Pradesh, India.
***-**** Seshadri Rao Gudlavalleru Engineering College, Andhra Pradesh, India.
Srikanth, M. V., Kumar, A. S., Nagasirisha, B., and Lakshmi, T. V. (2024). Medical Image Fusion using Multi Scale Guided Filtering. i-manager’s Journal on Pattern Recognition, 11(1), 13-23. https://doi.org/10.26634/jpr.11.1.20735

Abstract

Multi-modal image fusion is a key component of computer vision applications such as satellite remote sensing and medical diagnosis. Various multi-modal image fusion techniques exist, each with its own advantages and disadvantages. This paper proposes a new method based on multi-scale guided filtering. Each source image is first divided into coarse and fine layers at various scales using a guided filter. To fuse the coarse and fine layers, two different saliency maps are used: an energy saliency map for the coarse layers and a modified spatial frequency energy saliency map for the fine layers. Simulation results show that the proposed technique outperforms other state-of-the-art techniques in quantitative quality evaluations. All simulations are carried out on a standard brain atlas database.
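The decomposition-and-fusion scheme the abstract describes can be sketched at a single scale as follows; this is a simplified illustration under assumed settings (self-guided filtering, radius, eps, window sizes), not the authors' exact multi-scale method, and it relies on the guidedFilter available in opencv-contrib-python.

```python
# Minimal sketch (not the authors' exact method): single-scale guided-filter fusion.
# Each source is split into a coarse (base) and fine (detail) layer; the coarse
# layers are blended with local-energy weights and the fine layers are selected
# by a spatial-frequency-style activity measure. Parameters are illustrative.
import cv2
import numpy as np

def local_energy(img, ksize=7):
    """Local mean of squared intensities over a small window."""
    return cv2.boxFilter(img * img, -1, (ksize, ksize))

def spatial_activity(img, ksize=7):
    """Spatial-frequency-style activity: local mean of squared row/column gradients."""
    dx = np.diff(img, axis=1, prepend=img[:, :1])
    dy = np.diff(img, axis=0, prepend=img[:1, :])
    return cv2.boxFilter(dx * dx + dy * dy, -1, (ksize, ksize))

def fuse(img_a, img_b, radius=8, eps=0.01):
    a = img_a.astype(np.float32) / 255.0
    b = img_b.astype(np.float32) / 255.0

    # Coarse/fine decomposition: the image itself serves as the guide.
    base_a = cv2.ximgproc.guidedFilter(a, a, radius, eps)
    base_b = cv2.ximgproc.guidedFilter(b, b, radius, eps)
    detail_a, detail_b = a - base_a, b - base_b

    # Coarse layers: soft weights from local energy.
    ea, eb = local_energy(base_a), local_energy(base_b)
    w = ea / (ea + eb + 1e-8)
    fused_base = w * base_a + (1.0 - w) * base_b

    # Fine layers: keep the detail coefficient with the higher activity.
    choose_a = spatial_activity(detail_a) >= spatial_activity(detail_b)
    fused_detail = np.where(choose_a, detail_a, detail_b)

    return np.clip((fused_base + fused_detail) * 255.0, 0, 255).astype(np.uint8)

# Illustrative usage with co-registered grayscale slices (placeholder paths).
ct = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE)
mr = cv2.imread("mr_slice.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("fused.png", fuse(ct, mr))
```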

Research Paper

Object Detection for the Visually Impaired

Brightson Chimwanga*
Department of Computer Science & Engineering, DMI-St. John the Baptist University, Lilongwe, Malawi.
Chimwanga, B. (2024). Object Detection for the Visually Impaired. i-manager’s Journal on Pattern Recognition, 11(1), 24-29. https://doi.org/10.26634/jpr.11.1.21049

Abstract

This paper presents the design and development of a mobile application, built using Flutter, that leverages object detection to enhance the lives of visually impaired individuals. The application addresses a crucial challenge faced by this community: the lack of real-time information about their surroundings. A solution is proposed that utilizes pre-trained machine learning models, potentially through TensorFlow Lite for on-device processing, to identify objects within the user's field of view as captured by the smartphone camera. The application goes beyond simple object recognition; detected objects are translated into natural language descriptions through text-to-speech functionality, providing crucial auditory cues about the environment. This real-time information stream empowers users to navigate their surroundings with greater confidence and independence. Accessibility is a core design principle of this work. The user interface will be designed for compatibility with screen readers, ensuring seamless interaction for users who rely on assistive technologies. Haptic feedback mechanisms will be incorporated to provide non-visual cues and enhance the user experience. The ultimate goal of this work is to create a user-friendly and informative application that empowers visually impaired individuals to gain greater independence in their daily lives. The application has the potential to improve spatial awareness, foster a sense of security, and promote overall inclusion within society.
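Although the paper's application is built with Flutter, the detect-then-speak loop it describes can be prototyped on a desktop in Python. The sketch below is such an illustration, not the paper's app: it assumes a pre-trained quantized COCO SSD MobileNet TFLite model with the common (boxes, classes, scores, count) output layout, placeholder model and label paths, and pyttsx3 for offline text-to-speech.

```python
# Minimal sketch (not the paper's Flutter app): desktop prototype of the
# detect-then-speak loop using a pre-trained SSD MobileNet TFLite model.
# Output-tensor order assumes the common COCO SSD export; other models differ.
import cv2
import numpy as np
import pyttsx3
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="ssd_mobilenet.tflite")  # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()
labels = [line.strip() for line in open("coco_labels.txt")]           # placeholder path

tts = pyttsx3.init()
cap = cv2.VideoCapture(0)
last_spoken = set()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = inp["shape"][1], inp["shape"][2]
    img = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), (w, h))
    # Assumes a quantized (uint8) model; float models need normalized float32 input.
    interpreter.set_tensor(inp["index"], np.expand_dims(img, 0).astype(np.uint8))
    interpreter.invoke()

    classes = interpreter.get_tensor(outs[1]["index"])[0]
    scores = interpreter.get_tensor(outs[2]["index"])[0]

    names = {labels[int(c)] for c, s in zip(classes, scores) if s > 0.5}
    if names and names != last_spoken:
        tts.say("I can see " + ", ".join(sorted(names)))   # auditory cue for the user
        tts.runAndWait()
        last_spoken = names
```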

Research Paper

A Framework for Wood Quality Assessment using DenseNet Algorithm

A. Dhanamathi*, K. Ajith**, V. Balamurugan***, S. Sridhar****
*-**** Roever Engineering College, Perambalur, Tamil Nadu, India.
Dhanamathi, A., Ajith, K., Balamurugan, V., and Sridhar, S. (2024). A Framework for Wood Quality Assessment using DenseNet Algorithm. i-manager’s Journal on Pattern Recognition, 11(1), 30-36. https://doi.org/10.26634/jpr.11.1.21060

Abstract

Wood defect detection is a critical aspect of quality control in the woodworking industry. This work introduces Deep Wood Inspect, a pioneering system that leverages the capabilities of deep learning for the precise identification and classification of defects in wooden materials. The proposed methodology utilizes densely connected Convolutional Neural Networks (CNNs), specifically DenseNet, to analyze high-resolution images of wood surfaces, providing an automated and efficient solution for defect detection. By integrating advanced image processing techniques with machine learning algorithms, Deep Wood Inspect not only enhances the accuracy of defect identification but also accelerates the inspection process, reducing manual labor and minimizing human error. The system's adaptability to various types of wood and defect categories further contributes to its robustness, making it a valuable tool for both large-scale manufacturers and smaller woodworking enterprises seeking to uphold high standards of quality control.
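As a hedged illustration of the kind of DenseNet-based classifier the abstract describes (not the authors' Deep Wood Inspect system), the sketch below fine-tunes DenseNet121 on wood-surface images sorted into assumed defect-class folders; the paths, class names, and training schedule are placeholders.

```python
# Minimal sketch (not the paper's system): DenseNet121 classifier for wood-surface
# defect categories. Folder layout wood/train/<defect_class>/ (e.g. knot, crack,
# stain, sound) is an illustrative assumption.
import tensorflow as tf
from tensorflow.keras import layers

train_ds = tf.keras.utils.image_dataset_from_directory(
    "wood/train", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
])

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                       # phase 1: train only the new head

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.densenet.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Phase 2: unfreeze the backbone and fine-tune at a low learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```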