i-manager's Journal on Image Processing (JIP)


Volume 9 Issue 3 July - September 2022

Research Paper

Vehicular Detection Technique using Image Processing

Sheryl Radley*, Julia Punitha Malar Dhas**, S. Stewart Kirubakaran***
* Meenakshi College of Engineering, Chennai, Tamil Nadu, India.
**-*** Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India.
Radley, S., Dhas, J. P. M., and Kirubakaran, S. S. (2022). Vehicular Detection Technique using Image Processing. i-manager’s Journal on Image Processing, 9(3), 1-9. https://doi.org/10.26634/jip.9.3.19142

Abstract

The increasing traffic volume poses one of the greatest challenges in today's traffic research. Knowing the road traffic density is important for effective traffic management and Intelligent Transportation Systems (ITS). The traditional method of detecting vehicles from video is image subtraction, which is not effective because it is susceptible to changing levels of brightness. Hence, the proposed algorithm detects vehicles from an image in a more precise manner. The proposed work uses image processing techniques such as image acquisition, image enhancement, and image segmentation to retrieve the image from the source and enhance its contrast and brightness for successful surveillance of the transportation system.
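
As a rough illustration of the acquisition-enhancement-segmentation pipeline summarized above, the following sketch uses OpenCV with histogram equalization, Otsu thresholding, and a minimum-area filter; these specific choices (and the min_area parameter) are assumptions made for illustration, not the authors' exact algorithm.

import cv2

def detect_vehicles(image_path, min_area=500):
    # Image acquisition: load a single traffic frame from disk.
    frame = cv2.imread(image_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Image enhancement: improve contrast, then suppress noise.
    enhanced = cv2.equalizeHist(gray)
    blurred = cv2.GaussianBlur(enhanced, (5, 5), 0)

    # Image segmentation: threshold and extract candidate vehicle blobs.
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Keep only sufficiently large regions as vehicle candidates.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]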

Research Paper

Fruit Recognition using Image Processing

Pratibha Sahu*, Abhishek Dewangan**, Snehlata Mandal***
*-*** Department of Computer Science and Engineering, Shri Shankaracharya Group of Institutions, Chhattisgarh, India.
Sahu, P., Dewangan, A., and Mandal, S. (2022). Fruit Recognition using Image Processing. i-manager’s Journal on Image Processing, 9(3), 10-16. https://doi.org/10.26634/jip.9.3.19047

Abstract

Manually classifying and evaluating produce is difficult; in particular, counting ripe fruits and assessing their quality by hand is labor-intensive. Increasing labor costs, a shortage of skilled workers, and storage costs are just some of the major challenges associated with fruit production, marketing, and storage. This study proposes an effective method for localizing all clearly visible objects, or portions of an object, in an image while requiring less memory and processing resources. The main obstacles to object detection, such as object overlap, background noise, and low resolution, which prevent better results, are overcome by processing every input image. The study also builds an enhanced classification and recognition algorithm based on convolutional neural networks, which has been shown to perform better than baseline studies.
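
A minimal sketch of a CNN-based fruit classifier of the kind described above is given below, assuming TensorFlow/Keras and images resized to 100x100 RGB; the layer sizes and input resolution are illustrative assumptions, not the architecture reported in the paper.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_fruit_cnn(num_classes):
    model = models.Sequential([
        layers.Input(shape=(100, 100, 3)),          # assumed input size
        layers.Conv2D(32, 3, activation="relu"),    # low-level features
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),    # higher-level features
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # fruit classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model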

Review Paper

A Review on Image Captioning System from Artificial Intelligence, Machine Learning and Deep Learning Techniques

Revathi B. S.*, A. Meena Kowshalya**
*-** Government College of Technology, Coimbatore, Tamil Nadu, India.
Revathi, B. S., and Kowshalya, A. M. (2022). A Review on Image Captioning System from Artificial Intelligence, Machine Learning and Deep Learning Techniques. i-manager’s Journal on Image Processing, 9(3), 17-33. https://doi.org/10.26634/jip.9.3.19054

Abstract

Image captioning is the process of generating textual descriptions of an image, which need to be syntactically and semantically correct. This paper extensively surveys the literature from the advent of Artificial Intelligence, through the Machine Learning pathway and early Deep Learning, to the current Deep Learning methodology for image captioning. The survey also aims to develop a system that predicts captions for given images with higher accuracy by combining the results of different Deep Learning techniques. The model is based on a neural network consisting of a vision CNN followed by a language-generating RNN, and it produces a complete natural-language sentence from an input image. The state of the art is assessed by comparing three different encoder-decoder models: the BLEU score of the CNN-LSTM model on the Flickr8k dataset is 0.44, of the CNN-LSTM model with word embeddings on Flickr8k is 0.68, and of the CNN-GRU model with visual attention on the MSCOCO dataset is 0.86.
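
The CNN-LSTM encoder-decoder family compared above can be sketched as a simple merge model in Keras; the feature dimension, embedding size, and layer widths below are illustrative assumptions rather than the configurations evaluated in the survey.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_captioner(vocab_size, max_len, feat_dim=2048, embed_dim=256):
    # Encoder branch: pre-extracted features from a vision CNN.
    img_in = layers.Input(shape=(feat_dim,))
    img_vec = layers.Dense(embed_dim, activation="relu")(img_in)

    # Decoder branch: partial caption fed through an embedding and an LSTM.
    seq_in = layers.Input(shape=(max_len,))
    seq_emb = layers.Embedding(vocab_size, embed_dim, mask_zero=True)(seq_in)
    seq_vec = layers.LSTM(embed_dim)(seq_emb)

    # Merge image and language representations and predict the next word.
    merged = layers.add([img_vec, seq_vec])
    hidden = layers.Dense(embed_dim, activation="relu")(merged)
    out = layers.Dense(vocab_size, activation="softmax")(hidden)

    model = Model(inputs=[img_in, seq_in], outputs=out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model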

Review Paper

Skin Cancer Detection using Machine Learning Techniques: A Review

Mansi Mishra*, R. K. Khare**
*-** Department of Computer Science and Engineering, Shri Shankaracharya Technical Campus, Chhattisgarh, India.
Mishra, M., and Khare, R. K. (2022). Skin Cancer Detection using Machine Learning Techniques: A Review. i-manager’s Journal on Image Processing, 9(3), 34-40. https://doi.org/10.26634/jip.9.3.19074

Abstract

Skin cancer is one of the most common types of cancer worldwide, accounting for about one-third of all diagnoses. Unrepaired DNA breaks in skin cells, which result in genetic flaws or mutations, are the primary cause of skin cancer. Early detection of skin cancer signs is necessary because of the rising incidence, high death rate, and expensive medical treatments. Given how dangerous the disease is, researchers have created a number of early detection methods. Skin cancer is detected, and benign lesions are distinguished from melanoma, using lesion features such as symmetry, color, size, and shape. Skin lesions are organized in layers, and dermatologists take this into account when making a diagnosis. Convolutional Neural Networks (CNNs) have outperformed even board-certified dermatologists, and machine-assisted methods for detecting cancer are also more efficient. Deep learning is an artificial intelligence technique that simulates the way the human brain organizes data and forms decision-making patterns. This review gives a thorough analysis of deep learning methods for the early detection of skin cancer.
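
A common pattern in the literature reviewed here is transfer learning from a pretrained CNN to a binary benign-versus-melanoma classifier; the sketch below assumes TensorFlow/Keras with a MobileNetV2 backbone and a 224x224 input, choices made purely for illustration.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_lesion_classifier():
    # Pretrained ImageNet backbone used as a fixed feature extractor.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # benign (0) vs. melanoma (1)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model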

Review Paper

A Review on Music Recommendations Based on Facial Expression

Yash Kale*, Sandeep Maurya**, Anisha Prajapati***
*-*** Department of Computer Engineering, Thakur College of Engineering and Technology, Mumbai, India.
Kale, Y., Maurya, S., and Prajapati, A. (2022). A Review on Music Recommendations Based on Facial Expression. i-manager’s Journal on Image Processing, 9(3), 41-47. https://doi.org/10.26634/jip.9.3.19012

Abstract

Deciding which music to listen to from the huge collection of existing options is often confusing. Depending on the user's mood, multiple suggestion frameworks are available for areas such as music, food, and shopping. The main purpose of this music recommendation system is to provide users with suggestions based on their tastes. By analysing the user's facial expressions and emotions, it is possible to understand the user's current emotional and psychological state. Music and video are areas with a great opportunity to offer customers a wide range of choices, taking into account their preferences and recorded information. It is well known that people use facial expressions to express more clearly what they want to say and the context of their words. More than 60% of users believe that the song library has too many songs at any given time to find the one that needs to be played. A recommendation system could help users decide which music to listen to and reduce stress levels. Users do not have to waste time searching for songs; the system recognizes the track that best fits the user's mood and presents songs accordingly. Music plays a role in emotions, which in turn affect mood. Books, movies, and Television (TV) shows are other media, but unlike these, music conveys its message in pure moments. It can help when people feel low: listening to sad songs tends to lower the mood, while happy songs make people feel happier. This music recommendation model works mainly to improve the user's mood by detecting the user's facial expression and recommending preferred songs according to that expression. User images are captured using webcams; a picture is taken, and depending on the user's mood or feeling, appropriate songs from the user's playlist are displayed to meet the user's requirements.
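
The expression-to-playlist step described above can be sketched as a simple mapping; classify_emotion below is a hypothetical placeholder for any facial-expression model, and the playlists and emotion labels are illustrative assumptions rather than the reviewed systems' actual data.

import random

# Hypothetical mood-to-song mapping used only for illustration.
EMOTION_PLAYLISTS = {
    "happy": ["upbeat_pop.mp3", "dance_mix.mp3"],
    "sad": ["soothing_piano.mp3", "acoustic_calm.mp3"],
    "angry": ["slow_jazz.mp3", "ambient_chill.mp3"],
    "neutral": ["top_hits.mp3", "daily_mix.mp3"],
}

def recommend_songs(face_image, classify_emotion, k=2):
    # classify_emotion(face_image) is assumed to return a label such as
    # "happy", "sad", "angry", or "neutral" from a webcam frame.
    emotion = classify_emotion(face_image)
    playlist = EMOTION_PLAYLISTS.get(emotion, EMOTION_PLAYLISTS["neutral"])
    # Pick k songs matching (or intended to improve) the detected mood.
    return random.sample(playlist, min(k, len(playlist)))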