i-manager's Journal on Image Processing (JIP)


Volume 9 Issue 2 April - June 2022

Research Paper

Forensic Investigation with Digital Evidence

Ch. Swapnapriya*, Jose Moses Gummadi**, Kurama Venkata Ramana***
*,*** Department of Computer Science and Engineering, University College of Engineering, Jawaharlal Nehru Technological University, Kakinada, Andhra Pradesh, India.
** Department of Computer Science and Engineering, Malla Reddy Engineering College, Hyderabad, Telangana, India.
Priya, C. S., Gummadi, J. M., and Ramana, K. V. (2022). Forensic Investigation with Digital Evidence. i-manager’s Journal on Image Processing, 9(2), 1-11. https://doi.org/10.26634/jip.9.2.18953

Abstract

To establish the time of death, this work focuses on the detection and identification of body fluids at the crime scene, which is an important forensic model. There are many methods for estimating the time of death; one such post-mortem indicator in medicine is livor mortis. This paper is mainly focused on forensic medicine, determining the approximate time of death by collecting various bodily fluids. A number of different factors affect the post-mortem appearance of the body. Here, as a first step in the investigation, photographs are taken to ascertain the change in skin color after death caused by the accumulation of blood. Skin color is determined by hemoglobin, the red chromophore present in red blood cells, and by melanin. The photographed regions are separated and grouped by a digital technique called 3D Filter Block Similarity Clustering (3-DFBSC), which suits the proposed system because it makes clustering straightforward. With these digital methods, cluster analysis plays an important role in the image matching analysis with which the time of death can be established.
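
The abstract does not spell out the 3-DFBSC procedure itself; as a purely illustrative stand-in, the sketch below groups fixed-size blocks of a skin photograph by their mean colour using k-means, an assumed substitute for the paper's block-similarity clustering.

```python
# Purely illustrative stand-in: the paper's 3-DFBSC algorithm is not described
# in the abstract, so fixed-size image blocks are grouped here by mean colour
# with k-means instead.
import numpy as np
from sklearn.cluster import KMeans

def block_colour_clusters(image, block=16, n_clusters=3):
    """Cluster non-overlapping blocks of an RGB image (H x W x 3 uint8 array,
    e.g. a post-mortem skin photograph) by their mean colour.
    Returns a (rows, cols) label map with one cluster label per block."""
    h, w, _ = image.shape
    rows, cols = h // block, w // block
    feats = []
    for r in range(rows):
        for c in range(cols):
            patch = image[r * block:(r + 1) * block, c * block:(c + 1) * block]
            feats.append(patch.reshape(-1, 3).mean(axis=0))  # mean R, G, B of the block
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.array(feats))
    return labels.reshape(rows, cols)
```

Any off-the-shelf clustering could replace k-means here; the point is only that block-level colour statistics are grouped before the image matching step.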

Research Paper

Facial Emotion Recognition using Hybrid Features-Novel Leaky Rectified Triangle Linear Unit Activation Function Based Deep Convolutional Neural Network

Anjani Suputri Devi D.*, Suneetha Eluri**
*-** Department of Computer Science and Engineering, Jawaharlal Nehru Technological University, Kakinada, Andhra Pradesh, India.
Devi, D. A. S., and Eluri, S. (2022). Facial Emotion Recognition using Hybrid Features-Novel Leaky Rectified Triangle Linear Unit Activation Function Based Deep Convolutional Neural Network. i-manager’s Journal on Image Processing, 9(2), 12-27. https://doi.org/10.26634/jip.9.2.18968

Abstract

Facial Expression Recognition (FER) is an important topic that is used in many areas. FER categorizes facial expressions according to human emotions. Many networks have been designed for facial emotion recognition, but they still have problems such as performance degradation and low classification accuracy. To achieve greater classification accuracy, this paper proposes a new Leaky Rectified Triangle Linear Unit (LRTLU) activation function for a Deep Convolutional Neural Network (DCNN). The input images are pre-processed using the new Adaptive Bilateral Filter Contourlet Transform (ABFCT) filtering algorithm. The face is then detected in the filtered image using the Chehra face detector. From the detected face image, facial landmarks are extracted using a cascading regression tree, and important features are extracted based on the detected landmarks. The extracted feature set is then passed as input to the Leaky Rectified Triangle Linear Unit Activation Function Based Deep Convolutional Neural Network (LRTLU-DCNN), which classifies the input image into six emotions: happiness, sadness, neutrality, anger, disgust, and surprise. The proposed method is evaluated on the Extended Cohn-Kanade (CK+) and Japanese Female Facial Expression (JAFFE) datasets, and it achieves a classification accuracy of 99.67347% on CK+ and 99.65986% on JAFFE.
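
The exact form of the LRTLU activation is not given in the abstract; the sketch below therefore uses a standard LeakyReLU as a placeholder inside a small Keras DCNN that maps a pre-processed face crop to the six emotion classes, with layer sizes chosen purely for illustration.

```python
# Sketch only: the LRTLU activation is not defined in the abstract, so a
# standard LeakyReLU is used as a placeholder; layer sizes and input shape
# are assumptions, not the paper's architecture.
from tensorflow.keras import layers, models

def build_fer_cnn(input_shape=(64, 64, 1), n_classes=6):
    """Small DCNN mapping a pre-processed face crop to six emotion classes."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same"),
        layers.LeakyReLU(0.1),          # placeholder for the paper's LRTLU
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same"),
        layers.LeakyReLU(0.1),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128),
        layers.LeakyReLU(0.1),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```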

Research Paper

Geological Map Feature Extraction using Object Detection Techniques - A Comparative Analysis

P. A. N. Dilhan*, R. Siyambalapitiya**
*-** Department of Statistics and Computer Science, Faculty of Science, University of Peradeniya, Peradeniya, Sri Lanka.
Dilhan, P. A. N., and Siyambalapitiya, R. (2022). Geological Map Feature Extraction using Object Detection Techniques - A Comparative Analysis. i-manager’s Journal on Image Processing, 9(2), 28-36. https://doi.org/10.26634/jip.9.2.18916

Abstract

Conducting a geological field survey at the initial stage is an important step in geo-oriented projects and construction, and better, more accurate solutions are only possible with field analysis and proper modeling. The geological modeling process takes a long time, depending especially on the area of interest, and it is inefficient to digitize 2D geological maps with traditional software that relies on manual user interaction. This paper proposes a state-of-the-art feature detection methodology for detecting geological features on high-resolution maps. With the development of efficient deep learning algorithms and the improvement of hardware systems, the accuracy of detecting specific objects in digital images, such as human facial features, has reached more than 90%. However, current object detection models based on convolutional neural networks cannot be applied directly to high-resolution geological maps because conventional object detection solutions limit the input image size, mostly due to hardware resources. This paper therefore uses a sliding window method for detecting geological feature characters. Detection models are trained using transfer learning with You Only Look Once-v3 (YOLO-v3), Single Shot Multi-Box Detector (SSD), Faster Region-based Convolutional Neural Network (Faster-RCNN), and Single Shot Multi-Box Detector_RetinaNet (SSD_RetinaNet). All models provide competitive success rates, with an average precision (AP) of 0.96 for YOLOv3, 0.88 AP for EfficientNet, 0.92 AP for Faster-RCNN, and 0.97 AP for SSD_RetinaNet. YOLOv3 achieved the best detection, outperforming SSD in terms of F1 score and recall. Since the input size of the detection models is limited, a sliding window algorithm is used to split the high-resolution map images into tiles. The final detected strike features are provided as a digital dataset that can be used for further manipulation. Thus, Convolutional Neural Network (CNN) based object detection along with a sliding window protocol can be applied to manual map digitization processes to provide instantaneous digitized data with higher accuracy. This automated process can be used to detect small features and digitize other high-resolution drawings.
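
A minimal sketch of the sliding-window step is shown below; the tile size, overlap, and detector interface (any callable returning [x1, y1, x2, y2, score, class] boxes in tile coordinates) are assumptions rather than the paper's exact settings.

```python
# Minimal sliding-window sketch: tile size, overlap, and the detector interface
# are assumptions, not the paper's exact settings.
def sliding_window_detect(image, detector, tile=640, overlap=128):
    """Run a detector on overlapping tiles of a high-resolution map image and
    shift each detection back into full-image coordinates."""
    h, w = image.shape[:2]
    step = tile - overlap
    detections = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            patch = image[y:y + tile, x:x + tile]
            # detector returns boxes in tile coordinates; offset them by (x, y)
            for x1, y1, x2, y2, score, cls in detector(patch):
                detections.append([x1 + x, y1 + y, x2 + x, y2 + y, score, cls])
    return detections
```

Detections that straddle tile boundaries are duplicated across overlapping tiles, so a non-maximum suppression pass would normally follow before exporting the digital dataset.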

Research Paper

Blur Image Detection and Classification using Resnet-50

Bhuvaneswari Polavarapu*, Hema Mamidipaka**
*-** Department of Electronics and Communication Engineering, JNTU-GV College of Engineering, Vizianagaram, Dwarapudi, Andhra Pradesh, India.
Polavarapu, B., and Mamidipaka, H. (2022). Blur Image Detection and Classification using Resnet-50. i-manager’s Journal on Image Processing, 9(2), 37-43. https://doi.org/10.26634/jip.9.2.18875

Abstract

Blur classification is important for blind image restoration. It is difficult to detect blur in a single image without any additional information. This paper uses edge detection methods and a deep convolutional neural network, ResNet-50, to classify the type of blur in images. The ResNet model effectively reduces the vanishing gradient problem and uses skip connections to train the network. Typically, images are subject to defocus blur and motion blur, which are caused by an incorrect depth of field and the movement of objects during capture. The dataset used here is the blur dataset from Kaggle, which consists of sharp images, defocus-blurred images, and motion-blurred images. In this paper, edge detection is applied to the images using Laplace, Sobel, Prewitt, and Roberts filters, and features such as the mean, variance, and maximum signal-to-noise ratio of the filter responses are derived and used to train a classification algorithm.
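
A minimal sketch of the hand-crafted feature step is shown below, assuming the scikit-image implementations of the four filters and using the maximum of each response as a stand-in for the maximum signal-to-noise ratio mentioned above.

```python
# Sketch of the hand-crafted features described above; the maximum of each
# filter response stands in for the "maximum signal-to-noise ratio" mentioned
# in the abstract.
import numpy as np
from skimage.filters import laplace, prewitt, roberts, sobel

def blur_features(gray):
    """Mean, variance and maximum of four edge-filter responses computed on a
    grayscale image; blurred images tend to give weaker edge responses."""
    feats = []
    for edge_filter in (laplace, sobel, prewitt, roberts):
        response = np.abs(edge_filter(gray))
        feats.extend([response.mean(), response.var(), response.max()])
    return np.array(feats)
```

The resulting feature vectors would then be used to train the classifier described above.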

Research Paper

Implementation of Haze Removal Algorithm to Enhance Low Light Images

K. Maheswari*, R. Charan Kadapa**
*-** Department of Electronics and Communication Engineering, Sanskrithi School of Engineering, Puttaparthi, Andhra Pradesh, India.
Maheswari, K., and Kadapa, R. C. (2022). Implementation of Haze Removal Algorithm to Enhance Low Light Images. i-manager’s Journal on Image Processing, 9(2), 44-49. https://doi.org/10.26634/jip.9.2.18796

Abstract

Images captured in foggy atmospheric conditions appear hazy, with visually degraded visibility that obscures image quality. Pixel-based metrics alone do not guarantee clear images. The restored image is then used as input for low-level computer vision tasks such as segmentation. To improve this, this paper introduces a new end-to-end approach to de-hazing an image that preserves the visual quality of the generated images. It then goes one step further and explores the possibility of using the network to perform semantic segmentation with U-Net. A U-Net is built and used in this model to further improve the quality of the output.
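
The paper's exact network is not specified in the abstract; the sketch below is a minimal U-Net-style encoder-decoder in Keras, with depth, channel counts, and input size chosen only for illustration, whose output head can be read either as a de-hazed RGB image or, with a different channel count and loss, as a segmentation map.

```python
# Minimal U-Net-style sketch: depth, channel counts, and input size are
# assumptions rather than the paper's exact architecture.
from tensorflow.keras import layers, Model

def build_unet(input_shape=(256, 256, 3), out_channels=3):
    inp = layers.Input(shape=input_shape)
    # Encoder: two down-sampling stages
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)
    # Decoder with skip connections back to the encoder features
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)
    out = layers.Conv2D(out_channels, 1, activation="sigmoid")(c4)
    return Model(inp, out)
```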