NON-INVASIVE NEONATAL GOLDEN HUE DETECTOR
Species Classification and Disease Identification Using Image Processing and Convolutional Neural Networks
A Novel Meta-Heuristic Jellyfish Optimizer for Detection and Recognition of Text from Complex Images
Rice Leaf Disease Detection Using Convolutional Neural Network
Comparative Analysis of the Usage of Machine Learning in Image Recognition
Identification of Volcano Hotspots Using the Resilient Back Propagation (RBP) Algorithm via Satellite Images
Data Hiding in Encrypted Compressed Videos for Privacy Information Protection
Improved Video Watermarking using Discrete Cosine Transform
Contrast Enhancement based Brain Tumour MRI Image Segmentation and Detection with Low Power Consumption
Denoising of Images by Wavelets and Contourlets using Bi-Shrink Filter
To establish the time of death, this work focuses on the detection and identification of body fluids at the crime scene, an important forensic task. Among the various methods for estimating the time of death, this paper relies on livor mortis, the post-mortem pooling of blood, and addresses forensic medicine by determining the approximate time of death from various bodily fluids. A number of different factors affect the post-mortem appearance of the body. As a first step in the investigation, photographs are taken to capture the change in skin color after death caused by the accumulation of blood. Skin color is determined by the hemoglobin present in red blood cells and by chromophores such as melanin. These color components are separated and grouped with a digital technique called 3D Filter Block Similarity Clustering (3-DFBSC), which suits the proposed system because clustering becomes straightforward. Within this digital pipeline, cluster analysis plays a central role in the image-matching methods used to establish the time of death.
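The abstract does not spell out the internals of 3-DFBSC. The following is a minimal sketch, assuming a simplified reading in which the post-mortem skin image is split into small blocks, each block is summarized by its mean color in a 3D (R, G, B) space, and similar blocks are grouped with k-means as a stand-in clustering step; the block size and number of clusters are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

def block_color_clusters(image, block=8, n_clusters=3):
    """Group image blocks by their mean color in 3D (R, G, B) space.

    Simplified stand-in for the 3-DFBSC step: regions of lividity
    (blood accumulation) should fall into different color clusters
    than normally colored skin.
    """
    h, w, _ = image.shape
    feats, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block].reshape(-1, 3)
            feats.append(patch.mean(axis=0))   # mean R, G, B of the block
            coords.append((y, x))
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.array(feats))
    return list(zip(coords, labels))           # block position -> cluster id

# Example: cluster a synthetic 256x256 RGB image into 3 color groups
if __name__ == "__main__":
    img = np.random.randint(0, 256, (256, 256, 3)).astype(float)
    print(block_color_clusters(img)[:5])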
Facial Expression Recognition (FER) is an important topic used in many areas. FER categorizes facial expressions according to human emotions. Most networks designed for facial emotion recognition still suffer from problems such as performance degradation and low classification accuracy. To achieve greater classification accuracy, this paper proposes a new Leaky Rectified Triangle Linear Unit (LRTLU) activation function for a Deep Convolutional Neural Network (DCNN). The input images are pre-processed using a new Adaptive Bilateral Filter Contourlet Transform (ABFCT) filtering algorithm. The face is then detected in the filtered image using the Chehra face detector. From the detected face image, facial landmarks are extracted using a cascade of regression trees, and important features are derived from the detected landmarks. The extracted feature set is then passed as input to the Leaky Rectified Triangle Linear Unit Activation Function Based Deep Convolutional Neural Network (LRTLU-DCNN), which classifies the input image into one of six emotions: happiness, sadness, neutrality, anger, disgust, and surprise. The proposed method is evaluated on the Extended Cohn-Kanade (CK+) and Japanese Female Facial Expression (JAFFE) datasets, achieving a classification accuracy of 99.67347% on CK+ and 99.65986% on JAFFE.
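The exact LRTLU formula and the LRTLU-DCNN architecture are not given in the abstract. The sketch below only shows how a custom activation of this kind could be wired into a small classification head over a landmark-derived feature vector; the LRTLU body (a leaky negative slope plus a cap on large positive activations), the layer sizes, and the 136-value landmark input are illustrative assumptions, not the authors' design.

import torch
import torch.nn as nn

class LRTLU(nn.Module):
    """Placeholder for the Leaky Rectified Triangle Linear Unit.

    Illustrative stand-in only: leaky response for negative inputs,
    positive inputs clipped at a fixed peak.
    """
    def __init__(self, negative_slope=0.01, peak=6.0):
        super().__init__()
        self.negative_slope = negative_slope
        self.peak = peak

    def forward(self, x):
        neg = self.negative_slope * torch.clamp(x, max=0.0)
        pos = torch.clamp(x, min=0.0, max=self.peak)
        return neg + pos

class EmotionHead(nn.Module):
    """Small classification head over landmark features for 6 emotions."""
    def __init__(self, n_features=136, n_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 256), LRTLU(),
            nn.Linear(256, 128), LRTLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Example: 68 facial landmarks -> 136 (x, y) values -> 6-way emotion logits
print(EmotionHead()(torch.randn(4, 136)).shape)  # torch.Size([4, 6])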
Conducting a geological field survey at the initial stage is an important step in geo-oriented projects and construction, so better and more accurate solutions are only possible with field analysis and proper modeling. The geological modeling process takes a long time, depending especially on the size of the area of interest, and digitizing 2D geological maps with traditional software that relies on manual user interaction is inefficient. This paper proposes a state-of-the-art feature-detection methodology for detecting geological features on high-resolution maps. With the development of efficient deep learning algorithms and improved hardware, the accuracy of detecting specific objects in digital images, such as human facial features, has exceeded 90%. However, current object detection models based on convolutional neural networks cannot be applied directly to high-resolution geological maps because of the input-image size limits of conventional detection solutions, which are mostly constrained by hardware resources. This paper therefore proposes a sliding-window method for detecting geological feature characters. Detection models are trained using transfer learning with You Only Look Once v3 (YOLO-v3), Single Shot Multi-Box Detector (SSD), Faster Region-based Convolutional Neural Network (Faster-RCNN), and Single Shot Multi-Box Detector_RetinaNet (SSD_RetinaNet). All models provide competitive success rates, with an average precision (AP) of 0.96 for YOLO-v3, 0.88 AP for EfficientNet, 0.92 AP for Faster-RCNN, and 0.97 AP for SSD_RetinaNet. YOLO-v3 outperformed SSD in terms of recall and F1 score. Since the input size of the detection models is limited, a sliding-window algorithm is used to split the high-resolution map images. The final detected strike features are provided as a digital dataset that can be used for further processing. Thus, Convolutional Neural Network (CNN) based object detection combined with a sliding-window protocol can be applied to manual map-digitization processes to provide instantly digitized data with higher accuracy. This automated process can also be used to detect small features and digitize other high-resolution drawings.
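A minimal sketch of the sliding-window protocol described above: the high-resolution map is split into overlapping tiles small enough for the detector's input limit, each tile is passed to a detector, and tile-local boxes are shifted back to global map coordinates. The detect_fn callback, window size, and stride are assumptions standing in for whichever trained model (YOLO-v3, SSD, Faster-RCNN, SSD_RetinaNet) is used; duplicate boxes near tile seams would normally be merged afterwards (e.g. with non-maximum suppression).

import numpy as np

def sliding_window_detect(image, detect_fn, win=1024, stride=768):
    """Run a tile-level detector over a high-resolution map image.

    detect_fn takes a win x win tile and returns boxes as
    (x1, y1, x2, y2, score) in tile coordinates. Overlapping tiles
    (stride < win) keep features near tile borders from being cut.
    """
    h, w = image.shape[:2]
    detections = []
    for y in range(0, max(h - win, 0) + 1, stride):
        for x in range(0, max(w - win, 0) + 1, stride):
            tile = image[y:y + win, x:x + win]
            for (x1, y1, x2, y2, score) in detect_fn(tile):
                # shift tile-local boxes back to global map coordinates
                detections.append((x1 + x, y1 + y, x2 + x, y2 + y, score))
    return detections

# Example with a dummy detector that returns one fixed box per tile
dummy = lambda tile: [(10, 10, 50, 50, 0.9)]
print(len(sliding_window_detect(np.zeros((4096, 4096)), dummy)))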
Blur classification is important for blind image restoration, and detecting blur in a single image without any additional information is difficult. This paper uses edge-detection methods and a deep convolutional neural network, ResNet-50, to classify images by blur type. The ResNet model effectively reduces the vanishing-gradient problem and uses skip connections to train the network. Typically, images suffer from defocus blur and motion blur, caused respectively by an incorrect depth of field and by the movement of objects during capture. The dataset used here is the blur dataset from Kaggle, which consists of sharp images, defocus-blurred images, and motion-blurred images. Edge-detection methods based on Laplace, Sobel, Prewitt, and Roberts filters are applied to the images, and features such as the mean, variance, and maximum signal-to-noise ratio are derived from the filter responses and used to train a classification algorithm.
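A minimal sketch of the feature-extraction step, assuming a simplified reading in which each named edge filter (Laplace, Sobel, Prewitt, Roberts) is applied to a grayscale image and per-response statistics are collected; the choice of mean, variance, and maximum of the absolute response is an illustrative interpretation of the features listed above, and the resulting vector could feed any downstream classifier.

import numpy as np
from skimage import filters

def edge_features(gray):
    """Summary statistics of edge responses used as blur-classification features."""
    responses = {
        "laplace": filters.laplace(gray),
        "sobel": filters.sobel(gray),
        "prewitt": filters.prewitt(gray),
        "roberts": filters.roberts(gray),
    }
    feats = {}
    for name, r in responses.items():
        feats[f"{name}_mean"] = float(np.mean(np.abs(r)))  # average edge strength
        feats[f"{name}_var"] = float(np.var(r))            # spread of the response
        feats[f"{name}_max"] = float(np.max(np.abs(r)))    # strongest edge
    return feats

# Example on a random grayscale image in [0, 1]
print(edge_features(np.random.rand(128, 128)))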
Images captured in foggy atmospheric conditions appear hazy, with visually degraded visibility that obscures image quality, and pixel-based metrics do not guarantee visually clear results. Such restored images are then used as input for low-level computer vision tasks such as segmentation. To improve on this, a new end-to-end approach to image dehazing is introduced that preserves the visual quality of the generated images. The work goes one step further and explores the possibility of using the network to perform semantic segmentation with U-Net. A U-Net is built and used in this model to further improve the quality of the output.
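The abstract names a U-Net but gives no architectural details. Below is a minimal two-level U-Net sketch; the depth, channel widths, and the 1x1 output head (producing a restored RGB image, or per-class logits for segmentation) are illustrative assumptions rather than the authors' exact network.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Minimal two-level U-Net: encoder, bottleneck, decoder with skip connections."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)  # dehazed RGB (or per-class logits)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Example: a hazy 3x256x256 image in, a restored 3x256x256 image out
print(TinyUNet()(torch.randn(1, 3, 256, 256)).shape)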