Brain Tumour Detection using Deep Learning Technique
AI-Driven Detection and Remediation of Diabetic Foot Ulcer (DFU)
Advancements in Image Processing: Towards Near-Reversible Data Hiding and Enhanced Dehazing Using Deep Learning
State-of-the-Art Deep Learning Techniques for Object Identification in Practical Applications
Landslide Susceptibility Mapping through Weightages Derived from Statistical Information Value Model
An Efficient Foot Ulcer Determination System for Diabetic Patients
Statistical Wavelet-based Adaptive Noise Filtering Technique for MRI Modality
Real Time Sign Language: A Review
Remote Sensing Schemes Mingled with Information and Communication Technologies (ICTs) for Flood Disaster Management
FPGA Implementation of Shearlet Transform Based Invisible Image Watermarking Algorithm
A Comprehensive Study on Different Pattern Recognition Techniques
User Authentication and Identification Using Neural Network
Flexible Generalized Mixture Model Cluster Analysis with Elliptically-Contoured Distributions
Efficient Detection of Suspected Areas in Mammographic Breast Cancer Images
Vehicle theft is a major crime worldwide, often linked to other crimes or committed simply for spare parts. Car manufacturers fit immobilizers to prevent theft, and many anti-theft devices on the market raise an alarm or track a stolen vehicle. This paper proposes a novel method that uses facial recognition to prevent vehicle theft. With this system, only persons authorised by the owner can operate the car, and they must be registered in the system. The system scans the driver's face before ignition; if the face matches an image registered in the system, the car starts. When the device is switched on for the first time, it scans and registers the owner's face, after which the car may be started. If the face does not match any registered image, the system denies access, sounds a buzzer, and sends an alert message to the owner's mobile phone along with the captured image of the unauthorised person. GPS can also be used to locate the vehicle.
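The authorisation step above amounts to comparing a face descriptor of the driver against the registered ones. A minimal sketch, assuming the faces have already been encoded into fixed-length embeddings by some recognition model (the embeddings, threshold, and `authorise` helper here are all invented for illustration; the paper does not specify its matching method):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authorise(probe, registered, threshold=0.8):
    """Grant ignition only if the probe embedding matches a registered face."""
    return any(cosine_similarity(probe, ref) >= threshold for ref in registered)

# Hypothetical 128-dimensional embeddings standing in for model output.
rng = np.random.default_rng(0)
owner = rng.normal(size=128)
registered = [owner]

same_person = owner + rng.normal(scale=0.05, size=128)  # owner, new photo
stranger = rng.normal(size=128)                         # unrelated face

print(authorise(same_person, registered))  # True: ignition allowed
print(authorise(stranger, registered))     # False: buzzer + alert
```

In a deployed system a mismatch would additionally trigger the buzzer, the alert SMS, and the capture of the intruder's image described in the abstract.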
Diabetic foot ulcer is a medical disorder that affects many people all over the world. This paper proposes a method for evaluating the severity of the wound caused by the disease. The image of the wound is captured with a camera, and MATLAB is used to analyse the width and severity of the ulcer using an adaptive K-means algorithm. Self-assessment of a foot ulcer is a convenient and cost-effective way to save on travel, medication, and hospital visits. To determine the depth of the wound and the extent of the damage it has inflicted on the patient, the captured image is first preprocessed with a Gaussian filter to eliminate noise; the adaptive K-means algorithm is then applied and the result analysed. From the results, the treatment procedure can be determined.
Providing methods to support audiovisual interaction with growing volumes of video data is an increasingly important challenge for data processing. There has been some success in generating lip movements from speech and in generating talking faces; talking face generation aims to produce realistic talking heads synchronized with audio or text input. The task requires mining the connection between the audio signal or text and the lip-synchronized video frames while ensuring temporal continuity between frames. Owing to problems such as polysemy, ambiguity, and fuzziness in sentences, creating visual images with accurate lip synchronization remains challenging. We address this with a data-mining framework that discovers synchronous patterns between channels in large recorded audio/text and visual datasets and applies them to produce realistic talking-face animations. Specifically, we decompose the task into two steps: mouth-movement prediction and video synthesis. First, a multimodal learning method is proposed to obtain accurate lip movement from multimedia inputs (both text and audio). In the second step, the Face2Vid framework generates video frames conditioned on the predicted lip movement. The model can also translate the speech in the audio into a different language and dub the video in the new language with proper lip synchronization: it uses natural language processing and machine translation (MT) to translate the audio, then a generative adversarial network (GAN) and a recurrent neural network (RNN) to apply the lip synchronization.
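The first step, mouth-movement prediction, maps the text/audio channel to a frame-by-frame sequence of mouth shapes. The paper learns this mapping multimodally; the lookup table and helper below are invented here purely to show the shape of that step, not the actual model:

```python
# Illustrative grapheme-to-viseme table (a real system would use a learned
# multimodal model over audio and text, not a hand-written dictionary).
VISEME = {
    "a": "open", "e": "spread", "i": "spread", "o": "round", "u": "round",
    "b": "closed", "m": "closed", "p": "closed",
    "f": "dental", "v": "dental",
}

def predict_mouth_shapes(text):
    """Map input text to an ordered sequence of mouth shapes, one per frame."""
    return [VISEME.get(ch, "neutral") for ch in text.lower()]

frames = predict_mouth_shapes("mob")
print(frames)  # ['closed', 'round', 'closed']
```

The resulting shape sequence is what the second step (here, Face2Vid-style video synthesis) would condition on to render temporally continuous frames.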
Crop plants play a vital role in agriculture and also influence climate, so taking care of them is essential. Like humans, plants are affected by many diseases caused by bacteria, fungi, and viruses. Recognizing these diseases in time and treating them is essential to prevent the whole crop from destruction. This paper proposes a deep learning-based model to detect plant diseases in crops. The model uses a neural network to identify several diseases from images of plant leaves. Augmentation is applied to the dataset to increase the sample size, and a Convolutional Neural Network (CNN) with multiple convolution and pooling layers is trained on a plant dataset. After training, the model is tested to validate its output, and each disease is classified according to the class to which it belongs. In future work, we intend to suggest appropriate pesticides to eradicate the identified disease and to integrate the model with drones to detect plant diseases in the field.
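The convolution-and-pooling layers at the heart of the CNN can be illustrated with a minimal numpy forward pass. This is a sketch of the layer mechanics only, on a toy 6x6 "leaf" with a hand-set edge kernel, not the paper's trained network:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation, as computed by a CNN convolution layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2x2(x):
    """2x2 max pooling with stride 2, halving each spatial dimension."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:h * 2, :w * 2].reshape(h, 2, w, 2).max(axis=(1, 3))

# Toy "leaf" image with a vertical dark-to-bright boundary; the kernel
# responds at the boundary, mimicking how early CNN filters pick up
# lesion edges on a leaf.
leaf = np.zeros((6, 6))
leaf[:, 3:] = 1.0
edge_kernel = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)

feature_map = np.maximum(conv2d(leaf, edge_kernel), 0.0)  # conv + ReLU
pooled = maxpool2x2(feature_map)
print(pooled.shape)  # (2, 2): spatial size halved by pooling
```

Stacking several such conv/ReLU/pool stages and ending in a classifier head gives the multi-class disease labels the abstract describes; augmentation (flips, rotations, crops) simply multiplies the training images before this pipeline.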
Hearing-impaired people often regard communication with hearing people as the primary challenge they face, yet most hearing people do not know sign language, the means of communication for hearing- and speech-impaired people. Our paper aims to bridge this gap between hearing-impaired persons and hearing people using web applications, Machine Learning, and Natural Language Processing. The main scope here is the conversion of speech into sign language: the system takes audio as input, converts the audio message into text, and displays the relevant Indian Sign Language. This system makes communication between hearing and hearing-impaired persons easier.
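After speech recognition produces text, the remaining step is mapping words to sign displays. A minimal sketch of that stage, assuming a hypothetical clip dictionary and a letter-by-letter fingerspelling fallback (a common strategy in speech-to-sign systems; the paper does not detail its lookup):

```python
# Hypothetical word-to-ISL-clip table; unknown words fall back to
# fingerspelling one letter at a time.
SIGN_CLIPS = {"hello": "isl_hello.mp4", "thank": "isl_thank.mp4", "you": "isl_you.mp4"}

def text_to_signs(text):
    """Convert recognised text into an ordered list of sign displays."""
    displays = []
    for word in text.lower().split():
        if word in SIGN_CLIPS:
            displays.append(SIGN_CLIPS[word])
        else:
            displays.extend(f"fingerspell:{ch}" for ch in word)
    return displays

print(text_to_signs("Hello Ram"))
# ['isl_hello.mp4', 'fingerspell:r', 'fingerspell:a', 'fingerspell:m']
```

The front end would then play these clips or fingerspelling animations in sequence for the hearing-impaired user.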