Brain Tumour Detection using Deep Learning Technique
AI Driven Detection and Remediation of Diabetic Foot Ulcer (DFU)
Advancements in Image Processing: Towards Near-Reversible Data Hiding and Enhanced Dehazing Using Deep Learning
State-of-the-Art Deep Learning Techniques for Object Identification in Practical Applications
Landslide Susceptibility Mapping through Weightages Derived from Statistical Information Value Model
An Efficient Foot Ulcer Determination System for Diabetic Patients
Statistical Wavelet based Adaptive Noise Filtering Technique for MRI Modality
Real Time Sign Language: A Review
Remote Sensing Schemes Mingled with Information and Communication Technologies (ICTs) for Flood Disaster Management
FPGA Implementation of Shearlet Transform Based Invisible Image Watermarking Algorithm
A Comprehensive Study on Different Pattern Recognition Techniques
User Authentication and Identification Using Neural Network
Flexible Generalized Mixture Model Cluster Analysis with Elliptically-Contoured Distributions
Efficient Detection of Suspected Areas in Mammographic Breast Cancer Images
Pelvic Inflammatory Disease (PID) is an infectious disease of the female genital tract that commonly affects young and adult women. The clinical manifestation of PID differs among patients, and medical decisions are based on clinician experience rather than on the hidden patterns in the clinical knowledge base. Heuristic diagnosis of PID is prone to errors; for example, an ectopic pregnancy can be mistaken for PID. This paper presents an Artificial Neural Network (ANN) based model to diagnose pelvic inflammatory disease from a set of clinical data. The ANN model was trained on 259 clinical records as input to the neural network. The system can predict the presence or absence of PID based on the available symptoms. An accuracy of 96.1% was recorded based on the confusion matrix. The result is promising, indicating that the system can be effective in diagnosing PID cases.
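A minimal sketch of the kind of ANN classifier described above is shown below, assuming a small feed-forward network trained on binary symptom vectors; the 10-feature placeholder data, network size, and train/test split are illustrative assumptions, not the paper's actual clinical dataset or architecture.

```python
# Minimal sketch of an ANN binary classifier for symptom-based diagnosis.
# The features, labels, and hyperparameters are placeholders; accuracy on this
# random data is not meaningful and only illustrates the evaluation workflow.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(259, 10)).astype(float)  # 259 records, 10 binary symptoms (placeholder)
y = rng.integers(0, 2, size=259)                       # 1 = PID present, 0 = absent (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print("accuracy:", accuracy_score(y_test, y_pred))
```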
Hand controllers and electromechanical devices have long been used to control robots and machines, but they constrain several aspects of interaction. Pattern recognition and gesture recognition are growing fields of analysis, and hand gesture recognition is very significant for human-computer interaction (HCI). In this work, we present a novel real-time methodology for robot control using hand gesture recognition. In human-robot interaction, the user needs to communicate with and control a device in a natural and efficient way. The implementation uses a Kinect sensor and the Matlab environment, and the controlled arm is that of a Firebird V robot. We have implemented a prototype that uses gestures as a tool for communicating with the machine: command signals are generated by a gesture control algorithm and are then given to the robot to perform a set of tasks. The Kinect sensor recognizes the hand gestures, and each gesture is assigned a function to be performed by the robot.
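The gesture-to-command step can be illustrated with a minimal sketch, assuming the recognizer emits discrete gesture labels; the label names, command codes, and transport to the Firebird V robot below are hypothetical placeholders rather than the paper's actual protocol.

```python
# Minimal sketch of a gesture-to-command mapping layer. Gesture labels are
# assumed to come from a Kinect-based recognizer; command codes and the
# transport to the robot are hypothetical placeholders.
GESTURE_COMMANDS = {
    "open_palm":   b"STOP\n",
    "fist":        b"GRIP\n",
    "swipe_left":  b"ARM_LEFT\n",
    "swipe_right": b"ARM_RIGHT\n",
}

def send_command(cmd: bytes) -> None:
    # Placeholder transport; a real system might write to a serial link here.
    print("sending", cmd)

def handle_gesture(label: str) -> None:
    cmd = GESTURE_COMMANDS.get(label)
    if cmd is None:
        return  # ignore unrecognized gestures
    send_command(cmd)

if __name__ == "__main__":
    for g in ["fist", "swipe_left", "wave"]:
        handle_gesture(g)
```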
Generating a three-dimensional point cloud for an object in an image has found applications in many computer vision systems. In this work, convolutional neural network based semantic segmentation is used to find the region of interest in an image. The region of interest is represented as point clouds in three-dimensional space; area-based filter operations are then applied with image processing techniques to find the total surface area, and the total volume is obtained by adding up all the small volumes. Many existing reconstruction methods have been experimented with and tested only on uniform backgrounds, which is a disadvantage for real images that consist of complex, non-uniform regions. In this work, semantic segmentation is used to partition the image into similar, instance-based regions, using a UNET model for the region-based segmentation. The 3D point cloud is then generated with an encoder-decoder scheme after merging the pixel clouds. This paper proposes an efficient end-to-end generation network composed of an encoder, a 3D image model, and a decoder. First, a single-view image of the object and a nearest-shape retrieval formed from the UNET output are fed into the network; the two encodings are then merged adaptively according to their homographic similarity. The decoder then generates fine-grained point clouds from the pixel clouds obtained from multiple-view images. Each point in the cloud carries a weight derived from intensity and color information, from which the density and volume of the object are calculated. Experiments on uniform-background images show that our method attains accuracy with a 12 to 15% margin over volumetric and point-set generation methods, particularly for large solid objects, and it works across multiple view angles as well.
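The final volume computation can be illustrated with a minimal sketch that voxelizes a point cloud and sums the occupied voxel volumes, assuming the segmentation and point-cloud generation stages have already produced the points; the voxel size and the synthetic cube-shaped cloud are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of estimating an object's volume from a 3D point cloud by
# voxelizing the points and summing occupied voxel volumes.
import numpy as np

def voxel_volume_estimate(points: np.ndarray, voxel_size: float) -> float:
    """Approximate volume as (number of occupied voxels) * (volume of one voxel)."""
    idx = np.floor(points / voxel_size).astype(np.int64)  # voxel index per point
    occupied = np.unique(idx, axis=0)                      # distinct occupied voxels
    return occupied.shape[0] * voxel_size ** 3

if __name__ == "__main__":
    # Placeholder cloud: points filling a 1 x 1 x 1 cube, so the estimate should be near 1.
    rng = np.random.default_rng(0)
    cloud = rng.uniform(0.0, 1.0, size=(50_000, 3))
    print("estimated volume:", voxel_volume_estimate(cloud, voxel_size=0.05))
```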
Motion estimation is an important module in digital video coding applications, as it demands more computation than the other modules of a video codec. To overcome this difficulty, many motion estimation algorithms have been proposed. This paper presents an analysis of some well-known motion estimation algorithms for digital video coding. The review discusses the search procedures, computational complexity, and quality of these algorithms.
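As a concrete reference point for the search procedures discussed in such reviews, the sketch below shows exhaustive (full-search) block matching with the sum of absolute differences (SAD) criterion; the block size, search range, and synthetic test frames are illustrative assumptions rather than settings taken from any reviewed algorithm.

```python
# Minimal sketch of exhaustive block-matching motion estimation with SAD.
import numpy as np

def full_search(cur: np.ndarray, ref: np.ndarray, y: int, x: int,
                block: int = 16, search: int = 7) -> tuple[int, int]:
    """Return the motion vector (dy, dx) minimizing SAD for the block at (y, x)."""
    target = cur[y:y + block, x:x + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + block > ref.shape[0] or rx + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[ry:ry + block, rx:rx + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(2, -3), axis=(0, 1))  # current frame is a shifted copy of the reference
    print("motion vector:", full_search(cur, ref, y=24, x=24))
```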
Humans have five senses, and vision is the most important of them, yet it is limited for some people because of visual impairment; those affected often compensate through their other senses. The proportion of visually impaired and blind people worldwide has become very large: a survey report given by the World Health Organization (WHO) in 2010 estimated that nearly 285.389 million people across the globe suffer from visual impairment. Over time, different authors have developed many devices (e.g., canes, assistive shoes, spectacles) to help visually impaired people detect obstacles. These devices are built with different techniques: IoT-enabled smart canes, GPS/GSM-based smart canes, wearable devices such as assistive shoes and blind-vision spectacles that detect obstacles, smartphone-based navigation technology, image-processing-based smart canes that use a camera to capture images, Electronic Travel Aids (ETAs), conventional ultrasonic-sensor smart canes, multi-sensor smart canes (ultrasonic, LDR, soil moisture and water detection), and the most advanced smart canes that use machine learning and deep learning algorithms such as ANN, CNN, and RNN. In this paper, we present a clear survey of the navigation systems for blind and visually impaired people proposed by different authors, highlighting the technologies used, the designs implemented, the challenges faced, and the requirements of blind people for autonomous navigation in indoor or outdoor environments. We also present several existing works on object detection for blind people. Given the advancement of techniques and technology, the study, analysis, and evaluation of all these proposals plays a vital role; this survey therefore concentrates on analyzing how obstacles are detected with the different techniques.
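To make the common principle behind the ultrasonic-sensor smart canes concrete, the following minimal sketch converts an echo time to a distance and raises an alert below a threshold; the threshold value and alert mechanism are illustrative assumptions, not any specific cane's implementation.

```python
# Minimal sketch of the distance-threshold logic used by ultrasonic smart canes.
SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound in air, cm per microsecond

def echo_to_distance_cm(echo_time_us: float) -> float:
    """Convert a round-trip ultrasonic echo time to a one-way distance in cm."""
    return echo_time_us * SPEED_OF_SOUND_CM_PER_US / 2.0

def obstacle_alert(echo_time_us: float, threshold_cm: float = 100.0) -> bool:
    """Return True when an obstacle lies closer than the alert threshold."""
    return echo_to_distance_cm(echo_time_us) < threshold_cm

if __name__ == "__main__":
    for t in (1200.0, 8000.0):  # example echo times in microseconds
        print(t, "->", round(echo_to_distance_cm(t), 1), "cm, alert:", obstacle_alert(t))
```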