Landslide Susceptibility Mapping through Weightages Derived from Statistical Information Value Model
An Efficient Foot Ulcer Determination System for Diabetic Patients
Statistical Wavelet based Adaptive Noise Filtering Technique for MRI Modality
Real Time Sign Language: A Review
Remote Sensing Schemes Mingled with Information and Communication Technologies (ICTs) for Flood Disaster Management
FPGA Implementation of Shearlet Transform Based Invisible Image Watermarking Algorithm
A Comprehensive Study on Different Pattern Recognition Techniques
User Authentication and Identification Using Neural Network
Flexible Generalized Mixture Model Cluster Analysis with Elliptically-Contoured Distributions
Efficient Detection of Suspected Areas in Mammographic Breast Cancer Images
The object detection task is one of the most popular applications of artificial intelligence (AI), used to identify and classify objects. Here, a Faster Region-based Convolutional Neural Network (Faster R-CNN) is used to classify LEDs placed on any surface. This paper describes the development of a deep learning model, running on the TensorFlow GPU backend, that can identify and classify SMT LED components accurately in real time. Datasets of different sizes are used to train the model and compare the resulting accuracy.
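As a rough illustration of the inference path described above, the sketch below assumes a Faster R-CNN detector has already been exported as a TensorFlow SavedModel; the model path, output keys, and score threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np
import tensorflow as tf

# Confirm that TensorFlow can see a GPU (the model runs on the TensorFlow GPU backend).
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# Hypothetical path to an exported Faster R-CNN SavedModel.
detector = tf.saved_model.load("exported_models/frcnn_led/saved_model")

def detect_leds(frame, score_threshold=0.5):
    """Run the detector on one camera frame and keep detections above the threshold."""
    # Object-detection SavedModels commonly expect a uint8 batch of shape [1, H, W, 3].
    input_tensor = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
    outputs = detector(input_tensor)
    boxes = outputs["detection_boxes"][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
    scores = outputs["detection_scores"][0].numpy()
    classes = outputs["detection_classes"][0].numpy().astype(int)
    keep = scores >= score_threshold
    return boxes[keep], scores[keep], classes[keep]
```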
An effective hybrid modeling technique is designed for the offline recognition of unconstrained handwritten text. The features of each character in the input are extracted and then passed to a neural network. Features of the offline characters are extracted using statistical methods, specifically horizontal, vertical, and radial projections. Datasets containing text written by different people are used to train the system. The extracted features are classified by a feedforward neural network trained with the backpropagation algorithm, which computes gradients to update the neuron weights and iteratively minimizes the error to avoid misclassification. The system can efficiently recognize cursive text and convert it into a structured form.
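A minimal sketch of projection-based feature extraction followed by a feedforward classifier trained with backpropagation is given below. Fixed-size binary character images and scikit-learn's MLPClassifier are assumptions made for illustration; the paper's own network architecture is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def projection_features(char_img):
    """Extract horizontal, vertical, and radial projection features from a
    fixed-size binary character image (1 = ink, 0 = background)."""
    horizontal = char_img.sum(axis=1)            # ink count per row
    vertical = char_img.sum(axis=0)              # ink count per column
    # Radial projection: ink counts in concentric rings around the ink centroid.
    ys, xs = np.nonzero(char_img)
    cy, cx = ys.mean(), xs.mean()
    radii = np.hypot(ys - cy, xs - cx)
    radial, _ = np.histogram(radii, bins=16, range=(0, radii.max() + 1e-6))
    return np.concatenate([horizontal, vertical, radial]).astype(float)

# X: stacked feature vectors, y: character labels (assumed prepared elsewhere).
# A feedforward network whose weights are updated by backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
# clf.fit(X_train, y_train)
# predicted = clf.predict(X_test)
```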
QR code verification is commonly used in many business applications. Every image that enters the scanner for classification contains data about the item to which it is tagged. Standardized QR code symbols store data both vertically and horizontally. All QR codes are similar in appearance, yet each one encodes different data. The availability of phones with digital cameras gives users a portable platform for reading the tag, rather than relying on a specialized scanner, since conventional scanners have several limitations. The purpose of this article is to remove the noise introduced into QR code images captured by mobile phones using image processing techniques. After the noise is removed, the image can be used to recover the data associated with the QR code. The proposed methodology was tested in MATLAB and the results are presented.
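Although the paper's experiments are carried out in MATLAB, the same denoise-then-decode pipeline can be sketched in Python with OpenCV. The median filter, Otsu threshold, and file name below are illustrative assumptions rather than the paper's exact filtering steps.

```python
import cv2

# Load a phone-captured QR image (hypothetical file name), in grayscale.
img = cv2.imread("qr_phone_capture.png", cv2.IMREAD_GRAYSCALE)

# Suppress salt-and-pepper style camera noise with a median filter,
# then binarize so the module grid is restored to clean black/white.
denoised = cv2.medianBlur(img, 3)
_, binary = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Decode the cleaned symbol to recover its data.
detector = cv2.QRCodeDetector()
data, points, _ = detector.detectAndDecode(binary)
print("Decoded data:", data if data else "<not readable>")
```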
Face detection and identification are critical in automated security systems and aid in the construction of effective security and surveillance systems. There is a need for a more rapid and accurate means of identifying and authenticating users. The goal of this paper is to investigate the feasibility of developing a face recognition system employing a Local Binary Pattern Histogram (LBPH) recognizer. A computerized face recognition system simulates the human ability to identify faces: acquiring information through the eyes, comparing it with memory, and recognizing the person. In this paper, a web camera is the eye of the system and a Raspberry Pi is the computational engine, using LBPH to detect and recognize faces.
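A condensed sketch of the detect-then-recognize loop described above is shown below. It assumes OpenCV's contrib module (which provides the LBPH recognizer), a Haar cascade for face detection, and a previously trained model file; the file names are illustrative and not taken from the paper.

```python
import cv2

# Haar cascade for face detection and a pre-trained LBPH model (assumed files).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("lbph_trained_model.yml")

cap = cv2.VideoCapture(0)  # the web camera acts as the "eye" of the system
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Predict the identity of the detected face region.
        label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"id={label} ({confidence:.0f})", (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("LBPH face recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```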
The recognition accuracy of three face recognition algorithms based on mathematical subspaces (the Eigenfaces approach, the Fisherfaces technique, and the probabilistic Eigenfaces approach) is tested for various gallery sizes, image resolutions, and numbers of bits per pixel. Identical experiments are then conducted for numerous combinations of these techniques, and the effectiveness of these combinations is compared with that of the individual methods. The results show some interesting relationships between recognition accuracy and image resolution. They also show that the combinations improve performance over the individual approaches, which suggests that unifying these approaches may be worthwhile.
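The subspace projections underlying the Eigenfaces and Fisherfaces comparisons can be sketched with scikit-learn, as below. The component counts and the nearest-neighbour matcher are illustrative assumptions, not the paper's exact experimental protocol.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# X: flattened gallery images (n_samples, n_pixels), y: identity labels,
# assumed to be loaded elsewhere at the resolution and bit depth under test.

# Eigenfaces: PCA projection followed by nearest-neighbour matching.
eigenfaces = make_pipeline(PCA(n_components=100),
                           KNeighborsClassifier(n_neighbors=1))

# Fisherfaces: PCA to remove the null space, then LDA, then matching.
fisherfaces = make_pipeline(PCA(n_components=100),
                            LinearDiscriminantAnalysis(),
                            KNeighborsClassifier(n_neighbors=1))

# eigenfaces.fit(X_train, y_train);  acc_eigen = eigenfaces.score(X_test, y_test)
# fisherfaces.fit(X_train, y_train); acc_fisher = fisherfaces.score(X_test, y_test)
# Repeating this over gallery sizes, resolutions, and bit depths yields the kind
# of accuracy comparison the paper reports.
```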