Brain Tumour Detection using Deep Learning Technique
AI-Driven Detection and Remediation of Diabetic Foot Ulcer (DFU)
Advancements in Image Processing: Towards Near-Reversible Data Hiding and Enhanced Dehazing Using Deep Learning
State-of-the-Art Deep Learning Techniques for Object Identification in Practical Applications
Landslide Susceptibility Mapping through Weightages Derived from Statistical Information Value Model
An Efficient Foot Ulcer Determination System for Diabetic Patients
Statistical Wavelet based Adaptive Noise Filtering Technique for MRI Modality
Real Time Sign Language: A Review
Remote Sensing Schemes Mingled with Information and Communication Technologies (ICTs) for Flood Disaster Management
FPGA Implementation of Shearlet Transform Based Invisible Image Watermarking Algorithm
A Comprehensive Study on Different Pattern Recognition Techniques
User Authentication and Identification Using Neural Network
Flexible Generalized Mixture Model Cluster Analysis with Elliptically-Contoured Distributions
Efficient Detection of Suspected areas in Mammographic Breast Cancer Images
Computer-Based Examination (CBE) is a new paradigm in the assessment and measurement of knowledge and capabilities. It relies on computers and their associated technologies to solve some of the problems inherent in human-based approaches to the conduct of examinations and invigilation, such as connivance, impersonation, external sourcing, and peeking. This paper presents the design of a fingerprint- and iris-based framework for CBE invigilation. The framework comprises modules for CBE, e-invigilation, and control. The CBE module comprises a network backbone, a server, and several workstations. The e-invigilation module is designed to use high-definition, high-resolution iris scanners such as the Iris Shield-USB MK and the CMITech BMT series to capture candidates' iris images for processing, while the control module handles fingerprint-based authentication of examinees, process monitoring, and relaying of situation reports.
Credit card fraud detection is an important concern for financial institutions that provide online payment services to their customers. One of the criteria that affect the performance of credit card fraud detection models is the selection of variables. This paper studies the effects of feature engineering on two feature-ranked, imbalanced credit card fraud datasets for four classifier techniques. It employs the Taiwan and European bank credit card fraud datasets, obtained from the UCI and ULB repositories and containing 30,000 and 284,807 transactions respectively. Feature ranking on the datasets is carried out using correlation analysis. Algorithms for the four classifiers are implemented in MATLAB and applied to both the feature-ranked and raw data. The performance metrics used to assess the four classifiers on the feature-ranked and raw datasets are specificity, precision, Matthews correlation coefficient, sensitivity, accuracy, and balanced classification rate. Results from the comparative analysis show that the decision tree variant classifiers outperform the naïve Bayes, support vector machine, and radial basis function neural network techniques. The decision trees recorded the highest performance metrics on both the feature-ranked and raw versions of the European credit card fraud dataset. In summary, the paper investigates, using a filter approach, the effect of feature ranking of two imbalanced credit card fraud datasets on four machine learning techniques.
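The correlation-based filter ranking described in this abstract can be sketched as follows. The paper's experiments run in MATLAB, but a minimal NumPy version conveys the idea; the function name and the toy dataset below are illustrative, not taken from the paper:

```python
import numpy as np

def rank_features_by_correlation(X, y):
    """Rank feature columns by |Pearson correlation| with the class label.

    Returns (order, scores): column indices from most to least
    correlated, and the absolute correlation score per column.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    order = np.argsort(scores)[::-1]  # best feature first
    return order, scores

# Toy data: column 1 tracks the label, column 0 is pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.column_stack([rng.normal(size=200), y + 0.1 * rng.normal(size=200)])
order, scores = rank_features_by_correlation(X, y)
print(order[0])  # → 1: the informative column is ranked first
```

In a filter approach such as this, the ranking is computed once from the data alone, before and independently of the classifiers that are later trained on the top-ranked features.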
Water quality remains one of the most important factors influencing an aquaculture system, as its effects can make or mar the state of the organisms as well as the environment. Furthermore, the use of artificial intelligence, especially the Artificial Neural Network (ANN), has greatly improved water quality forecasting, producing better solutions than other approaches. The performance of these AI techniques depends on the quality of the dataset used for their implementation, which is in turn a function of the preprocessing (normalization) techniques applied to it. In this paper, the effects of four normalization techniques, namely Min-Max, Decimal Point, Unitary, and Z-Score, were investigated on the prediction of the water quality of the tank-cultured re-circulatory aquaculture system at the WAFT Laboratory, using an ANN. The water quality index was based on the prediction of Dissolved Oxygen (DO) as a function of temperature, alkalinity, pH, and conductivity. The performance of the techniques on the ANN was evaluated using the Mean Square Error (MSE) and the Nash-Sutcliffe Efficiency coefficient (NSE). Comparison of the evaluations shows that all the approaches are applicable to the prediction of DO. The Decimal Point technique has the lowest MSE, while the Min-Max technique performs best with respect to the NSE.
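The four normalization schemes and the NSE metric named in this abstract can be sketched as below. This is a minimal NumPy illustration under standard textbook definitions of each transform (e.g. the power-of-ten rule for Decimal Point scaling), which are assumed here rather than taken from the paper; the sample DO readings are made up:

```python
import numpy as np

def min_max(x):
    # Min-Max: rescale values to [0, 1].
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    # Z-Score: zero mean, unit standard deviation.
    return (x - x.mean()) / x.std()

def decimal_point(x):
    # Decimal scaling: divide by the smallest power of 10
    # that brings every value into [-1, 1].
    j = int(np.ceil(np.log10(np.abs(x).max())))
    return x / (10.0 ** j)

def unitary(x):
    # Unitary: scale the vector to unit Euclidean norm.
    return x / np.linalg.norm(x)

def nse(observed, predicted):
    # Nash-Sutcliffe Efficiency: 1 indicates a perfect fit,
    # 0 means no better than predicting the observed mean.
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

do = np.array([6.8, 7.1, 5.9, 6.4, 7.0])  # illustrative DO readings (mg/L)
print(min_max(do))   # all values fall in [0, 1]
print(nse(do, do))   # → 1.0 for a perfect prediction
```

Each transform would be applied to the input features (temperature, alkalinity, pH, conductivity) before ANN training, and the resulting DO predictions scored with MSE and NSE.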
This paper presents a framework for text-to-speech translation on Android devices, based on Natural Language Processing (NLP) and a text-to-speech synthesizer (TTS), to deliver real-time agricultural updates from agricultural extension service workers (AEW) to farmers. Speech is the most used and most natural way for people to communicate with one another, so increasing the naturalness of oral communication between extension workers and farmers requires involving speech. Most local farmers understand their local language well and strongly prefer it over any other. Since the majority of farmers are in rural areas and have little or no understanding of English, agricultural research output communicated in English may be of little or no use to them. The Text-to-Speech Enabled Hybrid Multilingual Translation framework therefore adopts a serial integration of NLP with a TTS interpretation technique, using the Android Google Translate API text-to-speech synthesizer and recognizer to translate English, Hausa, Yoruba, Ibo, and Arabic texts into speech in accordance with each farmer's registered dialect.
Text detection and extraction from complex images plays a major role in capturing vital and valuable information. With the rapid growth of available multimedia information and the rising need for data documentation, indexing, and retrieval, many scholars, researchers, and scientists have worked extensively on text detection and extraction from images. The main aim of our work is to provide a comparative analysis of the various techniques and methods that have been applied to detect and extract text from complex background images; this comparison will help in picking a proper and suitable technique or method for future use. Text identification and verification have many applications, such as text-based picture indexing, keyword-based image search, analysis of old and important documents, and extraction of numbers from the licence plates of vehicles involved in crime. Detecting and extracting text from images or video is demanding owing to unconventional textured backgrounds and varying font size, style, resolution, blurring, position, viewing angle, and so on. Numerous techniques have already been developed for detecting and extracting text from complex background images, each resting on particular assumptions. The purpose of our work is therefore to analyse the accuracy of algorithms widely used by scholars and researchers for detecting and extracting text from complex images. In this paper, the results of various methods for extracting text from images are analysed rigorously, and this comparative analysis helps researchers reduce the time they would otherwise spend searching through the different combined works.