Blockchain Scalability Analysis and Improvement of Bitcoin Network through Enhanced Transaction Adjournment Techniques
Data Lake System for Essay-Based Questions: A Scenario for the Computer Science Curriculum
Creating Secure Passwords through Personalized User Inputs
Optimizing B-Cell Epitope Prediction: A Novel Approach using Support Vector Machine Enhanced with Genetic Algorithm
Gesture Language Translator using Morse Code
Efficient Agent Based Priority Scheduling and Load Balancing Using Fuzzy Logic in Grid Computing
A Survey of Various Task Scheduling Algorithms in Cloud Computing
Integrated Atlas Based Localisation Features in Lung Images
A Computational Intelligence Technique for Effective Medical Diagnosis Using Decision Tree Algorithm
A Viable Solution to Prevent SQL Injection Attack Using SQL Injection
Person recognition is a necessary requirement in both the government sector and private organizations. The AADHAR card is application software used for person identification based on biometrics, since biometrics can be used to ensure that a person is authenticated. Multimodal biometrics combines two or more biometric traits to overcome the limitations of a single trait; multimodal biometric person identification is achieved by combining different biometric modalities such as face, ear, iris, fingerprints, palm prints, and foot. This paper works with three modalities, face, ear, and foot, and computes results at rank level. To this end, the authors calculated a weight score for each modality using a different classifier per modality: a PCA-based neural network classifier for the face, Eigen images for the ear, and a modified sequential Haar transform for the foot. The authors then applied the logistic regression method to the fused data, and the calculated results were better than those of the other methods. All experiments were performed on a self-created database of 100 persons.
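The rank-level fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-modality match scores, logistic-regression weights, and identity names are all assumed values, and the per-modality classifiers (PCA-based neural network, Eigen images, modified sequential Haar transform) are taken as black boxes that have already produced normalized scores.

```python
import math

# Hypothetical per-modality match scores (face, ear, foot) for three
# enrolled identities, normalized to [0, 1]. Values are illustrative.
scores = {
    "person_a": (0.92, 0.85, 0.78),
    "person_b": (0.40, 0.55, 0.30),
    "person_c": (0.65, 0.60, 0.70),
}

# Assumed logistic-regression weights, one per modality, plus a bias.
weights = (1.8, 1.2, 1.0)
bias = -2.0

def fused_score(modality_scores):
    """Combine the three modality scores with a logistic (sigmoid) model."""
    z = bias + sum(w * s for w, s in zip(weights, modality_scores))
    return 1.0 / (1.0 + math.exp(-z))

# Rank identities by fused score (the rank-level decision).
ranking = sorted(scores, key=lambda p: fused_score(scores[p]), reverse=True)
print(ranking)  # person_a, with the strongest scores, ranks first
```

In practice the weights and bias would be fitted on labelled genuine/impostor score pairs rather than set by hand.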
An index (plural: indexes) is a list of words or phrases ('headings') and associated pointers ('locators') to where useful material relating to each heading can be found in a document. Books generally contain back indexes to minimize the effort and time needed to search for a certain topic or word, but manually generated back indexes have certain flaws. The paper describes a Back-Index-Tool that generates back indexes for books in machine-readable format, minimizing the manual effort of back-index generation and reducing the redundancy of recurring words in the generated back index. The aim of the paper is to construct a precise and complete back index for e-books.
This paper presents Non-Technical Loss (NTL) in power utilities and describes how to handle it. Non-technical loss has been an influential factor in the profitability of electric power utilities. At the same time, with distributed generation extensively installed, the consumption patterns of dishonest users and normal users show many similarities. Non-technical loss may take the form of theft of electricity, illegal connections, faulty metering, or billing errors, so improving the reliability of the NTL detection algorithm becomes particularly important. Data mining techniques are used to detect non-technical losses by means of a classification algorithm. The implementation builds an intelligent computational tool that identifies non-technical losses and selects their most relevant features, drawing on a database of consumer profiles.
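Framing NTL detection as classification over consumer profiles can be sketched as below. The monthly-kWh features, the labels, and the 1-nearest-neighbour rule are all placeholders for illustration; the paper's actual classification algorithm, feature selection, and database are not specified here.

```python
# Toy labelled consumption profiles (four monthly kWh readings each).
# A sharp unexplained drop is the classic signature of theft/metering fraud.
training = [
    ((300, 310, 305, 298), "normal"),
    ((295, 305, 300, 310), "normal"),
    ((300, 150, 90, 40), "ntl"),   # consumption collapses after month 1
    ((280, 120, 70, 30), "ntl"),
]

def distance(a, b):
    """Euclidean distance between two consumption profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(profile):
    """Label a profile with the class of its nearest training example."""
    return min(training, key=lambda ex: distance(ex[0], profile))[1]

print(classify((290, 140, 80, 35)))  # falls closest to an "ntl" profile
```

A production tool would of course use far more features (tariff class, contracted load, seasonality) and a stronger classifier, but the detect-by-similarity idea is the same.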
Many recommendation techniques have been developed over the past decade, and major efforts in both academia and industry have been made to improve recommendation accuracy. However, it has been increasingly noted that accuracy is not sufficient as the sole criterion for measuring recommendation quality, and that other important dimensions, such as diversity, confidence, and trust, must be considered to generate recommendations that are not only accurate but also useful to users. More diverse recommendations, presumably leading to more sales of long-tail items, can benefit both individual users and some business models. The main aim of this paper is to find the best top-N recommendation lists for all users according to two measures: accuracy and diversity. In the proposed method, optimization-based approaches are used to maximize diversity while maintaining acceptable levels of accuracy.
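The accuracy/diversity trade-off in top-N selection can be illustrated with a simple greedy re-ranking: instead of taking the N highest-rated items, each pick balances predicted rating against category novelty. The items, ratings, categories, and the lambda weight are invented for the example; this is a generic sketch of the trade-off, not the paper's optimization method.

```python
# Candidate items: (name, predicted rating, category). Values are made up.
candidates = [
    ("film_a", 4.9, "drama"),
    ("film_b", 4.8, "drama"),
    ("film_c", 4.5, "comedy"),
    ("film_d", 4.4, "sci-fi"),
]

def topn_diverse(items, n, lam=0.5):
    """Greedy top-N selection trading rating (accuracy) for category novelty
    (diversity); lam=0 reduces to a pure accuracy ranking."""
    chosen = []
    pool = list(items)
    while pool and len(chosen) < n:
        seen = {cat for _, _, cat in chosen}
        def value(item):
            _, rating, cat = item
            novelty = 0.0 if cat in seen else 1.0
            return (1 - lam) * rating + lam * novelty
        best = max(pool, key=value)
        chosen.append(best)
        pool.remove(best)
    return [name for name, _, _ in chosen]

print(topn_diverse(candidates, 3))  # trades film_b for the novel film_c/film_d
```

With lam=0.5 the second drama is displaced by slightly lower-rated items from unseen categories, which is exactly the long-tail effect the abstract describes.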
Clustering is the unsupervised classification of data items into homogeneous groups called clusters. Data linkage is the task of identifying different data items that refer to the same entity across different data sources. De-duplicating one data set or linking several data sets are important tasks in the data preparation steps of many data mining processes. Data linkage is traditionally performed across tables to cluster the data, and the traditional method takes a long time to cluster the data from the data sets. The proposed technique allows many such operators to be active in parallel. TWAC is optimized to produce initial results quickly and can hide irregular delays in data arrival by reactively scheduling background processing. The main aim of this paper is to optimize the execution of a query clustering operation: depending on the given query, only the required columns are linked and clustered from the weather data sets, and finally the selected data are clustered and displayed together with the clustering time. Compared to the traditional method, the TWA clustering method therefore clusters the data from the data set more rapidly. Eventually, the execution details of the comparison are stored in a log file (i.e., a txt file), which is used for viewing the data in a SAP Crystal Reports report. TWACA is thus an effective solution for providing fast query responses to users, even in the presence of slow and bursty remote sources.
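The column-wise clustering with timed execution described above can be sketched as follows. The simple one-dimensional gap-based clustering, the temperature readings, and the log-line format are placeholders for illustration; TWAC's actual operator scheduling and parallelism are not modelled here.

```python
import time

# Illustrative temperature column selected from a weather data set by a query.
readings = [21.0, 21.4, 35.2, 20.8, 34.9, 35.5]

def cluster_column(values, gap=5.0):
    """Group sorted values into clusters, splitting wherever consecutive
    values differ by more than `gap`."""
    clusters = []
    for v in sorted(values):
        if clusters and v - clusters[-1][-1] <= gap:
            clusters[-1].append(v)
        else:
            clusters.append([v])
    return clusters

# Time the clustering of the selected column, as the paper reports.
start = time.perf_counter()
clusters = cluster_column(readings)
elapsed = time.perf_counter() - start

# This line would be appended to the .txt log for the report viewer.
log_line = f"clusters={len(clusters)} time={elapsed:.6f}s"
print(log_line)
```

Clustering only the column the query actually needs, rather than linking whole tables first, is what gives the fast initial responses the abstract claims.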