AES-Based Encoding and Decoding Images using MATLAB
A Novel Technique of Sign Language Recognition System using Machine Learning for Differently Abled Person
Implementation of Machine Learning Techniques for Depression in Text Messages: A Survey
A Study of Ransomware Attacks on Windows Platform
Techniques of Migration in Live Virtual Machine and its Challenges
Efficient Agent Based Priority Scheduling and Load Balancing Using Fuzzy Logic in Grid Computing
A Survey of Various Task Scheduling Algorithms in Cloud Computing
A Viable Solution to Prevent SQL Injection Attack Using SQL Injection
A Computational Intelligence Technique for Effective Medical Diagnosis Using Decision Tree Algorithm
Integrated Atlas Based Localisation Features in Lungs Images
A huge amount of time is spent manually vetting processed results to resolve errors in the output generated when computerizing students' academic results at higher institutions in Nigeria. This applies not only to salient errors that can be discovered during the verification of results, but also to inconspicuous errors that pass through computed results unnoticed. Such errors can be attributed to flaws in the technical design of the database and to non-compliance of result processing with the institution's rules and study requirements, especially when such rules and requirements are violated. Currently, such errors are in most cases resolved by manual vetting after processing. Several technical issues must be resolved before student results can be computerized reliably. This study provides design solutions that guarantee the correctness and responsiveness of result outputs without manual intervention in computerizing students' academic results. The results show that such design approaches save the time spent on manual vetting of computerized results and ensure the reliability of the processing system. Several technical issues are identified and corrected during the development of the system, in addition to the computation of semester and cumulative GPAs.
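The semester and cumulative GPA computations mentioned in the abstract can be sketched as follows. This is a minimal illustration, assuming a credit-unit-weighted average; the 5-point grade scale and the sample course records are assumptions for demonstration, not the institution's actual grading rules.

```python
# Hedged sketch: semester and cumulative GPA as credit-weighted averages.
# The 5-point grade scale below is an illustrative assumption.

def semester_gpa(courses):
    """courses: list of (credit_units, grade_point) tuples."""
    total_units = sum(units for units, _ in courses)
    if total_units == 0:
        return 0.0
    weighted = sum(units * gp for units, gp in courses)
    return round(weighted / total_units, 2)

def cumulative_gpa(semesters):
    """semesters: list of course lists; the CGPA pools all courses taken."""
    all_courses = [course for sem in semesters for course in sem]
    return semester_gpa(all_courses)

first = [(3, 5.0), (2, 4.0), (3, 3.0)]   # (credit units, grade point)
second = [(3, 4.0), (3, 5.0)]
print(semester_gpa(first))               # 4.0
print(cumulative_gpa([first, second]))   # 4.21
```

Pooling all courses for the CGPA (rather than averaging the per-semester GPAs) keeps heavier-credit courses correctly weighted across semesters.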
Disk arrays, which use multiple disks in parallel, were introduced in the 1980s to improve I/O performance and reliability; today they are deployed as products by most companies. In this paper, we provide a complete overview of disk arrays and propose a framework within which current and future work on RAID (Redundant Arrays of Inexpensive Disks) can be organized. Two architectural techniques are used in disk arrays: striping across multiple disks to improve performance, and redundancy to improve reliability. The paper describes the seven disk array architectures known as RAID levels 0-6 and compares their cost, performance, and reliability. Finally, we discuss advanced topics, such as improving performance by refining the RAID levels and maintaining consistency through algorithm design.
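The redundancy technique the abstract describes can be illustrated with RAID-5-style XOR parity. This is a toy sketch, not a storage driver: a parity block is the bytewise XOR of the data blocks in a stripe, and any single lost block can be rebuilt by XOR-ing the survivors with the parity.

```python
# Hedged sketch of RAID-5-style parity: parity = XOR of all data blocks,
# so any one missing block equals the XOR of the remaining blocks + parity.

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]    # one stripe across three data disks
parity = xor_blocks(data)             # stored on the parity disk

# Simulate losing disk 1 and rebuilding its block from the survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])             # True
```

The same identity (A ^ B = C implies B = A ^ C) is why a RAID 5 array survives exactly one disk failure per stripe but not two.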
Poor governance in developing countries can raise the problem of deciding who is best to vote for in elections. This is compounded by political, social, economic and religious segregation, cultural bias and corruption. In the quest for good governance and unbiased elections, a candidate's credibility needs to be considered. This paper therefore proposes a model for predicting credible candidates contesting in an electoral process; its rationale is to assess a candidate's credibility and potential to win elections. Data were extracted from the opinions of elite Nigerians on Twitter and used to train the proposed model and to determine a reliable output from opinions on the personal credibility criteria of political leaders in Nigeria. In the proposed system, six (6) political candidates are sampled and the most credible candidate is selected. The collected data are subjected to the Adaptive Neuro-Fuzzy Inference System (ANFIS) technique for training, and the Gaussian Membership Function (GMF) technique is adopted to calculate the membership grade for each input parameter, implemented in MATLAB R2018a. The results obtained are an Average Testing Error (AVE) of 0.17 and a prediction accuracy of 92.85%. A comparative analysis shows that the proposed system makes a significant contribution to the prediction of electoral outcomes.
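The Gaussian membership function used to grade each input parameter can be sketched as below. The standard form is mu(x) = exp(-(x - c)^2 / (2 * sigma^2)); the centre c and width sigma here are illustrative values, not the parameters fitted by the paper's ANFIS model.

```python
import math

# Hedged sketch of the Gaussian membership function used in ANFIS-style
# fuzzy systems: mu(x) = exp(-(x - c)^2 / (2 * sigma^2)).
# c (centre) and sigma (width) below are illustrative assumptions.

def gaussian_mf(x, c, sigma):
    """Membership grade of x in a Gaussian fuzzy set centred at c."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

print(gaussian_mf(5.0, 5.0, 1.5))            # 1.0 at the centre
print(round(gaussian_mf(6.5, 5.0, 1.5), 4))  # one sigma away: exp(-0.5) = 0.6065
```

The grade is 1 at the centre and decays smoothly toward 0, which is what makes the Gaussian form differentiable and convenient for ANFIS training.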
Over the years, market expansion in any sector has created a huge customer base for service providers, and with it a wide variety of expectations. When these expectations are not met, dissatisfaction follows and ultimately leads to churn. In competitive markets, customer churn is therefore a serious problem for any company, causing a huge loss of revenue. Our work contributes a model that can predict customers likely to churn. We used machine learning algorithms such as SVM, Random Forest and Linear Regression to predict churn, compared the algorithms, and looked for the method that would provide better accuracy. However, these algorithms did not provide the expected accuracy, so Artificial Neural Networks were applied to the dataset, which gave us excellent accuracy.
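The model-comparison step described above can be sketched as a simple accuracy harness. The label vectors below are toy data and the per-model predictions are invented for illustration; in practice they would come from trained SVM, Random Forest and neural-network classifiers.

```python
# Hedged sketch: comparing churn classifiers by accuracy on held-out labels.
# All labels and predictions below are toy values (1 = churned, 0 = retained).

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
predictions = {
    "svm":           [1, 0, 0, 1, 0, 1, 1, 0],
    "random_forest": [1, 0, 1, 1, 0, 0, 0, 0],
    "neural_net":    [1, 0, 1, 1, 0, 0, 1, 0],
}

scores = {name: accuracy(y_true, pred) for name, pred in predictions.items()}
best = max(scores, key=scores.get)
print(best, scores[best])   # neural_net 1.0
```

Raw accuracy can mislead when churners are rare, so a real evaluation would also weigh precision and recall on the churn class.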
With the advancement of networking applications, the need for security measures to counter malicious activity in the network has increased. Network intrusion detection has evolved as a significant security system, enabling the detection of unauthorized access in network traffic. Through a network intrusion detection system, a warning message is raised so that the necessary action can be taken to avoid malicious attacks. However, network intrusion detection still needs improvement, since advances in technology have increased the complexity facing detection systems, making current systems less effective. An Intrusion Detection System (IDS) usually operates on a trained network traffic pattern: if there is any variation from that pattern, an intrusion is detected, and an IDS therefore offers a way to counter network attacks. Machine Learning (ML) algorithms play a key role in all sectors and domains. In this paper, we investigate several supervised machine learning algorithms, namely Naive Bayes, Random Forest, SVM and XGBoost, and evaluate the performance of each in terms of accuracy. This study helps in finding a suitable algorithm to identify attacks with greater accuracy. We use the standard intrusion dataset NSL-KDD from the Canadian Institute for Cybersecurity.
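Evaluating the classifiers "concerning accuracy", as the abstract puts it, reduces to confusion-matrix arithmetic. The sketch below uses toy labels (attack = 1, normal = 0), not NSL-KDD records, and also computes precision and recall, since accuracy alone can hide missed attacks on imbalanced traffic.

```python
# Hedged sketch: confusion-matrix metrics for an IDS-style binary classifier.
# Labels are toy values (attack = 1, normal = 0), not NSL-KDD records.

def confusion(y_true, y_pred):
    """Return (true positives, true negatives, false positives, false negatives)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]   # one missed attack, one false alarm

tp, tn, fp, fn = confusion(y_true, y_pred)
acc       = (tp + tn) / len(y_true)   # overall correctness
precision = tp / (tp + fp)            # how many alarms were real attacks
recall    = tp / (tp + fn)            # how many attacks were caught
print(acc, precision, recall)         # 0.75 0.75 0.75
```

For an IDS, recall on the attack class is usually the critical number: a false alarm wastes analyst time, but a false negative is a successful intrusion.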