IoT Assistive Technology for People with Disabilities
Soulease: A Mind-Refreshing Application for Mental Well-Being
AI-Powered Weather System with Disaster Prediction
AI-Driven Animal Farming and Livestock Management System
Advances in AI for Automatic Sign Language Recognition: A Comparative Study of Machine Learning Approaches
Design and Evaluation of Parallel Processing Techniques for 3D Liver Segmentation and Volume Rendering
Ensuring Software Quality in Engineering Environments
New 3D Face Matching Technique for an Automatic 3D Model-Based Face Recognition System
Algorithmic Cost Modeling: Statistical Software Engineering Approach
Prevention of DDoS and SQL Injection Attacks by Prepared Statements and IP Blocking
The MDD-FG System is informatics software developed to enlighten the public about the use of food in managing mineral deficiency diseases. The mineral content of Malaysian foods is correlated with standard dietary benchmarks. The system is built on a knowledge database and the rule-based data-mining principles of a simple expert system. Its software framework was developed in the Microsoft Windows 8.1 environment with Visual Studio 2013. The minimum hardware requirements are an Intel® Core™ processor running at 0.5–1.0 GHz, 1 GB of RAM, and 500 MB of HDD space. The database was designed in Microsoft SQL Server. The primary data come from ICP-MS elemental analysis of food contents; secondary data on minerals and diseases were obtained from the literature, and dietary allowances from Malaysian and WHO dietary benchmarks. The user interface launches a home page that presents the MDD-FG in text and graphic illustration, with 'Welcome' displayed in the title bar. The menu bar offers three commands: “File”, “View”, and “About”. “View” contains three paths: “Food Dishes”, “Mineral Elements”, and “Deficiency Diseases”. The six “View” sub-path interfaces are Foods/Diseases, Foods/Elements, Diseases/Foods, Diseases/Elements, Elements/Foods, and Elements/Diseases. White-box and black-box testing showed normal behavior of the internal features and system units. An end-user response survey judged the software user friendly, acceptable, and efficient.
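As an illustration of the rule-based lookup the abstract describes, the sketch below shows how the Foods/Diseases and Diseases/Foods view paths could be resolved from a small knowledge base. The food, mineral, and disease entries are hypothetical placeholders, not the system's actual data.

```python
# Minimal sketch of a rule-based food/mineral/disease lookup. All entries
# below are hypothetical illustrations, not the MDD-FG System's data.

# Each food maps to the mineral elements it supplies (as an ICP-MS analysis
# would report), and each deficiency disease maps to the minerals involved.
FOOD_MINERALS = {
    "nasi lemak": {"iron", "magnesium"},
    "ikan bilis": {"calcium", "iron"},
}

DISEASE_MINERALS = {
    "anaemia": {"iron"},
    "osteoporosis": {"calcium"},
}

def foods_for_disease(disease):
    """Foods whose mineral content addresses a deficiency disease
    (the Diseases/Foods view path)."""
    needed = DISEASE_MINERALS.get(disease, set())
    return [food for food, minerals in FOOD_MINERALS.items() if needed & minerals]

def diseases_for_food(food):
    """Deficiency diseases a food may help manage (the Foods/Diseases view path)."""
    supplied = FOOD_MINERALS.get(food, set())
    return [d for d, needed in DISEASE_MINERALS.items() if needed & supplied]

if __name__ == "__main__":
    print(foods_for_disease("anaemia"))    # ['nasi lemak', 'ikan bilis']
    print(diseases_for_food("ikan bilis")) # ['anaemia', 'osteoporosis']
```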
At present, the interview is still considered one of the most pragmatic approaches to gathering software requirements from the different stakeholders in a software project. Despite unrelenting research efforts, requirements gathered with this method still suffer anomalies such as inconsistency and incompleteness. The problem is partly due to communication gaps between requirements engineers (REs) and project stakeholders, and partly due to REs directing some questions to the wrong persons. This paper proposes a framework that mirrors the Zachman Enterprise Framework to systematically classify requirement interview questions and assign the different question categories to appropriate persons in a disciplined way. A working software project is used as an example to illustrate the framework.
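To make the classification idea concrete, the sketch below routes interview questions to stakeholders by the six Zachman interrogatives. The role assignments and routing rule are assumptions for illustration, not the paper's actual mapping.

```python
# Illustrative only: route requirement interview questions to stakeholders
# using the six Zachman interrogatives. Roles are hypothetical examples.

ZACHMAN_ROLES = {
    "what":  "Business owner (data)",
    "how":   "Process analyst (function)",
    "where": "Infrastructure lead (location)",
    "who":   "End-user representative (people)",
    "when":  "Project scheduler (time)",
    "why":   "Executive sponsor (motivation)",
}

def route_question(question):
    """Assign a question to a stakeholder by its leading interrogative."""
    first_word = question.strip().lower().split()[0]
    return ZACHMAN_ROLES.get(first_word, "Requirements engineer (triage)")

print(route_question("What data does the invoicing module store?"))
print(route_question("Why must migration finish before audit season?"))
```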
Quality software is essential. Quality means meeting many requirements, such as keeping the GUI simple to use and minimizing faults and failures, and sustained effort is needed to bring quality up to a reasonable standard. Testing is one of the most significant parts of quality assurance, especially during the development phase: as development nears its end, fixing errors becomes more difficult, and indeed the errors become harder to find. Each part should therefore be checked during development so that errors are found and fixed before they affect subsequent stages. In this paper, we discuss the features of different automated software testing tools. In brief, we give a comprehensive account of the feature set, efficiency, ease of use, and usability of each tool.
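As a concrete illustration of the kind of test case such tools automate, the sketch below uses Python's standard unittest module; the function under test is hypothetical.

```python
# A minimal automated test: catch errors early in development, before they
# propagate to later stages. The function under test is a hypothetical example.
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent; reject invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```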
Due to advancements in information technology, the Internet of Things (IoT) is emerging as the next big move in our daily lives. The IoT is rapidly transforming into a highly heterogeneous ecosystem that provides interoperability among different types of devices and communication technologies. The proposed system recovers incomplete sensed data in such IoT deployments. To recognize and identify all data automatically, the IoT requires new solutions for integrating diverse physical objects into a global ecosystem. IoT applications collect huge amounts of data from connected sensors, and missing readings from one sensor can be recovered by utilizing data from related sensors. For this recovery, an algorithm based on MapR Edge is introduced: it performs computation close to the sensors and can send data back to the cloud for faster and more meaningful processing. In this project only three nodes are used; computations are performed automatically at each sensor, and each sensor is connected independently to the cloud. Whenever a reading at a node crosses its threshold value, that reading is sent to the cloud server. Missing values can be estimated from neighboring nodes.
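The recovery idea can be sketched as follows: a node's missing reading is estimated from related neighboring nodes, and readings that cross a threshold are forwarded to the cloud. The three-node layout, mean estimator, and threshold value are illustrative assumptions; the MapR Edge pipeline itself is not reproduced here.

```python
# Sketch of missing-data recovery from neighboring sensors. The layout,
# readings, and threshold below are hypothetical illustrations.

readings = {
    "node_a": 21.4,
    "node_b": None,   # missing sample
    "node_c": 22.0,
}
neighbors = {"node_b": ["node_a", "node_c"]}
THRESHOLD = 30.0  # assumed value beyond which a reading is pushed to the cloud

def recover(node):
    """Estimate a missing reading as the mean of its neighbors' readings."""
    vals = [readings[n] for n in neighbors.get(node, []) if readings[n] is not None]
    return sum(vals) / len(vals) if vals else None

for node in readings:
    if readings[node] is None:
        readings[node] = recover(node)
    if readings[node] is not None and readings[node] > THRESHOLD:
        print(f"{node}: {readings[node]} crossed the threshold, send to cloud")

print(readings)  # node_b estimated as ~21.7 from node_a and node_c
```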
This paper aims to reduce the manual work involved in the performance evaluation and analysis of students by automating the process, from retrieval of results through pre-processing and segregation to storage in a database. The authors also aim to analyze large amounts of data effectively and to facilitate easy retrieval of various types of information related to students' performance, achieving this through Python, crawlers, and other database tools. Further, scope is provided for establishing a data warehouse in which data-mining techniques can be applied to perform various kinds of analyses, building a knowledge base that can then be used for prediction.
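A minimal sketch of such a pipeline, from retrieval to storage, might look as follows. The endpoint URL, response format, and table schema are hypothetical; the authors' actual crawler is not shown.

```python
# Sketch of a results pipeline: fetch, pre-process, and store in a database.
# The URL, JSON shape, and schema are assumptions for illustration.
import sqlite3

import requests

RESULTS_URL = "https://example.edu/results"  # placeholder endpoint

def fetch_results(roll_numbers):
    """Retrieve raw result records for a list of roll numbers."""
    for roll in roll_numbers:
        resp = requests.get(RESULTS_URL, params={"roll": roll}, timeout=10)
        resp.raise_for_status()
        yield resp.json()  # assumed shape: {"roll": ..., "subject": ..., "marks": ...}

def store_results(records, db_path="results.db"):
    """Segregate and persist cleaned records for later analysis."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS results
                    (roll TEXT, subject TEXT, marks INTEGER)""")
    conn.executemany(
        "INSERT INTO results VALUES (?, ?, ?)",
        [(r["roll"], r["subject"], int(r["marks"])) for r in records],
    )
    conn.commit()
    conn.close()
```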
With an increasing demand to structure data for efficient access in large data warehouses, hash tables serve as an efficient way of implementing dictionaries by mapping keys to values. However, such structures can become computationally expensive due to collisions in the hash table. Under reasonable assumptions, a search in a hash table takes an expected time of O(1) (Aspnes, 2015). Although hashing performs extremely well in practice, in the worst case a search can take as long as in a linked list, i.e., O(n) (Sing & Garg, 2009). A collision occurs when two keys hash to the same slot. The purpose of this article is to provide a comparative study of the different hashing techniques and then implement a suitable one for a banking record system scenario.
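As one of the collision-resolution techniques such a comparison would cover, the sketch below implements separate chaining for the banking scenario; the field names and table size are illustrative assumptions, not the article's chosen design.

```python
# A minimal chained hash table for a banking-record scenario. Colliding keys
# share a bucket and extend its chain, so lookups are expected O(1) but
# degrade toward O(n) if many keys land in one bucket.

class AccountTable:
    def __init__(self, size=101):
        self.buckets = [[] for _ in range(size)]  # each bucket is a chain

    def _index(self, account_no):
        return hash(account_no) % len(self.buckets)

    def put(self, account_no, record):
        bucket = self.buckets[self._index(account_no)]
        for i, (key, _) in enumerate(bucket):
            if key == account_no:            # update on duplicate key
                bucket[i] = (account_no, record)
                return
        bucket.append((account_no, record))  # collision extends the chain

    def get(self, account_no):
        for key, record in self.buckets[self._index(account_no)]:
            if key == account_no:
                return record
        return None

table = AccountTable()
table.put("ACC-1001", {"name": "A. Rahman", "balance": 2500.00})
print(table.get("ACC-1001"))
```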