IoT Assistive Technology for People with Disabilities
Soulease: A Mind-Refreshing Application for Mental Well-Being
AI-Powered Weather System with Disaster Prediction
AI-Driven Animal Farming and Livestock Management System
Advances in AI for Automatic Sign Language Recognition: A Comparative Study of Machine Learning Approaches
Design and Evaluation of Parallel Processing Techniques for 3D Liver Segmentation and Volume Rendering
Ensuring Software Quality in Engineering Environments
New 3D Face Matching Technique for an Automatic 3D Model Based Face Recognition System
Algorithmic Cost Modeling: Statistical Software Engineering Approach
Prevention of DDoS and SQL Injection Attacks by Prepared Statements and IP Blocking
Machine Learning has gained momentum in virtually every field of research and has only recently become a dependable tool in the medical domain. Automated learning is applied empirically in tasks such as medical decision support, medical imaging, protein-protein interaction, extraction of medical knowledge, and overall patient management. Machine Learning (ML) is envisioned as a tool by which computer-based systems can be integrated into the healthcare field to deliver better, well-organized medical care. This paper describes an ML-based methodology for building an application that is capable of identifying and disseminating healthcare information. It extracts sentences from published medical papers that mention diseases and treatments, and identifies the semantic relations that exist between diseases and treatments. The paper proposes a new approach to information retrieval: a clustered similarity approach is used to overcome the drawbacks of earlier approaches. Experimental results show that the proposed work performs considerably better than existing techniques.
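The abstract does not specify the features or clustering scheme used, so the following is only a minimal sketch of what a clustered similarity retrieval step could look like: sentences are vectorized with TF-IDF, grouped with k-means, and at query time only the query's nearest cluster is searched. The sentences, query, and parameter values are illustrative, not the paper's.

```python
# Minimal sketch of a clustered-similarity retrieval step (illustrative only;
# the paper's exact features and clustering scheme are not specified here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical sentences extracted from medical papers.
sentences = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "Aspirin reduces the risk of myocardial infarction.",
    "Insulin therapy controls blood glucose in diabetic patients.",
    "Statins lower cholesterol and prevent heart disease.",
]

# Represent sentences as TF-IDF vectors.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)

# Group similar sentences into clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# At query time, search only within the query's nearest cluster.
query = "Which drugs treat diabetes?"
q = vectorizer.transform([query])
cluster = kmeans.predict(q)[0]
in_cluster = [i for i, lbl in enumerate(labels) if lbl == cluster]
scores = cosine_similarity(q, X[in_cluster]).ravel()
best = in_cluster[scores.argmax()]
print(f"Best match: {sentences[best]!r} (score={scores.max():.2f})")
```

Restricting the similarity computation to one cluster is what makes this approach cheaper than scoring the query against every sentence in the corpus.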
Grid computing relies on distributed heterogeneous resources to solve complex computing problems. Grids are mainly classified into computing grids and data grids, and scheduling jobs in a computing grid is a major challenge. For efficient, incentive-based use of grid resources, an optimal mechanism is needed that can distribute jobs to the best resources and balance the workload among them. In the real world, ants have a unique ability to cooperate in finding an optimal path to food sources, and ant algorithms simulate this behavior. In this paper, the authors propose an optimized ant algorithm for balanced job scheduling in a computing grid environment. The primary goal of the approach is to schedule jobs and balance the entire system load so that no resource is overloaded or underloaded under any circumstances. To achieve this goal, the proposed Novel Balanced Ant Colony Optimization (NBACO) continuously calculates and updates local and global pheromone values, which keeps every resource normally loaded. The secondary aim is to considerably improve grid performance in terms of increased throughput and reduced makespan and resubmission time. To this end, NBACO always finds the most reliable available resource in the grid, i.e., the one with the lowest tendency to fail, and assigns the job to it. According to the experimental results, NBACO outperforms other job scheduling and load balancing algorithms.
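NBACO's exact local and global pheromone update rules are not given in the abstract, so the sketch below shows only the generic ant colony optimization pattern applied to job-to-resource assignment: each ant builds a schedule biased by pheromone and current load, a local update is applied per assignment, and a global update reinforces the best schedule found. Job lengths, speeds, and constants are illustrative.

```python
# Minimal sketch of pheromone-guided job scheduling in the spirit of ACO.
# The update formulas are the generic ACO pattern, not NBACO's exact rules.
import random

jobs = [4.0, 2.0, 6.0, 3.0, 5.0]          # job lengths (arbitrary units)
speeds = [1.0, 2.0, 1.5]                  # resource processing speeds
tau = [[1.0] * len(speeds) for _ in jobs] # pheromone on (job, resource) pairs
rho, Q = 0.1, 1.0                         # evaporation rate, deposit constant

def build_schedule():
    """One ant assigns each job to a resource, biased by pheromone
    and by the resource's current load (the load-balancing heuristic)."""
    load = [0.0] * len(speeds)
    assign = []
    for j, length in enumerate(jobs):
        # Desirability: pheromone * heuristic (favor fast, lightly loaded resources).
        weights = [tau[j][r] * speeds[r] / (1.0 + load[r]) for r in range(len(speeds))]
        r = random.choices(range(len(speeds)), weights=weights)[0]
        assign.append(r)
        load[r] += length / speeds[r]
        tau[j][r] = (1 - rho) * tau[j][r] + rho   # local pheromone update
    return assign, max(load)                      # makespan = max resource load

best, best_makespan = None, float("inf")
for _ in range(100):                              # search budget: 100 ants
    assign, makespan = build_schedule()
    if makespan < best_makespan:
        best, best_makespan = assign, makespan
    # Global update: reinforce the best schedule found so far.
    for j, r in enumerate(best):
        tau[j][r] = (1 - rho) * tau[j][r] + rho * Q / best_makespan

print("best assignment:", best, "makespan:", round(best_makespan, 2))
```

Penalizing already-loaded resources in the desirability term is one simple way to express the paper's load-balancing goal; NBACO's reliability criterion (preferring resources with a lower tendency to fail) would add a failure-rate factor to the same term.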
In recent years, researchers in many areas have regularly encountered limitations imposed by large data sets. Big data refers to high-volume, high-velocity, and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making, with data volumes in the range of Exabytes (10^18 bytes) and beyond. Such volumes exceed the capacity of current online storage and processing systems. Data, information, and knowledge are being created and collected at a rate that is rapidly approaching the Exabyte range; moreover, their creation and aggregation are accelerating and will approach the Zettabyte range within a few years. Data sets have grown in size partly because they are increasingly gathered from ubiquitous information-sensing mobile devices, aerial sensing technologies, software logs, cameras, microphones, Radio-Frequency Identification (RFID) readers, and wireless sensor networks. Big data usually comprises data sets whose size is beyond the ability of commonly used software tools to capture, curate, manage, and process within a tolerable elapsed time. Big data sizes are a constantly moving target, as of 2012 ranging from a few dozen Terabytes to many Petabytes in a single data set. Big data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead "massively parallel software running on tens, hundreds, or even thousands of servers". This paper analyzes the issues and challenges of recent research activity around big data concepts.
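To make the "massively parallel software" point concrete, here is a minimal sketch of the split/map/reduce pattern that such systems are built on, scaled down to local worker processes standing in for cluster nodes. The data and chunk sizes are purely illustrative.

```python
# Minimal sketch of the split/map/reduce pattern behind massively parallel
# big data processing; local processes stand in for cluster nodes.
from collections import Counter
from multiprocessing import Pool

def map_count(chunk):
    """Map step: count words in one chunk of the data set."""
    return Counter(chunk.split())

if __name__ == "__main__":
    # Illustrative stand-in for a data set too large for one machine.
    data = ["big data big insight"] * 1000
    chunks = [" ".join(data[i:i + 250]) for i in range(0, len(data), 250)]
    with Pool(4) as pool:                       # 4 workers stand in for 4 nodes
        partials = pool.map(map_count, chunks)  # map phase, in parallel
    total = sum(partials, Counter())            # reduce phase: merge partial counts
    print(total.most_common(3))
```

The same structure, with chunks distributed across machines rather than processes, is what lets frameworks in this space scale to data volumes no single node can hold.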
The features of Object-Oriented Programming (abstraction, encapsulation, and visibility) prevent direct access to some modules of the source code, which makes automated test data generation a challenging task. Search-Based Software Testing (SBST) has been applied to solve this problem. Previously, a random search approach was used to generate test suites, achieving 70% code coverage in under 10 seconds. To address the same problem, new search algorithms are used that generate test suites with higher code coverage than earlier approaches in less search time. The proposed approach first describes how to structure the test data generation problem for unit testing. Based on static analysis, it considers methods or constructors that change object state in ways that may help reach the test target. It then introduces a generator of class instances that uses two strategies, seeding and diversification, to increase the likelihood of reaching the test target, which may produce test suites with higher code coverage in less search time.
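The abstract names seeding and diversification but not their implementation, so the following is only a sketch of the general idea: inputs are mostly drawn at random (diversification), but constants assumed to be mined from the code under test are occasionally reused (seeding) to raise the chance of hitting hard branches. The unit under test and the seed pool are hypothetical.

```python
# Minimal sketch of search-based test data generation with seeding and
# diversification; the unit under test and seed constants are illustrative.
import random

def classify(x):                    # hypothetical unit under test
    if x == 1042:                   # hard-to-hit branch
        return "magic"
    if x > 0:
        return "positive"
    return "non-positive"

SEEDS = [1042, 0, -1]               # constants assumed mined by static analysis

def generate_input():
    # Seeding: occasionally reuse known constants (and near neighbors);
    # diversification: otherwise draw widely spread random values.
    if random.random() < 0.3:
        return random.choice(SEEDS) + random.choice([-1, 0, 1])
    return random.randint(-10_000, 10_000)

covered = set()
suite = []
for _ in range(500):                # search budget
    x = generate_input()
    branch = classify(x)
    if branch not in covered:       # keep only inputs that add coverage
        covered.add(branch)
        suite.append(x)

print("test suite:", suite, "-> branches covered:", covered)
```

Pure random search would almost never hit the `x == 1042` branch, which is exactly the situation where seeding from source constants pays off.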
Internet usage has grown along with advances in technology. People use the internet for business, education, online marketing, social communication, and more, and have come to depend on it for effective day-to-day operations. At the same time, the content on the web has grown due to various factors: any user from any place can easily upload any type of file, and virtually all content today is available as digital data. Browsing for the correct web page within this huge repertoire is a genuinely challenging task, and retrieving effective, correct content is not easy; a number of research works have therefore been carried out in the field of web extraction. This paper reviews some of these web extraction techniques and methods.
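As a minimal illustration of one common web extraction technique, the sketch below strips obvious boilerplate tags from an HTML page and keeps paragraph text as the candidate main content. It uses the third-party beautifulsoup4 library; real systems reviewed in such surveys rely on far richer cues (DOM structure, link density, visual layout).

```python
# Minimal sketch of boilerplate removal for web content extraction.
from bs4 import BeautifulSoup  # third-party: beautifulsoup4

html = """
<html><body>
  <nav>Home | About | Login</nav>
  <p>Web extraction recovers the main content of a page.</p>
  <div class="ad">Buy now!</div>
  <p>It discards navigation, ads, and other boilerplate.</p>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
for tag in soup(["nav", "script", "style"]):  # drop obvious boilerplate tags
    tag.decompose()

# Keep paragraphs as the candidate main content.
paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
print("\n".join(paragraphs))
```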