IoT Assistive Technology for People with Disabilities
Soulease: A Mind-Refreshing Application for Mental Well-Being
AI-Powered Weather System with Disaster Prediction
AI-Driven Animal Farming and Livestock Management System
Advances in AI for Automatic Sign Language Recognition: A Comparative Study of Machine Learning Approaches
Design and Evaluation of Parallel Processing Techniques for 3D Liver Segmentation and Volume Rendering
Ensuring Software Quality in Engineering Environments
New 3D Face Matching Technique for an Automatic 3D Model Based Face Recognition System
Algorithmic Cost Modeling: Statistical Software Engineering Approach
Prevention of DDoS and SQL Injection Attacks by Prepared Statements and IP Blocking
Reliability is one of the most important quality measures of a software product. Testing is an important phase in the software development life cycle whose intention is to find the bugs present in the software product. A mathematical model that describes the testing process within the software development cycle is termed a software reliability growth model (SRGM). Over the past few decades, many software reliability growth models have been proposed. Estimating the accurate release time of software is an important and challenging issue for a software development company: if the product is released too early, it contains more errors and is less reliable, whereas a late release increases the development cost. It is observed that more than half of the total development cost is concentrated in the maintenance phase. Estimating the reliability and the development cost of a software product with a software reliability growth model during the maintenance phase is therefore an important issue. A warranty is an agreement between the customer and the vendor of the software to provide extra protection for the product. Several papers have concentrated on the optimal release time by considering the warranty cost. All earlier NHPP warranty cost models assumed a perfect-debugging software reliability growth model when estimating the software maintenance cost. However, the software product is influenced by several factors, such as the environment, resources, and the nature of the faults, during the operational phase. In such circumstances it is not appropriate to assume a perfect-debugging NHPP software reliability growth model in the operational phase. In this paper we propose an imperfect-debugging software reliability growth model that combines cost and reliability over a given warranty period to estimate the optimal release time of a software product.
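To illustrate the general idea of trading off testing cost, warranty-period fix cost, and reliability, the following minimal sketch uses a Yamada-type imperfect-debugging NHPP mean value function and a simple cost structure. All parameter values, cost coefficients, and the particular mean value function are hypothetical and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical values for illustration only; not the model proposed in the paper.
a, b, alpha = 100.0, 0.05, 0.01          # initial fault content, detection rate, fault-introduction rate
c_test, c_fix, c_war = 5.0, 10.0, 50.0   # testing cost per unit time, fix cost in testing, fix cost under warranty
Tw = 30.0                                # warranty period

def m(t):
    """Mean value function of a Yamada-type imperfect-debugging NHPP model."""
    return a * b / (alpha + b) * (np.exp(alpha * t) - np.exp(-b * t))

def reliability(x, T):
    """Probability of no failure in (T, T + x] when the software is released at time T."""
    return np.exp(-(m(T + x) - m(T)))

def expected_cost(T):
    """Testing cost + pre-release fix cost + expected fix cost during the warranty period."""
    return c_test * T + c_fix * m(T) + c_war * (m(T + Tw) - m(T))

res = minimize_scalar(expected_cost, bounds=(1.0, 300.0), method="bounded")
print(f"optimal release time ~ {res.x:.1f}, "
      f"reliability over 10 time units: {reliability(10.0, res.x):.3f}")
```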
With great computing power comes great responsibility. Security is of utmost importance in a grid environment. Since a grid performs large computations, data is assumed to be available at every node in the processing cycle, which increases the risk of data manipulation in various forms. We also have to consider what happens to the data when a node fails. An ideal grid has a small convergence time and a low recovery time in the event of a complete grid failure. By convergence, we mean that every processor node has complete information about every other processor node in the grid. Recovery time is the time it takes for the grid to restart from scratch after a major breakdown. To put things in perspective, a good grid-based computing environment has an intelligent grid administrator to monitor user logs and scheduled jobs, and a good grid operating system tailor-made to suit the application of the grid. We propose an analogy between the OSI model and a grid model to elaborate the functions and utilities of a grid. There have been numerous previous comparisons, each instructive in its own right, but an analogy with a network model carries more weight, since physically a grid is nothing but an interconnection, and an interconnection is best defined in relation to a computer network interconnection. We propose a secure grid model for distributed applications in the grid computing environment.
The singular value decomposition (SVD) is an important technique for factorizing a rectangular real or complex matrix. Computationally, however, SVD is very expensive in terms of time and space. Multicore processors can be used for such computationally intensive problems. The Cell Broadband Engine is one such multicore processor, consisting of a traditional PowerPC-based master core intended to run the operating system and eight slave cores built for compute-intensive processing. This work introduces a modification of the serial singular value decomposition algorithm, describes a parallel implementation of the modified algorithm on the Cell BE, and discusses the issues involved. System-level optimization features of the Cell BE have been applied to algorithm-specific operations to achieve substantial improvements. The implementation achieves significant performance, giving about an 8x speedup over the sequential implementation.
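The abstract does not state which SVD variant was modified; as a rough illustration of why SVD parallelizes well, the sketch below implements the classic one-sided Jacobi SVD in NumPy. Within a sweep, the column-pair rotations are largely independent, which is the kind of work that could be farmed out to slave cores. This is an assumption for illustration, not the paper's algorithm.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD: repeatedly orthogonalize column pairs.
    Each pair rotation is an independent unit of work suited to parallel slaves."""
    A = A.astype(float).copy()
    m, n = A.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for i in range(n - 1):
            for j in range(i + 1, n):
                ai, aj = A[:, i], A[:, j]
                alpha, beta, gamma = ai @ ai, aj @ aj, ai @ aj
                if abs(gamma) > tol * np.sqrt(alpha * beta):
                    converged = False
                    zeta = (beta - alpha) / (2.0 * gamma)
                    t = (1.0 if zeta >= 0 else -1.0) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                    c = 1.0 / np.sqrt(1.0 + t * t)
                    s = c * t
                    R = np.array([[c, s], [-s, c]])   # plane rotation for this column pair
                    A[:, [i, j]] = A[:, [i, j]] @ R
                    V[:, [i, j]] = V[:, [i, j]] @ R
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)   # singular values are the final column norms
    U = A / sigma
    return U, sigma, V

A = np.random.rand(6, 4)
U, s, V = one_sided_jacobi_svd(A)
print(np.allclose(U * s @ V.T, A))      # reconstruction check: A = U * Sigma * V^T
```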
Image denoising is a common procedure in digital image processing aimed at removing noise, which may corrupt an image during its acquisition or transmission, while retaining image quality. This procedure is traditionally performed by filtering in the spatial or frequency domain. Recently, many methods have been reported that perform denoising in the Discrete Wavelet Transform (DWT) domain. The transform coefficients within the subbands of a DWT can be locally modeled as i.i.d. (independent, identically distributed) random variables with a Generalized Gaussian distribution. Some denoising algorithms threshold the wavelet coefficients, which have been affected by additive white Gaussian noise, retaining only the large coefficients and setting the rest to zero. However, their performance is not sufficiently effective because they are not spatially adaptive. Other methods evaluate the denoised coefficients with an MMSE (Minimum Mean Square Error) estimator, in terms of the noisy coefficients and the variances of signal and noise. The signal variance is locally estimated by an ML (Maximum Likelihood) or a MAP (Maximum A Posteriori) estimator in small regions of every subband where the variance is assumed practically constant. These methods give effective results, but their spatial adaptivity is not well suited near object edges, where the variance field does not vary smoothly. The optimality of the regions in which the estimators are applied has been examined in some research works. This paper evaluates some of the wavelet domain algorithms with respect to their subjective and objective quality performance and examines some improvements.
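The wavelet thresholding baseline mentioned above can be sketched in a few lines. The example below soft-thresholds DWT detail coefficients with the universal (VisuShrink) threshold, assuming the PyWavelets library; it illustrates the non-adaptive baseline only, not the locally adaptive MMSE/ML/MAP estimators evaluated in the paper, and the wavelet, level count, and test image are arbitrary choices.

```python
import numpy as np
import pywt

def visushrink_denoise(noisy, wavelet="db4", levels=3):
    """Soft-threshold DWT detail coefficients with the universal threshold;
    noise sigma is estimated from the finest diagonal subband via the MAD."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=levels)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745          # robust noise estimate (HH1)
    thresh = sigma * np.sqrt(2.0 * np.log(noisy.size))          # universal threshold
    denoised_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised_coeffs, wavelet)

# Usage with a synthetic image corrupted by additive white Gaussian noise
clean = np.outer(np.hanning(256), np.hanning(256))
noisy = clean + 0.05 * np.random.randn(256, 256)
restored = visushrink_denoise(noisy)
print(np.mean((restored - clean) ** 2) < np.mean((noisy - clean) ** 2))  # MSE should drop
```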
As the transmission of data on the Internet increases, the need to protect connected systems also increases. Most existing network-based signatures are exploit-specific and can be easily evaded. In this paper, we propose generating vulnerability-driven signatures at the network level without any host-level analysis of worm execution or of the vulnerable programs. The implementation considers both temporal and spatial information about network connections, which helps in identifying complex anomalous behaviors. To detect unknown intrusions, a proper knowledge base is formed after preprocessing the packets captured from the network. As a first step, we design a network-based length-based signature generator (LESG) for worms exploiting buffer overflow vulnerabilities. This work focuses on the TCP/IP network protocols.
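The core intuition behind length-based signatures is that buffer overflow exploits must carry an oversized protocol field. The toy sketch below learns a per-field length ceiling from benign traffic and flags messages that exceed it; the field names, slack factor, and messages are hypothetical, and this is only the general idea, not the actual LESG algorithm.

```python
from collections import defaultdict

def learn_length_signatures(benign_msgs, slack=1.5):
    """For each protocol field, record a length ceiling derived from benign traffic."""
    max_len = defaultdict(int)
    for msg in benign_msgs:
        for field, value in msg.items():
            max_len[field] = max(max_len[field], len(value))
    return {field: int(slack * length) for field, length in max_len.items()}

def matches_signature(msg, signatures):
    """Return the fields whose length exceeds the learned ceiling."""
    return [f for f, v in msg.items() if len(v) > signatures.get(f, float("inf"))]

benign = [{"user": "alice", "path": "/index.html"},
          {"user": "bob", "path": "/docs/a.txt"}]
sigs = learn_length_signatures(benign)
suspect = {"user": "x" * 400, "path": "/index.html"}   # oversized field, typical of a buffer overflow payload
print(matches_signature(suspect, sigs))                 # ['user']
```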
Extraction of semantics from text is a vital application covering various fields of artificial intelligence such as natural language processing, knowledge representation, and machine learning. Research is being carried out internationally for languages such as English, German, and Chinese. In India too, research on Indian languages such as Hindi, Tamil, Bengali, and other regional languages is developing rapidly. In this paper, we emphasize the use of the Sanskrit language for semantic extraction. Sanskrit, being an order-free language with a systematic grammar, offers an excellent opportunity for extracting semantics with higher efficiency. Panini, the ancient grammarian, introduced six karakas (cases) to identify the semantic role of a word in a sentence. These karakas are analyzed and applied for semantic extraction from Sanskrit text. Input sentences are first converted into syntactic structures, and these syntactic structures are then used for semantic analysis. The syntactic structures are in Part-of-Speech (POS) tagged form, and various features of this tagged data are analyzed to extract the semantic role of each word in a sentence. Finally, the semantic label of each word of a sentence is stored in frames called case frames, which act as a knowledge representation tool. Thus the input POS-tagged data is mapped to semantically tagged data and case frames are generated. Such systems are useful in building question-answering applications, machine learning, knowledge representation, and information retrieval.
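The mapping from POS/case-tagged input to karaka-labeled case frames can be pictured with a toy sketch. The tag set, the case-to-karaka table, and the example sentence below are illustrative assumptions, not the tagger or rules described in the paper.

```python
# Hypothetical mapping from case tags in the POS-tagged input to Panini's karaka roles.
KARAKA_BY_CASE = {
    "NOM": "karta (agent)",
    "ACC": "karma (object)",
    "INS": "karana (instrument)",
    "DAT": "sampradana (recipient)",
    "ABL": "apadana (source)",
    "LOC": "adhikarana (locus)",
}

def build_case_frame(tagged_sentence):
    """Collect the verb and the karaka role of every case-marked noun
    into a case frame (a simple dict serving as the knowledge representation)."""
    frame = {"verb": None, "roles": {}}
    for word, tag in tagged_sentence:
        if tag == "VERB":
            frame["verb"] = word
        elif tag in KARAKA_BY_CASE:
            frame["roles"][KARAKA_BY_CASE[tag]] = word
    return frame

# "Ramah gramam gacchati" (Rama goes to the village), assumed to be tagged upstream
tagged = [("Ramah", "NOM"), ("gramam", "ACC"), ("gacchati", "VERB")]
print(build_case_frame(tagged))
```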
This paper proposes a new approach using a Markov model for analyzing the reliability of a component-based software system with similar software components and a similar repair facility. The components of the software system under consideration can have two distinct configurations: they can be arranged either in series or in parallel. A third case is also considered, in which the software system is up (good) if k out of n software components are good. For all three cases, a procedure is proposed for calculating the probability of software system availability and the time for the software system to reach the steady state.
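A minimal sketch of the k-out-of-n case is shown below, under the common assumptions of exponential failure and repair times and a single repair facility, so that the number of failed components forms a birth-death Markov chain. Series and parallel fall out as the special cases k = n and k = 1. The rates used are hypothetical, not values from the paper.

```python
from math import prod

def steady_state_availability(n, k, lam, mu):
    """Steady-state availability of a k-out-of-n system of identical components
    (failure rate lam, single repair facility with rate mu), modeled as a
    birth-death Markov chain on the number of failed units."""
    # Unnormalized steady-state probabilities: pi_i proportional to prod_{j<i} (n - j) * lam / mu
    weights = [prod((n - j) * lam / mu for j in range(i)) for i in range(n + 1)]
    total = sum(weights)
    pi = [w / total for w in weights]
    # The system is up while at least k components work, i.e. at most n - k have failed
    return sum(pi[: n - k + 1])

# Illustrative numbers: 5 components, failure rate 0.01, repair rate 0.5
print(f"A(3-out-of-5) = {steady_state_availability(5, 3, lam=0.01, mu=0.5):.4f}")
print(f"A(series)     = {steady_state_availability(5, 5, lam=0.01, mu=0.5):.4f}")   # 5-out-of-5
print(f"A(parallel)   = {steady_state_availability(5, 1, lam=0.01, mu=0.5):.4f}")   # 1-out-of-5
```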
Mobile ad hoc networks play a central role in our everyday communication infrastructure. In mobile ad hoc networks, achieving multicast communication is a challenging task because the topology may change frequently and communication links may be broken due to user mobility. We introduce MANSI (Multicast for Ad hoc Networks with Swarm Intelligence), a protocol that relies on a swarm intelligence based optimization technique to learn and discover efficient multicast connectivity. The proposed protocol can quickly and efficiently establish initial multicast connectivity and/or improve the resulting connectivity via different optimization techniques. Using a simulation approach, we investigate the performance of the proposed algorithm through a comparison with an algorithm previously proposed in the literature. The numerical results demonstrate that our proposed algorithm performs well.
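To give a flavor of swarm intelligence based route discovery, the toy sketch below biases next-hop selection toward links with more pheromone, reinforces links that prove useful, and evaporates all pheromone over time. The neighbor set, reward, and evaporation rate are hypothetical; this illustrates the general ant-colony-style mechanism only, not the actual MANSI protocol.

```python
import random

pheromone = {"B": 1.0, "C": 1.0, "D": 1.0}   # hypothetical neighbors of a forwarding node

def choose_next_hop(pheromone):
    """Pick a neighbor with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    r, acc = random.uniform(0, total), 0.0
    for hop, tau in pheromone.items():
        acc += tau
        if r <= acc:
            return hop
    return hop

def reinforce(pheromone, hop, reward=0.5, evaporation=0.1):
    """Evaporate pheromone on every link, then deposit extra on the link that worked."""
    for h in pheromone:
        pheromone[h] *= (1.0 - evaporation)
    pheromone[hop] += reward

for _ in range(50):                 # exploration rounds
    hop = choose_next_hop(pheromone)
    if hop == "B":                  # pretend B leads toward the multicast core
        reinforce(pheromone, hop)
print(pheromone)                    # B accumulates the most pheromone over time
```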
Low-power design has gained designers' attention over the past decade, and various methods have been proposed in the literature. One approach is to adopt a bus encoding scheme. It has further been observed that the efficiency proofs of proposed bus encoding methods often lack a concrete basis, and the task becomes complex when implementation on a computer system is considered. In this paper, a modeling technique for a bus CODEC is presented. The model is made versatile so that it can be used for any type of bus encoding technique. The major focus is on dynamic power dissipation, which in turn depends on operating frequency, supply voltage, capacitance, and switching activity. Work has been done to find the optimum value of the supply voltage, and strong efforts have been made toward dynamically changing the frequency, but little effort has addressed data profiling. This paper presents a software model to maintain a record of data flow across a computer channel. A processor operating at MHz-range frequency has been taken into consideration, and a database has been created to support the variety of CPUs available. The developed system allows the input to be set in any desired sequence, after which the transition activity is calculated. Care has been taken to make the model versatile, so that it can be used for a variety of bus encoders/decoders; the only restriction is that they must not contain memory elements. This paper includes extensive experimental work on the Gray bus encoding scheme. The system is capable of updating the database gradually as the work proceeds. The work has been further extended to maintain a library of multiple CPUs with different manufacturing scales, supply voltages, and operational frequencies. Results have been compiled using a random dataset.
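A small sketch of the kind of software model described is given below: encode the bus words with Gray code, count bit transitions on the channel, and feed the resulting switching activity into the standard dynamic power expression P = alpha * C * Vdd^2 * f. The dataset, bus width, capacitance, voltage, and frequency are hypothetical values for illustration only.

```python
def to_gray(value):
    """Convert a binary word to its Gray-code equivalent."""
    return value ^ (value >> 1)

def transitions(words):
    """Total number of bus lines that toggle between consecutive words."""
    return sum(bin(a ^ b).count("1") for a, b in zip(words, words[1:]))

def dynamic_power(switching_activity, capacitance, vdd, freq):
    """Classic CMOS dynamic power estimate: alpha * C * Vdd^2 * f."""
    return switching_activity * capacitance * vdd ** 2 * freq

data = [3, 4, 5, 6, 7, 8]                      # sample word sequence on an 8-bit bus
raw_toggles = transitions(data)
gray_toggles = transitions([to_gray(w) for w in data])
print(f"binary toggles: {raw_toggles}, Gray toggles: {gray_toggles}")

# Per-line, per-cycle switching activity feeds the dynamic power formula
bus_width, cycles = 8, len(data) - 1
alpha = gray_toggles / (bus_width * cycles)
print(f"P ~ {dynamic_power(alpha, 5e-12, 1.2, 100e6) * 1e6:.1f} uW per line")
```

For sequential data such as this, the Gray-encoded stream toggles fewer lines per cycle than the plain binary stream, which is the effect the bus CODEC model is meant to quantify for arbitrary encoders.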