IoT Assistive Technology for People with Disabilities
Soulease: A Mind-Refreshing Application for Mental Well-Being
AI-Powered Weather System with Disaster Prediction
AI-Driven Animal Farming and Livestock Management System
Advances in AI for Automatic Sign Language Recognition: A Comparative Study of Machine Learning Approaches
Design and Evaluation of Parallel Processing Techniques for 3D Liver Segmentation and Volume Rendering
Ensuring Software Quality in Engineering Environments
New 3D Face Matching Technique for an Automatic 3D Model Based Face Recognition System
Algorithmic Cost Modeling: Statistical Software Engineering Approach
Prevention of DDoS and SQL Injection Attacks by Prepared Statements and IP Blocking
Various object-oriented (OO) inheritance metrics have been proposed, and reviews of them are available in the literature. This paper presents an empirical evaluation of the OO inheritance tree metric proposed by Rajnish and Bhattacherje, and attempts to define an empirical relation between software development time and the corresponding inheritance tree metric values. It also analyzes the various ways in which the development time of a program depends on its inheritance tree metric values. A statistical analysis was performed, focusing on how closely the inheritance tree metrics correlate with the development time of various C++ class hierarchies.
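The analysis described above pairs a per-class tree metric with a development time and computes their correlation. A minimal sketch of that procedure follows; the class hierarchy, metric choice (depth of inheritance), and development times are invented for illustration and are not data from the paper:

```python
# Sketch: correlating an inheritance-tree metric with development time.
# Hierarchy, metric values, and times below are hypothetical, not the paper's data.

def depth_of_inheritance(cls, parents):
    """Depth of a class in the inheritance tree (root classes have depth 0)."""
    depth = 0
    while cls in parents:
        cls = parents[cls]
        depth += 1
    return depth

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical C++-style hierarchy, recorded as child -> parent.
parents = {"Derived1": "Base", "Derived2": "Base", "Leaf": "Derived1"}
classes = ["Base", "Derived1", "Derived2", "Leaf"]
metric = {c: depth_of_inheritance(c, parents) for c in classes}
dev_time = {"Base": 3.0, "Derived1": 5.0, "Derived2": 4.5, "Leaf": 7.0}  # person-days, invented

r = pearson_r([metric[c] for c in classes], [dev_time[c] for c in classes])
```

With these invented numbers, deeper classes take longer to develop, so `r` comes out strongly positive; the paper's statistical analysis asks the same question of real C++ hierarchies.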
The lifting scheme, also called second-generation wavelets, can be used to factor classical wavelets into lifting steps, to increase the number of vanishing moments of wavelets, or to create new types of wavelets, including adaptive and nonlinear filters. The lifting scheme provides a new spatial intuition for the wavelet transform that simplifies the introduction of adaptivity. The adaptive transform is constructed from an adaptive prediction step within the lifting procedure.
In this paper, the proposed adaptive lifting scheme is compared with the non-adaptive lifting scheme, and its ability to balance image quality against computational complexity is evaluated using the Adaptive Wavelet Image Compression (AWIC) algorithm. We demonstrate the power of the proposed adaptive lifting scheme through successful applications to image compression problems; lossy compression is used to show its performance.
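The adaptive prediction idea can be sketched in one dimension. This is not the AWIC algorithm, only a toy adaptive lifting step under an assumed rule: each odd sample's predictor is chosen from the even samples alone (neighbour average in smooth regions, nearest neighbour across an edge), so the decoder can repeat the choice and invert the transform exactly:

```python
# Toy adaptive lifting step (illustrative, not the paper's AWIC scheme).
# The predictor choice depends only on even samples, so it is invertible.

def forward(signal):
    even, odd = signal[::2], signal[1::2]
    detail = []
    for i, x in enumerate(odd):
        left = even[i]
        right = even[i + 1] if i + 1 < len(even) else even[i]
        # Adaptive prediction: across a strong edge (large even-sample jump),
        # predict from the left neighbour; otherwise use the average.
        pred = left if abs(right - left) > 10 else (left + right) / 2
        detail.append(x - pred)
    return even, detail

def inverse(even, detail):
    odd = []
    for i, d in enumerate(detail):
        left = even[i]
        right = even[i + 1] if i + 1 < len(even) else even[i]
        pred = left if abs(right - left) > 10 else (left + right) / 2  # same choice
        odd.append(d + pred)
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    if len(even) > len(odd):          # odd-length input keeps its last even sample
        out.append(even[-1])
    return out

sig = [10, 12, 11, 50, 52, 51, 53, 49]   # smooth run, edge, smooth run
even, detail = forward(sig)
```

Away from the edge the detail coefficients are small (good for compression), and `inverse(even, detail)` reconstructs the input perfectly, which is the property adaptivity must not break.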
To stay competitive in the dynamic world of software development, organizations must optimize the use of their limited resources to deliver quality products on time and within budget. This requires preventing the introduction of faults and quickly discovering and repairing residual faults. In this paper, we propose a neural network based approach to component-based software reliability estimation and modeling. We first explain neural networks from the mathematical viewpoint of software reliability modeling, then show how to apply them to software reliability prediction by designing the different elements of the network. Furthermore, we use the neural network approach to build a dynamic weighted combinational model. Two widely used classes of software reliability growth models are non-homogeneous Poisson process (NHPP) models and neural network (NN) models; here we propose an approach that feeds past fault-related data to a neural network model to improve reliability predictions in the early testing phase. A numerical example with both actual and simulated datasets is given, and the applicability of the proposed model is demonstrated on real software failure count data sets.
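The core of an NN reliability growth model is a small network fitted to (testing time, cumulative failures) pairs. A minimal sketch under assumed details follows; the 1-3-1 architecture, learning rate, and the failure-count series are all invented for illustration, not the paper's design or dataset:

```python
# Sketch: a 1-3-1 feed-forward network fitted to cumulative failure counts.
# Architecture, hyperparameters, and data are illustrative assumptions.
import math
import random

random.seed(0)

# Hypothetical normalised testing weeks and cumulative failure fractions.
weeks = [i / 10 for i in range(1, 11)]
fails = [0.10, 0.22, 0.35, 0.45, 0.55, 0.63, 0.70, 0.76, 0.80, 0.83]

H = 3
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def predict(t):
    h = [sigmoid(w1[j] * t + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def loss():
    return sum((predict(t)[0] - y) ** 2 for t, y in zip(weeks, fails)) / len(weeks)

lr = 0.3
loss_before = loss()
for _ in range(2000):                 # plain stochastic gradient descent
    for t, y in zip(weeks, fails):
        out, h = predict(t)
        err = out - y
        for j in range(H):
            gh = err * w2[j] * h[j] * (1 - h[j])   # backprop through the sigmoid
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * gh * t
            b1[j] -= lr * gh
        b2 -= lr * err
```

After training, `predict(t)[0]` can be evaluated past the last observed week, which is how such a model supports early-phase reliability prediction.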
Measuring software complexity is one of the major challenges faced by researchers. Complexity is a major driver of the cost, reliability, and functionality of software systems, and to control complexity one must be able to measure it. In this paper, we propose a method for measuring software complexity at the testing phase. We employ the fuzzy repertory table technique to acquire the necessary domain knowledge from testers, from which the software complexity is measured. The technique proposed here measures both absolute and relative complexity. We measured the absolute complexity of a product, an 'Image Processing Tool', and analyzed the results.
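One way such tester knowledge can be turned into a number is to map linguistic ratings onto fuzzy sets and defuzzify the aggregate. The sketch below shows that flavour of computation only; the linguistic scale, the attributes, and the ratings are assumptions for illustration, not the paper's fuzzy repertory table:

```python
# Sketch: aggregating linguistic tester ratings into a crisp complexity score.
# Scale, attributes, and ratings are hypothetical.

# Linguistic terms as triangular fuzzy numbers (a, b, c).
SCALE = {"low": (0.0, 0.1, 0.3), "medium": (0.3, 0.5, 0.7), "high": (0.7, 0.9, 1.0)}

def centroid(tfn):
    """Defuzzify a triangular fuzzy number by its centroid."""
    a, b, c = tfn
    return (a + b + c) / 3

def absolute_complexity(ratings):
    """Mean defuzzified rating over all rated attributes, in [0, 1]."""
    return sum(centroid(SCALE[term]) for term in ratings.values()) / len(ratings)

# Hypothetical ratings a tester might give an image-processing product.
ratings = {"control flow": "high", "data coupling": "medium", "I/O handling": "low"}
score = absolute_complexity(ratings)
relative = score / centroid(SCALE["high"])   # complexity relative to the top term
```

The `relative` value illustrates the absolute-vs-relative distinction: the same aggregate rescaled against a reference, here the maximum linguistic term.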
Quality Function Deployment (QFD) is a methodology for building the "Voice of the Customer" into product and service design. In the QFD process, decision-making is an essential and crucial task. QFD is an extensive process that involves large amounts of data and complex calculations, making it tedious for designers and engineers to handle manually. Moreover, since the traditional QFD exercise employs linguistic expressions as well as crisp values, fuzzy concepts must be employed for accurate results. A need for efficient fuzzy-integrated QFD software is therefore widely recognized in the QFD software market. Such software can be suitably designed to meet market requirements only when the associated data are meticulously examined and customer needs are well understood. To this end, this paper analyzes the QFD process from both the traditional and the software viewpoint so as to mine valuable information for the development of QFD software. It then discusses the shortcomings of the available tools and the features required of them, as well as fuzzy concepts and their incorporation into QFD software. The result of this work will assist software developers in understanding the QFD process and choosing appropriate tools, which in turn will lead to the development of efficient QFD software.
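One calculation a fuzzy QFD tool must automate is turning fuzzy customer-importance weights and a fuzzy relationship matrix into crisp technical priorities. A minimal sketch follows; the linguistic scales, the needs, the technical characteristics, and the matrix entries are all invented for illustration:

```python
# Sketch of one fuzzy-QFD step: fuzzy weighted sums over the relationship
# matrix, then centroid defuzzification. All values are hypothetical.

IMPORTANCE = {"low": (1, 2, 3), "medium": (3, 5, 7), "high": (7, 9, 10)}
RELATION = {"weak": (0, 1, 3), "moderate": (3, 5, 7), "strong": (7, 9, 10)}

def tfn_mul(x, y):
    return tuple(a * b for a, b in zip(x, y))

def tfn_add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def defuzzify(t):
    return sum(t) / 3

# Customer needs with fuzzy importance, and the need -> characteristic matrix.
needs = {"easy to use": "high", "fast response": "medium"}
rel = {
    "easy to use":   {"UI redesign": "strong", "caching": "weak"},
    "fast response": {"UI redesign": "weak",   "caching": "strong"},
}

priority = {}
for tech in ["UI redesign", "caching"]:
    total = (0, 0, 0)
    for need, weight in needs.items():
        total = tfn_add(total, tfn_mul(IMPORTANCE[weight], RELATION[rel[need][tech]]))
    priority[tech] = defuzzify(total)
```

The crisp `priority` values are what a designer would read off the house of quality; with these invented numbers, the characteristic tied strongly to the most important need ranks first.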
Face recognition has become increasingly important due to heightened security concerns worldwide. Traditionally, two-dimensional (2D) images are used for recognition; however, they are affected by pose, illumination and expression changes. In this paper, a new three-dimensional (3D) face matching technique that is able to recognize faces at various angles is proposed. The technique consists of three main steps: face feature detection, face alignment and face matching. Face feature detection comprises face segmentation, eye area and corner detection, mouth area detection, and nose area and tip detection. These features are detected using a combination of 2D and 3D images. An improved face area detection method is proposed, along with a new method to detect the eye and mouth corners automatically using curvature values. To detect the nose tip, a method that calculates nose tip candidates and filters them by location is proposed. The feature positions are then used to achieve uniform alignment between the unknown probe face and the faces already in the database. Finally, face matching, which consists of surface matching, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), is performed to identify the unknown probe face. The proposed method applies PCA and LDA to 3D images instead of 2D images, and only the face area between the nose and forehead is used for recognition. This proposed technique is able to reduce the effects of pose, illumination and expression changes, which are common problems of 2D face recognition techniques, and it is fully automatic, requiring no user intervention at any step.
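The final matching stage can be sketched in miniature: project gallery and probe feature vectors onto a principal component and match the probe to its nearest gallery face. The sketch uses PCA via power iteration and 1-NN matching; the 3-component "feature vectors" are invented stand-ins for the paper's nose-to-forehead surface data, and LDA is omitted for brevity:

```python
# Sketch of PCA projection + nearest-neighbour matching on toy feature vectors.

def matvec(m, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def top_component(data, iters=200):
    """Mean and first principal component, via power iteration on the covariance."""
    d, n = len(data[0]), len(data)
    mean = [sum(x[j] for x in data) / n for j in range(d)]
    centered = [[x[j] - mean[j] for j in range(d)] for x in data]
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        v = matvec(cov, v)
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return mean, v

def project(x, mean, comp):
    return sum((x[j] - mean[j]) * comp[j] for j in range(len(x)))

# Hypothetical per-person feature vectors (the "gallery").
gallery = {"alice": [2.0, 0.1, 0.0], "bob": [-2.1, 0.0, 0.1], "carol": [0.1, 2.0, -0.1]}
mean, comp = top_component(list(gallery.values()))

probe = [1.8, 0.2, 0.05]             # unknown face, close to alice's features
pp = project(probe, mean, comp)
best = min(gallery, key=lambda k: abs(project(gallery[k], mean, comp) - pp))
```

In the paper's pipeline the same projection-then-match step runs in a multi-dimensional PCA/LDA subspace; the toy keeps one component so the arithmetic stays inspectable.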
Grid is an infrastructure for the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. The execution of scientific workflows in grid environments poses many challenges due to the dynamic nature of such environments and the characteristics of scientific applications. We have previously proposed a computational e-governance framework for handling public requirements; this framework needs scheduling algorithms that allocate resources to application jobs in such a way that the users' requirements are met. This work presents an algorithm that dynamically schedules workflow tasks to grid sites based on the performance of those sites when running previous jobs from the same workflow. The algorithm captures the dynamic characteristics of grid environments without needing to probe the remote sites. It has been tested in our own grid environment using Globus Toolkit 4.0, and the experimental results show that the new scheduling algorithm can yield significant performance gains in various applications.
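The history-based idea can be sketched as follows: each site keeps the observed runtimes of its previous tasks from the same workflow, and the next task goes to the site with the lowest estimated completion time (queued work plus its historical average). Site names, runtimes, and the exact estimate are invented for illustration, not the paper's algorithm:

```python
# Sketch: schedule workflow tasks by per-site runtime history. Values invented.

class Site:
    def __init__(self, name, history):
        self.name = name
        self.history = history       # runtimes of earlier tasks from this workflow
        self.queue = 0.0             # estimated work already assigned, in seconds

    def avg_runtime(self):
        return sum(self.history) / len(self.history)

    def estimate(self):
        """Estimated completion time for one more task at this site."""
        return self.queue + self.avg_runtime()

def schedule(sites):
    chosen = min(sites, key=Site.estimate)
    chosen.queue += chosen.avg_runtime()   # account for the newly assigned task
    return chosen.name

sites = [Site("siteA", [12.0, 11.0, 13.0]),   # slow for this workflow so far
         Site("siteB", [4.0, 5.0])]           # fast so far
assignments = [schedule(sites) for _ in range(4)]
```

The fast site absorbs most tasks until its growing queue makes the slow site competitive, so load balances using only observed history, with no probing of the remote sites.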
In this paper, a synthesis approach for designing a fuzzy-supervised nonlinear PID controller is considered. The objective of this work is to develop a PID-based control algorithm for nonlinear discrete systems using a combination of non-conventional and conventional control techniques. The proposed algorithm has a supervised structure in which a fuzzy supervisor provides suitable PID parameters at each sampling instant. To improve the dynamic response of the closed-loop system, the optimization of the fuzzy supervisor's performance is also considered. Simulations are carried out for a first-order nonlinear process and for the speed control of a DC motor with series excitation.
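The supervised structure can be sketched with a toy supervisor that blends the proportional gain between two values according to the degree to which |error| is "large". The plant, gains, and membership function are all assumptions for illustration, not the paper's design:

```python
# Sketch: PID loop on a discrete first-order plant with a fuzzy-blended Kp.
# Plant model, gains, and membership function are hypothetical.

def fuzzy_kp(error, kp_small=0.8, kp_large=2.0, span=1.0):
    """Blend Kp by the membership of |error| in 'large' (simple ramp, 0..1)."""
    mu_large = min(abs(error) / span, 1.0)
    return kp_small * (1 - mu_large) + kp_large * mu_large

def run(setpoint=1.0, steps=150, dt=0.1):
    y, integral, prev_err = 0.0, 0.0, 0.0
    ki, kd = 0.5, 0.05
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        # The supervisor supplies Kp at every sampling instant.
        u = fuzzy_kp(err) * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (-y + u)        # first-order plant: y' = -y + u
    return y

final = run()
```

Large errors get the aggressive gain for a fast rise, while near the setpoint the gentler gain limits overshoot; that per-sample re-tuning is the role the fuzzy supervisor plays in the proposed structure.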
Image segmentation is an important area of current research. This paper presents a novel approach to creating the topographical function and object markers used within watershed segmentation. The authors use the inverted probability map produced by the region classifier as input to the watershed algorithm. Extracting internal markers from this region probability map using higher thresholds alone still results in poor objects: the basic method works for low-contrast edge detection but cannot produce good results when analyzing and classifying blurred images, although applying it does enhance the edges. The authors took this concept from the references cited in the paper, implemented it, and reported the results. They then modified the method with a thinning technique based on erosion, obtained better results than the existing method, and found it to be well suited to medical images.
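The marker-extraction step described above can be sketched on a toy grid: threshold the probability map to a binary mask, then erode it so the internal marker shrinks away from weak or blurred boundaries. The 2-D "probability map" and the single erosion pass are illustrative assumptions; the paper's full pipeline additionally runs the watershed itself:

```python
# Sketch: thresholding a probability map and eroding the mask into an
# internal marker. The map values below are a toy example.

def threshold(prob, t):
    return [[1 if p >= t else 0 for p in row] for row in prob]

def erode(mask):
    """One 4-neighbourhood binary erosion pass (the thinning step)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and all(
                0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                out[i][j] = 1
    return out

prob = [[0.1, 0.1, 0.1, 0.1, 0.1],
        [0.1, 0.9, 0.9, 0.9, 0.1],
        [0.1, 0.9, 0.9, 0.9, 0.1],
        [0.1, 0.9, 0.9, 0.9, 0.1],
        [0.1, 0.1, 0.1, 0.1, 0.1]]
mask = threshold(prob, 0.5)   # raw object mask from the probability map
marker = erode(mask)          # only interior pixels survive as the marker
```

Erosion keeps only pixels whose whole neighbourhood is inside the object, giving a conservative marker that sits safely away from blurred edges, which is what the erosion-based modification buys over thresholding alone.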
Video image compression has long been an area where the computational demand far exceeds the capacity of conventional sequential processing. In this paper, we present a parallel motion estimation model for video sequence compression using cluster computing on a local network. The proposed approach decomposes both functions and data across a cluster of workstations using the MPI mechanism. Parallel compression is achieved by having multiple networked personal computers perform compression on different chunks of input frames simultaneously. The method used for video compression is conventional block-based motion vector estimation with a refined motion vector approximation that uses less side information for decoding. The implementation results show that the proposed parallel method achieves better speedup than the sequential algorithm and is well suited to real-time applications such as online video surveillance, video conferencing and telemedicine.
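The per-block core of such a scheme is full-search motion estimation with a sum-of-absolute-differences (SAD) criterion. A minimal sketch on toy single-channel frames follows; in the parallel scheme above, different chunks of frames would be distributed to workers (e.g. via MPI), while here only the sequential per-block search is shown, with invented frame data:

```python
# Sketch: SAD-based full-search block matching on toy frames (values invented).

def sad(ref, cur, rx, ry, cx, cy, bs):
    """Sum of absolute differences between a ref block and a cur block."""
    return sum(abs(ref[ry + i][rx + j] - cur[cy + i][cx + j])
               for i in range(bs) for j in range(bs))

def motion_vector(ref, cur, bx, by, bs=2, search=1):
    """Best (dx, dy) displacement of cur's block (bx, by) within +/- search."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = bx + dx, by + dy
            if 0 <= rx <= w - bs and 0 <= ry <= h - bs:
                cost = sad(ref, cur, rx, ry, bx, by, bs)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best

# Toy frames: a 2x2 bright patch moves one pixel right from ref to cur.
ref = [[0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
cur = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
mv = motion_vector(ref, cur, bx=2, by=0)   # where did cur's block come from?
```

Since every block's search is independent, handing disjoint blocks or frame chunks to different workers parallelizes cleanly, which is what makes this workload a natural fit for a workstation cluster.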