i-manager's Journal on Software Engineering (JSE)


Volume 3 Issue 3 January - March 2009

Article

Software Reusability through Object-Oriented Inheritance Tree Metric

Sunil Kumar Singh*, Kumar Rajnish**, Kamal K. Mehta***
* Department of MCA, Sri Shankaracharya College of Engineering and Technology, Bhilai
** Department of Computer Science & Engineering, Birla Institute of Technology, Ranchi
*** Department of Computer Science & Engineering, SSCET, Bhilai (C.G.)
Sunil Kumar Singh, Kumar Rajnish and Kamal K. Mehta (2009). Software Reusability through Object-Oriented Inheritance Tree Metric, i-manager’s Journal on Software Engineering, 3(3), 1-5. https://doi.org/10.26634/jse.3.3.185

Abstract

Various Object-Oriented (OO) inheritance metrics have been proposed, and reviews of them are available in the literature. This paper presents an empirical study of the OO inheritance tree metric proposed by Rajnish and Bhattacherjee, and attempts to define an empirical relation between software development time and inheritance tree metric values. It also analyzes the various ways in which the development time of a program depends on its inheritance tree metric values. A statistical analysis was carried out, focusing on how closely the inheritance tree metrics correlate with the development time of various C++ class hierarchies.
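
A minimal sketch of the kind of statistical analysis the abstract describes: a Pearson correlation and a least-squares fit between inheritance tree metric values and development times. The metric and time values below are illustrative placeholders, not the paper's measurements.

```python
# Correlating an inheritance tree metric with development time (sketch).
import numpy as np

# Hypothetical inheritance tree metric values for several C++ class hierarchies
metric = np.array([2.0, 3.5, 4.0, 5.5, 7.0, 8.5])
# Hypothetical development times (hours) for the same hierarchies
dev_time = np.array([10.0, 14.0, 15.5, 21.0, 26.0, 31.5])

# Pearson correlation coefficient between the metric and development time
r = np.corrcoef(metric, dev_time)[0, 1]

# Least-squares line dev_time ~ a * metric + b, as one empirical relation
a, b = np.polyfit(metric, dev_time, 1)

print(f"Pearson r = {r:.3f}")
print(f"dev_time ~= {a:.2f} * metric + {b:.2f}")
```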

Article

Neural Networks Based Approach for Component Based Software Reliability Evaluation

Chinnaiyan R*, S. Somasundaram**
* Assistant Professor, Department of Computer Applications, A.V.C College of Engineering, Mannampandal, Mayiladuthurai, Tamil Nadu.
** Assistant Professor, Department of Mathematics, Coimbatore Institute of Technology, Coimbatore.
Chinnaiyan R and S. Somasundaram (2009). Neural Networks Based Approach for Component Based Software Reliability Evaluation, i-manager’s Journal on Software Engineering, 3(3), 6-10. https://doi.org/10.26634/jse.3.3.188

Abstract

To stay competitive in the dynamic world of software development, organizations must optimize the use of their limited resources to deliver quality products on time and within budget. This requires preventing the introduction of faults and quickly discovering and repairing residual faults. In this paper, we propose a neural network based approach for component based software reliability estimation and modeling. We first explain neural networks from the mathematical viewpoint of software reliability modeling. We then show how to apply neural networks to predict software reliability by designing the different elements of the network. Furthermore, we use the neural network approach to build a dynamic weighted combinational model. Two of the most important software reliability growth models are the analytical Non-Homogeneous Poisson Process (NHPP) model and the Neural Network (NN) model. We propose an approach that uses past fault-related data with the neural network model to improve reliability predictions in the early testing phase. A numerical example is presented with both actual and simulated datasets, and the applicability of the proposed model is demonstrated on real software failure count data sets.
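
The paper's model is not reproduced here; the following is a minimal sketch, under assumed data and an assumed architecture, of how a small feed-forward network can be trained on simulated cumulative failure counts and then queried for a reliability growth prediction.

```python
# Tiny feed-forward network for failure-count prediction (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Simulated cumulative failure counts over 20 testing weeks
t = np.arange(1, 21, dtype=float)
failures = 100 * (1 - np.exp(-0.15 * t)) + rng.normal(0, 1, t.size)

# Normalize inputs/outputs to [0, 1] for stable training
x = (t / t.max()).reshape(-1, 1)
y = (failures / failures.max()).reshape(-1, 1)

# One hidden layer with sigmoid activation, trained by plain gradient descent
W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    h = sigmoid(x @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2                    # linear output
    err = pred - y                        # prediction error
    # Backpropagate mean-squared-error gradients
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * h * (1 - h)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Query the model: cumulative failures expected at week 25
x_new = np.array([[25 / t.max()]])
pred25 = (sigmoid(x_new @ W1 + b1) @ W2 + b2) * failures.max()
print(f"Predicted cumulative failures at week 25: {pred25[0, 0]:.1f}")
```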

Article

Fuzzy Based Framework for Software Complexity Analysis

P. Ramasubramanian*, Narayanan Prasanth N**
* Professor and Head, Department of Computer Science and Engineering, The Rajaas Engineering College.
** Lecturer, Department of Information Technology, National College of Engineering, Anna University, Tirunelveli.
Ramasubramanian P and Narayanan Prasanth N (2009). Fuzzy Based Framework for Software Complexity Analysis, i-manager’s Journal on Software Engineering, 3(3), 11-15. https://doi.org/10.26634/jse.3.3.189

Abstract

Measuring software complexity is one of the major challenges faced by researchers. Complexity is a major driver of the cost, reliability, and functionality of software systems; to control complexity, one must be able to measure it. In this paper, we propose a method for measuring software complexity at the testing phase. We employ the fuzzy repertory table technique to acquire the necessary domain knowledge from testers, from which the software complexity is measured. The technique proposed here measures both absolute and relative complexity. Absolute complexity is measured for a product, an 'Image Processing Tool', and the results are analyzed.
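
The fuzzy repertory table technique itself is not reproduced here; the sketch below only illustrates, with assumed membership functions and hypothetical tester ratings, how linguistic complexity judgments can be fuzzified and aggregated into a crisp score.

```python
# Fuzzifying tester ratings into a complexity score (generic sketch).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Assumed linguistic terms for "complexity" on a 0-10 rating scale
TERMS = {"low": (0.0, 2.0, 5.0), "medium": (3.0, 5.0, 8.0), "high": (6.0, 9.0, 10.0)}
PEAKS = {"low": 2.0, "medium": 5.0, "high": 9.0}

# Hypothetical ratings (0-10) elicited from four testers for one module
ratings = np.array([6.0, 7.5, 5.0, 8.0])

# Mean membership of the ratings in each linguistic term
memberships = {name: float(tri(ratings, *p).mean()) for name, p in TERMS.items()}

# Defuzzify: weighted average of term peaks by their memberships
score = (sum(memberships[n] * PEAKS[n] for n in TERMS)
         / sum(memberships.values()))

print(memberships)
print(f"Aggregated complexity score: {score:.2f}")
```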

Article

Development of Fuzzy Integrated Quality Function Deployment Software – A Conceptual Analysis

Ashish K. Sharma*, I.C. Mehta**, J.R. Sharma***
* Lecturer, Dept. of Information Technology, Manoharbhai Patel Institute of Engineering & Technology, Gondia.
** Asst. Prof. and Head, Dept. of Computer and Information Technology, Manoharbhai Patel Institute of Engineering and Technology, Gondia.
*** Associate Professor, Dept. of Management, Institute of Management Technology (IMT), Nagpur.
Ashish K. Sharma, I.C. Mehta and Jitendra Sharma (2009). Development of Fuzzy Integrated Quality Function Deployment Software – A Conceptual Analysis, i-manager’s Journal on Software Engineering, 3(3), 16-24. https://doi.org/10.26634/jse.3.3.190

Abstract

Quality Function Deployment (QFD) is a methodology for building the "Voice of the Customer" into product and service design. In the QFD process, decision-making is an essential and crucial task. QFD is an extensive process that involves large amounts of data and complex calculations, making it tedious for designers and engineers to handle manually. Moreover, since the traditional QFD exercise employs linguistic expressions as well as crisp values, fuzzy concepts must be employed for accurate results. Thus, the need for efficient fuzzy integrated QFD software is widely recognized in the QFD software market. Software can be suitably designed to meet market requirements only when the associated data are meticulously examined and customer needs are well understood. To this end, this paper analyzes the QFD process from both the traditional and the software viewpoint, so as to mine valuable information that can be used for developing QFD software. It then discusses the shortcomings of the available packages and the features required in them, as well as fuzzy concepts and their incorporation into QFD software. The result of this work will assist software developers in understanding the QFD process and choosing appropriate tools, which in turn will lead to the development of efficient QFD software.
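
As a hedged illustration of the fuzzy concepts mentioned above, the sketch below represents linguistic ratings as triangular fuzzy numbers and combines them into a technical priority, as a QFD tool might; all scales and data are assumptions, not taken from the paper.

```python
# Triangular fuzzy numbers (l, m, u) for assumed linguistic scales
IMPORTANCE = {"low": (1, 2, 3), "medium": (3, 5, 7), "high": (7, 9, 10)}
RELATION = {"weak": (0, 1, 3), "moderate": (3, 5, 7), "strong": (7, 9, 10)}

def fuzzy_mul(a, b):
    """Approximate product of two triangular fuzzy numbers."""
    return tuple(x * y for x, y in zip(a, b))

def fuzzy_add(a, b):
    """Sum of two triangular fuzzy numbers."""
    return tuple(x + y for x, y in zip(a, b))

def defuzzify(a):
    """Centroid of a triangular fuzzy number."""
    return sum(a) / 3.0

# One technical characteristic scored against three customer requirements
# (hypothetical linguistic data)
importances = ["high", "medium", "high"]
relations = ["strong", "moderate", "weak"]

priority = (0.0, 0.0, 0.0)
for imp, rel in zip(importances, relations):
    priority = fuzzy_add(priority, fuzzy_mul(IMPORTANCE[imp], RELATION[rel]))

print("Fuzzy technical priority (l, m, u):", priority)
print("Crisp priority score:", round(defuzzify(priority), 2))
```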

Research Paper

New 3D Face Matching Technique for an Automatic 3D Model Based Face Recognition System

Chew L.W*, Seng K.P**, Kenneth Li-minn***
*-*** Member of the School of Electrical & Electronic Engineering, The University of Nottingham, Malaysia Campus.
Chew L.W, Seng K.P and Kenneth Li-minn (2009). New 3D Face Matching Technique for an Automatic 3D Model Based Face Recognition System, i-manager’s Journal on Software Engineering, 3(3), 25-34. https://doi.org/10.26634/jse.3.3.191

Abstract

Face recognition has become increasingly important due to heightened security concerns in the world today. Traditionally, two-dimensional (2D) images are used for recognition, but they are affected by pose, illumination and expression changes. In this paper, a new three-dimensional (3D) face matching technique that can recognize faces at various angles is proposed. The technique consists of three main steps: face feature detection, face alignment and face matching. Face feature detection comprises face segmentation, eye area and corner detection, mouth area detection, and nose area and tip detection; these features are detected using a combination of 2D and 3D images. An improved face area detection method is proposed, along with a new method to detect the eye and mouth corners automatically using curvature values. To detect the nose tip, a method that computes nose tip candidates and filters them based on their location is proposed. The feature positions are then used to bring the unknown probe face and the faces already in the database into uniform alignment. Finally, face matching, which consists of surface matching, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), is performed to identify the unknown probe face. The proposed method applies PCA and LDA to 3D images instead of 2D images, and only the face area between the nose and forehead is used for recognition. The technique reduces the effects of pose, illumination and expression changes, which are common problems of 2D face recognition, and is fully automatic, requiring no user intervention at any step.
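
A minimal sketch of the matching stage: PCA on flattened depth maps followed by nearest-neighbour matching. The paper additionally applies surface matching and LDA; only the PCA step is sketched here, and the gallery data is random placeholder data.

```python
# PCA-based face matching on flattened depth maps (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)

# 10 gallery depth maps of 32x32 pixels, flattened to row vectors
gallery = rng.normal(size=(10, 32 * 32))

# Center the data and compute the principal components via SVD
mean = gallery.mean(axis=0)
U, S, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
components = Vt[:5]                       # keep the 5 leading components

def project(face):
    """Project a flattened depth map onto the PCA feature space."""
    return components @ (face - mean)

gallery_feats = np.array([project(f) for f in gallery])

# Probe face: a noisy copy of gallery face 3
probe = gallery[3] + rng.normal(scale=0.1, size=gallery.shape[1])

# Match by smallest Euclidean distance in PCA space
dists = np.linalg.norm(gallery_feats - project(probe), axis=1)
print("Best match: gallery face", int(np.argmin(dists)))
```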

Research Paper

Efficient Scheduling Algorithm for the Exchange of Data in Grid Environment

Sumathi P*, Punithavalli M**
* Assistant Professor & Head, Department of Computer Applications, PSG College of Arts & Science, Coimbatore, Tamil Nadu, India.
** Director, Department of Computer Science, Sri Ramakrishna College of Arts & Science for Women, Coimbatore, Tamil Nadu, India.
Sumathi P and Punithavalli M (2009). Efficient Scheduling Algorithm for the Exchange of Data in Grid Environment, i-manager’s Journal on Software Engineering, 3(3), 35-42. https://doi.org/10.26634/jse.3.3.192

Abstract

Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. The execution of scientific workflows in grid environments poses many challenges, due to the dynamic nature of such environments and the characteristics of scientific applications. We have proposed a computational e-governance framework for addressing public requirements. The framework requires scheduling algorithms for allocating resources to application jobs in such a way that the users' requirements are met. This work presents an algorithm that dynamically schedules workflow tasks to grid sites based on the performance of those sites when running previous jobs from the same workflow. The algorithm captures the dynamic characteristics of grid environments without needing to probe the remote sites. It has been tested in our own grid environment using Globus Toolkit 4.0. The experimental results show that the new scheduling algorithm can yield significant performance gains in various applications.
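
A minimal sketch, not the paper's algorithm, of performance-based dynamic scheduling: each new task goes to the grid site with the best observed completion times on previous tasks of the same workflow, with no probing of remote sites. The site names and task timings are hypothetical.

```python
# History-driven site selection for workflow tasks (illustrative sketch).
from collections import defaultdict

class PerformanceScheduler:
    def __init__(self, sites):
        self.sites = sites
        self.history = defaultdict(list)   # site -> completion times (s)

    def pick_site(self):
        """Prefer unsampled sites, then the lowest mean completion time."""
        untried = [s for s in self.sites if not self.history[s]]
        if untried:
            return untried[0]
        return min(self.sites,
                   key=lambda s: sum(self.history[s]) / len(self.history[s]))

    def report(self, site, seconds):
        """Record how long the site took to run the last task."""
        self.history[site].append(seconds)

# Usage with hypothetical site names and task timings
sched = PerformanceScheduler(["siteA", "siteB", "siteC"])
for runtime in [120, 45, 300]:             # first three tasks sample each site
    site = sched.pick_site()
    sched.report(site, runtime)
print("Next task goes to:", sched.pick_site())   # -> siteB (fastest so far)
```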

Research Paper

A New Synthesis Method for the Fuzzy Tuning of Nonlinear PID Controllers

Tounsi-rekik L*, Chibani R**, Chtourou M***
*-*** Department of Electrical Engineering, National School of Engineers, Sfax, Tunisia.
Tounsi-rekik L, Chibani R, Chtourou M (2009). A New Synthesis Method for the Fuzzy Tuning of Nonlinear PID Controllers, i-manager’s Journal on Software Engineering, 3(3), 43-51. https://doi.org/10.26634/jse.3.3.194

Abstract

In this paper, a synthesis approach for designing a fuzzy supervised nonlinear PID controller is considered. The objective of this work is to develop a PID based control algorithm for nonlinear discrete systems by combining conventional and non-conventional control techniques. The proposed algorithm is a supervised structure, in which a fuzzy supervisor provides the suitable PID parameters at each sampling time. In order to improve the dynamic response of the closed-loop system, optimization of the performance of the fuzzy supervisor is considered. Simulations are carried out for a first-order nonlinear process and for the speed control of a DC motor with series excitation.
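
A minimal sketch of the supervised structure: a discrete PID loop whose proportional gain is re-tuned at every sample by a supervisor. The paper's fuzzy supervisor and plant are not reproduced; the simple gain schedule and the first-order nonlinear plant below are assumed stand-ins.

```python
# Discrete PID loop with a per-sample gain supervisor (illustrative sketch).
import math

def supervisor(error):
    """Stand-in for the fuzzy supervisor: larger |error| -> larger Kp."""
    e = min(abs(error), 1.0)               # saturate the supervisor input
    return 0.5 + 1.5 * e                   # Kp scheduled in [0.5, 2.0]

y, integral, prev_err = 0.0, 0.0, 0.0
setpoint, dt = 1.0, 0.05
ki, kd = 0.3, 0.02                          # fixed I and D gains (assumed)

for k in range(200):
    err = setpoint - y
    integral += err * dt
    deriv = (err - prev_err) / dt
    kp = supervisor(err)                    # PID parameter updated each sample
    u = kp * err + ki * integral + kd * deriv
    # Assumed first-order nonlinear plant: y' = -y + tanh(u)
    y += dt * (-y + math.tanh(u))
    prev_err = err

print(f"Output after 10 s: {y:.3f} (setpoint {setpoint})")
```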

Research Paper

Novel Watershed Segmentation Method for Stumpy Boundary Detection for Image Classification

Naga Raju C*, Reddy L.S.S**
* Professor, Department of Computer Science & Engineering, K.L.College of Engineering, Vijayawada.
** Principal, K.L.College of Engineering, Green Field, Guntur, Andhra Pradesh.
Naga Raju C and Reddy L.S.S (2009). Novel Watershed Segmentation Method for Stumpy Boundary Detection for Image Classification, i-manager’s Journal on Software Engineering, 3(3), 52-56. https://doi.org/10.26634/jse.3.3.193

Abstract

Image segmentation is one of the important areas of current research. This paper presents a novel approach to constructing the topographic function and object markers used within watershed segmentation. The authors use the inverted probability map produced by a region classifier as input to the watershed algorithm. Extracting internal markers from this region probability map using higher thresholds still results in poor object delineation: the method works for low-contrast edge detection, but does not produce good results for blurred images when analyzing and classifying them. Applying the proposed method enhances the edges. The authors implemented the concept from the references cited in the paper and then modified the method by applying a thinning technique based on erosion, obtaining better results than the existing method; they found it particularly well suited to medical images.
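
A minimal sketch of marker-controlled watershed segmentation of the kind described, using scikit-image: internal markers are extracted from a thresholded probability map and thinned by erosion, and the inverted map serves as the topographic surface. The synthetic probability map below stands in for the classifier output.

```python
# Marker-controlled watershed on an inverted probability map (sketch).
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

rng = np.random.default_rng(2)

# Stand-in for the classifier's region probability map: two soft blobs
yy, xx = np.mgrid[0:100, 0:100]
prob = (np.exp(-((yy - 30) ** 2 + (xx - 30) ** 2) / 200.0)
        + np.exp(-((yy - 70) ** 2 + (xx - 65) ** 2) / 200.0))
prob += rng.normal(0, 0.02, prob.shape)

# The inverted probability map is the topographic surface to flood
topography = 1.0 - prob

# Internal markers: high-probability cores, thinned by binary erosion
cores = prob > 0.6
cores = ndi.binary_erosion(cores, iterations=2)
markers, n = ndi.label(cores)
print(f"{n} internal markers found")

labels = watershed(topography, markers)
print("Segment sizes:", np.bincount(labels.ravel())[1:])
```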

Research Paper

Parallel Motion Estimation Using Cluster Computing for Fast Video Sequence Compression

Jeyakumar S*, S. Sundaravadivelu**
* Assistant Professor, Dr.Sivanthi Aditanar College of Engineering, Tiruchendur, India.
** Professor, SSN College of Engineering, Chennai, India.
Jeyakumar S and Sundaravadivelu S (2009). Parallel Motion Estimation Using Cluster Computing for Fast Video Sequence Compression, i-manager’s Journal on Software Engineering, 3(3), 57-63. https://doi.org/10.26634/jse.3.3.195

Abstract

Video image compression is an area where the computational demand far exceeds the capacity of conventional sequential processing. In this paper, we present a parallel motion estimation model for video sequence compression using cluster computing on a local network. The proposed approach decomposes functions and data across a cluster of workstations using the MPI mechanism. Parallel compression is achieved by having multiple networked personal computers perform compression on different chunks of input frames simultaneously. The method used for video compression is conventional block based motion vector estimation, with a refined motion vector approximation that uses less side information for decoding. The implementation results show that the proposed parallel method achieves better speedup than the sequential algorithm and is well suited to real-time applications such as online video surveillance, video conferencing and telemedicine.
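
A minimal sketch of the block-based motion estimation kernel that the paper distributes over a cluster with MPI; the MPI layer itself is omitted here and the two frames are synthetic. Full search minimizes the sum of absolute differences (SAD) over a small search window.

```python
# Full-search block matching on two synthetic frames (illustrative sketch).
import numpy as np

rng = np.random.default_rng(3)
H, W, B, R = 64, 64, 16, 4          # frame size, block size, search range

prev = rng.integers(0, 256, (H, W)).astype(np.int32)
# Current frame: prev shifted so every block's true motion vector is (2, 3)
curr = np.roll(prev, (-2, -3), axis=(0, 1))

def best_vector(by, bx):
    """Full-search SAD matching for the curr block at (by, bx)."""
    block = curr[by:by + B, bx:bx + B]
    best_sad, best_mv = None, (0, 0)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= H - B and 0 <= x <= W - B:
                sad = int(np.abs(block - prev[y:y + B, x:x + B]).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
    return best_mv

# Interior blocks only, avoiding the wrap-around np.roll introduces at borders
vectors = [best_vector(by, bx)
           for by in range(B, H - B, B) for bx in range(B, W - B, B)]
print("Motion vectors:", set(vectors))   # expect {(2, 3)} for a pure shift
```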

Research Paper

A Comparative Study on Adaptive and Non-Adaptive Lifting based Wavelet Image Compression using AWIC Algorithm

Chenchu Krishnaiah G*, T. Jaya Chandra Prasad**, M. N. Giri Prasad***
* Department of ECE, GKCE, Sullurpet, A.P, India.
** Department of ECE, RGMCET, Nandyal, A.P, India.
*** Department of ECE, JNTUCE, Pulivendula, A.P, India.
Chenchu Krishnaiah G, T. Jaya Chandra Prasad and Giri Prasad M.N (2009). A Comparative Study on Adaptive and Non-Adaptive Lifting based Wavelet Image Compression using AWIC Algorithm, i-manager’s Journal on Software Engineering, 3(3), 64-72. https://doi.org/10.26634/jse.3.3.196

Abstract

The lifting scheme, also called second-generation wavelets, can be designed to factor classical wavelets into lifting steps, to increase the number of vanishing moments of wavelets, or to create different types of wavelets, including adaptive and nonlinear filters. The lifting scheme provides a new spatial intuition into the wavelet transform that simplifies the introduction of adaptivity. The adaptive transform is constructed based on adaptive prediction in a lifting scheme procedure.

In this paper, an attempt has been made to compare the proposed adaptive lifting scheme with the non-adaptive lifting scheme using the Adaptive Wavelet Image Compression (AWIC) algorithm, and to show that the adaptive scheme works better and achieves a balance between image quality and computational complexity. We demonstrate the power of the proposed adaptive lifting scheme through successful application to image compression problems; its application to lossy compression is used to show the performance of the adaptive lifting scheme.
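
A minimal sketch of one level of a non-adaptive lifting wavelet transform: the Haar-like predict and update steps below stand in for the AWIC filters, which the paper adapts to local image content.

```python
# One level of Haar lifting: split, predict, update (illustrative sketch).
import numpy as np

def lifting_forward(signal):
    """Split into even/odd samples, predict odds from evens, update evens."""
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    detail = odd - even            # predict step: odd samples from even ones
    approx = even + detail / 2     # update step: preserves the signal mean
    return approx, detail

def lifting_inverse(approx, detail):
    """Undo the update and predict steps, then interleave the samples."""
    even = approx - detail / 2
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([4, 6, 10, 12, 8, 6, 5, 5])
a, d = lifting_forward(x)
print("approx:", a)                # low-pass half (pairwise means)
print("detail:", d)                # high-pass half (pairwise differences)
print("reconstructed:", lifting_inverse(a, d))   # matches x exactly
```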