i-manager's Journal on Software Engineering (JSE)


Volume 5 Issue 2 October - December 2010

Research Paper

Imperfect Debugging Software Reliability Growth Model With Warranty Cost And Optimal Release Policy

Shaik Mohammad Rafi*, Shaheda Akthar**
* Associate Professor, Department of Computer Science and Engineering, SMITW, J.N.T University, Kakinada, A.P., India.
** Associate Professor, Department of Computer Science and Engineering, SMCE, J.N.T University, Kakinada, A.P., India.
Shaik Mohammad Rafi and Shaheda Akthar (2010). Imperfect Debugging Software Reliability Growth Model With Warranty Cost And Optimal Release Policy. i-manager’s Journal on Software Engineering, 5(2), 1-9. https://doi.org/10.26634/jse.5.2.1329

Abstract

Reliability is one of the most important quality measures of a software product. Testing is an important phase of the software development life cycle whose intention is to find the bugs present in the software product. A mathematical model that describes software testing within the development cycle is termed a software reliability growth model (SRGM), and many such models have been proposed over the past few decades. Estimating an accurate release time for software is an important and challenging issue for a software development company: if the product is released early it contains more errors and is less reliable, whereas a late release increases the development cost. It is observed that more than half of the total development cost is concentrated in the maintenance phase, so estimating the reliability and the development cost of the product with a software reliability growth model at the maintenance phase is an important issue. A warranty is an agreement between the customer and the vendor of the software to provide extra protection for the product. Several papers have addressed optimal release time by considering the warranty cost, but all earlier NHPP warranty cost models assumed a perfect-debugging software reliability growth model when estimating the software maintenance cost. In the operational phase, however, the software product is influenced by several factors such as the environment, resources, and the nature of the faults, and under such circumstances it is not appropriate to assume a perfect-debugging NHPP model. In this paper we propose an imperfect-debugging software reliability growth model that combines cost and reliability for a given warranty period to estimate the optimal release time of a software product.
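To make the cost/reliability trade-off concrete, here is a minimal sketch pairing a common imperfect-debugging NHPP mean value function, m(t) = a/(1-α)·(1 - e^(-b(1-α)t)) with fault-introduction rate α, with a release-cost function that charges more for fixes falling inside the warranty period. The specific mean value function, all parameter values, and the reliability floor are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def m(t, a=100.0, b=0.15, alpha=0.05):
    """Imperfect-debugging NHPP mean value function (illustrative):
    faults are introduced at rate alpha while detected faults are removed,
    giving m(t) = a/(1-alpha) * (1 - exp(-b*(1-alpha)*t))."""
    return a / (1 - alpha) * (1 - np.exp(-b * (1 - alpha) * t))

def total_cost(T, Tw=20.0, c_test=1.0, c_warranty=5.0, c_time=0.5):
    """Testing-phase fix cost + dearer warranty-period fix cost + time cost."""
    return c_test * m(T) + c_warranty * (m(T + Tw) - m(T)) + c_time * T

def reliability(x, T):
    """P(no failure in (T, T+x]) under the NHPP assumption."""
    return np.exp(-(m(T + x) - m(T)))

# Grid-search the release time minimizing cost subject to a reliability
# floor over a mission time of 1 unit (all values illustrative).
Ts = np.linspace(1.0, 100.0, 2000)
feasible = Ts[reliability(1.0, Ts) >= 0.95]
T_opt = feasible[np.argmin(total_cost(feasible))]
print(f"optimal release time ~ {T_opt:.1f}, total cost ~ {total_cost(T_opt):.1f}")
```

Raising the warranty-period fix cost relative to the testing-phase cost pushes the optimal release time later; that sensitivity is exactly what combining cost and reliability at a given warranty period is meant to capture.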

Research Paper

Visualization Of Secured Grid Model For The Distributed Applications In Grid Computing Environment

Venkatesan Thillainayagam* , Harikrishnan Viswanathan**
*,** Department of Computer Applications, AVC College of Engineering, Mannampandal.
Venkatesan Thillainayagam and Harikrishnan Viswanathan (2010). Visualization Of Secured Grid Model For The Distributed Applications In Grid Computing Environment. i-manager’s Journal on Software Engineering, 5(2), 10-15. https://doi.org/10.26634/jse.5.2.1330

Abstract

With great computing power comes great responsibility. Security is of utmost importance in a grid environment. Since a grid performs large computations, data is assumed to be available at every node in the processing cycle, which increases the risk of data manipulation in various forms. We must also consider what happens to the data when a node fails. An ideal grid has a short convergence time and a low recovery time in case of a complete grid failure. By convergence, we mean that each processor node has complete information about every other processor node in the grid; recovery time is the time the grid takes to restart from scratch after a major breakdown. To put things in perspective, a good grid-based computing environment has an intelligent grid administrator to monitor user logs and scheduled jobs, and a good grid operating system tailor-made to the application of the grid. We propose an analogy between the OSI model and a grid model to elaborate the functions and utilities of a grid. There have been numerous previous comparisons, each instructive in its own right, but an analogy with a network model carries more weight, since physically a grid is nothing but an interconnection, and an interconnection is best defined in relation to a computer network interconnection. We propose a secured grid model for distributed applications in the grid computing environment.

Research Paper

Parallel Singular Value Decomposition Algorithm on Cell Broadband Engine Architecture

Padmaja Kanase*, Ankush Mittal**, Kuldip Singh***
* Department of Electronics and Computer Engineering, Indian Institute of Technology, Roorkee.
** Department of Computer Science, College of Engineering, Roorkee.
*** Department of Electronics and Computer Engineering, Indian Institute of Technology, Roorkee.
Padmaja Kanase, Ankush Mittal and Kuldip Singh (2010). Parallel Singular Value Decomposition Algorithm on Cell Broadband Engine Architecture. i-manager’s Journal on Software Engineering, 5(2), 16-25. https://doi.org/10.26634/jse.5.2.1331

Abstract

The singular value decomposition (SVD) is an important technique for factorizing a rectangular real or complex matrix, but it is computationally expensive in both time and space. Multicore processors are well suited to such computationally intensive problems. The Cell Broadband Engine is one such multicore processor, consisting of a traditional PowerPC-based master core that runs the operating system and eight synergistic co-processors (SPEs) built for compute-intensive processing. This work introduces a modification of the serial singular value decomposition algorithm and describes a parallel implementation of the modified algorithm on the Cell BE, together with the issues involved. System-level optimization features of the Cell BE are applied to algorithm-specific operations to achieve substantial improvements. The implementation achieves significant performance, giving about an 8x speedup over the sequential implementation.
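The abstract does not identify which serial SVD variant was modified; a common choice for parallel implementations is the one-sided (Hestenes) Jacobi method, since the column-pair rotations within a sweep are independent and map naturally onto the eight SPEs. A serial NumPy sketch of that scheme, offered as an illustrative assumption rather than the paper's algorithm:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD: rotate column pairs of A until all columns
    are mutually orthogonal; singular values are then the column norms.
    Assumes A has full column rank (a sketch-level simplification)."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):  # pairs in a sweep are independent
                app = A[:, p] @ A[:, p]
                aqq = A[:, q] @ A[:, q]
                apq = A[:, p] @ A[:, q]
                off = max(off, abs(apq))
                if abs(apq) < tol:
                    continue
                tau = (aqq - app) / (2.0 * apq)
                t = (1.0 if tau >= 0 else -1.0) / (abs(tau) + np.hypot(1.0, tau))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                J = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ J
                V[:, [p, q]] = V[:, [p, q]] @ J
        if off < tol:
            break
    sigma = np.linalg.norm(A, axis=0)
    return A / sigma, sigma, V  # U, singular values, V
```

On the Cell BE, the independent (p, q) pairs of each sweep would be distributed across the SPEs, a decomposition consistent with the roughly 8x speedup reported.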

Research Paper

Performance and Denoising On the Discrete Wavelet Transform For Image Segmentation

Sankar S*, Yuvaraj**
* Assistant Professor, Department of EEE, Panimalar Institute of Technology, Chennai.
** Assistant Professor, Department of Computer Science and Engineering, Panimalar Institute of Technology, Chennai.
S. Sankar and Yuvaraj (2010). Performance and Denoising On the Discrete Wavelet Transform For Image Segmentation. i-manager’s Journal on Software Engineering, 5(2), 26-30. https://doi.org/10.26634/jse.5.2.1332

Abstract

Image denoising is a common procedure in digital image processing that aims to remove noise, which may corrupt an image during acquisition or transmission, while retaining image quality. This procedure is traditionally performed by filtering in the spatial or frequency domain. Recently, many methods have been reported that perform denoising in the Discrete Wavelet Transform (DWT) domain. The transform coefficients within the subbands of a DWT can be locally modeled as i.i.d. (independent, identically distributed) random variables with a Generalized Gaussian distribution. Some denoising algorithms threshold the wavelet coefficients that have been affected by additive white Gaussian noise, retaining only large coefficients and setting the rest to zero; however, their performance is not sufficiently effective because they are not spatially adaptive. Other methods evaluate the denoised coefficients with an MMSE (Minimum Mean Square Error) estimator, in terms of the noisy coefficients and the variances of signal and noise. The signal variance is locally estimated by an ML (Maximum Likelihood) or a MAP (Maximum A Posteriori) estimator in small regions of every subband where the variance is assumed practically constant. These methods give effective results, but their spatial adaptivity is not well suited near object edges, where the variance field does not vary smoothly. The optimality of the regions in which the estimators are applied has been examined in some research works. This paper evaluates several wavelet-domain algorithms with respect to their subjective and objective quality performance and examines some improvements.
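As a concrete instance of the thresholding family discussed above, the following sketch soft-thresholds the DWT detail coefficients with the universal (VisuShrink) threshold using the PyWavelets library; the wavelet, decomposition depth, and noise estimator are illustrative choices. Note that this is exactly the non-adaptive baseline whose weakness near edges the paper examines.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(img, wavelet="db4", level=3):
    """Soft-threshold the DWT detail subbands; keep the approximation.
    Noise sigma is estimated from the finest diagonal subband via MAD."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # MAD noise estimate
    thresh = sigma * np.sqrt(2 * np.log(img.size))      # universal threshold
    out = [coeffs[0]]  # approximation subband left untouched
    for (cH, cV, cD) in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, thresh, mode="soft")
                         for c in (cH, cV, cD)))
    return pywt.waverec2(out, wavelet)
```

The spatially adaptive MMSE/ML/MAP estimators the abstract describes replace the single global threshold here with per-region variance estimates.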

Research Paper

Network-Level Length-Based Intrusion Detection System

Jeya S*, S. Muthu Perumal Pillai**
* Professor & Head, M.C.A. Department, PET Engineering College, Vallioor, Tirunelveli.
** Assistant Professor, M.C.A. Department, PET Engineering College, Vallioor, Tirunelveli.
Jeya S and S. Muthu Perumal Pillai (2010). Network-Level Length-Based Intrusion Detection System. i-manager’s Journal on Software Engineering, 5(2), 31-36. https://doi.org/10.26634/jse.5.2.1333

Abstract

As the transmission of data over the Internet increases, so does the need to protect connected systems. Most existing network-based signatures are exploit-specific and can be easily evaded. In this paper, we propose generating vulnerability-driven signatures at the network level without any host-level analysis of worm execution or of the vulnerable programs. The implementation considers both temporal and spatial information about network connections, which helps in identifying complex anomalous behaviors. To detect unknown intrusions, a proper knowledge base is formed after preprocessing the packets captured from the network. As a first step, we design a network-based Length-based Signature Generator (LESG) for worms exploiting buffer-overflow vulnerabilities. This work focuses on the TCP/IP network protocols.
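At its core, a length-based signature flags a protocol field whose observed length exceeds what the vulnerable buffer can tolerate. The toy matcher below illustrates only that flagging step; the field names, thresholds, and protocols are hypothetical, and real LESG operation (parsing TCP/IP payloads and learning the length bounds from traffic) is elided.

```python
# Toy length-based matcher: a signature is (protocol, field) -> maximum
# benign length; a flow matches if any parsed field exceeds its bound.
SIGNATURES = {
    ("smb", "filename"): 256,   # hypothetical learned bound
    ("http", "uri"): 2048,      # hypothetical learned bound
}

def matches_signature(protocol, fields):
    """fields: dict mapping a parsed field name to its observed bytes."""
    for name, value in fields.items():
        limit = SIGNATURES.get((protocol, name))
        if limit is not None and len(value) > limit:
            return True  # long enough to overflow the vulnerable buffer
    return False

# A 4096-byte filename in an SMB request would be flagged:
print(matches_signature("smb", {"filename": b"A" * 4096}))  # True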

Research Paper

Semantic Extraction in PGF using POS tagged data of Sanskrit

Smita Selot*, Neeta Tripathi**, A.S. Zadgaonkar***
* Reader, Department of CA, SSCET, Bhilai.
** Principal, GD Rungta College, Bhilai.
*** Vice Chancellor, CV Raman University, Bilaspur.
Smita Selot, Neeta Tripathi and A.S. Zadgaonkar (2010). Semantic Extraction in PGF using POS tagged data of Sanskrit. i-manager’s Journal on Software Engineering, 5(2), 37-43. https://doi.org/10.26634/jse.5.2.1334

Abstract

Extraction of semantics from text is a vital application covering various fields of artificial intelligence, such as natural language processing, knowledge representation, and machine learning. Internationally, research is being carried out for languages such as English, German, and Chinese; in India, research on Indian languages such as Hindi, Tamil, Bengali, and other regional languages is also developing rapidly. In this paper, we emphasize the use of the Sanskrit language for semantic extraction. Sanskrit, being an order-free language with a systematic grammar, offers an excellent opportunity for extracting semantics with high efficiency. Panini, the ancient grammarian, introduced six karakas (cases) to identify the semantic role of a word in a sentence. These karakas are analyzed and applied for semantic extraction from Sanskrit text. Input sentences are first converted into syntactic structures, which are then used for semantic analysis. The syntactic structures are present in Part-of-Speech (POS) tagged form, and various features of this tagged data are analyzed to extract the semantic role of each word in a sentence. Finally, the semantic label of each word of a sentence is stored in frames called case frames, which act as a knowledge representation tool. Thus the input POS-tagged data is mapped to semantically tagged data and case frames are generated. Such systems are useful in building question-answering applications, machine learning, knowledge representation, and information retrieval.
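To make the final mapping step concrete, the toy sketch below converts POS/case-tagged tokens into a karaka-labeled case frame. The tag names and the vibhakti-to-karaka table are simplified illustrations, not the paper's tag set; real Sanskrit analysis must also consult the verb and context.

```python
# Simplified vibhakti (case ending) -> karaka role table; real analysis
# must also consult the verb's expectations and context, elided here.
VIBHAKTI_TO_KARAKA = {
    "nom": "karta",       # agent
    "acc": "karma",       # object
    "ins": "karana",      # instrument
    "dat": "sampradana",  # recipient
    "abl": "apadana",     # source
    "loc": "adhikarana",  # locus
}

def build_case_frame(tagged_sentence):
    """tagged_sentence: (word, pos, case) triples from the POS tagger."""
    frame = {}
    for word, pos, case in tagged_sentence:
        if pos == "VERB":
            frame["action"] = word
        elif case in VIBHAKTI_TO_KARAKA:
            frame[VIBHAKTI_TO_KARAKA[case]] = word
    return frame

# 'ramah gramam gacchati' (Rama goes to the village):
print(build_case_frame([("ramah", "NOUN", "nom"),
                        ("gramam", "NOUN", "acc"),
                        ("gacchati", "VERB", "-")]))
# -> {'karta': 'ramah', 'karma': 'gramam', 'action': 'gacchati'}
```

The resulting case frames then serve directly as the knowledge representation consumed by question-answering and retrieval applications.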

Research Paper

Reliability of Component Based Software with Similar Software Components – a Review

Chinnaiyan R*, S. Somasundaram**
* Associate Professor, Department of Computer Applications, A.V.C College of Engineering, Tamil Nadu.
** Associate Professor, Department of Mathematics, Coimbatore Institute of Technology, Coimbatore, Tamil Nadu.
R. Chinnaiyan and S. Somasundaram (2010). Reliability of Component Based Software with Similar Software Components – A Review. i-manager’s Journal on Software Engineering, 5(2), 44-49. https://doi.org/10.26634/jse.5.2.1335

Abstract

This paper proposes a new approach using a Markov model for analyzing the reliability of a component-based software system with similar software components and similar repair components. The components of the software system under consideration can have two distinct configurations: they can be arranged either in series or in parallel. A third case is also considered, in which the software system is up (good) if k out of n software components are good. For all three cases, a procedure is proposed for calculating the probability of software system availability and the time taken for the software system to reach the steady state.
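The series, parallel, and k-out-of-n cases can all be expressed with one birth-death Markov chain over the number of failed components. The sketch below computes steady-state availability under illustrative failure and repair rates and a single-repair-facility assumption; the paper's exact transition structure may differ.

```python
import numpy as np

def steady_state_availability(n, k, lam, mu):
    """State i = number of failed components (birth-death chain).
    Failure rate from state i is (n - i) * lam; a single repair
    facility restores one component at rate mu. The system is up
    while at least k of the n components work, i.e. i <= n - k."""
    pi = np.ones(n + 1)
    for i in range(n):
        pi[i + 1] = pi[i] * (n - i) * lam / mu  # detailed balance
    pi /= pi.sum()
    return pi[: n - k + 1].sum()

lam, mu = 0.01, 0.5  # illustrative failure and repair rates
print("series   (4-out-of-4):", steady_state_availability(4, 4, lam, mu))
print("parallel (1-out-of-4):", steady_state_availability(4, 1, lam, mu))
print("2-out-of-4           :", steady_state_availability(4, 2, lam, mu))
```

Series is the special case k = n (every component must work) and parallel is k = 1, so one availability routine covers all three configurations.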

Research Paper

Fast Recovery from Topology Changes and Communication Link Failures

S. Vasundra*, Sathyanarayana B**
* Associate Professor, Department of CSE, JNTUA College of Engineering, Anantapur.
** Professor & Chairman BOS, Department of CA & T, S.K. University, Anantapur.
S. Vasundra and B. Sathyanarayana (2010). Fast Recovery from Topology Changes and Communication Link Failures. i-manager’s Journal on Software Engineering, 5(2), 50-57. https://doi.org/10.26634/jse.5.2.1336

Abstract

Mobile ad hoc networks play a central role in our everyday communication infrastructure. In mobile ad hoc networks, achieving multicast communication is a challenging task, because the topology may change frequently and communication links may break owing to user mobility. We introduce the MANSI (Multicast for Ad hoc Networks with Swarm Intelligence) protocol, which relies on a swarm intelligence based optimization technique to learn and discover efficient multicast connectivity. The proposed protocol can quickly and efficiently establish initial multicast connectivity and improve the resulting connectivity via different optimization techniques. Using a simulation approach, we investigate the performance of the proposed algorithm through comparison with an algorithm previously proposed in the literature. The numerical results demonstrate that our proposed algorithm performs well.
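The swarm intelligence mechanism is described only at a high level here; in protocols of this family, the usual core is an ant-colony-style pheromone table that biases forwarding-node selection toward cheaper multicast paths and evaporates so the mesh adapts when links break. A toy sketch of that idea (node names, rates, and scoring are illustrative, not MANSI's actual rules):

```python
import random

# Pheromone per candidate forwarding node; ants deposit on nodes of
# cheap paths, and evaporation forgets stale links after topology changes.
pheromone = {"nodeA": 1.0, "nodeB": 1.0, "nodeC": 1.0}
EVAPORATION, DEPOSIT = 0.1, 1.0

def choose_forwarder():
    """Pick a forwarding node with probability proportional to pheromone."""
    nodes, weights = zip(*pheromone.items())
    return random.choices(nodes, weights=weights)[0]

def reinforce(path_nodes, path_cost):
    """Evaporate everywhere, then reward the nodes on a discovered path."""
    for node in pheromone:
        pheromone[node] *= (1.0 - EVAPORATION)
    for node in path_nodes:
        pheromone[node] += DEPOSIT / path_cost  # cheaper path, more reward

reinforce(["nodeB"], path_cost=3)  # an ant found a 3-hop path via nodeB
print(choose_forwarder())          # nodeB is now likelier to be chosen
```

Evaporation is what gives such protocols their fast recovery: pheromone on a broken link decays within a few rounds, steering traffic onto surviving paths without a global recomputation.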

Research Paper

Software Model to Create Data Profile for Analysis of Gray Bus Codec

Kamal K Mehta*, Sushil Dubey**, Sharma H.R***
* Associate Professor, Department of Computer Science & Engineering, SSCET, Bhilai.
** MCA III Semester Student, SSCET, Bhilai.
*** Director, Chatrapati Shivaji Institute of Technology, Durg (C.G.).
Kamal K Mehta, Sushil Dubey and Sharma H.R (2010). Software Model to Create Data Profile for Analysis of Gray Bus Codec. i-manager’s Journal on Software Engineering, 5(2), 58-62. https://doi.org/10.26634/jse.5.2.1337

Abstract

Low-power design has drawn designers' attention for over a decade, and various methods have been proposed in the literature. One approach is to adopt a bus encoding scheme, but it has been observed that the efficiency claims for proposed bus encoding methods often lack a concrete basis, and the task becomes complex when implementation on a computer system is considered. In this paper, a modeling technique for a bus CODEC is presented. The model is made versatile enough to be used with any type of bus encoding technique. The major focus is on dynamic power dissipation, which depends on the operating frequency, the supply voltage (VDD), the capacitance, and the switching activity. Work has been done to find the optimum value of the supply voltage, and strong efforts have been made on dynamically changing the frequency, but no effort has been seen in terms of data profiling. This paper therefore presents a software model to maintain a record of the data flowing across a computer channel. A processor operating at a MHz-range frequency is taken into consideration, and a database has been created to support the variety of CPUs available. The developed system lets the user set the input in any desired sequence, after which the transition activity is calculated. Care has been taken to keep the model versatile so that it can be used with a variety of bus encoders/decoders; the only restriction is that the codec should not contain memory elements. The paper includes extensive experimental work on the Gray bus encoding scheme. The system can update the database incrementally as the task proceeds, and the work has been further extended to maintain a library of multiple CPUs with different manufacturing scales, supply voltages, and operating frequencies. Results are compiled using a random dataset.
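The transition-activity accounting such a model performs can be illustrated directly: Gray-encode each bus word, count the bit flips between consecutive words, and scale by the usual dynamic-power proportionality activity·C·VDD²·f. The data profiles and electrical constants below are illustrative stand-ins; the paper's model draws them from its per-CPU database instead.

```python
import random

def gray_encode(n: int) -> int:
    """Binary-reflected Gray code."""
    return n ^ (n >> 1)

def transitions(words):
    """Total bit flips on the bus across consecutive words."""
    return sum(bin(a ^ b).count("1") for a, b in zip(words, words[1:]))

seq = [i % 256 for i in range(10000)]                # address-like profile
rnd = [random.randrange(256) for _ in range(10000)]  # random data profile
for name, data in [("sequential", seq), ("random", rnd)]:
    raw = transitions(data)
    enc = transitions([gray_encode(w) for w in data])
    print(f"{name:10s} binary flips={raw:6d} gray flips={enc:6d}")

# Dynamic power scales as activity * C * VDD^2 * f (illustrative constants):
C, VDD, F = 5e-12, 1.2, 100e6
alpha = transitions(seq) / len(seq)  # average flips per transfer
print(f"est. dynamic power (sequential, binary): {alpha * C * VDD**2 * F:.3e} W")
```

For sequential, counter-like traffic Gray coding yields exactly one flip per word, while for uniformly random data it gives no expected reduction, which is why profiling the actual data stream, as this paper proposes, matters for judging an encoding scheme.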