i-manager's Journal on Software Engineering (JSE)


Volume 4 Issue 4 April - June 2010

Research Paper

A Multi-Agent Hierarchical Fuzzy Signatures Approach To Optimize A Quality Management System

Hajer Ben Mahmoud* , Raouf Ketata**, Taieb Ben Romdhane***, Samir Ben Ahmed****
* Student Researcher, National Institute of Applied Science and Technology (INSAT), Northern Urban Centres Tunis-Tunisia.
**Assistant Professor, Research unit on Intelligent Control Design and Optimisation of Complex System (ICOS), Northern Urban Centres Tunisia.
***Assistant Professor, National Institute of Applied Science and Technology (INSAT).
****Professor, Research unit on Methods and tools for Complex Computer Systems (MOSIC).
Hajer Ben Mahmoud, Raouf Ketata, Taieb Ben Romdhane and Samir Ben Ahmed (2010). A Multi-Agent Hierarchical Fuzzy Signatures Approach To Optimize A Quality Management System. i-manager’s Journal on Software Engineering, 4(4), 1-10. https://doi.org/10.26634/jse.4.4.1166

Abstract

This paper proposes a support system for the quality management of a company. Reproducing such a Quality Management System (QMS) requires both modeling and piloting. For modeling, a Multi-Agent System (MAS) approach is proposed, using a micro framework of interactions between the agents based on UML (Unified Modeling Language) sequence diagrams. For piloting, the proposed method is Hierarchical Fuzzy Signatures (HFS), one of the best-suited methods for this kind of problem. To validate the developed system, an industrial company facing a major problem in controlling the quality level of its production lines was chosen. The achieved results show that the HFS concept is effective, efficient and flexible for piloting such a QMS-MAS model.
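The abstract does not spell out how a hierarchical fuzzy signature is evaluated, but the core idea is a tree of fuzzy membership values combined bottom-up by per-node aggregation operators. The sketch below is an illustrative stand-in, not the authors' implementation; the node encoding and the choice of `min`/`mean` aggregators are assumptions.

```python
def evaluate_signature(node):
    """Recursively aggregate a hierarchical fuzzy signature.

    A node is either a leaf membership value in [0, 1], or a pair
    (aggregator_name, [children]) whose children are themselves nodes.
    """
    if isinstance(node, (int, float)):
        return float(node)          # leaf: a raw membership degree
    agg, children = node
    values = [evaluate_signature(c) for c in children]
    if agg == "min":                # pessimistic (t-norm-like) aggregation
        return min(values)
    if agg == "mean":               # averaging aggregation
        return sum(values) / len(values)
    raise ValueError(f"unknown aggregator: {agg}")
```

For example, a two-level signature `("mean", [("min", [0.8, 0.6]), 0.9])` first reduces the inner branch to 0.6, then averages with 0.9 to give an overall quality indicator of 0.75.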

Research Paper

Improvement in Spectral Output and Computational Efficiency of Digital Filter Bank

Ganekanti Hemanj* , K. Satya Prasad**, P. Venkata Subbaiah***
* Research Scholar, Department of ECE., JNT University, Kakinada, A.P., India.
** Professor (ECE) and Director of Evaluation, JNTU, Kakinada. A.P., India.
*** Professor (ECE) cum Principal, ASIST, Paritala, Krishna Dt., A.P., India.
Ganekanti Hemanj, K. Satya Prasad and P. Venkata Subbaiah (2010). Improvement in Spectral Output and Computational Efficiency of Digital Filter Bank. i-manager’s Journal on Software Engineering, 4(4), 66-73. https://doi.org/10.26634/jse.4.4.1183

Abstract

Digital filtering is a crucial operation in the reconstruction and visualization of information, and also contributes to increased computational efficiency. An FIR-based digital filter bank is particularly effective in these respects. In this paper, we propose a novel multirate approach using a digital filter bank based on a modified Kaiser window. A remarkable spectral output is achieved through an increase in magnitude, output quality, frequency response and computational efficiency. Simulation results are included to demonstrate the performance, and a comparison is drawn to highlight the advantages of the proposed method. This type of filter bank is particularly suitable for hearing aid applications, where it yields significant improvements in output quality.
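For readers unfamiliar with the starting point of this work, the standard (unmodified) Kaiser window can be sampled directly from its closed form. The paper's specific modification is not described in the abstract; the sketch below implements only the classical window, with the zeroth-order modified Bessel function computed from its power series.

```python
import math

def i0(x, terms=25):
    """Zeroth-order modified Bessel function I0(x) via its power series."""
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= (x / (2.0 * k)) ** 2   # next term of sum_k ((x/2)^k / k!)^2
        s += t
    return s

def kaiser_window(n, beta):
    """Length-n (n >= 2) Kaiser window with shape parameter beta."""
    m = n - 1
    return [i0(beta * math.sqrt(1.0 - (2.0 * k / m - 1.0) ** 2)) / i0(beta)
            for k in range(n)]
```

Larger `beta` trades a wider main lobe for lower sidelobes in the FIR filters built from this window, which is the usual lever for controlling stopband attenuation in a filter bank.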

Research Paper

A Quantization based Watermarking Scheme for Image Authentication

Manisha Sharma* , M. Kowar**
*-** Department of Electronics & Telecommunication Engineering, Bhilai Institute of Technology, Bhilai House, Durg, Chhattisgarh, India.
Manisha Sharma and M. Kowar (2010). A Quantization based Watermarking Scheme for Image Authentication. i-manager’s Journal on Software Engineering, 4(4), 74-79. https://doi.org/10.26634/jse.4.4.1184

Abstract

A watermarking technique for image authentication inserts hidden data into an image in order to detect any accidental or malicious alteration. In the proposed work, a watermark in the form of a visually meaningful binary pattern is embedded for the purpose of tamper detection. The watermark is embedded in the discrete cosine transform coefficients of the image by quantizing the coefficients proportionally according to the watermark bit. Experimental results demonstrate the performance and effectiveness of the scheme for image authentication by maintaining good values for both the Peak Signal to Noise Ratio (PSNR) and the Tamper Assessment Function (TAF).
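The abstract does not give the paper's exact quantization rule, coefficient selection, or step size; the sketch below shows the generic idea with a parity-quantization (QIM-style) rule on a single DCT coefficient, purely as an illustrative stand-in.

```python
def embed_bit(coeff, bit, delta=8.0):
    """Quantize a DCT coefficient so its quantization-index parity encodes bit."""
    q = round(coeff / delta)
    if q % 2 != bit:        # nudge to the nearest index with the right parity
        q += 1
    return q * delta

def extract_bit(coeff, delta=8.0):
    """Recover the embedded bit from the coefficient's quantization parity."""
    return int(round(coeff / delta)) % 2
```

Because extraction only needs the index parity, the bit survives small perturbations (less than `delta / 2`), while any larger tampering of a block flips bits there and is flagged by the tamper assessment step.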

Research Paper

Broad and Robotic Simulation Modeling Based On Measurements

Sankar S* , G. Gokula Krishnan **
* Senior Lecturer in Department of EEE, Panimalar Institute of Technology, Chennai, TamilNadu, India.
** Lecturer in Department of EEE, Panimalar Institute of Technology, Chennai, TamilNadu, India.
S. Sankar and G. Gokula Krishnan (2010). Broad and Robotic Simulation Modeling Based On Measurements. i-manager’s Journal on Software Engineering, 4(4), 11-17. https://doi.org/10.26634/jse.4.4.1168

Abstract

SoFT is a new method and tool which measures and models linear electrical network components in a wide frequency band with unprecedented accuracy. This is achieved by a special modal based measurement technique in combination with suitable rational fitting and passivity enforcement methods. The models are easily imported into most commonly used simulation software.

This paper demonstrates the SoFT tool computations in a comparison between A) time-domain measurements of a lightning impulse test of a power transformer, B) simulation of the test results using a SoFT model, and C) simulation of the test results using a lumped-element circuit simulation model based on geometrical transformer design information.

Research Paper

H.264 Based Architecture of Digital Surveillance Network in Application to Computer Visualization

Wei Zhao* , Raul Batista**, Jeffrey Fan***, Jichang Tan****
*-***Department of Electrical and Computer Engineering, Florida International University, Miami, USA.
****Department of Computer Science and Information Engineering, St. John's University, Taipei, Taiwan.
Wei Zhao, Raul Batista, Jeffrey Fan and Jichang Tan (2010). H.264 Based Architecture of Digital Surveillance Network in Application to Computer Visualization. i-manager’s Journal on Software Engineering, 4(4), 18-26. https://doi.org/10.26634/jse.4.4.1169

Abstract

The majority of today’s video surveillance systems are analog based, require human interaction, and are susceptible to multiple threats. By applying a fully digitized system incorporating the Vector Bank (VB), Laplacian of Gaussian (LoG), and Directional Discrete Cosine Transform (DDCT) at each surveillance camera in a network, digital processing is accomplished at the individual camera level, reducing the memory required at the network hub and providing a decentralized, secure surveillance network. Detection and tracking of moving objects may be sensed by each independent surveillance camera in the network and processed using VB, LoG, and DDCT to drastically reduce bandwidth and processing time, ultimately reducing data bus traffic and transmission bandwidth. In this paper, the addition of hardware-based System on Chip (SoC) digital signal processing algorithms interfaced to the H.264 architecture, for the purpose of a fully redundant and autonomous multi-camera digital video surveillance network, is discussed. Experimental results show the capability to detect, isolate and track objects while reducing computational time, channel bandwidth and overhead costs of the digital video surveillance network and increasing security. These achievements greatly improve computer visualization and the performance of the network.
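Of the three per-camera stages named above, the Laplacian of Gaussian is the one with a simple closed form: it responds strongly to blob-like and edge structure, which is why it is useful for object detection at the camera. The sketch below samples the standard LoG function on a small grid; it illustrates the operator only, not the authors' specific implementation, and a real camera pipeline would convolve each frame with such a kernel.

```python
import math

def log_kernel(size=5, sigma=1.0):
    """Sample the Laplacian-of-Gaussian on a size x size grid centered at 0."""
    half = size // 2
    s2 = sigma * sigma
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            r2 = x * x + y * y
            # LoG(x, y) = (1 / (pi * sigma^4)) * (r^2/(2 sigma^2) - 1) * exp(-r^2/(2 sigma^2))
            row.append((1.0 / (math.pi * s2 * s2))
                       * (r2 / (2.0 * s2) - 1.0)
                       * math.exp(-r2 / (2.0 * s2)))
        kernel.append(row)
    return kernel
```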

Research Paper

Monte Carlo Simulation for Reliability Assessment of Component Based Software Systems

Chinnaiyan R* , S. Somasundaram**
* Assistant Professor, Department of CA, AVCCE, Mannampandal, Tamil Nadu, India.
** Assistant Professor, Department of Mathematics, CIT, Coimbatore, Tamil Nadu, India.
Chinnaiyan R and S. Somasundaram (2010). Monte Carlo Simulation for Reliability Assessment of Component Based Software Systems. i-manager’s Journal on Software Engineering, 4(4), 27-32. https://doi.org/10.26634/jse.4.4.1171

Abstract

Reliability assessment of component-based software systems plays a vital role in developing quality software using the component-based development methodology. It can be achieved by the Monte Carlo simulation method of reliability prediction when software system complexity makes the formulation of exact models essentially impossible. The characteristics of the Monte Carlo method make it ideal for estimating the reliability of component-based software systems. Unlike many other mathematical models, software system complexity is irrelevant to the method. Not only can the structure of the software system be dynamic, but the precise structure of the software system need not even be known. Instead, software components need only be tested for failure during operation, which ensures that components used more often contribute proportionally more to the overall reliability estimate. This paper presents a novel Monte Carlo simulation method for assessing the reliability of component-based software systems.
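The property described above, that frequently used components weigh more in the estimate, falls out naturally when each simulated run walks an execution path through the components. The sketch below is a minimal illustration under assumed inputs (a transition-probability graph and per-component failure probabilities); the paper's actual model is not given in the abstract.

```python
import random

def simulate_run(transitions, fail_prob, start, end, rng):
    """Walk one execution path; fail if any visited component fails its trial."""
    node = start
    while node != end:
        if rng.random() < fail_prob[node]:
            return False                        # component failed this run
        nxt, weights = zip(*transitions[node].items())
        node = rng.choices(nxt, weights=weights)[0]
    return True

def estimate_reliability(transitions, fail_prob, start="A", end="End",
                         runs=20000, seed=42):
    """Reliability = fraction of simulated runs that complete without failure."""
    rng = random.Random(seed)
    ok = sum(simulate_run(transitions, fail_prob, start, end, rng)
             for _ in range(runs))
    return ok / runs
```

For a two-path example `{"A": {"B": 0.6, "C": 0.4}, "B": {"End": 1.0}, "C": {"End": 1.0}}` with failure probabilities 0.01, 0.05 and 0.02, the exact reliability is 0.99 × (0.6·0.95 + 0.4·0.98) ≈ 0.952, and the estimate converges to it as `runs` grows.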

Research Paper

Predictive Analysis in VLSI Design

Tom Page* , Gisli Thorsteinsson**
* Loughborough University, UK.
** University of Iceland.
Tom Page and Gisli Thorsteinsson (2010). Predictive Analysis in VLSI Design. i-manager’s Journal on Software Engineering, 4(4), 33-39. https://doi.org/10.26634/jse.4.4.1172

Abstract

Advances in silicon technology have made possible the design of very large scale integration (VLSI) chips. The size of these designs has reached a level where the design activity is carried out by a team of designers rather than an individual. The complexity of the designs and the merging of functional islands from different designers can lead to mistakes (bugs) in the chip.

This paper details the methods used by the Subsystems Electronics hardware design group (based at IBM Havant) to minimise the possibility of releasing a chip containing bugs. In most cases the design will have cost and schedule constraints, and there is a trade-off between the amount of time and effort expended in the design and simulation phases and the risk of sending a chip for fabrication before all the bugs have been found. The problem of determining when a chip should be released for fabrication has been addressed by the use of statistical analysis to assess when the simulation is complete or no longer likely to find mistakes.
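The abstract does not state which statistical criterion the IBM group used; one simple illustration of the idea, under that caveat, is a rule-of-succession estimate: if recent simulation cycles have stopped uncovering new bugs, the estimated chance that the next cycle finds one drops below an acceptable risk threshold.

```python
def residual_bug_rate(new_bugs, cycles):
    """Laplace rule-of-succession estimate of the probability that one more
    simulation cycle uncovers a previously unseen bug."""
    return (new_bugs + 1) / (cycles + 2)

def ready_for_release(new_bugs, cycles, risk=1e-3):
    """Release for fabrication once the residual discovery rate is below risk."""
    return residual_bug_rate(new_bugs, cycles) < risk
```

For example, zero new bugs over the last 1998 cycles gives an estimated residual rate of 1/2000 = 0.0005, clearing a 0.001 risk threshold, whereas 2 new bugs in 100 recent cycles clearly does not.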

Research Paper

Defending Against Stealthy Botnets

Bagath Basha* , N. Sankar Ram**, Paul Rodrigues***, L. Ranjith****
* Department of Computer Science & Engineering, Velammal Engineering College, Chennai.
** Department of Computer Science & Engineering, Velammal Engineering College, Chennai.
*** Professor & Dean, Department of Computer Science & Engineering, Hindustan University, Chennai.
**** Assistant Professor, TIFAC CORE, Velammal Engineering College, Chennai.
Bagath Basha, N. Sankar Ram, Paul Rodrigues and L. Ranjith (2010). Defending Against Stealthy Botnets. i-manager’s Journal on Software Engineering, 4(4), 40-49. https://doi.org/10.26634/jse.4.4.1176

Abstract

Global Internet threats are rapidly evolving from attacks designed solely to disable infrastructure to those that also target people and organizations. This alarming new class of attacks directly impacts the day-to-day lives of millions of people and endangers businesses and governments around the world. For example, computer users are assailed with spyware that snoops on confidential information, spam that floods email accounts, and phishing scams that steal identities. At the center of many of these attacks is a large pool of compromised computers located in homes, schools, businesses, and governments around the world.

In this paper, the authors provide a detailed overview of current Botnet technology and defense by exploring the intersection between existing Botnet research, the evolution of botnets themselves, and the goals and perspectives of various types of networks. The authors also describe the invariant nature of Botnet behavior in its various phases, and how different kinds of networks have access to different types of visibility, which strongly impacts the effectiveness of any Botnet detection mechanism. A comprehensive picture of the various Botnet detection techniques that have been proposed is provided. Finally, the paper summarizes the survey and suggests future directions.

Research Paper

Efficient Greedy Algorithm for Multi-Processor Scheduling

Ruwanthini Siyambalapitiya* , M. Sandirigama**
* Department of Statistics and Computer Science, University of Peradeniya, Sri Lanka.
** Department of Computer Engineering, University of Peradeniya, Sri Lanka.
Ruwanthini Siyambalapitiya and M. Sandirigama (2010). Efficient Greedy Algorithm for Multi-Processor Scheduling. i-manager’s Journal on Software Engineering, 4(4), 50-54. https://doi.org/10.26634/jse.4.4.1178

Abstract

In this study, we propose simple greedy algorithms for the multi-processor job scheduling problem. The given list of jobs is arranged according to processing time duration. The results of the proposed algorithms are compared with the first-come first-serve (FCFS) job scheduling approach and shown to be superior.
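The abstract does not specify the ordering direction or assignment rule; the sketch below assumes the classic longest-processing-time-first greedy (sort jobs by decreasing duration, assign each to the currently least-loaded processor) and models FCFS as the same assignment in arrival order, purely as an illustration of the comparison.

```python
import heapq

def greedy_makespan(jobs, m):
    """Longest-processing-time-first greedy: sort by decreasing duration,
    then give each job to the least-loaded of the m processors."""
    loads = [0] * m
    heapq.heapify(loads)
    for d in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + d)
    return max(loads)

def fcfs_makespan(jobs, m):
    """FCFS baseline: jobs in arrival order, each to the least-loaded processor."""
    loads = [0] * m
    heapq.heapify(loads)
    for d in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + d)
    return max(loads)
```

On the arrival order `[3, 4, 5]` with 2 processors, FCFS yields a makespan of 8 (3 then 5 on one machine), while sorting first yields 7 (5+nothing vs. 4+3), illustrating why ordering by duration helps.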

Research Paper

Performance Analysis of Hamming Code for Fault Tolerant 8-bit data bus in VDSM technology

Sathish A* , M. Chennakesavulu**, M. Madhavi Latha***, K. Lal Kishore****
* Associate Professor, Department of ECE, RGMCET, Nandyal, A.P.
**Assistant Professor, Department of ECE, RGMCET, Nandyal, A.P.
*** Professor, Department of ECE, J.N.T. University, Hyderabad, A.P.
**** Professor and Rector, J.N.T. University, Hyderabad, A.P.
Sathish A, M. Chennakesavulu, M. Madhavi Latha and K. Lal Kishore (2010). Performance Analysis of Hamming Code for Fault Tolerant 8-bit data bus in VDSM technology. i-manager’s Journal on Software Engineering, 4(4), 55-60. https://doi.org/10.26634/jse.4.4.1180

Abstract

In Very Deep-Submicron (VDSM) systems, the scaling of ULSI ICs has increased the sensitivity of CMOS technology to various noise mechanisms such as power supply noise, crosstalk noise, and leakage noise. In VDSM technology the distance between data bus lines is reduced, so the coupling capacitance becomes the dominating factor. The coupling capacitance (CC) arises between long parallel wires, while the load capacitance (CL) is the wire-to-substrate capacitance. Unfortunately, in VDSM systems the coupling capacitance is several times larger in magnitude than the load capacitance. It causes logical malfunction, delay faults, and increased power consumption on long on-chip data buses. An important effect of the coupling capacitance is crosstalk, which depends mainly on several factors: drive strength, wire length and spacing, edge rate, and propagation duration.

Crosstalk noise arises from the coupling capacitance, and the resulting faults may corrupt data on the bus. The severity of this problem depends on the fault duration. To avoid this condition and to guarantee signal integrity for on-chip communication, a fault-tolerant bus can be adopted. This can be achieved by implementing Error-Correcting Codes (ECCs), which provide on-line correction and do not require data retransmission.

The 8-bit data bus was implemented in 1200nm, 180nm, 120nm, 90nm and 65nm technologies, and the simulation results show that crosstalk increases as the technology scales down. For reliable transmission of the data, an ECC stage is placed on the data bus. We employed a Hamming code as the ECC for the 8-bit fault-tolerant data bus, again implemented in 1200nm, 180nm, 120nm, 90nm and 65nm technologies. The simulation results show that the average power varies from 48.054mW to 0.235mW, and the maximum delay varies from 3.437ns to 0.092ns, respectively.
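A Hamming code protecting an 8-bit bus needs 4 parity bits, giving a 12-bit codeword that corrects any single flipped line. The paper's hardware encoder/decoder is not described in the abstract; the bit-level sketch below shows the standard (12, 8) construction with parity bits at positions 1, 2, 4 and 8, as a software reference model only.

```python
def hamming_encode(data_bits):
    """Encode 8 data bits into a 12-bit Hamming codeword (parity at 1, 2, 4, 8)."""
    assert len(data_bits) == 8
    code = [0] * 13                          # 1-indexed; position 0 unused
    j = 0
    for i in range(1, 13):
        if i not in (1, 2, 4, 8):            # data goes in non-power-of-2 slots
            code[i] = data_bits[j]
            j += 1
    for p in (1, 2, 4, 8):                   # parity over positions with bit p set
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    return code[1:]

def hamming_decode(code12):
    """Correct up to one flipped bit and return the 8 data bits."""
    code = [0] + list(code12)
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 13) if i & p) % 2:
            syndrome += p
    if syndrome:                             # syndrome equals the bad bit's position
        code[syndrome] ^= 1
    return [code[i] for i in range(1, 13) if i not in (1, 2, 4, 8)]
```

Because the syndrome directly names the faulty position, correction is a single XOR, which is why the hardware version adds little delay relative to retransmission-based schemes.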

Research Paper

To Cope with Misbehavior in Mobile Ad-Hoc Networks and Performance Analysis with DSR Protocol

V. Sumalatha* , Prasanthi**
* Assistant Professor, JNTUACE, Anantapur.
** Associate Professor, JNTUACE, Anantapur.
V. Sumalatha and Prasanthi (2010). To Cope with Misbehavior in Mobile Ad-Hoc Networks and Performance Analysis with DSR Protocol. i-manager’s Journal on Software Engineering, 4(4), 61-65. https://doi.org/10.26634/jse.4.4.1182

Abstract

Ad-hoc wireless networks have emerged as one of the key growth areas for wireless networking and computing technology. One of the major factors affecting ad-hoc communication is the misbehavior of nodes. Node misbehavior due to selfishness, maliciousness or faults can significantly degrade the performance of mobile ad-hoc networks. Most routing protocols in wireless ad-hoc networks, such as DSR, fail to detect misbehavior and assume nodes are trustworthy and cooperative.

The Dynamic Source Routing (DSR) protocol is modified to cope with misbehavior. It enables nodes to detect misbehavior by first-hand observation and by using second-hand information provided by other nodes. The view a node has of the behavior of another node is captured in a reputation system, which is used to classify nodes as misbehaving or normal. Once a misbehaving node is detected, it is isolated from the network. Reputation systems can, however, be tricked by the spread of false reputation ratings, be it false accusations or false praise. To solve this problem, a fully distributed reputation system is proposed that can cope with false information and effectively use second-hand information in a safe way. The approach is based on a modified Bayesian estimation and classification procedure for the isolation of malicious and selfish nodes in a given network.
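Bayesian reputation systems of this kind are commonly built on a Beta distribution: one counter accumulates observed misbehavior, the other cooperation, with old evidence faded and second-hand reports accepted only if they pass a deviation test. The sketch below follows that general pattern; the fading factor, deviation threshold, and classification tolerance are illustrative assumptions, not the paper's parameters.

```python
class Reputation:
    """Beta-distribution reputation record for one neighbor:
    alpha counts misbehavior evidence, beta counts cooperation evidence."""

    def __init__(self, fade=0.9):
        self.alpha, self.beta, self.fade = 1.0, 1.0, fade  # uniform prior Beta(1,1)

    def first_hand(self, misbehaved):
        # discount the past, then add the new first-hand observation
        self.alpha = self.fade * self.alpha + (1.0 if misbehaved else 0.0)
        self.beta = self.fade * self.beta + (0.0 if misbehaved else 1.0)

    def second_hand(self, other_alpha, other_beta, weight=0.1, threshold=0.5):
        # deviation test: merge a report only if it roughly agrees with our view,
        # which blunts false accusations and false praise
        other_mean = other_alpha / (other_alpha + other_beta)
        if abs(other_mean - self.misbehavior_prob()) <= threshold:
            self.alpha += weight * other_alpha
            self.beta += weight * other_beta

    def misbehavior_prob(self):
        return self.alpha / (self.alpha + self.beta)

    def is_misbehaving(self, tolerance=0.75):
        return self.misbehavior_prob() > tolerance
```

After repeated first-hand observations the posterior mean converges toward the node's true misbehavior rate, so a consistently dropping node crosses the tolerance and is isolated, while a cooperative node stays well below it.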

In this paper, tests are performed on a network containing both normal and misbehaving nodes; the delay plots of the original DSR and the modified DSR are compared and the performance is analyzed. The proposed scheme is implemented and verified in MATLAB.