Design and Evaluation of Parallel Processing Techniques for 3D Liver Segmentation and Volume Rendering
Ensuring Software Quality in Engineering Environments
New 3D Face Matching Technique for an Automatic 3D Model Based Face Recognition System
Algorithmic Cost Modeling: Statistical Software Engineering Approach
Prevention of DDoS and SQL Injection Attack By Prepared Statement and IP Blocking
The TCP protocol is used by the majority of network applications on the Internet. TCP performance is strongly influenced by its congestion control algorithms, which limit the amount of transmitted traffic based on the estimated network capacity and utilization. Because the freely available Linux operating system has gained popularity, especially in network servers, its TCP implementation affects many of the network interactions carried out today. This study introduces and analyses a class of non-linear congestion control algorithms called Exponential congestion control algorithms. These algorithms provide additive increase using an exponential of the inverse of the current window size, and multiplicative decrease using an exponential of the current window size. They are further parameterized by α and β. The simulation results are compared with those of TCP variants such as TCP, TCP/Reno, TCP/Sack1, TCP/Fack, TCP/Vegas and TCP-EXPO. The comparison shows that the Exponential congestion control algorithm performs better in terms of throughput.
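As a rough illustration of how such a window-update family can be compared for throughput, the sketch below implements a toy simulation loop with pluggable increase and decrease rules in Python. The exponential increase shown is only one possible reading of the description above, and the decrease is a plain multiplicative placeholder; neither is the paper's exact formulation, and the parameter values are illustrative assumptions.

```python
import math

def expo_increase(w, alpha=1.0):
    # Assumed reading of "additive increase using an exponential of the
    # inverse of the current window size": w <- w + alpha * e^(1/w).
    return w + alpha * math.exp(1.0 / w)

def mult_decrease(w, beta=0.5):
    # Placeholder: plain multiplicative decrease by beta. The paper's
    # exponential-of-window decrease rule is not spelled out in the
    # abstract and would be substituted here.
    return max(1.0, (1.0 - beta) * w)

def simulate(increase_fn, decrease_fn, capacity=100.0, rtts=1000):
    """Toy single-flow simulation: a congestion event occurs whenever the
    window exceeds the path capacity (in packets per RTT)."""
    w, delivered = 1.0, 0.0
    for _ in range(rtts):
        if w > capacity:           # congestion: back off
            w = decrease_fn(w)
        else:                      # successful round trip
            delivered += w
            w = increase_fn(w)
    return delivered / rtts        # average throughput in packets per RTT

if __name__ == "__main__":
    print("avg throughput:", simulate(expo_increase, mult_decrease))
```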
Intelligent electronic systems provide several facilities such as capturing, storing and communicating a wide range of sensitive and personal data. Security is emerging as a critical concern that must be addressed in order to enable several current and future applications. This paper presents the design and development aspects of a wireless hub and fingerprint recognition system for authentication. Since a fingerprint pattern is unique, the information obtained from the system provides highly reliable security.
In recent years, many computer system failures have been caused by software faults introduced during the software development process. This is an inevitable problem, since a software system installed in a computer system is an intellectual product consisting of documents and source programs developed by human activities. Total Quality Management (TQM) is therefore considered to be one of the key technologies for producing higher-quality software products. Generally, a software failure caused by software faults latent in the system cannot occur except on certain special occasions, namely when a set of special data is put into the system under a special condition, i.e. when the program path including the software faults is executed. The quality characteristic of software consistency is that computer systems can continue to operate regularly without the occurrence of failures on software systems. In this paper, we discuss a quantitative technique for software quality/consistency measurement and assessment, as one of the key software consistency technologies, namely a so-called software consistency model and its applications.
A Natural Language Understanding interface allows the user to interact with the computer in a flexible and friendly manner. This paper describes a neural network model of a syntactic parser for a natural language understanding interface. This intelligent parser performs parsing in two steps: parts-of-speech tagging is carried out by a Back Propagation Network that takes an English word as input and generates as output the associated parts-of-speech tags, and parse structure generation is done by a similar intelligent network that uses the output of the parts-of-speech tagging network and grammar rules. The proposed Back Propagation Network uses a genetic algorithm for weight determination. Being a robust optimization technique, the Genetic Algorithm (GA) outperforms the gradient-based conventional training algorithm, giving fairly accurate solutions quickly and in fewer iterations.
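The sketch below illustrates, under stated assumptions, how a genetic algorithm can determine the weights of a small feed-forward network in place of gradient-based training. The network size, toy data, fitness function and GA parameters are illustrative assumptions; the paper's actual tagging network, tag set and word encoding are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions): 8 input features per word,
# 4 parts-of-speech tags, one hidden layer of 6 units.
N_IN, N_HID, N_OUT = 8, 6, 4
X = rng.random((40, N_IN))                       # dummy encoded words
y = rng.integers(0, N_OUT, size=40)              # dummy tag labels
N_W = N_IN * N_HID + N_HID * N_OUT               # total weight count

def forward(weights, x):
    """Feed-forward pass of the small tagging network."""
    w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return np.tanh(x @ w1) @ w2

def fitness(weights):
    """Negative classification error on the toy data (higher is better)."""
    preds = forward(weights, X).argmax(axis=1)
    return -np.mean(preds != y)

def evolve(pop_size=60, generations=200, mut_rate=0.1):
    """Evolve weight vectors by selection, crossover and mutation."""
    pop = rng.normal(size=(pop_size, N_W))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_W)            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(N_W) < mut_rate     # Gaussian mutation
            child[mask] += rng.normal(scale=0.3, size=mask.sum())
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best, -fitness(best)

if __name__ == "__main__":
    weights, error = evolve()
    print(f"training error of best individual: {error:.2f}")
```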
Deploying new applications and technologies in an enterprise is always a huge task, right from the technology officers down to the administrators. They have to balance productivity with costs and integrate the new with the old. RFID allows businesses to operate more efficiently, helps supply chain functions, enables just-in-time inventory control, allows the use of valuable information to boost revenue and cut costs, and makes way for better customer service. RFID also provides real-time status and visibility, resulting in reduced inventories, improved service levels, lessened loss and waste, and better safety and security. It is a non-line-of-sight identification technology which can be made to work in a non-intervention mode for faster data capture over long distances. This technology is revolutionizing the process of automatic identification of objects, and enterprises enjoy real-time supply chain visibility.
Today, complex, high-quality computer-based software is required to be built in very short time periods. This has motivated the use of off-the-shelf software components for rapid development. Component-based software engineering is a process that emphasizes the design and construction of computer-based systems using software components. The goal of component-based engineering is to increase productivity and quality and to reduce time to market in software development. Component-based software applications are expected to have high reliability as a result of deploying trusted components. In this paper, an approach is presented for system reliability assessment when the component reliability is known. It uses the probability of failure of each component and its usage ratio to find the system reliability. The reliability of a software component is a probability prediction for failure-free execution of the component based on its usage requirements. Component severity analysis is also done to support a software development organization in obtaining the best reliability improvements.
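As a minimal illustration of combining component failure probabilities with usage ratios, the sketch below uses a simple usage-weighted model in which the system's per-execution failure probability is the usage-weighted sum of the component failure probabilities. This is only one common way of performing such an assessment; the paper's exact model and its severity analysis are not reproduced here, and the component names and figures are hypothetical.

```python
def system_reliability(components):
    """Estimate system reliability from per-component data.

    `components` maps a component name to (probability_of_failure, usage_ratio).
    Usage ratios are assumed to sum to 1. The model is a simple usage-weighted
    one: the system's per-execution failure probability is the usage-weighted
    sum of component failure probabilities.
    """
    total_usage = sum(usage for _, usage in components.values())
    assert abs(total_usage - 1.0) < 1e-9, "usage ratios must sum to 1"
    failure_prob = sum(pof * usage for pof, usage in components.values())
    return 1.0 - failure_prob

if __name__ == "__main__":
    # Hypothetical components with (probability of failure, usage ratio).
    example = {
        "parser":    (0.010, 0.50),
        "database":  (0.002, 0.30),
        "reporting": (0.020, 0.20),
    }
    print(f"estimated system reliability: {system_reliability(example):.4f}")
```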
The goal of an ad hoc wireless network is to enable communication between any two wireless nodes in the network. Using intermediate nodes as forwarding agents enables communication between nodes that are beyond direct communication range. Ad hoc wireless networks are, however, more prone to security threats, and one problem that cannot be predicted in advance is the presence of misbehaving nodes. Ideally, all nodes in the network cooperate, but a node that has a strong motivation to deny packet forwarding to others, while at the same time using their services to deliver its own data, leads to further complications and problems in ad hoc wireless networks. Such nodes are sometimes called selfish nodes. In this paper we propose a Path Management Protocol on Ad hoc Wireless Networks (PMP-ANT), designed based on the MARI topology, to cope with misbehaving nodes. In this approach we use the PMP-ANT protocol to detect misbehaving nodes and to isolate them from the network, so that misbehavior does not pay off but results in denied service and thus cannot continue. PMP-ANT detects misbehaving nodes by means of direct observation and second-hand information about several types of misbehavior, thus allowing nodes to route around these misbehaving nodes and to isolate them from the network.
In the proposed approach, each node has a monitor for observations, reputation records for first-hand information, and trust records for second-hand information about the routing and forwarding behavior of other nodes. The trust records are used to determine the trust value given to received second-hand information, and a path manager allows nodes to adapt their behavior according to reputation and to take action against misbehaving nodes.
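A hedged sketch of this bookkeeping is shown below: a node keeps first-hand counts from its own monitor, weights second-hand reports by the trust it places in the reporter, and flags a neighbour as misbehaving once its combined rating falls below a threshold. The class name, weights and threshold are illustrative assumptions, not PMP-ANT's actual data structures.

```python
from collections import defaultdict

class ReputationManager:
    """Per-node bookkeeping of first-hand and second-hand behaviour reports.

    Illustrative assumptions: behaviour is summarised as (good, bad) counts,
    second-hand reports are discounted by the trust placed in the reporter,
    and a neighbour is declared misbehaving below `threshold`.
    """

    def __init__(self, threshold=0.5, second_hand_weight=0.3):
        self.first_hand = defaultdict(lambda: [0.0, 0.0])   # node -> [good, bad]
        self.second_hand = defaultdict(lambda: [0.0, 0.0])
        self.trust = defaultdict(lambda: 0.5)                # trust in reporters
        self.threshold = threshold
        self.w2 = second_hand_weight

    def observe(self, node, forwarded):
        """Record a direct observation from the monitor component."""
        self.first_hand[node][0 if forwarded else 1] += 1.0

    def report(self, reporter, node, forwarded):
        """Record a second-hand report, discounted by trust in the reporter."""
        self.second_hand[node][0 if forwarded else 1] += self.trust[reporter]

    def rating(self, node):
        """Combined rating in [0, 1]; higher means better behaviour."""
        g = self.first_hand[node][0] + self.w2 * self.second_hand[node][0]
        b = self.first_hand[node][1] + self.w2 * self.second_hand[node][1]
        return g / (g + b) if (g + b) > 0 else 1.0

    def is_misbehaving(self, node):
        """A path manager would use this to route around and isolate the node."""
        return self.rating(node) < self.threshold

if __name__ == "__main__":
    rm = ReputationManager()
    for _ in range(8):
        rm.observe("node_B", forwarded=False)    # node_B keeps dropping packets
    rm.report(reporter="node_C", node="node_B", forwarded=False)
    print("node_B misbehaving:", rm.is_misbehaving("node_B"))
```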
In this work, a new key exchange protocol for IP-based mobile networks is introduced. This protocol is called KEPSOM (Key Exchange Protocol Supporting Mobility and Multihoming). The goals of designing KEPSOM are to develop a key exchange protocol characterized by its secrecy, simplicity, efficiency and resistance, and by its ability to support mobility and multihoming. The protocol requires only two roundtrips. The design limits the private information revealed by the initiator. An old security association (SA) can be replaced with a new one by rekeying, without the need to restart the protocol with a new session. Likewise, a change in IP address due to mobility or multihoming does not require restarting the protocol with a new SA session. The proposed protocol can also support key exchange in hybrid wireless networks, in which a mobile node can operate in both ad hoc and base-station-oriented wireless network environments using different transmission modes. KEPSOM has been analyzed and proven secure. Several tests have been done to measure and evaluate the performance of the protocol. In these tests, it is found that the time required for rekeying is about 27% of the total time required for exchanging the keys, and the time required to detect and update a change in IP address, which may occur due to mobility or multihoming, is less than 10% of the total time required to establish a new SA session.
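The sketch below models only the security-association lifecycle described above: establishing an SA, rekeying within the same SA session, and updating the peer's IP address without renegotiation. The secrets, key derivation and field names are placeholders; KEPSOM's actual two-roundtrip cryptographic exchange is not reproduced here.

```python
import os
import hashlib

class SecurityAssociation:
    """Toy model of the SA lifecycle: establish, rekey, and address update.

    The random secrets and hash-based key derivation are placeholders, not
    KEPSOM's actual cryptography; only the lifecycle structure is shown.
    """

    def __init__(self, local_addr, peer_addr):
        self.local_addr = local_addr
        self.peer_addr = peer_addr
        self.session_id = os.urandom(8).hex()     # fixed for the SA's lifetime
        self.generation = 0                       # incremented on each rekey
        self.key = None

    def establish(self):
        """Stand-in for the initial (two-roundtrip) key exchange."""
        self._derive_new_key()

    def rekey(self):
        """Replace the old keys without starting a new SA session."""
        self.generation += 1
        self._derive_new_key()

    def update_peer_address(self, new_addr):
        """Mobility/multihoming: change the peer address, keep session and keys."""
        self.peer_addr = new_addr

    def _derive_new_key(self):
        secret = os.urandom(32)                   # placeholder shared secret
        self.key = hashlib.sha256(
            secret + self.session_id.encode() + str(self.generation).encode()
        ).hexdigest()

if __name__ == "__main__":
    sa = SecurityAssociation("10.0.0.1", "192.0.2.7")
    sa.establish()
    sa.rekey()                                    # cheaper than a new session
    sa.update_peer_address("198.51.100.9")        # node moved; SA survives
    print(sa.session_id, sa.generation, sa.peer_addr)
```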
A medium access control (MAC) protocol is developed for wireless multimedia networks based on frequency division duplex (FDD) and wideband code division multiple access (WCDMA). This protocol divides the communication channel into three distinct channels, namely a random access channel (RACH) for control packet transmission, a dedicated channel (DCH) for point-to-point data transmission, and a broadcast control channel (BCCH) for system information transmission. In this protocol, a minimum-power allocation algorithm controls the received power levels of simultaneously transmitting users such that the heterogeneous bit error rates (BERs) of multimedia traffic are guaranteed. Building on the minimum-power allocation, a multimedia wideband CDMA generalized processor sharing (GPS) scheduling scheme is proposed. It provides fair queuing to multimedia traffic with different QoS constraints. It also takes into account the limited number of code channels for each user and the variable system capacity due to the interference experienced by users in a CDMA network. The admission of real-time connections is determined by a new effective-bandwidth connection admission control (CAC) algorithm, in which the minimum-power allocation is also considered. Simulation results show that the new MAC protocol guarantees the QoS requirements of both real-time and non-real-time traffic in an FDD wideband CDMA network.
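To make the minimum-power idea concrete, the sketch below solves the standard SIR-target power-control equations: each user i needs received power P_i such that P_i divided by (the other users' received powers plus noise) meets its target gamma_i, which has the closed form P_i = Phi_i * N / (1 - sum_j Phi_j) with Phi_i = gamma_i / (1 + gamma_i). This is the generic formulation rather than the paper's exact algorithm; the code-channel limits, GPS weights and example SIR targets are assumptions.

```python
def min_power_allocation(sir_targets, noise_power):
    """Minimum received powers meeting per-user SIR targets in a CDMA cell.

    Solves P_i / (sum_{j != i} P_j + N) = gamma_i for all users, giving
    P_i = Phi_i * N / (1 - sum_j Phi_j) with Phi_i = gamma_i / (1 + gamma_i).
    Returns None when the targets are infeasible (sum_j Phi_j >= 1).
    """
    phis = [g / (1.0 + g) for g in sir_targets]
    load = sum(phis)
    if load >= 1.0:                  # admission control would reject this set
        return None
    return [phi * noise_power / (1.0 - load) for phi in phis]

if __name__ == "__main__":
    # Hypothetical SIR targets for voice, video and data users (linear, not dB).
    targets = [0.02, 0.10, 0.05]
    powers = min_power_allocation(targets, noise_power=1e-13)
    print(powers)
```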
Vector quantization (VQ) plays an important role in data compression and has been successfully used in image compression. In VQ, minimization of the Mean Square Error (MSE) between codebook vectors and training vectors is a non-linear problem. Traditional LBG-type algorithms used for designing vector quantizer codebooks converge to a local minimum, which depends on the initial codebook. Genetic algorithms (GAs) are a powerful set of global search techniques that have been shown to produce very good results on a wide class of problems. GAs are capable of exploring and exploiting promising regions of the search space. A genetic algorithm can be applied to generate a better codebook that approaches the global optimal solution of vector quantization. In this paper we present a new approach to image compression based on a genetic algorithm for vector quantizer design. We also propose a composite crossover operator for generating a codebook. The effectiveness of the proposed crossover operator on codebook design for genetic-algorithm-based vector quantization is analyzed. Simulations indicate that vector quantization based on the genetic algorithm performs better than the conventional LBG algorithm in designing the optimal codebook for a vector quantizer. The results also indicate that the performance of the codebook is substantially improved by using the proposed crossover operator. The Peak Signal to Noise Ratio (PSNR) is used as an objective measure of reconstructed image quality.
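A hedged sketch of the overall idea follows: a population of codebooks is evolved with fitness equal to the negative MSE of the training vectors against their nearest codevectors, and PSNR is computed from the final distortion. A generic one-point crossover stands in for the paper's composite crossover operator, whose definition is not given in the abstract; the population size, mutation rate and dummy 4x4 blocks are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mse(codebook, training):
    """Mean squared error of training vectors quantized to the nearest codevector."""
    d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()

def psnr(distortion, peak=255.0):
    """Peak signal-to-noise ratio from per-sample MSE."""
    return 10.0 * np.log10(peak ** 2 / distortion)

def ga_codebook(training, codebook_size=16, pop_size=30,
                generations=100, mut_rate=0.05):
    """Evolve a VQ codebook with a GA; one-point crossover is a generic
    stand-in for the paper's composite crossover operator."""
    dim = training.shape[1]
    # Initialise each codebook from randomly chosen training vectors.
    pop = np.stack([training[rng.choice(len(training), codebook_size)]
                    for _ in range(pop_size)])
    for _ in range(generations):
        scores = np.array([-mse(cb, training) for cb in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            flat_a, flat_b = a.ravel(), b.ravel()
            cut = rng.integers(1, flat_a.size)          # one-point crossover
            child = np.concatenate([flat_a[:cut], flat_b[cut:]])
            mask = rng.random(child.size) < mut_rate    # random perturbation
            child[mask] += rng.normal(scale=5.0, size=mask.sum())
            children.append(child.reshape(codebook_size, dim))
        pop = np.concatenate([parents, np.stack(children)])
    best = min(pop, key=lambda cb: mse(cb, training))
    return best, psnr(mse(best, training))

if __name__ == "__main__":
    # Dummy 4x4-pixel blocks standing in for image training vectors.
    blocks = rng.integers(0, 256, size=(500, 16)).astype(float)
    codebook, quality = ga_codebook(blocks)
    print(f"codebook PSNR on training blocks: {quality:.2f} dB")
```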