Design and Evaluation of Parallel Processing Techniques for 3D Liver Segmentation and Volume Rendering
Ensuring Software Quality in Engineering Environments
New 3D Face Matching Technique for an Automatic 3D Model Based Face Recognition System
Algorithmic Cost Modeling: Statistical Software Engineering Approach
Prevention of DDoS and SQL Injection Attack By Prepared Statement and IP Blocking
This paper describes the Linux Random Number Generator (LRNG), a random number generator built into the operating system, which plays a crucial role in almost any cryptographic protocol in Linux. The Linux kernel is an open source project developed over the last 15 years by a group of developers led by Linus Torvalds. The kernel is the common element in all Linux distributions. Although the LRNG is part of an open source project and its source code is less than 2500 lines, the algorithm is not documented, and the hundreds of code patches applied during the last five years make it even harder to follow. Hence, I have used both static analysis of the Linux kernel source and dynamic tracing to reverse engineer the generator algorithm. The LRNG's design goal is to output only non-predictable bits, which should originate from non-predictable events. To enforce this, the pool holds a counter of the non-predictable bits it contains, calculated as a function of the frequencies of the different events. As part of this analysis, I implemented a user-mode simulator of the LRNG.
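As a rough illustration of how such an entropy counter might behave, the following minimal Python sketch credits entropy to a pool from the timing deltas between incoming events and debits it on reads. The crediting heuristic, cap and constants are hypothetical assumptions for illustration only, not the actual Linux kernel algorithm or the paper's simulator.

```python
# Hypothetical sketch of entropy crediting in an LRNG-style pool.
# The heuristic (crediting roughly log2 of the timing delta) and all
# constants are illustrative assumptions, not the real kernel code.

class EntropyPool:
    def __init__(self, size_bits=4096):
        self.size_bits = size_bits
        self.entropy_count = 0      # estimated non-predictable bits in the pool
        self.last_time = None

    def add_event(self, timestamp_us):
        """Mix an event into the pool and credit entropy from its timing."""
        delta = 0 if self.last_time is None else abs(timestamp_us - self.last_time)
        self.last_time = timestamp_us
        # Credit roughly log2(delta) bits, capped conservatively (assumption).
        credit = min(11, delta.bit_length())
        self.entropy_count = min(self.size_bits, self.entropy_count + credit)

    def extract(self, nbits):
        """Debit entropy when random bits are read out of the pool."""
        granted = min(nbits, self.entropy_count)
        self.entropy_count -= granted
        return granted


pool = EntropyPool()
for t in (1000, 1180, 1190, 2400):   # example interrupt timestamps (microseconds)
    pool.add_event(t)
print("estimated entropy (bits):", pool.entropy_count)
print("granted on read:", pool.extract(128))
```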
This paper analyses the distribution of sensor nodes for efficient sensing, in order to identify the routing protocol and congestion control algorithm best suited to wireless sensor networks. The analysis is performed by simulating wireless sensor networks with two routing protocols used in mobile ad hoc networks, AODV and DSDV. The transport-layer protocol is assumed to be TCP, and four variants of TCP congestion control are considered: TCP/Tahoe (referred to simply as TCP), TCP/New Reno, TCP/Vegas and TCP/Fast. The network is simulated using ns2, the event-driven network simulator widely used by network researchers. Five different node distribution patterns are assumed, and their efficiency is compared by evaluating the throughput in each network topology with a single mobile source node, a fixed base station node and sensor nodes arranged in the given distribution pattern. The results show that sensor nodes implemented with the AODV routing protocol and the TCP/Vegas congestion control algorithm perform most efficiently.
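Throughput figures of this kind are usually obtained by post-processing the ns2 trace file. The hedged Python sketch below assumes the classic space-separated wired-trace layout (event, time, from-node, to-node, packet type, packet size, ...); wireless traces use a different layout, so the field indices and the example file name are assumptions that would need adjusting for a given simulation script.

```python
# Hedged sketch: compute throughput at a sink node from an ns2 trace file.
# Assumes the classic wired-trace format:
#   event time from_node to_node pkt_type pkt_size ...
# The field positions are assumptions and may differ for wireless traces.

def throughput_bps(trace_path, sink_node, pkt_type="tcp"):
    received_bytes = 0
    first_t, last_t = None, None
    with open(trace_path) as trace:
        for line in trace:
            fields = line.split()
            if len(fields) < 6:
                continue
            event, time, _, to_node, ptype, size = fields[:6]
            if event == "r" and to_node == sink_node and ptype == pkt_type:
                t = float(time)
                first_t = t if first_t is None else first_t
                last_t = t
                received_bytes += int(size)
    if first_t is None or last_t == first_t:
        return 0.0
    return received_bytes * 8 / (last_t - first_t)

# Example (placeholder trace file and node id):
# print(throughput_bps("aodv_vegas.tr", sink_node="0"))
```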
Ontology development is widely used in domain-specific intelligence and knowledge management techniques. A procedural approach to modelling ontologies is badly needed, since it is an important step in the development of knowledge-based systems. At present the construction of ontologies is very much an art rather than a science. This state of affairs needs to change, and it will change only through an understanding of how to go about constructing ontologies. This paper therefore suggests a Nonlinear Spiral model for ontology design. The proposed approach combines software engineering methodology tools with psychoanalytic probabilistic conceptual specialization. The central significance of this approach is that it may prompt us to rethink the use of ontologies for resolving many risk analysis problems in semantic systems thinking.
Image recognition and identification play an important role in industrial, remote sensing and military applications. The scope of this paper is to present a novel approach to image identification and labeling using the combination of the Wavenet (WN) and the Inverse Discrete Wavelet Transform (IDWT).
The novelty of the approach is as follows: the image is divided into 8x8 blocks, and a one-dimensional Wavenet (WN) transform is computed for each block to obtain a vector of 12 coefficients corresponding to the dilation, translation and weight (four coefficients each). Finally, the Inverse Discrete Wavelet Transform (IDWT) of this result is computed to yield a vector that is used as a feature for direct image identification and labeling via a distance measure. This method achieved a perfect result of 100% on a database of 100 different images. The algorithm is implemented in the MATLAB programming language, version 7.
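Although the paper's implementation is in MATLAB, the block-based feature-matching pipeline it describes can be sketched in Python as below. The Wavenet fitting step is replaced here by a hypothetical placeholder that merely returns 12 values per block, and the use of PyWavelets' idwt plus Euclidean nearest-neighbour matching are illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch of the block-based identification pipeline described above.
# The Wavenet (WN) fit is replaced by a hypothetical placeholder returning
# 12 values per 8x8 block (standing in for 4 dilation, 4 translation and
# 4 weight coefficients); pywt.idwt and Euclidean matching are assumptions.

import numpy as np
import pywt


def wavenet_coefficients(block):
    """Placeholder for the WN fit: returns a 12-element coefficient vector."""
    flat = block.astype(float).ravel()
    # Hypothetical stand-in grouped as 4+4+4 values.
    dil = [flat.mean(), flat.std(), flat.min(), flat.max()]
    trans = list(flat[:4])
    weight = list(flat[-4:])
    return np.array(dil + trans + weight)


def image_feature(image):
    """Concatenate per-block IDWT vectors into a single feature vector."""
    h, w = image.shape
    feats = []
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            coeffs = wavenet_coefficients(image[r:r + 8, c:c + 8])
            # Treat the first 6 values as approximation and the last 6 as
            # detail coefficients and invert a single-level DWT (assumption).
            feats.append(pywt.idwt(coeffs[:6], coeffs[6:], "db1"))
    return np.concatenate(feats)


def identify(query, database):
    """Return the label of the database image closest to the query."""
    qf = image_feature(query)
    return min(database,
               key=lambda label: np.linalg.norm(qf - image_feature(database[label])))


# Example usage with random stand-in images of the same size:
db = {"img_a": np.random.rand(32, 32), "img_b": np.random.rand(32, 32)}
print(identify(db["img_a"], db))
```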
Quantum key distribution (QKD) is a method that uses properties of quantum mechanics to create a secret shared cryptographic key even if an eavesdropper has access to unlimited computational power. All QKD protocols require that the parties have access to an authentic channel; otherwise, QKD is vulnerable to man-in-the-middle attacks. This paper studies QKD from this point of view, emphasizing the necessity and sufficiency of using unconditionally secure authentication in QKD. In this work, a new technique for using unconditionally secure authentication is proposed for quantum cryptosystems. The technique is based on a hybrid of the normal application of authentication codes and the so-called “counter-based” authentication method, so as to achieve a better trade-off between security and efficiency (in terms of the required size of the initially shared secret data). Based on this strategy, an authenticated version of a typical QKD protocol (the well-known BB84 protocol) is described. Some advantages of our protocol in comparison with other proposals are also highlighted.
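For intuition, an unconditionally secure (Wegman-Carter style) authentication tag can be computed with a one-time polynomial hash over a prime field, masked by a one-time pad value. The Python sketch below is a simplified illustration with hypothetical parameters and key handling, not the exact construction proposed in the paper.

```python
# Minimal sketch of a Wegman-Carter style unconditionally secure MAC:
# a one-time polynomial hash over a prime field, masked by a one-time pad.
# The prime, chunking and counter usage are illustrative assumptions.

import secrets

P = 2**61 - 1   # Mersenne prime used as the field modulus (assumption)


def poly_mac(message: bytes, key_r: int, key_s: int) -> int:
    """Tag = (polynomial hash of the message evaluated at r) + s  (mod P)."""
    tag = 0
    for start in range(0, len(message), 7):               # 7-byte chunks < P
        chunk = int.from_bytes(message[start:start + 7], "big") + 1
        tag = (tag * key_r + chunk) % P
    return (tag + key_s) % P


# In a counter-based scheme, the pad value s would be derived per message
# from shared secret data and a message counter, so that each tag remains
# information-theoretically masked (simplified assumption).
r = secrets.randbelow(P)
s_counter_0 = secrets.randbelow(P)
msg = b"basis choices and sifted-key indices"
print("tag:", poly_mac(msg, r, s_counter_0))
```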
Ad hoc networks are a new wireless networking paradigm for mobile hosts. Ad hoc wireless networks have emerged as one of the key growth areas for wireless networking and computing technology. Unlike traditional mobile wireless networks, ad hoc networks do not rely on any fixed infrastructure. Instead, hosts rely on each other to keep the network connected. The nodes in ad hoc networks are battery operated and have limited energy resources, which is indeed a great limitation. Each node consumes a large amount of energy when transmitting or receiving packets.
In this paper we propose a new topology and its management scheme, called the MARI topology. The MARI topology, proposed for power management, is novel and aims to minimise the power consumed at each node of an ad hoc wireless network. The protocol groups the network into distinct subnetworks by selecting MARI nodes and gateways for efficient packet transmission between any pair of member nodes. The operational cycle at each node is classified into four distinct operations, i.e., transmit, receive, idle and sleep, in order to achieve efficient power management in an ad hoc wireless network. In this paper we concentrate on the throughput with respect to the packet size and the beacon period duration.
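As a back-of-the-envelope illustration of how the four-state operational cycle translates into per-node energy consumption, the sketch below multiplies hypothetical per-state power draws by the fraction of time spent in each state. The power figures and time fractions are placeholders, not measurements or parameters from the paper.

```python
# Hedged sketch: per-node energy estimate for the four operational states
# (transmit, receive, idle, sleep). The power figures (in watts) and the
# time fractions are hypothetical placeholders, not values from the paper.

STATE_POWER_W = {"transmit": 1.4, "receive": 1.0, "idle": 0.8, "sleep": 0.02}


def energy_joules(time_fraction, cycle_seconds):
    """Energy used over one operational cycle given time fractions per state."""
    assert abs(sum(time_fraction.values()) - 1.0) < 1e-9
    return sum(STATE_POWER_W[state] * frac * cycle_seconds
               for state, frac in time_fraction.items())


# Example: a member node that sleeps most of the beacon period versus a
# gateway node that must stay awake far longer (illustrative fractions).
member = {"transmit": 0.05, "receive": 0.10, "idle": 0.15, "sleep": 0.70}
gateway = {"transmit": 0.20, "receive": 0.30, "idle": 0.40, "sleep": 0.10}
print("member node: ", energy_joules(member, cycle_seconds=1.0), "J per second")
print("gateway node:", energy_joules(gateway, cycle_seconds=1.0), "J per second")
```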
Many applications maintain temporal and spatial features in their databases. These features cannot be treated like other attributes and need special attention. Temporal data mining has the capability to infer causal and temporal proximity relationships among different components of the data. In this work a model is developed that helps in measuring traffic data distributed over a wide area. The model assumes that the data follow an ordered sequence. The area is divided into a set of grid points, each identified by a set of coefficients. The traffic data at a particular location are measured, and the coefficient at that location is mapped to the measured traffic value; thus the coefficient at the measured location is calculated. This coefficient is then used to generate the coefficient values at the other grid points by means of the tridiagonal matrix algorithm. The procedure is repeated until the values cease to change for a unit time, and then again for different intervals of time. In this way traffic data are obtained over the wide area for different times and at different locations.
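The tridiagonal matrix algorithm mentioned above is commonly implemented as the Thomas algorithm. The sketch below is a generic, self-contained solver for a tridiagonal system of the kind that could propagate coefficient values across grid points; the example coefficients and right-hand side are arbitrary placeholders, not data from the paper.

```python
# Thomas algorithm: solves a tridiagonal system A x = d, where A has
# sub-diagonal a, diagonal b and super-diagonal c. Generic sketch only;
# the example system below is arbitrary.

def thomas(a, b, c, d):
    """a[0] and c[-1] are unused; all inputs are lists of length n."""
    n = len(d)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x


# Example: placeholder grid-point system with four unknown coefficients.
a = [0.0, -1.0, -1.0, -1.0]
b = [4.0, 4.0, 4.0, 4.0]
c = [-1.0, -1.0, -1.0, 0.0]
d = [5.0, 5.0, 5.0, 5.0]
print(thomas(a, b, c, d))
```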
In this work we design and implement a paging system for controllers which are based on Field Programmable Gate Arrays (FPGAs). A PicoBlaze embedded soft processor within the FPGA is used to monitor the controller and send SMS paging messages when necessary. The messages are transmitted through the GSM network by means of an external modem which is connected serially to the FPGA. The paging system is tested successfully on a speed controller of a servo motor.
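For reference, the SMS-sending step can be prototyped on a PC before being ported to the PicoBlaze firmware. The hedged Python/pySerial sketch below uses the standard GSM AT-command sequence for text-mode SMS (AT+CMGF, AT+CMGS); the serial port name, baud rate and destination number are placeholders, and the fixed delays stand in for proper response parsing.

```python
# Hedged sketch: sending an SMS through a serially connected GSM modem
# using standard AT commands in text mode. Port name, baud rate and the
# destination number are placeholders; error handling is minimal.

import time
import serial  # pySerial


def send_sms(port, number, text, baudrate=9600):
    with serial.Serial(port, baudrate, timeout=2) as modem:
        modem.write(b"AT\r")                      # check the modem responds
        time.sleep(0.5)
        modem.write(b"AT+CMGF=1\r")               # switch to SMS text mode
        time.sleep(0.5)
        modem.write(f'AT+CMGS="{number}"\r'.encode())
        time.sleep(0.5)
        modem.write(text.encode() + b"\x1a")      # Ctrl-Z terminates the message
        time.sleep(2)
        return modem.read_all()                   # raw modem response, if any


# Example (placeholder port and number):
# print(send_sms("/dev/ttyS0", "+15551234567", "Servo speed out of range"))
```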
In industrial systems, there are a number of devices with low processing capacity called field devices. Currently, these devices (sensors and actuators) are connected by dedicated wires, often of considerable length. The idea of replacing bundles of cables with a single fieldbus linking a set of intelligent field devices seems to be a valid solution to the problems presented by point-to-point connections.
Fieldbus systems usually contain a variety of different nodes, sensors and local node applications. Differences stem from varying hardware architectures, vendors and sensor types. In order to guarantee their interoperability, we have to introduce abstraction mechanisms that hide these inherent differences from the other nodes in the network [1].
In a fieldbus, the cyclic time-critical traffic generated by the exchange of measurement and control data, normally used by the system, is accompanied by another kind of traffic (acyclic traffic). Scheduling in a fieldbus is characterized by the presence of a processing unit acting as a Link Active Scheduler (LAS), whose task is to manage the bandwidth and distribute it among all the producing devices.
In this paper, recent evolutions of the initial protocol definition concerning the transmission of synchronous and asynchronous messages are presented. The performance of the Link Active Scheduler (LAS) is also discussed; the LAS uses a token mechanism to grant the right to transmit to the stations capable of sharing the fieldbus communication capacity, called Link Masters (LMs). In the present work, a general semi-Markov model is developed for a fieldbus system with priority levels associated with the requests. The paper presents a model for handling the on-line scheduling of acyclic processes. Based on the priority scheme, the simulation results of the model provide an optimal allocation method for acyclic traffic.
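To illustrate the general idea of on-line scheduling of acyclic requests by priority within the bandwidth left over by the cyclic schedule, the sketch below dispatches queued requests from a priority queue whenever an idle window is available. The window lengths, priorities and transmission times are hypothetical, and the strict-priority, non-preemptive policy shown is a simplifying assumption rather than the paper's semi-Markov model.

```python
# Hedged sketch: on-line dispatch of acyclic requests by priority in the
# idle windows left between cyclic transfers. Strict priority is assumed:
# a high-priority request that does not fit blocks lower-priority ones.

import heapq


def schedule_acyclic(requests, idle_windows):
    """requests: (priority, duration, name); lower priority value = more urgent.
    idle_windows: available window lengths between cyclic traffic."""
    queue = []
    for prio, duration, name in requests:
        heapq.heappush(queue, (prio, duration, name))

    served, deferred = [], []
    for window in idle_windows:
        remaining = window
        # Serve in priority order while the head of the queue still fits.
        while queue and queue[0][1] <= remaining:
            prio, duration, name = heapq.heappop(queue)
            served.append(name)
            remaining -= duration
    while queue:
        deferred.append(heapq.heappop(queue)[2])
    return served, deferred


reqs = [(2, 3.0, "diagnostics"), (1, 1.0, "alarm"), (3, 2.0, "config-read")]
print(schedule_acyclic(reqs, idle_windows=[2.5, 4.0]))
```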
The Internet is evolving into a global commercial infrastructure, growing physically in size and reaching even the most remote places on earth. Day by day, the services provided by the Internet and its users are increasing, so that different kinds of data (e.g. video, audio and text) must be handled by the network, thereby increasing the data traffic in the network. The unpredictable and bursty nature of computer traffic not only prevents the network from giving quality assurances, but also creates the problem of congestion. Queue management at the router plays an important role in controlling congestion. Many algorithms have already been proposed (e.g. Drop Tail and Random Early Detection (RED)). Every queue management scheme has its pros and cons, and the choice of algorithm depends on user requirements and demands.
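For context, RED's drop decision follows a simple rule on the exponentially weighted average queue length: no drops below a minimum threshold, forced drops above a maximum threshold, and a linearly increasing drop probability in between. The sketch below is a textbook illustration with placeholder thresholds and weights, not the router configuration used in this work.

```python
# Textbook sketch of RED (Random Early Detection) drop probability.
# Thresholds, weight and max_p values are illustrative placeholders.

import random


class Red:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0

    def on_arrival(self, queue_len):
        """Update the average queue size and decide whether to drop."""
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return False                       # accept: queue is short
        if self.avg >= self.max_th:
            return True                        # drop: queue persistently long
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p             # probabilistic early drop


# The weight is set unrealistically high here so the short example reacts.
red = Red(weight=0.2)
drops = sum(red.on_arrival(q) for q in range(0, 30))
print("drops during a ramping queue:", drops)
```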
The proposed work analyzes the performance of different queue management schemes. Core Stateless Fair Queuing (CSFQ) guarantees fair bandwidth allocation with low cost and low complexity. To evaluate the CSFQ scheme, simulations are used in which parameters such as packet drop and delay are compared with those of existing schemes such as RED. This work also analyzes the performance of packet marking schemes in differentiated services; these marking schemes provide differentiated service to different classes of data. The performance of two marking schemes, SRTCM and TRTCM, is evaluated.
All simulations are performed in NS-2 on the Linux platform. The performance of the schemes is evaluated by varying parameters such as link bandwidth, link delay and CBR rate.
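A single-rate three-color marker (SRTCM) of the kind compared here can be sketched as two token buckets, committed and excess, refilled at the committed information rate, with each packet colored green, yellow or red depending on which bucket can cover it. The Python sketch below shows the color-blind case; the rate and burst sizes are placeholders rather than the values used in the simulations.

```python
# Hedged sketch of a single-rate three-color marker (SRTCM, color-blind
# mode): committed (C) and excess (E) token buckets refilled at the
# committed information rate. Rates and burst sizes are placeholders.

class SrTCM:
    def __init__(self, cir_bps, cbs_bytes, ebs_bytes):
        self.cir = cir_bps / 8.0          # committed rate in bytes/second
        self.cbs, self.ebs = cbs_bytes, ebs_bytes
        self.tc, self.te = cbs_bytes, ebs_bytes
        self.last = 0.0

    def mark(self, now, pkt_bytes):
        # Refill the committed bucket first; overflow tops up the excess bucket.
        new_tc = self.tc + (now - self.last) * self.cir
        self.last = now
        overflow = max(0.0, new_tc - self.cbs)
        self.tc = min(self.cbs, new_tc)
        self.te = min(self.ebs, self.te + overflow)
        if pkt_bytes <= self.tc:
            self.tc -= pkt_bytes
            return "green"
        if pkt_bytes <= self.te:
            self.te -= pkt_bytes
            return "yellow"
        return "red"


marker = SrTCM(cir_bps=1_000_000, cbs_bytes=3000, ebs_bytes=3000)
for t in (0.001, 0.002, 0.003, 0.004):
    print(t, marker.mark(t, pkt_bytes=1500))
```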