Design and Evaluation of Parallel Processing Techniques for 3D Liver Segmentation and Volume Rendering
Ensuring Software Quality in Engineering Environments
New 3D Face Matching Technique for an Automatic 3D Model Based Face Recognition System
Algorithmic Cost Modeling: Statistical Software Engineering Approach
Prevention of DDoS and SQL Injection Attack By Prepared Statement and IP Blocking
This paper summarizes a survey carried out on behalf of the Systems Engineering Research Centre (SERC) as part of the Engineering Applications Software research programme. It covers the software quality assurance tools available and their use and application, in support of software quality, within the engineering research community.
Hundreds of tools, many of them methodology dependent, are now available to support different phases of software development, and all can be considered to contribute, to some degree, to software quality. There is also a class of tools that directly supports the quality assurance function and that is applicable, in most cases, irrespective of the development method: the tools for validation, verification and testing of software systems. It is this class of tools that is covered in this paper.
The survey investigated the different tools that are available, their cost, effectiveness and the learning effort required to use them. Furthermore, it considered their use in the engineering environment and the benefits that can be gained as a result.
This paper presents a fully automatic closed-domain question answering system designed specifically to improve students' learning experience through e-learning. The question answering system allows students to access course data available on distance education web sites very effectively by allowing them to ask questions in natural language. It uses various natural language processing tools to give students relevant and quick answers. The question answering system has been designed to fully utilize the domain knowledge specific to a course to improve the accuracy and speed of the system. It also utilizes domain knowledge to disambiguate ambiguous terms used in questions. The system has been designed with students' requirements in mind and can handle the variety of question types generally asked by students. This is achieved by implementing a template-based approach to answer extraction; many templates for factual questions have been implemented. The system can be targeted to any domain or subject by providing the domain knowledge in the form of domain keywords at initial setup. The word sense disambiguation algorithm is evaluated on the SemCor corpus and shows significant performance. Results for the question answering system were obtained on operating systems course material with various question types, including FAQs, expert questions, and naïve questions.
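As a rough sketch of the template-based extraction idea, a factual question can be matched against hand-written patterns that map question forms to answer-bearing sentence forms. The patterns and the toy sentence below are invented for illustration and are not the system's actual templates or course data.

# Minimal sketch of template-based answer extraction for factual questions.
# The templates and the toy "course corpus" are illustrative assumptions,
# not the actual templates or data used by the system described above.
import re

# Each template pairs a question pattern with an answer-sentence pattern;
# {X} is the term captured from the student's question.
TEMPLATES = [
    (r"what is (?:a |an |the )?(?P<X>[\w\s]+?)\??$",
     r"(?:A|An|The)?\s*{X}\s+is\s+(?P<answer>[^.]+)\."),
    (r"who invented (?:the )?(?P<X>[\w\s]+?)\??$",
     r"(?P<answer>[A-Z][^.]+?)\s+invented\s+(?:the\s+)?{X}"),
]

def answer(question, corpus):
    q = question.strip().lower()
    for q_pat, a_pat in TEMPLATES:
        m = re.match(q_pat, q)
        if m:
            term = re.escape(m.group("X").strip())
            sent_pat = a_pat.replace("{X}", term)
            for sentence in corpus:
                hit = re.search(sent_pat, sentence, re.IGNORECASE)
                if hit:
                    return hit.group("answer").strip()
    return None

corpus = ["A semaphore is a synchronization primitive used to control "
          "access to a shared resource."]
print(answer("What is a semaphore?", corpus))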
Increasing software developers' productivity and reducing the software development process's cycle time are key goals for organizations responsible for building software applications. During the early days of commercial software development, cost and performance were the factors that received the most attention for improvement. In the 1980s, quality and productivity received a great deal of attention. It appears that, in the 1990s, reducing software development time will be one of the primary goals of large and small software companies alike. To this end, it is appropriate to examine the factors that affect software cycle time. Is it enough to try to improve programmer productivity, or are there additional product or process improvements that might be considered? Assuming there are several factors that impact software development time, what are they, and how much of an impact does each factor have? This paper examines the software development cycle and motivates the importance of software cycle time reduction. A definition of software cycle time is proposed. The objective of our research has been to provide decision makers with a model that enables prediction of the impact a set of process improvements will have on their software development cycle time. This paper describes our initial results in developing such a model and applying it to assess the impact of software assessments.
This paper presents a paradigm for uniting the diverse strands of XML-based Web technologies by allowing them to be incorporated within a single document. This overcomes the distinction between programs and data to make XML truly self-describing. A proposal for a lightweight yet powerful functional XML vocabulary called "Semantic fXML" is detailed, based on the well-understood functional programming paradigm and resembling the embedding of Lisp directly in XML. Infosets are made dynamic, since documents can now directly embed local processes or Web Services into their Infoset. An optional typing regime for infosets is provided by Semantic Web ontologies. By regarding Web Services as functions and the Semantic Web as providing types, and tying it all together within a single XML vocabulary, the Web can compute. In this light, the real Web 2.0 can be considered the transformation of the Web from a universal information space to a universal computation space.
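To make the flavor of such an embedding concrete, a functional expression can be written as nested XML elements and reduced recursively, much like evaluating an S-expression. The element names and the tiny evaluator below are invented for illustration and are not the actual Semantic fXML vocabulary.

# Toy evaluator for a Lisp-like functional vocabulary embedded in XML.
# The element names (apply, int) are invented for illustration and are
# not the actual Semantic fXML vocabulary proposed in the paper.
import xml.etree.ElementTree as ET
from math import prod

DOC = """
<apply op="add">
  <int>2</int>
  <apply op="mul">
    <int>3</int>
    <int>4</int>
  </apply>
</apply>
"""

OPS = {"add": sum, "mul": prod}

def evaluate(node):
    # Leaves are literals; interior <apply> nodes are function applications,
    # so the Infoset itself is "dynamic": it denotes a computation.
    if node.tag == "int":
        return int(node.text)
    if node.tag == "apply":
        return OPS[node.attrib["op"]]([evaluate(c) for c in node])
    raise ValueError("unknown element: " + node.tag)

print(evaluate(ET.fromstring(DOC)))  # 2 + 3*4 -> 14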
Patterns are useful for making design decisions in the object-oriented approach. Among the three classifications of patterns, this paper considers how behavioral patterns can influence the architecture of wireless communication, specifically ad hoc wireless networks. Efficient key management, routing information, and the network topology must be communicated and updated dynamically among the set of nodes so that communication can be handled efficiently.
Intrusion detection systems monitor computer networks looking for evidence of malicious actions. Attack detection can be classified into either misuse or anomaly detection. Misuse detection cannot detect unknown intrusions, whereas anomaly detection can give false positives. Combining the best features of misuse and anomaly detection, an intelligent intrusion detection system (IIDS) is proposed that is able to detect not only known intrusions but also unknown intrusions. For detecting unknown intrusions, a proper knowledge base is formed after preprocessing the packets captured from the network. Preprocessing is a combination of partitioning and feature extraction. The partitioning of packets is based on the network services, and the extracted attack features are added to the knowledge base. The preprocessed attacks are classified using a mining classifier whose output is given to a rule builder. A network intrusion detection system should be adaptable to all types of critical situations that arise in the network. This is helpful for the identification of complex anomalous behaviors. This work focuses on the TCP/IP network protocols and network-based IDS.
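A minimal sketch of the partition-then-extract preprocessing step follows; the packet fields, service map, and features are illustrative assumptions, not the paper's actual feature set.

# Sketch of preprocessing for an intelligent IDS: partition captured packets
# by network service, then extract per-service features for the knowledge
# base. Packet fields and features are illustrative assumptions.
from collections import defaultdict

SERVICE_BY_PORT = {80: "http", 21: "ftp", 25: "smtp", 22: "ssh"}

def partition(packets):
    """Group packets by the network service inferred from the dst port."""
    groups = defaultdict(list)
    for pkt in packets:
        groups[SERVICE_BY_PORT.get(pkt["dst_port"], "other")].append(pkt)
    return groups

def extract_features(pkts):
    """Simple per-service features; a real system would use richer ones."""
    n = len(pkts)
    return {
        "count": n,
        "avg_len": sum(p["length"] for p in pkts) / n,
        "syn_ratio": sum(p["flags"] == "S" for p in pkts) / n,
        "unique_srcs": len({p["src_ip"] for p in pkts}),
    }

packets = [
    {"src_ip": "10.0.0.1", "dst_port": 80, "length": 60, "flags": "S"},
    {"src_ip": "10.0.0.2", "dst_port": 80, "length": 1500, "flags": "A"},
    {"src_ip": "10.0.0.1", "dst_port": 22, "length": 90, "flags": "S"},
]

knowledge_base = {svc: extract_features(pkts)
                  for svc, pkts in partition(packets).items()}
print(knowledge_base)  # feature vectors, ready for a mining classifier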
A guiding principle of data mining is that it is better to use complex primitive patterns with simple logical combinations than simple primitive patterns with complex logical forms. This paper overviews the concepts of temporal database encoding and association rule mining. It proposes an innovative data mining approach that reduces the size of the main database through an encoding method, which in turn reduces the memory required. The use of the anti-Apriori algorithm reduces the number of scans over the database. A graph-based approach uses Apriori for temporal mining. A basic algorithm that uses pruning to identify potentially frequent and infrequent interesting itemsets, followed by positive and negative association rule mining, is also presented. The objective is to lower the computational, time, and space complexities involved while effectively identifying interesting itemsets and mining association rules.
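As a compact illustration of the candidate pruning at the heart of Apriori-style mining: the toy transactions below are invented, and the paper's encoding and anti-Apriori refinements (which reduce database scans) are not reproduced in this sketch.

# Minimal Apriori-style frequent-itemset miner with candidate pruning.
from itertools import combinations

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    frequent, k_sets = {}, [frozenset([i]) for i in items]
    while k_sets:
        # Count supports in one scan over the database for this level.
        level = {s: support(s) for s in k_sets}
        survivors = {s: v for s, v in level.items() if v >= min_support}
        frequent.update(survivors)
        # Candidate generation with pruning: keep a (k+1)-set only if
        # every k-subset is frequent (downward closure property).
        cands = {a | b for a in survivors for b in survivors
                 if len(a | b) == len(a) + 1}
        k_sets = [c for c in cands
                  if all(frozenset(s) in survivors
                         for s in combinations(c, len(c) - 1))]
    return frequent

db = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"},
      {"milk", "butter"}]
for itemset, sup in sorted(apriori(db, 0.5).items(), key=lambda x: -x[1]):
    print(set(itemset), round(sup, 2))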
Electrocardiography deals with the electrical activity of the heart. The condition of cardiac health is indicated by the ECG and heart rate. A study of the nonlinear dynamics of electrocardiogram (ECG) signals for arrhythmia characterization is considered. Statistical analysis of the calculated features indicates that they differ significantly between normal heart rhythm and the different arrhythmia types and hence can be quite useful in ECG arrhythmia detection. The discrimination of ECG signals using nonlinear dynamic parameters is of crucial importance in cardiac disease therapy and in chaos control for arrhythmia defibrillation in the cardiac system. The four nonlinear parameters considered for cardiac arrhythmia classification of ECG signals, extracted from heart rate signals, are spectral entropy, Poincaré plot geometry, largest Lyapunov exponent, and detrended fluctuation analysis. The inclusion of Artificial Neural Networks (ANNs) in complex investigative algorithms yields very interesting recognition and classification capabilities across a broad spectrum of biomedical problem domains. An ANN classifier was used for classification, and an accuracy of 90.56% was achieved. Linguistic variables (fuzzy sets) are used to describe ECG features, and fuzzy conditional statements to represent the reasoning knowledge and rules. Good results have been achieved with this method, and an overall accuracy of 93.13% is obtained.
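Two of the four features are straightforward to sketch from an RR-interval series; the synthetic series below stands in for real heart rate data, and the study's exact parameter choices may differ.

# Sketch of two of the four nonlinear features extracted from heart rate
# signals: spectral entropy and Poincare plot geometry (SD1/SD2).
# The synthetic RR series is a stand-in for real ECG-derived data.
import numpy as np

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * np.sin(np.arange(300) / 10) + 0.01 * rng.standard_normal(300)

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum, scaled to [0, 1]."""
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(p)))

def poincare_sd1_sd2(x):
    """Short- and long-term variability from the RR(n) vs RR(n+1) plot."""
    d = np.diff(x)
    sd1 = np.sqrt(np.var(d) / 2)                   # spread across identity line
    sd2 = np.sqrt(2 * np.var(x) - np.var(d) / 2)   # spread along it
    return float(sd1), float(sd2)

print("spectral entropy:", round(spectral_entropy(rr), 3))
print("Poincare SD1, SD2:", [round(v, 4) for v in poincare_sd1_sd2(rr)])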
An Internet Protocol (IP) packet traceback system identifies the origin of sequences of IP packets when the source addresses of these packets are spoofed. IP packet traceback is usually performed with the help of routers and gateways. Several approaches have been proposed to trace IP packets to their origin. The packet marking approach enables routers to probabilistically mark packets with partial path information and tries to reconstruct the complete path from the marked packets. In most of these approaches, routers and victims (affected systems) are considerably overloaded by marking packets and reconstructing the trace path, and a large number of marked packets are required. This paper focuses on tracing the approximate source of an attack instead of tracing back the entire path in a multi-domain system, without computation by the victim. It is assumed that the Internet topology has been grouped into ISP domains.
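The probabilistic marking step that this paper contrasts itself with can be sketched as follows; the router identifiers and marking probability are illustrative, and the paper's domain-level scheme records different information and avoids victim-side reconstruction.

# Sketch of probabilistic packet marking (PPM): each router on the path
# overwrites the mark field with probability p, so the victim statistically
# sees marks from all routers and can reconstruct the path. The paper's
# approach instead traces an approximate source per ISP domain without
# computation by the victim.
import random
from collections import Counter

P_MARK = 0.04                            # marking probability (illustrative)
PATH = ["R1", "R2", "R3", "R4", "R5"]    # attacker -> victim

def send_packet():
    mark = None
    for router in PATH:
        if random.random() < P_MARK:
            mark = router                # overwrite any earlier mark
    return mark

random.seed(1)
marks = Counter(m for m in (send_packet() for _ in range(100_000)) if m)
# Routers nearer the victim overwrite earlier marks, so farther routers
# appear less often; reconstruction must account for this bias.
print(marks.most_common())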
Algorithms are the key concepts of computer science. Classical computer science provides a vast body of concepts and techniques which may be reused to great effect in quantum computing. Many of the triumphs of quantum computing have come from combining existing ideas from computer science with ideas from quantum mechanics. The problem of determining the quantum query complexity of obtaining the radius of a graph is considered in both a classical system and a quantum system.
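On the classical side, the radius can be computed by breadth-first search over adjacency-matrix queries, and counting those queries gives the baseline against which quantum query complexity is compared; the small graph below is an invented example.

# Classical baseline: compute the radius of a graph (minimum eccentricity)
# via BFS, counting adjacency-matrix queries. Quantum query complexity asks
# how many such oracle queries a quantum algorithm needs instead.
from collections import deque

class QueryCountingGraph:
    def __init__(self, adj):
        self.adj, self.queries = adj, 0

    def edge(self, u, v):
        self.queries += 1     # one oracle query to the adjacency matrix
        return self.adj[u][v]

def eccentricity(g, src, n):
    """Longest shortest-path distance from src, assuming a connected graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in dist and g.edge(u, v):
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def radius(g, n):
    return min(eccentricity(g, s, n) for s in range(n))

# Path graph 0-1-2-3: eccentricities are 3, 2, 2, 3, so the radius is 2.
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
g = QueryCountingGraph(adj)
print("radius:", radius(g, 4), "queries:", g.queries)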