Preventing Cyber Attacks using Artificial Intelligence
Exploring Advantages and Challenges of Chatbot Integration in Human-Robot Interaction
Counterfeit Currency Detection using Machine Learning
Artificial Intelligence use in Lung Cancer Screening to Improve Patient Recovery
An Analysis of Various Crypto Coins and their Suitability for Real-Time Applications
Design and Evaluation of Parallel Processing Techniques for 3D Liver Segmentation and Volume Rendering
Ensuring Software Quality in Engineering Environments
New 3D Face Matching Technique for an Automatic 3D Model-Based Face Recognition System
Algorithmic Cost Modeling: Statistical Software Engineering Approach
Prevention of DDoS and SQL Injection Attacks by Prepared Statements and IP Blocking
One of the recent developments in the telecommunications industry is the introduction of communication over the Internet. Applications of the Internet include easy communication, a vast library of information in one place, electronic business, e-learning modules, and so on. As usage increases, the problem of preserving the privacy and security of data grows as well. In this context, data security comes into the picture.
Anyone designing a product that will be connected to the Internet should be concerned about network security. The T&T algorithm is pre-integrated with a wide range of security mechanisms. This broad range of choices makes it easy for developers to determine the appropriate level of security necessary for their device and to deploy it with virtually no impact on their schedules or time to market.
It is generally agreed that the most powerful tool for providing network security is encryption. In this work, several encryption algorithms are discussed and compared in terms of their code-breaking time, computational complexity, and response time.
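A minimal sketch of the kind of response-time comparison described above, assuming two widely used ciphers (AES-256-CTR and ChaCha20) stand in for the algorithms studied; the paper's actual algorithm set and measurement methodology are not given here, so the choice of ciphers and the pyca/cryptography library are illustrative assumptions.

import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def time_cipher(cipher, data, rounds=50):
    """Return the average time in seconds to encrypt `data` once."""
    enc = cipher.encryptor()
    start = time.perf_counter()
    for _ in range(rounds):
        enc.update(data)
    return (time.perf_counter() - start) / rounds

data = os.urandom(1 << 20)   # 1 MiB of random plaintext
key = os.urandom(32)         # 256-bit key, valid for both ciphers

aes = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16)))
chacha = Cipher(algorithms.ChaCha20(key, os.urandom(16)), mode=None)

for name, c in [("AES-256-CTR", aes), ("ChaCha20", chacha)]:
    print(f"{name}: {time_cipher(c, data) * 1e3:.2f} ms per MiB")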
Software testing is perhaps the least understood yet most critical component of the software development process in the Software Development Life Cycle (SDLC), accounting for roughly 40-45% of total project cost. This article explores the importance of the testing engineer to the completion of any successful software project, sheds light on the hurdles a testing engineer encounters, and summarizes effective solutions to them.
Algorithmic cost modeling uses a mathematical formula to predict project costs based on estimates of the project size, the number of software engineers, and other process and product factors. An algorithmic cost model can be built by analyzing the costs and attributes of completed projects and finding the formula that best fits actual experience. This paper reflects on the use of an algorithmic cost estimation model. We should develop a range of estimates (worst, expected, and best) rather than a single estimate, and apply the costing formula to all of them. Estimates are most likely to be accurate when we understand the type of software being developed, when we have calibrated the costing model using local data, and when the programming language and hardware choices are predefined.
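A minimal sketch of applying a costing formula to a range of size estimates, as the abstract recommends. It uses the basic COCOMO organic-mode model (effort = 2.4 * KLOC^1.05 person-months) purely as an illustration; the paper's own model, its calibrated coefficients, and the sample size estimates below are assumptions.

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO: effort in person-months for a project of `kloc` KLOC.
    In practice a and b would be recalibrated from local project data."""
    return a * kloc ** b

# Worst, expected, and best size estimates, each fed through the same formula.
for label, kloc in [("best", 18), ("expected", 25), ("worst", 40)]:
    print(f"{label:8s} {kloc:3d} KLOC -> {cocomo_effort(kloc):6.1f} person-months")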
This article examines how databases can be integrated into the grid. It also investigates the requirements of grid-middleware-enabled databases and how databases can be made available on the grid for access by distributed applications. Grid databases call for a service-based architecture: each database system is wrapped within a grid-enabled service interface that simplifies the task of building applications to access its contents. The paper proposes a framework for federating database servers over the grid, in which service federation middleware connects to the service interfaces of the set of database systems to be federated and creates a “Virtual Database System”. While the paper focuses on federating database systems, it also argues that the service-based approach will simplify the task of integrating databases with file-based data on the grid where that is required. For example, a user who needs to analyze a large amount of data held on computers all over the globe could submit the task to the grid; the grid could then locate the most convenient source of the data, without the user specifying one, and perform the analysis wherever the data resides. The ability to federate data from multiple databases is likely to be a very powerful facility for grid users wishing to collect and analyze information distributed over the grid.
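A minimal sketch of the "Virtual Database System" idea: each member database sits behind a uniform service interface, and the federation middleware fans a query out to every member and merges the results. The class and method names are illustrative assumptions, not the paper's middleware API, and a real federator would also plan queries, push down predicates, and deduplicate rows.

import sqlite3
from typing import Iterable, List, Protocol

class GridDatabaseService(Protocol):
    """The grid-enabled service interface wrapping one database system."""
    def query(self, sql: str) -> List[dict]: ...

class SQLiteService:
    """Example member: one local SQLite database behind the interface."""
    def __init__(self, path: str):
        self.conn = sqlite3.connect(path)
        self.conn.row_factory = sqlite3.Row

    def query(self, sql: str) -> List[dict]:
        return [dict(row) for row in self.conn.execute(sql)]

class VirtualDatabaseSystem:
    """Federation middleware over the service interfaces of the member DBs."""
    def __init__(self, members: Iterable[GridDatabaseService]):
        self.members = list(members)

    def query(self, sql: str) -> List[dict]:
        # Naively union the rows returned by every federated database.
        rows: List[dict] = []
        for db in self.members:
            rows.extend(db.query(sql))
        return rows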
The ability of an Internet router to categorize packets into flows is termed packet classification. Packets with the same source and destination addresses form a flow that follows a predefined rule and is processed in a similar manner. Packet classification is needed for many sophisticated value-added services, such as QoS, load balancing, and traffic accounting. Various approaches to packet classification have been studied in the literature, with accompanying theoretical bounds. In this paper, we present an algorithmic framework for solving the packet classification problem in practice. We propose and study a novel approach to packet classification using a Multistage Classifier Compaction Scheme (MCS). Besides high performance, our algorithm preserves average search time and keeps storage requirements reasonable. To evaluate our algorithm, we have developed realistic models of large-scale rule bases and used them to drive extensive experimentation. The results show that the algorithm is very efficient compared to other techniques in real networks.
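The abstract does not describe the MCS in enough detail to reconstruct it, so the following is only a minimal linear-search classifier illustrating the underlying problem: mapping a packet header to the first matching rule. All rule fields and actions are illustrative assumptions; schemes like MCS exist precisely to beat this linear search on large rule bases.

from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    src: str      # source prefix, e.g. "10.0.0.0/8"
    dst: str      # destination prefix
    action: str   # e.g. "qos-high", "drop", "account"

def classify(rules, src_ip, dst_ip):
    """Return the action of the first rule matching (src_ip, dst_ip)."""
    s, d = ip_address(src_ip), ip_address(dst_ip)
    for r in rules:
        if s in ip_network(r.src) and d in ip_network(r.dst):
            return r.action
    return "default"

rules = [
    Rule("10.0.0.0/8", "192.168.1.0/24", "qos-high"),
    Rule("0.0.0.0/0", "0.0.0.0/0", "best-effort"),
]
print(classify(rules, "10.1.2.3", "192.168.1.7"))  # -> qos-high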
In this paper, computer modeling and simulation of a piezoelectric transducer system are explored. A model of a piezoelectric transducer built from electrical components such as transmission lines and controlled sources is presented. The advantage gained by using electrical components is that the associated electronics can be designed with great ease. The analogy between wave propagation in acoustic media and in transmission lines is described, as is the electrical-mechanical transduction in piezoelectric material. The model is validated by comparing its received signal to that of an actual experiment in the time domain.
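A small sketch of the acoustic/transmission-line analogy the abstract mentions: an electrical line with inductance L and capacitance C per unit length has wave speed 1/sqrt(LC) and characteristic impedance sqrt(L/C), mirroring an acoustic medium whose speed is sqrt(K/rho) and impedance rho*c (with L playing the role of density and C that of compliance). The numeric values below are illustrative assumptions, not the paper's data.

import math

L = 0.5e-6   # H/m, line inductance per unit length (analog of density rho)
C = 0.2e-9   # F/m, line capacitance per unit length (analog of compliance 1/K)

v = 1.0 / math.sqrt(L * C)   # wave propagation speed on the line
Z0 = math.sqrt(L / C)        # characteristic impedance of the line

print(f"wave speed      v  = {v:.3e} m/s")
print(f"characteristic  Z0 = {Z0:.1f} ohm")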
This paper deals with the analysis of TCP (Transmission Control Protocol) performance over mobile ad hoc networks. These networks are assumed to implement a class of non-linear congestion control algorithms called polynomial congestion control algorithms. Two models of these algorithms are introduced in this paper. They generalize the Additive Increase, Multiplicative Decrease (AIMD) algorithms used for TCP connections: the window is increased additively by a polynomial of the inverse of the current window size and decreased multiplicatively by a polynomial of the current window size. There are infinitely many TCP-compatible polynomial algorithms, obtained by assuming polynomials of different orders. This paper compares the performance of the two models on mobile ad hoc TCP networks with that of other TCP algorithms. The analysis is based on simulations performed using ns-2, a discrete-event network simulator. The effects of varying transmitting antenna power and height are studied, as are the effects of signal interference on the proposed models. The results show that one of the proposed models performs better under various simulated conditions.
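A minimal sketch of the polynomial window update the abstract describes: on each ACK the window grows by alpha / w^k (a polynomial of the inverse window size), and on each loss it shrinks by beta * w^l (a polynomial of the window size). AIMD is the special case k = 0, l = 1, and the members with k + l = 1 form the TCP-compatible family. The parameter values and the simple loss pattern below are illustrative assumptions, not the paper's two models.

def on_ack(w, alpha=1.0, k=0.0):
    """Additive-style increase using a polynomial of the inverse window."""
    return w + alpha / (w ** k)

def on_loss(w, beta=0.5, l=1.0):
    """Multiplicative-style decrease using a polynomial of the window."""
    return max(1.0, w - beta * (w ** l))

# Trace a TCP-compatible member (k = l = 0.5) with a loss every 4th round.
w = 10.0
for rnd in range(1, 9):
    w = on_loss(w, l=0.5) if rnd % 4 == 0 else on_ack(w, k=0.5)
    print(f"round {rnd}: cwnd = {w:.2f}")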
This paper describes the development of an experimental model for signal propagation in a WLAN environment, intended to help the network designer achieve an efficient network through a simple and more effective design. The strength of the propagating signal between two computers was measured at different points and under different conditions. The measurements were made using two different approaches: with and without obstacles. The wireless network implemented in this work was based on the IEEE 802.11b protocol operating in the license-free 2.4 GHz band. Our findings suggest that the signal strength drops to 69% within less than 20 meters in open space, and decreases further to 55% as the number of obstacles increases (up to 6). However, in none of the tested cases did data loss or network disconnection occur. Our findings verify an approximate linear model that compares well with empirical models proposed by other researchers. The linear model helps the network designer in the process of indoor WLAN planning. This work will continue to investigate new emerging wireless technologies for further improvements in network performance in terms of wider coverage and functionality.
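A minimal sketch of fitting the approximate linear model the abstract verifies: relative signal strength versus distance via a least-squares line. The sample points are illustrative assumptions chosen to match the reported trend (about 69% remaining near 20 m in open space), not the paper's measured data.

import numpy as np

distance = np.array([1, 5, 10, 15, 20], dtype=float)      # metres
strength = np.array([100, 92, 84, 76, 69], dtype=float)   # % of initial signal

slope, intercept = np.polyfit(distance, strength, deg=1)
print(f"strength(d) ~= {intercept:.1f} {slope:+.2f} * d   (% vs metres)")
print(f"predicted strength at 12 m: {intercept + slope * 12:.1f} %")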
Digital image indexing and querying techniques have been extensively studied in recent years, yet only a few systems are dedicated to medical images today, while the need for content-based retrieval tools is increasing with the growth of digital image databases. This work aims at developing a content-based retrieval system for medical images by extracting texture features. We use Gabor filters, a popular signal-processing approach to extracting texture features, and compare our Gabor-based retrieval system with another retrieval system that extracts texture features from the co-occurrence matrix.
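A minimal sketch of Gabor-filter texture features for retrieval, assuming scikit-image's gabor() as the filter implementation; the paper's exact filter bank (scales, orientations) and its distance measure are not specified here, so the frequencies and orientation count below are assumptions.

import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), thetas=4):
    """Mean and std of Gabor response magnitudes over a small filter bank."""
    feats = []
    for f in frequencies:
        for i in range(thetas):
            real, imag = gabor(image, frequency=f, theta=i * np.pi / thetas)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.std()])
    return np.asarray(feats)

img = np.random.rand(64, 64)          # stand-in for a medical image
print(gabor_features(img).shape)      # 3 frequencies x 4 thetas x 2 stats

# Retrieval then ranks database images by feature-vector distance, e.g.
# np.linalg.norm(gabor_features(query) - gabor_features(candidate)).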