IoT Assistive Technology for People with Disabilities
Soulease: A Mind-Refreshing Application for Mental Well-Being
AI-Powered Weather System with Disaster Prediction
AI-Driven Animal Farming and Livestock Management System
Advances in AI for Automatic Sign Language Recognition: A Comparative Study of Machine Learning Approaches
Design and Evaluation of Parallel Processing Techniques for 3D Liver Segmentation and Volume Rendering
Ensuring Software Quality in Engineering Environments
New 3D Face Matching Technique for an Automatic 3D Model Based Face Recognition System
Algorithmic Cost Modeling: Statistical Software Engineering Approach
Prevention of DDoS and SQL Injection Attacks by Prepared Statements and IP Blocking
This paper proposes a support system for company-wide quality management. Building such a Quality Management System (QMS) requires both modeling and piloting. For modeling, a Multi-Agent System (MAS) approach is proposed, using a micro-framework for inter-agent communication based on UML (Unified Modeling Language) sequence diagrams. For piloting, the proposed method is Hierarchical Fuzzy Signatures (HFS), one of the methods best suited to this kind of problem. To validate the developed system, an industrial company facing a major problem in controlling the quality level of its production lines was chosen. The achieved results show that the HFS concept is effective, efficient, and flexible for piloting such a QMS-MAS model.
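The hierarchical aggregation behind fuzzy signatures can be sketched as follows. This is a minimal illustration assuming a plain arithmetic mean at each internal node; HFS in practice may use other aggregation operators (weighted means, min/max, etc.), and the example signature values are hypothetical.

```python
def evaluate_signature(node):
    """A fuzzy signature is a nested structure of membership grades in
    [0, 1]; each internal node aggregates its children.  Here the
    aggregation is a plain arithmetic mean (an illustrative choice)."""
    if isinstance(node, (int, float)):
        return float(node)
    vals = [evaluate_signature(child) for child in node]
    return sum(vals) / len(vals)

# Hypothetical quality signature for one production line:
# [inspection, [machine state, operator skill], [defect rate, [rework, scrap]]]
sig = [0.8, [0.6, 1.0], [0.4, [0.2, 0.6]]]
overall = evaluate_signature(sig)   # single quality grade for the line
```

The nested structure mirrors the hierarchy of quality indicators, so a whole production line collapses to one grade that the piloting layer can act on.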
Digital filtering is a crucial operation in the reconstruction and visualization of information, and it also improves computational efficiency. An FIR-based digital filter bank is particularly effective in these respects. In this paper, we propose a novel multirate technique using a digital filter bank based on a modified Kaiser window. A remarkable spectral output is achieved through increased magnitude, improved output quality, better frequency response, and higher computational efficiency. Simulation results confirm the satisfactory performance, and a comparison is drawn to highlight the advantages of the proposed method. This type of filter bank is particularly suitable for typical hearing-aid applications, where it offers significant improvements in output quality.
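A two-band Kaiser-window FIR filter bank of the kind described can be sketched as follows. The tap count, cutoff, and beta value are illustrative assumptions, and the standard Kaiser window is used rather than the paper's modified window.

```python
import numpy as np

def kaiser_fir_lowpass(num_taps, cutoff, beta):
    """Design a lowpass FIR filter by windowing the ideal sinc impulse
    response with a Kaiser window (beta trades sidelobe level for
    transition width).  cutoff is normalised to the sampling rate."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    ideal = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal lowpass response
    h = ideal * np.kaiser(num_taps, beta)
    return h / h.sum()                             # normalise DC gain to 1

def two_band_bank(num_taps=101, cutoff=0.25, beta=8.6):
    """Complementary two-band bank: the highpass branch is the lowpass
    prototype modulated by (-1)^n, shifting its passband to Nyquist."""
    h = kaiser_fir_lowpass(num_taps, cutoff, beta)
    g = h * (-1.0) ** np.arange(num_taps)
    return h, g

h, g = two_band_bank()   # h passes DC with unit gain; g rejects DC
```

Splitting the band this way is the usual starting point for hearing-aid banks, where each sub-band is then amplified independently.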
A watermarking technique for image authentication inserts hidden data into an image in order to detect any accidental or malicious alteration. In the proposed work, a watermark in the form of a visually meaningful binary pattern is embedded for the purpose of tamper detection. The watermark is embedded in the discrete cosine transform coefficients of the image by quantizing the coefficients proportionally according to the watermark bit. Experimental results demonstrate the performance and effectiveness of the scheme for image authentication, maintaining good values for both the Peak Signal-to-Noise Ratio (PSNR) and the Tamper Assessment Function (TAF).
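Quantization-based embedding can be illustrated with a quantization-index-modulation style rule applied to a single DCT coefficient: the coefficient is forced onto an even multiple of a step size for bit 0 and an odd multiple for bit 1. The step size `delta` and the parity convention are assumptions for illustration, not necessarily the paper's exact scheme.

```python
def embed_bit(coeff, bit, delta=8.0):
    """Quantise a DCT coefficient so its quantisation index has the
    parity of the watermark bit (0 -> even multiple, 1 -> odd)."""
    q = round(coeff / delta)
    if q % 2 != bit:
        # move to the nearest multiple with the correct parity
        q += 1 if coeff / delta >= q else -1
    return q * delta

def extract_bit(coeff, delta=8.0):
    """Recover the bit from the parity of the quantisation index."""
    return int(round(coeff / delta)) % 2
```

Any tampering that shifts a coefficient across a quantization cell flips the extracted bit, which is what makes the binary pattern usable for tamper localization; the distortion per coefficient is bounded by `delta`, which controls the PSNR/robustness trade-off.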
SoFT is a new method and tool that measures and models linear electrical network components over a wide frequency band with unprecedented accuracy. This is achieved by a special modal-based measurement technique in combination with suitable rational-fitting and passivity-enforcement methods. The resulting models are easily imported into the most commonly used simulation software.
This paper demonstrates SoFT tool computations by comparing A) time-domain measurements of a lightning impulse test of a power transformer, B) a simulation of the test using a SoFT model, and C) a simulation of the test using a lumped-element circuit model based on geometrical transformer design information.
The majority of today's video surveillance systems are analog-based, require human interaction, and are susceptible to multiple threats. By applying a fully digitized system incorporating the Vector Bank (VB), Laplacian of Gaussian (LoG), and Directional Discrete Cosine Transform (DDCT) at each camera in a surveillance network, digital processing is accomplished at the individual camera level, reducing the memory required at the network hub and providing a decentralized, secure surveillance network. Moving objects can be detected and tracked by each independent camera in the network and processed using VB, LoG, and DDCT to drastically reduce bandwidth and processing time, ultimately reducing data-bus traffic and transmission bandwidth. This paper discusses the addition of hardware-based System-on-Chip (SoC) digital signal processing algorithms interfaced to the H.264 architecture for the purpose of a fully redundant and autonomous multi-camera digital video surveillance network. Experimental results show the system's capability to detect, isolate, and track objects while reducing computation time, channel bandwidth, and overhead costs of the digital video surveillance network and increasing security. These achievements substantially improve computer visualization and the performance of the network.
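The LoG stage of such a pipeline can be sketched as follows; the kernel size, sigma, and detection threshold are illustrative choices, and the VB and DDCT stages of the paper are not reproduced here.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def log_kernel(size=9, sigma=1.4):
    """Sample the Laplacian-of-Gaussian and subtract the mean so that
    flat image regions produce exactly zero response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def detect(frame, thresh=0.5):
    """Correlate the frame with the LoG kernel ('valid' region only)
    and flag pixels whose absolute response exceeds the threshold."""
    k = log_kernel()
    windows = sliding_window_view(frame, k.shape)
    resp = np.einsum('ijkl,kl->ij', windows, k)
    return np.abs(resp) > thresh
```

In a per-camera SoC, a small fixed kernel like this maps naturally onto hardware multiply-accumulate units, which is what allows detection to run at the camera instead of the hub.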
Reliability assessment of component-based software systems plays a vital role in developing quality software with the component-based development methodology. Reliability can be estimated by the Monte Carlo simulation method when system complexity makes the formulation of exact models essentially impossible. The characteristics of the Monte Carlo method make it ideal for estimating the reliability of component-based software systems: unlike many other mathematical models, system complexity is irrelevant to the method. Not only can the structure of the software system be dynamic, its precise structure need not even be known. Instead, components need only be tested for failure during operation, which ensures that components used more often contribute proportionally more to the overall reliability estimate. This paper presents a novel Monte Carlo simulation method for assessing the reliability of component-based software systems.
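A minimal Monte Carlo reliability estimator for a component-based system might look like the following, assuming a Markov usage model over components; the transition graph and per-component reliabilities are hypothetical examples, not data from the paper.

```python
import random

def simulate_run(transition, reliabilities, start="A", end="Exit", rng=random):
    """Walk the component transition graph once; the run succeeds only
    if every visited component works on that invocation."""
    comp = start
    while comp != end:
        if rng.random() > reliabilities[comp]:   # component fails this call
            return False
        nxts = list(transition[comp].items())
        r, cum = rng.random(), 0.0
        comp = nxts[-1][0]                       # fallback guards round-off
        for nxt, p in nxts:                      # sample the next component
            cum += p
            if r <= cum:
                comp = nxt
                break
    return True

def estimate_reliability(transition, reliabilities, runs=20000, seed=1):
    rng = random.Random(seed)
    ok = sum(simulate_run(transition, reliabilities, rng=rng)
             for _ in range(runs))
    return ok / runs

# Hypothetical usage model: A calls B 60% of the time, C 40%.
transition = {"A": {"B": 0.6, "C": 0.4},
              "B": {"Exit": 1.0},
              "C": {"Exit": 1.0}}
reliabilities = {"A": 0.99, "B": 0.95, "C": 0.90}
# Analytically: 0.99 * (0.6 * 0.95 + 0.4 * 0.90) = 0.9207
```

Because runs follow the usage profile, a component visited on 60% of the paths contributes proportionally more to the estimate, exactly the property the abstract highlights.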
Advances in silicon technology have made possible the design of very large scale integration (VLSI) chips. The size of these designs has reached a level where the design activity is carried out by a team of designers rather than an individual. The complexity of the designs and the merging of functional islands from different designers can lead to mistakes (bugs) in the chip.
This paper details the methods used by the Subsystems Electronics hardware design group (based at IBM Havant) to minimise the possibility of releasing a chip containing bugs. In most cases the design has cost and schedule constraints, and there is a trade-off between the amount of time and effort expended in the design and simulation phases and the risk of sending a chip for fabrication before all the bugs have been found. The problem of determining when a chip should be released for fabrication has been addressed by using statistical analysis to assess when the simulation is complete or no longer likely to find mistakes.
Global Internet threats are rapidly evolving from attacks designed solely to disable infrastructure to those that also target people and organizations. This alarming new class of attacks directly impacts the day-to-day lives of millions of people and endangers businesses and governments around the world. For example, computer users are assailed with spyware that snoops on confidential information, spam that floods email accounts, and phishing scams that steal identities. At the center of many of these attacks is a large pool of compromised computers located in homes, schools, businesses, and governments around the world.
In this paper, the authors provide a detailed overview of current botnet technology and defenses by exploring the intersection between existing botnet research, the evolution of botnets themselves, and the goals and perspectives of various types of networks. The authors also describe the invariant nature of botnet behavior in its various phases, how different kinds of networks have access to different types of visibility, and the strong impact this has on the effectiveness of any botnet detection mechanism. A comprehensive picture of the various botnet detection techniques that have been proposed is provided. Finally, the paper summarizes the survey and suggests future directions.
In this study, we propose some simple greedy algorithms for the multi-processor job scheduling problem. The given list of jobs is arranged according to processing time. The results of the proposed algorithms are compared with the first-come first-serve (FCFS) job scheduling approach and shown to be superior.
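A greedy scheme of this kind can be sketched with the classic longest-processing-time (LPT) rule compared against FCFS list scheduling; the abstract does not name its exact greedy rule, so LPT is used here as a representative example of ordering jobs by duration before assignment.

```python
def makespan_fcfs(jobs, m):
    """FCFS list scheduling: assign each job, in arrival order, to the
    machine that currently has the smallest load."""
    loads = [0] * m
    for t in jobs:
        i = loads.index(min(loads))   # earliest-free machine
        loads[i] += t
    return max(loads)                 # makespan = last machine to finish

def makespan_lpt(jobs, m):
    """Greedy LPT rule: sort jobs by duration (longest first), then
    assign exactly as in FCFS list scheduling."""
    return makespan_fcfs(sorted(jobs, reverse=True), m)
```

On the job list `[2, 2, 3, 3, 4]` with two machines, FCFS yields a makespan of 9 while LPT yields 8, illustrating how sorting by duration keeps the machine loads balanced.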
In Very Deep-Submicron (VDSM) systems, the scaling of ULSI ICs has increased the sensitivity of CMOS technology to various noise mechanisms such as power-supply noise, crosstalk noise, and leakage noise. In VDSM technology the distance between data-bus lines is reduced, so the coupling capacitance becomes the dominating factor. The coupling capacitance (CC) arises between long parallel wires, while the load capacitance (CL) is the wire-to-substrate capacitance. Unfortunately, in VDSM systems the coupling capacitance is several times larger in magnitude than the loading capacitance. It causes logic malfunctions, delay faults, and increased power consumption on long on-chip data buses. An important effect of the coupling capacitance is crosstalk, which depends mainly on several factors: drive strength, wire length and spacing, edge rate, and propagation duration.
Crosstalk noise is produced by the coupling capacitance, and the resulting faults may corrupt data on the bus; the severity of this problem depends on the fault duration. To avoid this condition and to guarantee signal integrity in on-chip communication, a fault-tolerant bus can be adopted. This is achieved by implementing error-correcting codes (ECCs), which provide on-line correction and do not require data retransmission.
The 8-bit data bus is implemented in 1200nm, 180nm, 120nm, 90nm and 65nm technologies, and the simulation results show that crosstalk increases as the technology scales down. For reliable transmission of the data, an ECC technique is placed on the data bus: a Hamming code is employed as the ECC for an 8-bit fault-tolerant data bus, again implemented in the 1200nm, 180nm, 120nm, 90nm and 65nm technologies. The simulation results show that the average power varies from 48.054mW to 0.235mW and the maximum delay varies from 3.437ns to 0.092ns, respectively.
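A Hamming(12,8) single-error-correcting code of the kind used for the 8-bit bus can be sketched in software as follows; the bit layout is the standard one with parity bits at positions 1, 2, 4, and 8, while the paper's hardware realization may differ.

```python
DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]   # codeword positions of data bits

def hamming_encode(byte):
    """Hamming(12,8): data bits go to the non-power-of-two positions
    (1-indexed); even parity bits are computed at positions 1, 2, 4, 8."""
    code = [0] * 13                        # index 0 unused for clarity
    for i, pos in enumerate(DATA_POS):
        code[pos] = (byte >> i) & 1
    for p in (1, 2, 4, 8):
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    return code[1:]                        # 12-bit codeword

def hamming_decode(code):
    """Return the corrected byte; any single-bit error in the 12-bit
    word is located by the syndrome and flipped back in place."""
    c = [0] + list(code)
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(c[i] for i in range(1, 13) if i & p) % 2:
            syndrome += p                  # failing checks address the error
    if syndrome:
        c[syndrome] ^= 1                   # on-line correction, no retransmit
    byte = 0
    for i, pos in enumerate(DATA_POS):
        byte |= c[pos] << i
    return byte
```

Because correction happens at the receiver from the syndrome alone, a crosstalk-induced single-bit upset on the bus is repaired without any retransmission, which is the property the fault-tolerant bus relies on.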
Ad-hoc wireless networks have emerged as one of the key growth areas for wireless networking and computing technology. One of the major factors affecting ad-hoc communication is the misbehavior of nodes. Node misbehavior due to selfishness, maliciousness, or faults can significantly degrade the performance of mobile ad-hoc networks. Most routing protocols in wireless ad-hoc networks, such as DSR, fail to detect misbehavior and assume that nodes are trustworthy and cooperative.
The Dynamic Source Routing (DSR) protocol is modified to cope with misbehavior. It enables nodes to detect misbehavior by first-hand observation and to use second-hand information provided by other nodes. The view a node has of the behavior of another node is captured in a reputation system, which is used to classify nodes as misbehaving or normal. Once a misbehaving node is detected, it is isolated from the network. Reputation systems can, however, be tricked by the spread of false reputation ratings, whether false accusations or false praise. To solve this problem, a fully distributed reputation system is proposed that can cope with false information and effectively use second-hand information in a safe way. The approach is based on a modified Bayesian estimation and classification procedure for isolating the malicious and selfish nodes of a given network.
In this paper, tests are performed on a network containing normal and misbehaving nodes; the delay plots of the original DSR and the modified DSR are compared and the performance is analyzed. The proposed scheme is implemented in MATLAB for protocol implementation and verification.
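The Bayesian reputation update described above can be sketched with Beta-prior counters: first-hand observations update the counts directly, while second-hand reports pass a deviation test before being merged with a small weight. The fusion weight and deviation threshold below are illustrative assumptions, not the paper's parameters.

```python
class BetaReputation:
    """Beta(a, b) reputation record for one neighbour: a counts good
    observations, b bad ones (a = b = 1 is the uniform prior)."""

    def __init__(self, alpha=1.0, beta=1.0, w=0.1, dev_thresh=0.3):
        self.a, self.b = alpha, beta
        self.w, self.dev = w, dev_thresh   # fusion weight, deviation test

    def rating(self):
        return self.a / (self.a + self.b)  # expected prob. of good behaviour

    def first_hand(self, good):
        """Direct observation: update the Beta counts by one."""
        if good:
            self.a += 1
        else:
            self.b += 1

    def second_hand(self, rep_a, rep_b):
        """Accept another node's counts only if its rating deviates
        little from ours, then merge with small weight w.  The
        deviation test is what filters false praise and accusations."""
        other = rep_a / (rep_a + rep_b)
        if abs(other - self.rating()) <= self.dev:
            self.a += self.w * rep_a
            self.b += self.w * rep_b

    def misbehaving(self, thresh=0.5):
        return self.rating() < thresh      # classification step
```

A node whose rating falls below the threshold is classified as misbehaving and can then be excluded from routes, while liars' reports simply fail the deviation test and leave the rating untouched.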