Design and Evaluation of Parallel Processing Techniques for 3D Liver Segmentation and Volume Rendering
Ensuring Software Quality in Engineering Environments
New 3D Face Matching Technique for an Automatic 3D Model Based Face Recognition System
Algorithmic Cost Modeling: Statistical Software Engineering Approach
Prevention of DDoS and SQL Injection Attack By Prepared Statement and IP Blocking
Mining useful information from the web is a popular research topic. This paper applies the Naive Bayes, Back Propagation, and Deep Learning algorithms to produce ranking scores on the training data set. To construct the extraction model, the authors propose a new algorithm for training support vector machines, referred to as Deep Learning, which requires the solution of a very large Quadratic Programming (QP) optimization problem. The Naive Bayes probabilistic model and the back propagation algorithm were also evaluated, and their results are compared for efficiency. The authors further exploit several data sets of dissimilar entities to measure the strength of association between words, with co-occurrence statistics computed as part of this analysis.
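The abstract does not state which association measure is used; the sketch below is a hypothetical illustration of word co-occurrence counting with a PMI-style score, one common way to quantify word association strength.

```python
# Hypothetical sketch: word co-occurrence counts and a PMI-style association
# score over a tiny corpus. The paper's exact measure is not specified; PMI is
# used here purely for illustration.
import math
from collections import Counter
from itertools import combinations

corpus = [
    "deep learning extracts web information",
    "naive bayes ranks web documents",
    "support vector machines solve large optimization problems",
]

word_counts = Counter()
pair_counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    word_counts.update(tokens)
    # count unordered co-occurrences within the same sentence
    pair_counts.update(frozenset(p) for p in combinations(set(tokens), 2))

total_words = sum(word_counts.values())
total_pairs = sum(pair_counts.values())

def pmi(w1, w2):
    """Pointwise mutual information between two words (higher = stronger association)."""
    p_w1 = word_counts[w1] / total_words
    p_w2 = word_counts[w2] / total_words
    p_pair = pair_counts[frozenset((w1, w2))] / total_pairs
    return math.log2(p_pair / (p_w1 * p_w2)) if p_pair > 0 else float("-inf")

print(pmi("web", "information"))
```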
The aim of the project is to realize the emerging concept of the Internet of Things, where every “Thing” is connected over a network for data acquisition. The acquired data is then used to make better decisions in a real-time environment. Early detection of fire accidents is crucial to preventing heavy casualties and damage in any industry. Present fire detection systems raise an alarm at the site of detection. A smart system can detect a fire accident and also take an action or control the output of the system in real time, based on data acquired from sensors. The role of IoT is not just controlling the output of a single system in real time, but doing the same for multiple systems by connecting them all to a centralized server over a network. Since all data is available on a centralized server, the systems can be monitored and controlled from remote locations.
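The abstract describes sensor nodes reporting to a centralized server that both monitors and reacts; the sketch below is a minimal, self-contained illustration of that idea under assumed thresholds and an in-memory stand-in for the server, not the paper's implementation.

```python
# Hypothetical sketch of centralized monitoring: each node reports sensor
# readings to a server object, which raises an alert and triggers a control
# action when a threshold is crossed. Names and thresholds are assumptions.
from dataclasses import dataclass

TEMP_THRESHOLD_C = 60.0   # assumed temperature threshold for a fire event
SMOKE_THRESHOLD = 300     # assumed smoke-sensor reading threshold

@dataclass
class Reading:
    node_id: str
    temperature_c: float
    smoke_level: int

class CentralServer:
    """Collects readings from many nodes and reacts in one place."""
    def __init__(self):
        self.log = []

    def receive(self, reading: Reading):
        self.log.append(reading)
        if reading.temperature_c > TEMP_THRESHOLD_C or reading.smoke_level > SMOKE_THRESHOLD:
            self.trigger_action(reading)

    def trigger_action(self, reading: Reading):
        # In a real deployment this could switch off equipment or start sprinklers.
        print(f"ALERT from {reading.node_id}: temp={reading.temperature_c} C, smoke={reading.smoke_level}")

server = CentralServer()
server.receive(Reading("node-7", 72.5, 410))   # simulated reading from one sensor node
```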
In today's advancing world, data is increasing day by day, and because of its vast size it has become very difficult to secure all of it. Interest in data security is growing with the development of network technology, and with the rise of cloud computing, an area rich in information sources and network traffic that needs protection. In this paper, the authors present an overview of big data security problems, including their varieties and challenges, and also discuss privacy concerns in big data. The methodology adopted for protection is attribute selection, which is determined by the background of the big data as well as by data mining and big data features.
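The abstract names attribute selection as the protection methodology without giving an algorithm; the sketch below is only an illustrative assumption, ranking discrete attributes by a simple mutual-information score against a sensitivity label.

```python
# Illustrative sketch only: rank discrete attributes by mutual information with
# a sensitivity label. The paper's actual attribute-selection procedure is not
# described in the abstract.
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) for two equal-length discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# toy records: each entry is an attribute column; `label` marks record sensitivity
records = {
    "zip_code": ["a", "a", "b", "b", "c", "c"],
    "age_band": ["x", "y", "x", "y", "x", "y"],
}
label = ["hi", "hi", "lo", "lo", "hi", "lo"]

ranking = sorted(records, key=lambda a: mutual_information(records[a], label), reverse=True)
print(ranking)  # attributes most informative about the sensitive label come first
```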
Today's Web has no general mechanisms to make digital artifacts, for example datasets, code, texts, and images, verifiable and permanent. Digital artifacts cannot be made unchangeable, and there is no proper technique to enforce such permanence. These shortcomings have a strong negative effect on the ability to reproduce the results of processes that depend on web resources, and they also affect areas such as science, where reproducibility is essential. To tackle this issue, the authors propose trusty URIs containing cryptographic hash values. They also describe how trusty URIs can be used for the verification of digital artifacts, in a way that is independent of the underlying distribution design, in the case of structured data files such as Nanopublications. They show how the content of those documents becomes immutable, and how references to external digital artifacts can be added, thereby extending the scope of verifiability to the entire reference tree. This approach adheres to the core principles of the Web, such as openness and decentralization, and is fully compatible with existing standards and protocols. An evaluation shows that these design objectives are indeed achieved by their approach, which remains practical even for large documents.
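The core idea is that a URI carrying a cryptographic digest of the artifact it names lets anyone verify the artifact later. The sketch below is a minimal illustration of that principle, assuming a plain SHA-256 hex digest appended to a URI; the actual trusty URI specification defines its own module codes and encoding, so this is not a conformant implementation.

```python
# Minimal, non-conformant sketch of the hash-in-URI idea behind trusty URIs.
import hashlib

def make_trusty_like_uri(base_uri: str, content: bytes) -> str:
    """Append a content digest to a URI so the referenced artifact can be verified."""
    digest = hashlib.sha256(content).hexdigest()
    return f"{base_uri}.{digest}"

def verify(uri: str, content: bytes) -> bool:
    """Recompute the digest and compare it with the one embedded in the URI."""
    return uri.rsplit(".", 1)[1] == hashlib.sha256(content).hexdigest()

artifact = b"example nanopublication content"
uri = make_trusty_like_uri("https://example.org/np1", artifact)
print(uri, verify(uri, artifact))   # verification succeeds while the artifact is unchanged
```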
Deep Learning is a trending area of research in Machine Learning and Pattern Recognition. It focuses on Machine Learning tools and techniques and applies them to solving problems that elude human or conventional artificial reasoning. Deep Learning is achieved by learning over a cascade of many layers. With its data-driven representation learning, it handles many real-world problems, such as machine translation, object recognition and localization, speech recognition, image caption generation, distributed representations for text, Natural Language Processing, and image classification. Traditional computing faces challenges in dealing with high-dimensional and streaming data, semantic indexing, and the scalability of models. The analysis of streaming and fast-moving input data is valuable in tracking tasks such as fraud detection. In view of these challenges, extensive further research is essential to adapt Deep Learning algorithms to the issues associated with Big Data. In particular, Deep Learning algorithms must be adapted to handle streaming data in order to cope with ever-increasing volumes of continuous input. A key benefit of Deep Learning is its ability to analyze and learn from massive amounts of unsupervised data, which makes it a worthy tool for Big Data Analytics, where the input data is mostly unlabeled and uncategorized. This paper provides a detailed survey of semantic indexing techniques that enable fast search and information retrieval.
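As a rough illustration of the semantic-indexing idea the survey discusses, documents can be mapped to low-dimensional vectors and retrieval reduced to a nearest-neighbour search; in the sketch below, random vectors stand in for the output of a trained deep model, so the numbers themselves are placeholders.

```python
# Sketch of semantic indexing: represent documents as vectors and retrieve the
# most similar ones by cosine similarity. Random vectors stand in for learned
# deep representations.
import numpy as np

rng = np.random.default_rng(0)
num_docs, dim = 1000, 64
doc_vectors = rng.normal(size=(num_docs, dim))                  # assumed learned representations
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

def search(query_vector: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Return indices of the documents whose vectors are most similar to the query."""
    q = query_vector / np.linalg.norm(query_vector)
    scores = doc_vectors @ q                                     # cosine similarity
    return np.argsort(-scores)[:top_k]

print(search(rng.normal(size=dim)))   # ids of the five semantically closest documents
```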