DNA cryptography is a cryptographic technique in which DNA is used to store and transmit information. Owing to the dense and seemingly random arrangement of information in DNA sequences, data security can be implemented efficiently. Although much research has been done and many data-hiding algorithms have been developed, encryption based on DNA sequences appears to be an efficient approach for satisfying present information security needs. In this paper, the traditional Playfair cipher is applied to DNA cryptography. The key matrix of the traditional cipher contains 25 of the 26 alphabets arranged in a 5x5 matrix based on the secret key. In the present paper, the key matrix instead uses the 64 DNA codons arranged in an 8x8 matrix, thereby enhancing security. The encryption technique has been modified while retaining some of the basic rules of the classical Playfair algorithm; arranging the DNA codons into an 8x8 matrix eliminates the limitations of the Playfair technique and strengthens its security features. The proposed algorithm is efficient, as it uses a LOOKUP table of 64 values (A..Z, a..z, 0..9, space, period) that is randomly rearranged for each transmission. The proposed method has been implemented and compared with other popular ciphers on the basis of time complexity. The main objective of this paper is to provide a security system that uses DNA as a medium.
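A minimal sketch of the idea described in this abstract: the 64 symbols of the LOOKUP table are randomly paired with the 64 DNA codons, the codons are laid out in an 8x8 key matrix, and digraphs of codons are enciphered with the classical Playfair rules. The key-mixing scheme and function names below are illustrative assumptions, not the authors' exact construction.

```python
# Sketch of an 8x8 DNA-codon Playfair cipher with a randomized LOOKUP table.
import itertools
import random

BASES = "ACGT"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]   # 64 codons
CHARSET = ([chr(c) for c in range(ord('A'), ord('Z') + 1)] +
           [chr(c) for c in range(ord('a'), ord('z') + 1)] +
           [str(d) for d in range(10)] + [' ', '.'])                # 64 symbols

def build_tables(seed):
    """Randomly pair the 64 symbols with the 64 codons (the LOOKUP table)
    and lay the codons out in an 8x8 Playfair-style key matrix."""
    rng = random.Random(seed)
    codons = CODONS[:]
    rng.shuffle(codons)
    lookup = dict(zip(CHARSET, codons))              # symbol -> codon
    grid = [codons[r * 8:(r + 1) * 8] for r in range(8)]
    pos = {grid[r][c]: (r, c) for r in range(8) for c in range(8)}
    return lookup, grid, pos

def encrypt(plaintext, seed=1234):
    lookup, grid, pos = build_tables(seed)
    codons = [lookup[ch] for ch in plaintext if ch in lookup]
    if len(codons) % 2:                              # pad odd-length messages
        codons.append(lookup[' '])
    out = []
    for a, b in zip(codons[::2], codons[1::2]):      # classic Playfair digraph rules
        (r1, c1), (r2, c2) = pos[a], pos[b]
        if r1 == r2:                                 # same row: shift right
            out += [grid[r1][(c1 + 1) % 8], grid[r2][(c2 + 1) % 8]]
        elif c1 == c2:                               # same column: shift down
            out += [grid[(r1 + 1) % 8][c1], grid[(r2 + 1) % 8][c2]]
        else:                                        # rectangle: swap columns
            out += [grid[r1][c2], grid[r2][c1]]
    return "".join(out)

print(encrypt("Hello World."))
```

A fresh seed per transmission plays the role of the randomly rearranged LOOKUP table; decryption would apply the inverse digraph rules with the same seed.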
Social image search is becoming increasingly popular, and a great deal of research is being carried out on image search systems. Some image searching techniques match only the textual or only the visual features of an image, while many others combine visual and textual features for better performance. Hypergraph learning techniques retrieve more relevant images, and some image search algorithms make use of SIFT features. This paper shows that Gabor features combined with an SVM give better results than SIFT.
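The following is a minimal sketch of the Gabor-with-SVM pipeline the paper compares against SIFT: a small Gabor filter bank summarizes each image, and the resulting feature vectors train an SVM. The filter-bank parameters and the synthetic stripe images are illustrative assumptions, not the authors' dataset or settings.

```python
# Sketch: Gabor filter-bank features fed to an SVM classifier.
import numpy as np
import cv2
from sklearn.svm import SVC

def gabor_features(gray, ksize=31):
    """Mean/std responses of a small Gabor filter bank over one grayscale image."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):          # 4 orientations
        for lambd in (8.0, 16.0):                         # 2 wavelengths
            kern = cv2.getGaborKernel((ksize, ksize), 4.0, theta, lambd, 0.5, 0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

def stripes(vertical):
    """Toy stand-in images: vertical vs horizontal stripe textures."""
    img = np.zeros((64, 64), np.uint8)
    if vertical:
        img[:, ::4] = 255
    else:
        img[::4, :] = 255
    return img

X = np.array([gabor_features(stripes(v)) for v in ([True] * 20 + [False] * 20)])
y = np.array([1] * 20 + [0] * 20)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```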
A video re-coloring and wavelength adjustment algorithm for the anomalous trichromacy subtype Tritanomaly is presented in this paper. A method to compensate for the weaknesses faced by anomalous trichromats is proposed, and an interface is built that bridges the gap so that color-blind viewers can watch visual media without hindrance. The RGB-to-LMS transformation is used for this purpose. A detailed study of the RGB-to-LMS theory, together with the equations and pseudocode pertaining to Tritanomaly, i.e. blue-yellow weakness, is carried out using a color compensation algorithm. The merits of the approach include easy detection, RGB-to-LMS mapping, and frame-level mapping for easy access to videos.
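The sketch below illustrates the kind of RGB-to-LMS pipeline the abstract refers to: convert a frame from RGB to LMS, simulate the tritan (blue-yellow) deficiency, and add the lost difference back into the visible channels in a daltonization-style compensation step. The matrices are the values commonly cited in the daltonization literature and are shown here as assumptions, not the authors' exact coefficients.

```python
# Sketch: tritan simulation and compensation of one video frame via RGB->LMS.
import numpy as np

RGB2LMS = np.array([[17.8824,    43.5161,   4.11935],
                    [ 3.45565,   27.1554,   3.86714],
                    [ 0.0299566,  0.184309, 1.46709]])
LMS2RGB = np.linalg.inv(RGB2LMS)

# Tritanopia simulation in LMS space (S channel reconstructed from L and M).
TRITAN_SIM = np.array([[ 1.0,       0.0,      0.0],
                       [ 0.0,       1.0,      0.0],
                       [-0.395913,  0.801109, 0.0]])

# Error-modification matrix: pushes the lost blue-yellow information into R/G.
ERR2MOD = np.array([[0.0, 0.0, 0.0],
                    [0.7, 1.0, 0.0],
                    [0.7, 0.0, 1.0]])

def compensate_frame(frame_rgb):
    """frame_rgb: float array of shape (H, W, 3) with values in [0, 1]."""
    flat = frame_rgb.reshape(-1, 3).T                 # 3 x N column vectors
    lms = RGB2LMS @ flat
    simulated = LMS2RGB @ (TRITAN_SIM @ lms)          # what a tritan viewer sees
    error = flat - simulated                          # information that was lost
    corrected = flat + ERR2MOD @ error                # redistribute into R and G
    return np.clip(corrected.T.reshape(frame_rgb.shape), 0.0, 1.0)

# Example: compensate one tiny synthetic 2x2 frame.
frame = np.array([[[0.1, 0.2, 0.9], [0.9, 0.9, 0.1]],
                  [[0.2, 0.8, 0.8], [0.5, 0.5, 0.5]]])
print(compensate_frame(frame))
```

Applying the same function frame by frame gives the re-colored video stream described in the abstract.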
Heart diseases are a frequent cause of death. It is difficult to diagnose heart problems at every medical centre because of the lack of technology and the cost of affording it, a problem that is especially acute in rural areas. It is therefore important to develop an affordable and reliable diagnostic technology. This work uses Artificial Neural Networks (ANNs) to develop an intelligent system that can diagnose whether or not a patient is suffering from heart disease. The dataset is acquired from the UCI Machine Learning Repository, and the training set is fed into the network. The error back-propagation algorithm is used for learning, and the ANN classifies between the absence and presence of disease. Accuracy is the performance measure considered, and the network targets are coded as 0 (disease absent) and 1 (disease present). The results obtained from the back-propagation algorithm with varying numbers of neurons in the hidden layer are compared in this work. The system gives its best accuracy of 80.27% when the hidden layer has four neurons, along with high sensitivity and specificity. The system thus provides an efficient application of neural networks for detecting heart disease.
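A minimal sketch of the kind of back-propagation network described above: one hidden layer of four neurons classifying disease present (1) vs. absent (0). The UCI features are stood in for by random placeholder data so the snippet runs offline; the activation, solver, and split are assumptions, not the authors' exact experimental setup.

```python
# Sketch: back-propagation classifier with a four-neuron hidden layer.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(303, 13))                 # placeholder for the 13 UCI attributes
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=303) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
scaler = StandardScaler().fit(X_train)

# Four hidden neurons, sigmoid activations, trained by back-propagation (SGD).
net = MLPClassifier(hidden_layer_sizes=(4,), activation="logistic",
                    solver="sgd", learning_rate_init=0.1,
                    max_iter=2000, random_state=0)
net.fit(scaler.transform(X_train), y_train)
print("test accuracy:", net.score(scaler.transform(X_test), y_test))
```

Varying `hidden_layer_sizes` reproduces the comparison over hidden-layer widths reported in the abstract.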
The most common approach to collecting tolls has been to have the driver stop and pay a toll collector sitting in a tollbooth. However, this collection process is now viewed as unfeasible, principally because of its adverse impact on traffic flow and its high collection costs, not to mention its effects on the environment. The problems associated with traditional toll collection methods motivated more sophisticated approaches, and after progressive developments, Electronic Toll Collection (ETC) systems have proved able to deal with these shortcomings. Although many different ETC schemes are in operation across the globe, the fundamental goal is to automate vehicle identification and assess tolls with no action required by the driver. This research therefore focused on studying the varied approaches to electronic tolling while working towards a feasible solution. The prototype employs Radio Frequency Identification (RFID): an RFID chip is affixed at a corner of the vehicle's windshield, and as the vehicle passes through the toll junction, the chip is scanned by RFID readers calibrated to the same frequency and mounted on either side. The chip's unique ID is sent to the server via an on-board WiFi module and used as an index to look up the database, fetch the associated user's details, and assess the toll accordingly. The user is then notified of the transaction via SMS and/or email. This allows speedy passage of vehicles and eliminates heavy congestion. As an added benefit, it also eliminates traditional book-keeping by allowing authorized personnel to access daily logs anywhere, anytime, which means centralized control, improved audit, vehicle tracking, and more. An online portal was also provisioned, allowing users to register, check their billing history, choose appropriate payment methods, recharge their accounts, or pay their dues.
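A minimal sketch of the server-side flow described above: a tag ID arrives from the roadside reader, is looked up in the database, the toll is deducted, the transaction is logged, and the user is notified. The schema, toll amount, and the `notify()` stub are illustrative assumptions, not the project's actual code.

```python
# Sketch: database lookup and toll deduction when an RFID tag is scanned.
import sqlite3
from datetime import datetime

TOLL = 50.0  # assumed flat toll per crossing

def setup(conn):
    conn.execute("CREATE TABLE users (tag_id TEXT PRIMARY KEY, name TEXT, "
                 "phone TEXT, balance REAL)")
    conn.execute("CREATE TABLE logs (tag_id TEXT, amount REAL, ts TEXT)")
    conn.execute("INSERT INTO users VALUES ('3F00A1B2', 'Asha', '+9190000', 500.0)")

def notify(phone, message):
    print(f"[SMS to {phone}] {message}")          # stand-in for an SMS/email gateway

def handle_scan(conn, tag_id):
    """Called whenever a reader reports a tag ID at the toll junction."""
    row = conn.execute("SELECT name, phone, balance FROM users WHERE tag_id=?",
                       (tag_id,)).fetchone()
    if row is None:
        return "unregistered vehicle"
    name, phone, balance = row
    conn.execute("UPDATE users SET balance=? WHERE tag_id=?",
                 (balance - TOLL, tag_id))
    conn.execute("INSERT INTO logs VALUES (?, ?, ?)",
                 (tag_id, TOLL, datetime.now().isoformat()))
    notify(phone, f"{name}: toll of {TOLL} charged, balance {balance - TOLL:.2f}")
    return "barrier open"

conn = sqlite3.connect(":memory:")
setup(conn)
print(handle_scan(conn, "3F00A1B2"))
```

The `logs` table stands in for the daily logs that authorized personnel can query through the portal.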
E-commerce is a rapidly growing business worldwide, with revenues increasing manifold every year. However, multiple e-commerce websites sell the same products, and it is difficult for the user to choose the best website because each product carries a large number of reviews that are time consuming to read. Machine learning tools and techniques such as classification have made it easy to process text datasets and extract insights from them. In this paper, a simple text mining and processing approach is illustrated on a sample dataset of product reviews from Amazon. For the classification model, a probabilistic classifier, Naïve Bayes, is used to classify user reviews into three categories: positive, neutral, and negative. This helps analyse the sentiments of users who have already purchased the product and assists prospective customers. The final output consists of an overall review and rating of the product, giving the user clear knowledge about the product and whether or not to make the purchase. Applying the Naïve Bayes technique, the user reviews are classified successfully with an accuracy of 72.46%.
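A minimal sketch of the review-classification step: a bag-of-words Naïve Bayes model assigns reviews to the positive, neutral, or negative class. The toy reviews below are placeholders for the Amazon dataset used in the paper, and the preprocessing choices are assumptions.

```python
# Sketch: three-class sentiment classification of reviews with Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_reviews = [
    "great product works perfectly", "love it excellent quality",
    "it is okay does the job", "average nothing special",
    "terrible waste of money", "stopped working very disappointed",
]
train_labels = ["positive", "positive", "neutral", "neutral",
                "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_reviews, train_labels)

new_reviews = ["excellent value great quality", "very disappointed broke quickly"]
for review, label in zip(new_reviews, model.predict(new_reviews)):
    print(f"{label:>8}: {review}")
```

Aggregating the predicted labels over all reviews of a product yields the overall review and rating shown to the user.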
The enormous development in information and communications technology has increased the need for digital data to be stored and shared securely. This immense quantity of data, if made publicly available, can be employed for growth and development; however, data in its raw form contains sensitive information, and advances in data mining techniques have increased breaches of privacy and confidentiality. As a consequence, the field of privacy-preserving data mining emerged, which deals with the efficient application of data mining functionality without sacrificing the privacy of the data. Among the various data mining techniques, associative classification is favourable for classification purposes. This paper studies associative classification techniques together with privacy-preserving techniques, along with their pros and cons. In addition, a related study of privacy-preserving associative classification is presented with the aim of encouraging further research in this area. Furthermore, privacy-preserving associative classification has been implemented on various datasets with accuracy as the evaluation parameter, and it is concluded that as privacy increases, accuracy degrades due to data transformation.
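The sketch below illustrates the privacy/accuracy trade-off noted in the conclusion: categorical attributes are perturbed by randomized response before training, and accuracy drops as the flip probability (the privacy level) grows. A decision tree stands in for the associative classifier, and the synthetic data and perturbation scheme are assumptions for illustration only, not the paper's implementation.

```python
# Sketch: accuracy degradation under increasing randomized-response perturbation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(2000, 6))                  # 6 categorical attributes
y = ((X[:, 0] + X[:, 1]) % 3 == 0).astype(int)          # class depends on attributes

def randomize(X, p):
    """Randomized response: replace each attribute value with probability p."""
    noise = rng.integers(0, 3, size=X.shape)
    mask = rng.random(X.shape) < p
    return np.where(mask, noise, X)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for p in (0.0, 0.2, 0.4, 0.6):
    clf = DecisionTreeClassifier(random_state=0).fit(randomize(X_tr, p), y_tr)
    print(f"flip probability {p:.1f} -> accuracy {clf.score(X_te, y_te):.2f}")
```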