i-manager's Journal on Computer Science (JCOM)


Volume 4 Issue 2 June - August 2016

Article

Installation of Alchemi.NET in Computational Grid

Neeraj Kumar Rathore*
Assistant Professor, Department of Computer Science and Engineering, Jaypee University of Engineering & Technology, Guna, Madhya Pradesh, India.
Rathore, N. (2016). Installation of Alchemi.NET in Computational Grid. i-manager’s Journal on Computer Science, 4(2), 1-5. https://doi.org/10.26634/jcom.4.2.8119

Abstract

The Grid is a rapidly growing virtualized, distributed environment for sharing resources. This growth places greater demands on many techniques, such as load balancing and security, and fault tolerance is among the most demanding requirements. Several schemes and techniques exist for making the Grid fault tolerant; checkpointing is one of them. To date, much of the middleware/software in the Grid/Cloud environment is not fully fault tolerant, and different middleware offers different levels of fault tolerance. Some middleware, such as Alchemi.NET, lacks a robust fault-tolerance mechanism. Therefore, in this research work, Alchemi.NET has been chosen, and a scenario of the checkpointing algorithm is shown to tolerate faults in the Grid. The main objective of the paper is to present the installation and computation steps of Alchemi.NET for simulation in the Grid field.
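The paper walks through the installation steps themselves; as a rough illustration of the checkpointing idea referred to above, the sketch below saves a grid thread's progress after each unit of work so that a failed execution can resume from the last checkpoint instead of restarting. It is written in Python only for brevity (Alchemi.NET itself is a C#/.NET framework), and the checkpoint file name and compute() function are hypothetical stand-ins, not part of Alchemi's API.

import json
import os

CHECKPOINT_FILE = "thread_checkpoint.json"  # hypothetical checkpoint location

def compute(item):
    return item * item  # stand-in for a grid thread's real computation

def run_with_checkpointing(work_items):
    """Process work items, persisting progress so a failed run can resume."""
    start, results = 0, []
    if os.path.exists(CHECKPOINT_FILE):  # resume from the last checkpoint
        with open(CHECKPOINT_FILE) as f:
            state = json.load(f)
        start, results = state["next_index"], state["results"]

    for i in range(start, len(work_items)):
        results.append(compute(work_items[i]))
        # Save state after every unit: at most one unit is redone on failure.
        with open(CHECKPOINT_FILE, "w") as f:
            json.dump({"next_index": i + 1, "results": results}, f)

    os.remove(CHECKPOINT_FILE)  # job finished; the checkpoint is obsolete
    return results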

Research Paper

Detection of Anomaly Based Application Layer DDoS Attacks Using Machine Learning Approaches

M.S.P.S. Vani Nidhi*, K. Munivara Prasad**
* PG Scholar, Department of Computer Networks and Information Security, Sree Vidyanikethan Engineering College, Tirupathi, Andhra Pradesh, India.
** Assistant Professor, Department of Computer Science and Engineering, Sree Vidyanikethan Engineering College, Tirupathi, Andhra Pradesh, India.
Nidhi, M.S.P.S.V., and Prasad, K.M. (2016). Detection of Anomaly Based Application Layer DDoS Attacks Using Machine Learning Approaches. i-manager’s Journal on Computer Science, 4(2), 6-13. https://doi.org/10.26634/jcom.4.2.8120

Abstract

DDoS (Distributed Denial of Service) attacks are a major security threat. These attacks mainly originate from the network layer or the application layer of compromised systems connected to the network, and their main intention is to deny or disrupt the services or network bandwidth of the victim or target system. Nowadays, application layer DDoS attacks pose a serious threat to the Internet, and differentiating between legitimate and malicious traffic is a very difficult task. A lot of research work has been done on detecting these attacks using machine learning approaches. In this paper, the authors propose machine learning metrics for detecting application layer DDoS attacks.
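As a hedged illustration of anomaly-based detection of the kind described above, the sketch below trains a scikit-learn classifier on per-source traffic features. The three features (requests per second, mean inter-arrival time, entropy of requested URLs) and the synthetic training data are assumptions chosen for illustration; they are not the specific metrics proposed in the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic per-source samples: [req/s, mean inter-arrival (s), URL entropy].
legit  = rng.normal([ 5.0, 0.80, 3.5], 0.5, size=(200, 3))  # label 0: normal
attack = rng.normal([80.0, 0.05, 0.4], 0.5, size=(200, 3))  # label 1: attack
X = np.vstack([legit, attack])
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new source with a very high request rate, tiny inter-arrival gaps, and
# low URL entropy -- the profile of an HTTP-flood application layer attack.
print(clf.predict([[75.0, 0.04, 0.5]]))  # -> [1], flagged as an attack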

Research Paper

Effective Bug Triage with Software Data Reduction Techniques Using Clustering Mechanism

R. Nishanth Prabhakar*, K.S. Ranjith**
* Postgraduate, Department of Computer Science and Engineering, Sree Vidyanikethan Engineering College, Tirupathi, Andhra Pradesh, India.
** Assistant Professor, Department of Computer Science and Engineering, Sree Vidyanikethan Engineering College, Tirupathi, Andhra Pradesh, India.
Prabhakar, R.N., and Ranjith, K.S. (2016). Effective Bug Triage with Software Data Reduction Techniques Using Clustering Mechanism. i-manager’s Journal on Computer Science, 4(2), 14-22. https://doi.org/10.26634/jcom.4.2.8121

Abstract

Bug triage is the most important step in handling the bugs that occur during a software process. In manual bug triage, a triager assigns each received bug to a tester or a developer. Since bugs arrive in huge numbers, manual triage is difficult to carry out and consumes many resources, both in man hours and in cost, so there is a need to reduce this expenditure. A mechanism is therefore proposed that enables a better and more efficient triage process by reducing the size of the bug data sets; it combines clustering techniques with selection techniques. On bug data sets retrieved from the open source bug repository Bugzilla, this approach proved more efficient than the manual bug triage process.
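A common recipe for this kind of data reduction pairs instance selection with feature selection before training the triage classifier. The sketch below illustrates that general idea with scikit-learn, using k-means to keep one representative report per cluster and a chi-square test to keep informative terms; the tiny data set and the particular techniques are illustrative assumptions, not the authors' exact mechanism.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB

reports = ["crash on save dialog", "save button crashes app",
           "login page times out", "timeout when logging in",
           "font rendering is blurry", "blurry text in editor"]
developers = np.array(["alice", "alice", "bob", "bob", "carol", "carol"])

X = TfidfVectorizer().fit_transform(reports)

# Instance selection: cluster the reports, keep one representative of each.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
keep = [int(np.argmax(km.labels_ == c)) for c in range(3)]

# Feature selection: keep the terms most associated with the assignees.
sel = SelectKBest(chi2, k=5).fit(X[keep], developers[keep])

# Train the triage classifier on the reduced data set only.
clf = MultinomialNB().fit(sel.transform(X[keep]), developers[keep])
print(clf.predict(sel.transform(X[1])))  # recommend a developer for report 1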

Research Paper

A Hybrid Clustering with Side Information in Text Mining

T. Naveen Kumar*, Ramadevi**
* PG Scholar, Department of Computer Science and Engineering, Sree Vidyanikethan Engineering College, Tirupati, India.
** Assistant Professor, Department of Computer Science and Engineering, Sree Vidyanikethan Engineering College, Tirupati, India.
Kumar, T.N., and Ramadevi (2016). A Hybrid Clustering with Side Information in Text Mining. i-manager’s Journal on Computer Science, 4(2), 23-30. https://doi.org/10.26634/jcom.4.2.8122

Abstract

In many online forums, a large amount of side data or meta information is available. This metadata comes in different kinds, for example the links present in a file, user-access behaviour from blogs, document provenance information, and other attributes embedded in the text document. These meta attributes carry a large amount of information that is useful for clustering; however, metadata also adds noise to the mining process, so it is difficult to incorporate. The existing COATES algorithm was designed for this clustering approach, but its k-means step limits cluster quality, since k-means can produce the wrong number of clusters, clusters of unequal size, empty clusters, and outliers. The authors propose a Hybrid-COATES algorithm, which combines CURE with the COATES algorithm for an efficient and effective clustering approach: for mining text data with the help of metadata or side information, the CURE algorithm is more capable than k-means. The Hybrid-COATES method addresses the scalability problem and improves the quality of the clustering results.
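The sketch below illustrates the general idea of clustering text together with side information: TF-IDF content features are concatenated with auxiliary attributes before clustering. scikit-learn ships no CURE implementation, so hierarchical AgglomerativeClustering stands in for it here (like CURE, it does not force the equal-sized spherical clusters k-means tends to produce); this shows only the combined feature space, not the COATES or Hybrid-COATES algorithm itself.

import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["grid computing with checkpoints", "fault tolerance in grids",
        "cooking pasta at home", "easy home pasta recipes"]
# Hypothetical side information per document, e.g. counts of outgoing links
# to two known domains.
side = np.array([[3, 0], [2, 0], [0, 4], [0, 5]], dtype=float)

content = TfidfVectorizer().fit_transform(docs)
side_scaled = side / side.max()  # keep side info on a comparable scale
features = hstack([content, csr_matrix(side_scaled)]).toarray()

labels = AgglomerativeClustering(n_clusters=2).fit_predict(features)
print(labels)  # the grid documents and the cooking documents separate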

Research Paper

Optimal Processing Keyword Cover Search in Spatial Database

K.S. Dadapeer*, S. Salam**, T.V. Rao***
* PG Scholar, Department of Computer Science and Engineering, Sree Vidyanikethan Engineering College, Tirupati, India.
**-*** Professor, Department of Computer Science and Engineering, Sree Vidyanikethan Engineering College, Tirupati, India.
Dadapeer, K.S., Salam, S., and Rao, T.V. (2016). Optimal Processing Keyword Cover Search in Spatial Database. i-manager’s Journal on Computer Science, 4(2), 31-38. https://doi.org/10.26634/jcom.4.2.8123

Abstract

Keywords are used in different ways. In text editing and a DBMS (Database Management System), a keyword is used to find certain records; in programming languages, a keyword is a reserved word with a fixed meaning; and keywords are also used to search for relevant web pages through a search engine. In a spatial database holding a set of objects, each object is associated with keywords such as hotels, restaurants, etc. The issue here is the closest keyword cover search, also called a keyword cover, which covers a set of query keywords with minimum inter-object distance. Nowadays, importance is given to keyword ratings to make better decisions, which led to a generic version of the keyword cover, called the best keyword cover, in which the objects are evaluated on both inter-object distance and keyword rating. The baseline algorithm simulates closest keywords search, combining objects from each query keyword to generate candidate covers; its performance decreases because the number of candidates grows as the number of query keywords increases. To overcome this drawback, a scalable algorithm called Keyword Nearest Neighbour Expansion (K-NNE) was introduced as a variation on the previous approach; K-NNE gradually reduces the number of candidate covers. In previous work with minimal tree covers, the query keywords instead identify a subgraph rather than a minimal tree, which is more informative to users: nodes that are close to each other in the tree have a strong relationship, while nodes far away from each other have a weak relationship with the content nodes. With all keywords given the same importance, results with strong relationships, i.e., the shortest distance between each pair of selected nodes, are preferred over weak ones. Because tree-based methods must handle both content and non-content nodes in the results, and input graphs can contain hundreds of thousands of nodes, they incur high time and memory complexity.
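To make the objective concrete, the sketch below solves a tiny closest keyword cover instance by brute force: choose one object per query keyword so that the maximum pairwise distance of the chosen set is minimised (one common reading of "minimum inter-object distance"; keyword ratings are omitted here). The baseline and K-NNE algorithms discussed above exist precisely to prune this exponential enumeration.

from itertools import product
from math import dist

# Hypothetical spatial objects: (x, y, keyword).
objects = [(0, 0, "hotel"), (9, 9, "hotel"),
           (1, 0, "restaurant"), (8, 9, "restaurant"),
           (0, 1, "bar"), (9, 8, "bar")]

def closest_keyword_cover(objects, query):
    by_kw = {kw: [(x, y) for x, y, k in objects if k == kw] for kw in query}
    best, best_diam = None, float("inf")
    # Enumerate every combination of one object per keyword (exponential).
    for combo in product(*by_kw.values()):
        diam = max(dist(a, b) for a in combo for b in combo)
        if diam < best_diam:
            best, best_diam = combo, diam
    return best, best_diam

print(closest_keyword_cover(objects, ["hotel", "restaurant", "bar"]))
# -> (((0, 0), (1, 0), (0, 1)), ~1.414): the tight cluster near the origin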