i-manager's Journal on Information Technology (JIT)


Volume 2 Issue 1 December - February 2013

Article

Using Data Mining For Prediction: A Conceptual Analysis

Durgesh M. Sharma*, Ashish K. Sharma**, Sangita A. Sharma***
*-**-*** Assistant Professor, Manoharbhai Patel Institute of Engineering & Technology (MIET), Gondia, India.
Sharma, D. M., Sharma, A. K., and Sharma, S. A. (2013). Using Data Mining For Prediction: A Conceptual Analysis. i-manager’s Journal on Information Technology, 2(1), 1-9. https://doi.org/10.26634/jit.2.1.2136

Abstract

In today's intensely competitive environment, organizations must secure high customer satisfaction in order to survive. This requires identifying customers and properly understanding their needs. However, targeting the customer is not easy, as businesses hold massive amounts of data that must be properly analyzed and handled. Moreover, older forecasting methods offer no obvious advantage in matching individual customers to items and are no longer adaptable to every business condition. Data mining is a powerful technique that can serve this purpose: it is the automated extraction of data from large databases. Data mining offers several techniques for this task, such as attribute relevance analysis, decision trees, clustering, and prediction. To this end, this paper suggests the use of the data mining techniques of attribute relevance analysis and decision trees for prediction.
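
To illustrate the kind of decision-tree prediction the abstract refers to, the following minimal sketch (not taken from the paper; the customer attributes, data values, and model settings are assumptions for demonstration) ranks attributes by a simple relevance measure and trains a decision tree to predict purchase behaviour with scikit-learn.

# Illustrative sketch only: a decision tree used for customer purchase prediction.
# The feature names and data are hypothetical, not taken from the paper.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import mutual_info_classif

# Hypothetical customer records: age, annual income, prior purchases, and
# whether the customer bought the promoted item (the value to predict).
data = pd.DataFrame({
    "age":             [23, 45, 31, 52, 36, 28, 60, 41],
    "annual_income":   [30, 80, 45, 95, 60, 35, 70, 85],   # in thousands
    "prior_purchases": [1, 6, 2, 8, 4, 1, 5, 7],
    "bought":          [0, 1, 0, 1, 1, 0, 1, 1],
})
X, y = data.drop(columns="bought"), data["bought"]

# A rough analogue of attribute relevance analysis: rank attributes by
# mutual information with the target and inspect the most informative ones.
relevance = pd.Series(mutual_info_classif(X, y, random_state=0), index=X.columns)
print("Attribute relevance:\n", relevance.sort_values(ascending=False))

# Train a decision tree on the (small, illustrative) dataset and predict.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)
print("Predicted purchases for held-out customers:", tree.predict(X_test))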

Article

Privacy Between Client And Service Providers

M. Ramesh Reddy*, A. Ramkishore**
* Assistant Professor, Department of Electronics and Communication Engineering, Visvodaya Engineering College.
** Assistant Professor, Department of Electronics and Communication Engineering, PBR VITS.
M. Ramesh Reddy and A. Ramkishore (2013). Privacy Between Client And Service Providers. i-manager’s Journal on Information Technology, 2(1), 10-12. https://doi.org/10.26634/jit.2.1.2140

Abstract

A Web service is a web-based application that can be easily reached programmatically through the web. Web services are enabled by various efforts to discover, describe, advertise, and invoke them. In this paper, web services are generated for different services, and those services are composed according to the user's request. The composition of services is done dynamically to accomplish the user's request, and the user's privacy is preserved by using filters. This privacy preservation secures the confidential information of users from other service providers. All web service transactions are carried out through mobile devices.
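
As a rough illustration of the filter-based privacy preservation described above (entirely illustrative; the field names, providers, and filtering rule are assumptions, not the paper's implementation), the sketch below removes confidential fields from a user's request before it is forwarded to the providers taking part in a composed service.

# Illustrative sketch: filtering confidential user fields before a composed
# service request is forwarded to downstream providers. Field names and the
# provider list are hypothetical.
from typing import Dict, List

CONFIDENTIAL_FIELDS = {"phone", "card_number", "home_address"}

def privacy_filter(request: Dict[str, str]) -> Dict[str, str]:
    """Return a copy of the request with confidential fields removed."""
    return {k: v for k, v in request.items() if k not in CONFIDENTIAL_FIELDS}

def compose_services(request: Dict[str, str], providers: List[str]) -> None:
    """Dynamically 'invoke' each provider with only the filtered request."""
    safe_request = privacy_filter(request)
    for provider in providers:
        # In a real system this would be a web-service call (e.g. SOAP/REST).
        print(f"Sending to {provider}: {safe_request}")

user_request = {
    "user": "alice",
    "destination": "Delhi",
    "phone": "98xxxxxx10",        # confidential: must not reach other providers
    "card_number": "4111-xxxx",   # confidential
}
compose_services(user_request, ["FlightService", "HotelService"])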

Research Paper

SAP HANA And Its Performance Benefits

Timur Mirzoev*, Craig Brockman**
* Professor, Department of Information Technology, Georgia Southern University.
** Systems Engineer, Lockheed Martin.
Timur Mirzoev and Craig Brockman (2013). SAP HANA And Its Performance Benefits. i-manager’s Journal on Information Technology, 2(1), 13-21. https://doi.org/10.26634/jit.2.1.2141

Abstract

In-memory computing has changed the landscape of database technology. Within the database and technology field, advancements occur over time that have the capacity to transform some fundamental tenets of the technology and how it is applied. The concept of database management systems (DBMS) was realized in industry during the 1960s, allowing users and developers to use a navigational model to access the data stored by the computers of that day as they grew in speed and capability.

This manuscript specifically examines the SAP High Performance Analytics Appliance (HANA) approach, which is one of the commonly used technologies today. Additionally, this manuscript provides an analysis of the first two of the four common main use cases for utilizing SAP HANA's in-memory computing database technology. The performance benefits are important factors for database calculations. Some of the benefits are quantified and then demonstrated using defined sets of data.

Research Paper

Open Proxy: A Road Block For Phishing Investigations

Swapan Purkait*
Research Scholar, Vinod Gupta School of Management, Indian Institute of Technology, Kharagpur, India.
Swapan Purkait (2013). Open Proxy: A Road Block For Phishing Investigations. i-manager’s Journal on Information Technology, 2(1), 22-33. https://doi.org/10.26634/jit.2.1.2142

Abstract

When a hacker sends a phishing email or hosts a phishing website, investigators must locate the source of the communication; they must trace the electronic trail leading from the email or the web server back to the perpetrator. Traceability is key to the investigation of cyber crimes such as phishing. It is impossible to prevent all internet misuse, but it may be possible to identify and trace the user and take appropriate legal action. Different phishing detection mechanisms have been addressed in research papers, but little attention has been given to open proxy usage, or rather, its misuse. This paper addresses the security concerns arising from the mushrooming of open proxy servers across the globe. This work highlights how the anonymity provided to the phisher by an open proxy is becoming a major roadblock for cyber crime investigating agencies in India. We conducted personal interviews with various law enforcement officers involved in cyber crime cases, mainly phishing, and prepared a flow chart showing how these cases reach a stalemate because of open proxy servers. The helpless condition of these investigating agencies shows that the easy availability of free anonymous proxy servers motivates phishers to plan attacks knowing very well that they will not be traced back. In our solution framework, we propose a two-dimensional approach combining a technical solution with legal cooperation among international law enforcement agencies. Our technical solution will be able to flag all emails that are sent through an open proxy; it will also be able to locate any website usage or update made through an open proxy.
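
As a sketch of the email-flagging idea in the proposed technical solution (illustrative only; the header parsing and the open-proxy list are assumptions, not the authors' implementation), the code below extracts the relay IP addresses from an email's Received headers and flags the message if any of them appears on a locally maintained list of known open proxies.

# Illustrative sketch: flag an email if any relay IP in its Received headers
# matches a locally maintained list of known open-proxy addresses.
# The proxy list and the sample message are hypothetical.
import re
from email import message_from_string

# Hypothetical blocklist of known open-proxy IPs (in practice this would be
# refreshed from published open-proxy feeds).
KNOWN_OPEN_PROXIES = {"203.0.113.7", "198.51.100.23"}

IP_PATTERN = re.compile(r"\[?(\d{1,3}(?:\.\d{1,3}){3})\]?")

def relay_ips(raw_email: str) -> list:
    """Collect every IPv4 address mentioned in the Received header chain."""
    msg = message_from_string(raw_email)
    ips = []
    for header in msg.get_all("Received", []):
        ips.extend(IP_PATTERN.findall(header))
    return ips

def flag_open_proxy(raw_email: str) -> bool:
    """Return True if the mail appears to have passed through a known open proxy."""
    return any(ip in KNOWN_OPEN_PROXIES for ip in relay_ips(raw_email))

sample = (
    "Received: from mail.example.org (mail.example.org [203.0.113.7])\r\n"
    "\tby mx.victimbank.in with SMTP; Mon, 7 Jan 2013 10:00:00 +0530\r\n"
    "From: support@victimbank.in\r\n"
    "Subject: Verify your account\r\n"
    "\r\n"
    "Please confirm your credentials.\r\n"
)
print("Flagged as open-proxy mail:", flag_open_proxy(sample))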

Research Paper

Principal Neighborhood Dictionary Nonlocal Means Method For Image Enhancement And Analysis

C. Nagaraju*, U. Rajyalakshmi**, A.S. Kavitha Bai***
C. Nagaraju, U. Rajyalakshmi and A.S. Kavitha Bai (2013). Principal Neighborhood Dictionary Nonlocal Means Method For Image Enhancement And Analysis. i-manager’s Journal on Information Technology, 2(1), 34-38. https://doi.org/10.26634/jit.2.1.2143

Abstract

In this paper, a principal neighborhood dictionary nonlocal means method is proposed. As computational power increases, data-driven descriptions of structure are becoming increasingly important in image processing. Traditionally, many models used in applications such as denoising and segmentation have been based on the assumption of piecewise smoothness. Unfortunately, these models yield limited performance, which motivates data-driven strategies. One data-driven strategy is to use image neighborhoods to represent local structure; these are rich enough to capture the local structure of real images, but do not impose an explicit model. This representation has been used as a basis for image denoising and segmentation. Its drawback, however, is a high computational cost. The motivation of our work is to reduce the computational complexity and raise the accuracy of the nonlocal means image denoising algorithm. This paper presents an in-depth analysis of a nonlocal means image denoising algorithm that uses principal component analysis to achieve higher accuracy while reducing the computational load. Image neighborhood vectors are projected onto a lower-dimensional subspace using PCA. The dimensionality of this subspace is chosen automatically using parallel analysis. Consequently, neighborhood similarity weights for denoising are computed using distances in this subspace rather than in the full space. The resulting algorithm is referred to as principal neighborhood dictionary nonlocal means. By implementing the algorithm, we investigate the accuracy of the principal neighborhood dictionary nonlocal means method and the nonlocal means method with respect to image neighborhood and window sizes. Finally, we present a quantitative and qualitative comparison of both methods.
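
To make the described pipeline concrete, the following sketch (an independent illustration, not the authors' code; the patch size, subspace dimension, and filtering parameter are assumed values, and the automatic parallel-analysis step is omitted) projects image neighborhood vectors onto a PCA subspace and computes nonlocal means weights from distances in that subspace.

# Illustrative sketch of principal neighborhood dictionary nonlocal means:
# neighborhood vectors are projected onto a PCA subspace and the similarity
# weights are computed from distances in that subspace. Parameters (patch
# radius, subspace dimension d, filtering parameter h) are assumed values.
import numpy as np

def extract_patches(img, radius):
    """Return an (H*W, patch_size**2) matrix of image neighborhood vectors."""
    padded = np.pad(img, radius, mode="reflect")
    size = 2 * radius + 1
    patches = [
        padded[i:i + size, j:j + size].ravel()
        for i in range(img.shape[0])
        for j in range(img.shape[1])
    ]
    return np.asarray(patches, dtype=np.float64)

def pnd_nlm(img, patch_radius=3, d=6, h=10.0):
    """Denoise img with nonlocal means weights computed in a PCA subspace."""
    H, W = img.shape
    patches = extract_patches(img, patch_radius)

    # PCA: project neighborhood vectors onto the top-d principal directions.
    centered = patches - patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    projected = centered @ Vt[:d].T          # (H*W, d) reduced neighborhoods

    pixels = img.ravel().astype(np.float64)
    out = np.empty_like(pixels)
    for i in range(pixels.size):
        # Distances (and hence weights) use the d-dimensional subspace only.
        dist2 = np.sum((projected - projected[i]) ** 2, axis=1)
        w = np.exp(-dist2 / (h * h))
        out[i] = np.dot(w, pixels) / w.sum()
    return out.reshape(H, W)

# Tiny demonstration on a synthetic noisy image.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 100.0
noisy = clean + rng.normal(0, 15, clean.shape)
denoised = pnd_nlm(noisy)
print("noise std before/after:",
      round(np.std(noisy - clean), 2), round(np.std(denoised - clean), 2))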