i-manager's Journal on Software Engineering (JSE)


Volume 9 Issue 4 April - June 2015

Research Paper

Legality Stable Analysis Pattern

M.E. Fayad* , Siddharth Jindal**
* Professor, Department of Computer Engineering, Charles W. Davidson College of Engineering, San Jose State University, San Jose, USA.
** Student, Department of Computer Engineering, Charles W. Davidson College of Engineering, San Jose State University, San Jose, USA.
Fayad, M. E., and Jindal, S. J. (2015). Legality Stable Analysis Pattern. i-manager’s Journal on Software Engineering, 9(4), 1-6. https://doi.org/10.26634/jse.9.4.3526

Abstract

Legality is an umbrella term that encompasses every aspect of dealing and working with different entities in a lawful manner. Although legality finds application across almost every existing system, an explicitly defined pattern does not exist for it even now. Hence, this paper introduces a process for modeling different kinds of related applications without having to re-think the problem every time and from scratch. The legality pattern represents the core knowledge of anything that complies with the regulations of its arbitration authority. In addition, this pattern can be reused as part of any new model that deals with legality in one way or another. The pattern utilizes the concepts defined in the Software Stability Model (SSM) to develop a more stable and generic model. This helps eliminate the need to model legality separately for each related domain.
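For readers who want a concrete picture of the pattern's layering, the following minimal sketch shows one way Legality could sit as the enduring business theme over a few business objects. All class names and the compliance check here (AnyRule, AnyAuthority, AnyParty, is_legal) are illustrative assumptions, not the model presented in the paper.

```python
# Sketch only: Legality as an enduring business theme (EBT) over hypothetical
# business objects (BOs). Not the paper's model; names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AnyRule:                       # BO: a regulation that must be complied with
    description: str

@dataclass
class AnyAuthority:                  # BO: the arbitration authority issuing rules
    name: str
    rules: List[AnyRule] = field(default_factory=list)

@dataclass
class AnyParty:                      # BO: any entity whose conduct is assessed
    name: str
    complied_rules: List[AnyRule] = field(default_factory=list)

@dataclass
class Legality:                      # EBT: the enduring concept itself
    authority: AnyAuthority

    def is_legal(self, party: AnyParty) -> bool:
        # Simplifying assumption: a party is legal when it complies with
        # every rule of the arbitration authority.
        return all(rule in party.complied_rules for rule in self.authority.rules)

# Usage: a party complying with the single rule of a hypothetical authority
speed_limit = AnyRule("Respect posted speed limits")
dmv = AnyAuthority("DMV", [speed_limit])
driver = AnyParty("Driver", [speed_limit])
print(Legality(dmv).is_legal(driver))   # True
```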

Research Paper

Regression Testing Using IGTCP Algorithm for Industry Based Applications

Hema Shankari* , R. Thirumalaiselvi**
* Assistant Professor, Department of CSC, Women's Christian College, and Research Scholar, Bharath University, Chennai.
** Assistant Professor, Department of Computer Science, Govt. Arts College, Nandanam, Chennai.
Shankari, K. H., and Selvi, R. T. (2015). Regression Testing Using IGTCP Algorithm for Industry Based Applications. i-manager’s Journal on Software Engineering, 9(4), 7-13. https://doi.org/10.26634/jse.9.4.3527

Abstract

Regression testing is the testing of modified software during the maintenance phase. It is a costly but crucial problem in software development, and both the research community and the industry have paid much attention to it. This paper surveys current practice in industry and also tries to find out whether there are gaps between research and practice. The observations show that some issues concern both the research community and the industry. This research discusses the problems with current research on regression testing and with quality control in the application of regression testing in engineering practice, and proposes a practical regression method combining change-impact analysis, a business-rules model, cost-risk assessment, and test-case management. The paper presents an approach to prioritize regression test cases based on factors such as the rate of fault detection, the percentage of faults detected, and the risk-detection capability. The proposed approach is compared with a previous approach using the APFD metric, and the results show that the proposed approach outperforms the earlier one.
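The comparison relies on the APFD (Average Percentage of Faults Detected) metric, which is standard in test-case prioritization. The sketch below, with hypothetical test and fault identifiers, shows how APFD can be computed for a given execution order; it illustrates the metric only, not the IGTCP algorithm itself.

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a prioritized test order.

    order        -- list of test-case ids in prioritized execution order
    fault_matrix -- dict mapping test-case id -> set of faults it detects
    Assumes every fault is detected by at least one executed test.
    """
    n = len(order)
    faults = set().union(*fault_matrix.values())
    m = len(faults)
    # TF_i: 1-based position of the first test that reveals fault i
    first_reveal = []
    for fault in faults:
        for pos, test in enumerate(order, start=1):
            if fault in fault_matrix.get(test, set()):
                first_reveal.append(pos)
                break
    return 1 - sum(first_reveal) / (n * m) + 1 / (2 * n)


# Example: two candidate orderings of the same hypothetical suite
faults_of = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": {"f1", "f4"}}
print(apfd(["t2", "t3", "t1"], faults_of))  # prioritized order -> ~0.67
print(apfd(["t1", "t2", "t3"], faults_of))  # original order    -> 0.50
```

A higher APFD value indicates that faults are exposed earlier in the prioritized order.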

Research Paper

Privacy Preserving Access Control to Incremental Data

V. Ravi Kumar Yadav*, B. Lalitha**
* M.Tech Scholar (Software Engineering), CSE Department, JNTUA College of Engineering, Anantapuramu, A.P, India.
** Assistant Professor, CSE Department, JNTUA College of Engineering, Anantapuramu, A.P, India.
Yadav, V. R. K., and Lalitha, B. (2015). Privacy Preserving Access Control to Incremental Data. i-manager’s Journal on Software Engineering, 9(4), 14-19. https://doi.org/10.26634/jse.9.4.3529

Abstract

Data privacy issues are becoming increasingly important for many applications. Research in the area of database security can mostly be classified into access control research and data confidentiality research, and there is little overlap between these two areas. Access Control Mechanisms (ACM) protect sensitive information from unauthorized users. However, even authorized users may misuse the data to reveal the privacy of the individuals to whom the data relate. The privacy protection mechanism provides greater confidentiality for sensitive information to be shared; this is achieved by anonymization techniques [8]. Privacy is achieved while retaining high accuracy and consistency of the user information, i.e., the precision of the user information. This paper offers a privacy-preserving access control mechanism for incremental relational data. It uses an accuracy-constrained, privacy-preserving access control mechanism for an incremental relational database framework, and it applies the concept of an imprecision bound, attached to the access control mechanism, for preserving privacy; the imprecision bound is set for all queries. For the privacy protection mechanism, it uses a combination of the k-anonymity and fragmentation methods.
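The two quantities at the heart of the approach can be illustrated with a small sketch. The records, attribute names, and generalization below are hypothetical, and the functions are a simplified reading of k-anonymity and query imprecision, not the authors' implementation.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of quasi-identifier values occurs >= k times."""
    groups = Counter(tuple(r[a] for a in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

def query_imprecision(original, generalized, lo, hi):
    """Imprecision of a range query on 'age': extra tuples returned because
    generalized age intervals overlap the query range only partially."""
    exact = sum(1 for r in original if lo <= r["age"] <= hi)
    approx = sum(1 for g in generalized if g["age_lo"] <= hi and g["age_hi"] >= lo)
    return approx - exact

# Hypothetical micro-example: ages generalized to 10-year intervals
original = [{"age": 23, "zip": "95112"}, {"age": 27, "zip": "95112"},
            {"age": 34, "zip": "95113"}, {"age": 38, "zip": "95113"}]
generalized = [{"age_lo": 20, "age_hi": 29, "zip": "951**"},
               {"age_lo": 20, "age_hi": 29, "zip": "951**"},
               {"age_lo": 30, "age_hi": 39, "zip": "951**"},
               {"age_lo": 30, "age_hi": 39, "zip": "951**"}]
print(is_k_anonymous(generalized, ("age_lo", "age_hi", "zip"), k=2))  # True
print(query_imprecision(original, generalized, lo=25, hi=35))         # 4 - 2 = 2
```

In this reading, the access control mechanism would reject or re-anonymize a query whose measured imprecision exceeds the bound set for it.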

Research Paper

Evaluating the Privacy of User Profiles in Personalized Information Systems

Yobu Uppalapati*, B. Lalitha**
* M. Tech Scholar (Artificial Intelligence), CSE Department, JNTUA College of Engineering, Anantapuramu, A.P, India.
** Assistant Professor, CSE Department, JNTUA College of Engineering, Anantapuramu, A.P, India.
Yobu, U., and Lalitha, B. (2015). Evaluating the Privacy of User Profiles in Personalized Information Systems. i-manager’s Journal on Software Engineering, 9(4), 20-24. https://doi.org/10.26634/jse.9.4.3532

Abstract

Collaborative tagging is one of the most well-known and widespread services available online. The key point of collaborative tagging is to distinguish resources based on user opinion, stated in the form of tags. Collaborative tagging supplies a foundation for the Semantic Web, in which the network will connect all online resources based on their meanings. While this information is a valuable source, its sheer volume limits its value. Many research projects and corporations are exploring the use of personalized applications that manage this overflow by tailoring the information presented to individual users. All of these applications utilize some information about individuals in order to be effective. This area is generally called user profiling. This paper reviews some of the most common techniques for gathering information about users and for representing and constructing user profiles. It mainly focuses on measuring the privacy of user profiles through KL divergence and Shannon entropy, showing how tag suppression protects end-user privacy.
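KL divergence and Shannon entropy are standard information-theoretic measures, so a brief sketch can show how they apply to tag profiles. The tag counts below are invented for illustration: a user's tag distribution is scored against the population distribution, and suppressing a revealing tag lowers the divergence.

```python
import math

def shannon_entropy(p):
    """Entropy (bits) of a tag distribution p (values sum to 1)."""
    return -sum(pi * math.log2(pi) for pi in p.values() if pi > 0)

def kl_divergence(p, q):
    """KL divergence D(p || q) in bits; q is the population tag distribution."""
    return sum(pi * math.log2(pi / q[tag]) for tag, pi in p.items() if pi > 0)

def normalize(counts):
    total = sum(counts.values())
    return {tag: c / total for tag, c in counts.items()}

# Hypothetical tag counts: one user's profile vs. the whole population
population = normalize({"music": 40, "privacy": 10, "sports": 30, "food": 20})
user       = normalize({"music": 2,  "privacy": 6,  "sports": 1,  "food": 1})
print(shannon_entropy(user))             # how spread out the profile is
print(kl_divergence(user, population))   # high divergence: profile is revealing

# Tag suppression: drop the most revealing tag and re-measure
suppressed = normalize({"music": 2, "sports": 1, "food": 1})
print(kl_divergence(suppressed, population))  # lower divergence after suppression
```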

Research Paper

Document Summarization using A Hybrid Trace Thrash Data Modeling and Classification Algorithms

S. Dilliarasu*, R. Thirumalaiselvi**
* Research Scholar, Bharath University, Selaiyur, Chennai, India.
** Assistant Professor, Department of Computer Science, Govt. Arts College for Men (Autonomous), Nandanam, Chennai, India.
Arasu, S. D., and Thirumalaiselvi, R. (2015). Document Summarization using A Hybrid TraceThrash Data Modeling and Classification Algorithms. i-manager’s Journal on Software Engineering, 9(4), 25-31. https://doi.org/10.26634/jse.9.4.3531

Abstract

Multi-document summarization is used for understanding and analyzing large document collections; the major sources of these collections are news documents, blogs, tweets, web pages, research papers, web search results, and technical reports available over the web and elsewhere. Some examples of applications of multi-document summarization are analyzing web search results to help users in further browsing and generating summaries for news articles. Document processing and summary generation over a large text collection is a computationally complex task, and in the era of Big Data analytics, where the size of data collections is high, there is a need for algorithms that summarize large text collections quickly. Here the authors present Trace Thrash, a multi-document summarizer built with the help of semantic-similarity-based clustering over the Trace Thrash distributed computing framework.
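As a rough illustration of the clustering idea (and not of the Trace Thrash system itself), the sketch below embeds sentences as TF-IDF vectors, clusters them, and keeps the sentence closest to each cluster centre as the extractive summary.

```python
# Generic extractive multi-document summarization via similarity-based
# clustering; a single-machine sketch, not a distributed implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, num_clusters=2):
    vectors = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    km = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit(vectors)
    summary = []
    for c in range(num_clusters):
        # Sentences assigned to cluster c; keep the one nearest the centroid
        idx = [i for i, label in enumerate(km.labels_) if label == c]
        sims = cosine_similarity(vectors[idx],
                                 km.cluster_centers_[c].reshape(1, -1)).ravel()
        summary.append(sentences[idx[int(sims.argmax())]])
    return summary

docs = [
    "The storm disrupted flights across the region on Monday.",
    "Airlines cancelled hundreds of flights because of the storm.",
    "The new budget increases funding for public schools.",
    "Lawmakers approved extra school funding in the budget vote.",
]
print(summarize(docs, num_clusters=2))  # one representative sentence per topic
```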

Research Paper

Software Defect Prediction using Average Probability Ensemble Technique

T. Vara Prasad* , C. Silpa**, Srinivasulu Asadi***
* M.Tech Scholar, Department of IT, Sree Vidyanikethan Engineering College (Autonomous), Rangampet, Andhra Pradesh, India.
** Assistant Professor, Department of IT, Sree Vidyanikethan Engineering College (Autonomous), Rangampet, Andhra Pradesh, India.
*** Ph.D Scholar, Jawaharlal Nehru Technological University, Hyderabad, India.
Prasad, T. V., Silpa, C., and Srinivasulu, A. (2015). Software Defect Prediction using Average Probability Ensemble Technique. i-manager’s Journal on Software Engineering, 9(4), 32-39. https://doi.org/10.26634/jse.9.4.3533

Abstract

In the present generation of software development, testing plays a major role in defect prediction. Software defect data include redundancy, correlation, irrelevant features, and missing values, which makes it hard to determine whether a software module is defective or non-defective. As software applications pervade day-to-day business activities, software attribute prediction, such as effort estimation, maintainability, and defect and quality classification, is drawing growing interest from both the academic and industry communities. Software defect prediction has used several methods, among which random forest and gradient boosting are effective, even though defect datasets often contain incomplete or irrelevant features. The proposed Average Probability Ensemble technique is used to overcome these problems and gives higher classification performance compared with other methods, because it integrates three algorithms to improve classification performance and gives more accurate results on publicly available software datasets.
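As a sketch of the averaging step, the code below trains random forest and gradient boosting (both named in the abstract) plus logistic regression as an assumed third learner, averages their predicted defect probabilities, and thresholds the result; the synthetic dataset stands in for a public defect dataset.

```python
# Minimal average-probability ensemble sketch; the third base learner and the
# synthetic data are assumptions for illustration, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Imbalanced synthetic data standing in for a defect dataset
X, y = make_classification(n_samples=500, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [RandomForestClassifier(n_estimators=100, random_state=0),
          GradientBoostingClassifier(random_state=0),
          LogisticRegression(max_iter=1000)]

# Average the defect-class probabilities of all fitted base learners
probas = np.mean([m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in models],
                 axis=0)
predictions = (probas >= 0.5).astype(int)   # final defective / non-defective call
print("ensemble AUC:", roc_auc_score(y_te, probas))
```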