
">

Design Of An Automated Essay Grading (AEG) System In Indian Context

Siddhartha Ghosh*, Sameen S Fatima**
* Associate Professor, Department of Computer Science & Engineering, G. Narayanamma Institute of Technology & Science, Hyderabad, A.P., India.
** Associate Professor, BITS Pilani - Dubai Campus, UAE, on lien from the Department of CSE, Osmania University, Hyderabad, A.P., India.
Periodicity: October - December 2007
DOI : https://doi.org/10.26634/jet.4.3.594

Abstract

Automated essay grading or scoring systems are no longer a myth; they are a reality. Today, typed (not handwritten) essays are corrected not only by examiners and teachers but also by machines. The TOEFL exam is one of the best-known examples of this application: students' essays are evaluated both by a human rater and by a web-based automated essay grading system, and the average of the two scores is taken. Many researchers consider essays the most useful tool for assessing learning outcomes, including the ability to recall, organize, and integrate ideas, and the ability to supply, rather than merely identify, interpretations and applications of data. Automated Writing Evaluation Systems, also known as Automated Essay Assessors, might provide precisely the platform we need to explicate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capability for both reading and writing. They can also provide regular feedback that writers/students can use to improve their writing skills. Careful research over the last couple of years has helped us understand the existing systems, which are based on AI and machine learning techniques, identify their loopholes, and finally propose a system suited to the Indian context, presently for English influenced by local languages. Currently, most essay grading systems are used for grading essays written in standard English or in other European languages. India has some 21 recognized languages, and their influence on English is pervasive; newspapers in Hyderabad sometimes print sentences such as "Now the time has come to say 'albida' (good bye) to monsoon". Owing to this influence of local languages on English written by non-native speakers (i.e., Indians), TOEFL results have shown lower scores for Indian students (and Asian students more generally). This paper surveys the existing automated essay grading systems and the basic technologies behind them, and proposes a new framework that addresses the influence of local Indian languages in English essays during correction and provides proper feedback to the writers.
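To make the proposed idea concrete, the short Python sketch below (not taken from the paper; the flag_local_words helper and the tiny word lists are purely illustrative assumptions) shows one way a grading framework could flag transliterated local-language words such as 'albida' so they can be glossed or exempted from spelling penalties instead of lowering the score.

```python
import re

def flag_local_words(essay, english_words, loanwords):
    """Return (token, suggested_gloss) pairs for suspected local-language words.

    english_words: set of known English words (lowercase)
    loanwords: dict mapping known Indian-language loanwords to English glosses
    """
    flags = []
    for token in re.findall(r"[A-Za-z]+", essay):
        word = token.lower()
        if word in loanwords:                 # known local-language loanword
            flags.append((token, loanwords[word]))
        elif word not in english_words:       # unknown to the English lexicon
            flags.append((token, None))       # leave glossing to a human/feedback step
    return flags

if __name__ == "__main__":
    # Tiny illustrative lexicons; a real system would use full word lists.
    english = {"now", "the", "time", "has", "come", "to", "say", "monsoon"}
    loans = {"albida": "good bye"}            # Urdu/Hindi loanword from the example
    essay = "Now the time has come to say albida to monsoon"
    for token, gloss in flag_local_words(essay, english, loans):
        print(f"flagged: {token!r}" + (f" -> {gloss}" if gloss else ""))
```

A full system would, of course, rely on complete English and loanword lexicons and pass the flagged tokens on to the scoring and feedback modules rather than simply printing them.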

Keywords

TOEFL exam, European languages, Indian languages in English.

How to Cite this Article?

Siddhartha Ghosh and Sameen S Fatima (2007). Design Of An Automated Essay Grading (AEG) System In Indian Context. i-manager’s Journal of Educational Technology, 4(3), 19-26. https://doi.org/10.26634/jet.4.3.594
