References
[1]. Ajiboye, A. R., Abduallah-Arshah, R., Qin, H., & Isah-Kebbe, H. (2015). Evaluating the effect of dataset size on predictive model using supervised learning technique. International Journal of Software Engineering & Computer Sciences, 1, 75-84. https://doi.org/10.15282/ijsecs.1.2015.6.0006
[2]. Barker, T. (2011). An automated individual feedback and marking system: An empirical study. The Electronic Journal of E-Learning, 9(1), 1-14.
[3]. Bereiter, C. (2002). Foreword. In M. D. Shermis & J. C. Burstein (Eds.), Automated Essay Scoring: A Cross-Disciplinary Perspective (pp. vii-ix). Mahwah, NJ: Lawrence Erlbaum Associates.
[4]. Boardman, C. A., & Frydenberg, J. (2008). Writing to Communicate: Paragraphs and Essays (3rd Ed.). New York: Pearson, Longman.
[5]. Chesla, E. (2006). Write Better Essays in Just 20 Minutes a Day (2nd Ed.). New York: LearningExpress, LLC.
[6]. Chien, S. C. (2011). Discourse organization in high school students' writing and their teachers' writing instruction: The case of Taiwan. Foreign Language Annals, 44(2), 417-435. https://doi.org/10.1111/j.1944-9720.2011.01131.x
[7]. Coffin, C. (2006). Historical Discourse: The Language of Time, Cause and Evaluation. London, England: Continuum.
[8]. de Oliveira, L. C. (2011). Knowing and Writing School History: The Language of Students' Expository Writing and Teachers' Expectations. Charlotte, NC: Information Age.
[9]. Deane, P. (2013). On the relation between automated essay scoring and modern views of the writing construct. Assessing Writing, 18(1), 7-24. https://doi.org/10.1016/j.asw.2012.10.002
[10]. Dörnyei, Z. (2007). Research Methods in Applied Linguistics: Quantitative, Qualitative, and Mixed Methodologies (pp. 95-123). Oxford: Oxford University Press.
[11]. Eckes, T. (2008). Rater types in writing performance assessments: A classification approach to rater variability. Language Testing, 25(2), 155-185. https://doi.org/10.1177/0265532207086780
[12]. Heilman, M., & Madnani, N. (2015). The impact of training data on automated short answer scoring performance. In Proceedings of NAACL HLT: The 10th Workshop on Innovative Use of NLP for Building Educational Applications (pp. 81-85). The Association for Computational Linguistics: USA. Retrieved from http://www.cs.rochester.edu/u/tetreaul/bea10proceedings.pdf#page=270
[13]. Kumar, V., Fraser, S. N., & Boulanger, D. (2017). Discovering the predictive power of five baseline writing competences. The Journal of Writing Analytics, 1(2017), 176-226.
[14]. Martin, J. R. (1992). English Texts: System and Structure. The Netherlands: John Benjamins.
[15]. Mason, O., & Grove-Stephenson, I. (2002). Automated free text marking with paperless school. In M. Danson (Ed.), Proceedings of the Sixth International Computer Assisted Assessment Conference. Loughborough, UK: Loughborough University.
[16]. Miller, R. T., & Pessoa, S. (2016). Where's your thesis statement and what happened to your topic sentences? Identifying organizational challenges in undergraduate student argumentative writing. TESOL Journal, 7(4), 847-873. https://doi.org/10.1002/tesj.248
[17]. Minitab 17 Statistical Software (2010). [Computer software]. State College, PA: Minitab, Inc. Retrieved from www.minitab.com
[18]. Page, E. B. (2002). Project essay grade: PEG. In M. D. Shermis & J. Burstein (Eds.), Automated Essay Scoring: A Cross-Disciplinary Perspective (pp. 43-54). Mahwah, NJ: Lawrence Erlbaum Associates.
[19]. Phandi, P., Chai, K. M. A., & Ng, H. T. (2015). Flexible domain adaptation for automated essay scoring using correlated linear regression. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (pp. 431-439). The Association for Computational Linguistics: USA. Retrieved from http://www.aclweb.org/anthology/D15-1049
[20]. Powers, D. E., Escoffery, D. S., & Duchnowski, M. P. (2015). Validating automated essay scoring: A (modest) refinement of the “gold standard”. Applied Measurement in Education, 28(2), 130-142. https://doi.org/10.1080/08957347.2014.1002920
[21]. Priest, J. (2018). Terms of time for composition: A materialist examination of contingent faculty labor. Academic Labor: Research and Artistry, 2(5), 41-62.
[22]. Shermis, M. D., & Burstein, J. (2002). Automated Essay Scoring: A Cross-Disciplinary Perspective. Mahwah, NJ: Lawrence Erlbaum Associates.
[23]. Shohamy, E., Gordon, C. M., & Kraemer, R. (1992). The effects of raters' background and training on the reliability of direct writing tests. The Modern Language Journal, 76(1), 27-33. https://doi.org/10.1111/j.1540-4781.1992.tb02574.x
[24]. Tamanini, K. B. (2008). Evaluating Differential Rater Functioning in Performance Ratings: Using a Goal-Based Approach (Unpublished doctoral dissertation). Ohio University, Athens, OH.
[25]. Uzun, K. (2016). Developing EAP Writing Skills Through Genre-Based Instruction. International Journal of Educational Researchers, 7(2), 25-38. Retrieved from http://dergipark.gov.tr/ijers/issue/24471/259387
[26]. Uzun, K. (2018). Home-grown automated essay scoring in the literature classroom: A solution for managing the crowd? Contemporary Educational Technology, 9(4), 423-436. https://doi.org/10.30935/cet.471024
[27]. Uzun, K. (2019). Genre-Based Instruction and Genre-Focused Feedback: A Multiperspective Study on Writing Performance and the Psychology of Writing (Unpublished PhD Dissertation). Çanakkale Onsekiz Mart University, Çanakkale, Turkey.
[28]. Zheng, C. (2013). A structure analysis of English argumentative writing written by Chinese and Korean EFL learners. English Language Teaching, 6(9), 67-73. https://doi.org/10.5539/elt.v6n9p67
[29]. Zupanc, K., & Bosnić, Z. (2017). Automated essay evaluation with semantic analysis. Knowledge-Based Systems, 120, 118-132. https://doi.org/10.1016/j.knosys.2017.01.006