Automated Short Answer Grading using Long Short-Term Memory Optimized with Particle Swarm Optimization

Bashir Sulaimon Adebayo*, Nusa Amina Muhammad Kutiriko**, Abdullahi Ibrahim Muhammad***
*, ** Department of Computer Science, Federal University of Technology, Minna, Niger, Nigeria.
*** Department of Computer Engineering, Federal University of Technology, Minna, Niger, Nigeria.
Periodicity: July–December 2023
DOI : https://doi.org/10.26634/jds.1.2.20334

Abstract

Automated Short Answer Grading (ASAG) systems provide prompt feedback to students and ease the workload of instructors. This research develops an optimized ASAG model using a Long Short-Term Memory (LSTM) network together with Particle Swarm Optimization (PSO) to prevent model overfitting. The popular ASAG dataset by Mohler was used for the experiments. The dataset contains training samples from the Computer Science department of the Federal University of Technology, Minna, Nigeria, with grades between 0 and 5. To effectively optimize the LSTM hyperparameters, namely the learning rate and the number of neurons in the LSTM layers, four experiments were performed, each with a different particle population size (5, 10, 15, and 20). The results show that the PS5 model produced the lowest RMSE and MAPE of 0.77697 and 44.5356%, respectively, while the PS15 model produced the highest RMSE and MAPE of 0.80985 and 56.6192%, respectively. To validate the developed PSO-LSTM ASAG model, a standard LSTM model for ASAG was implemented and tested. The PSO-LSTM achieved an RMSE of 0.77687 and a MAPE of 44.5356%, compared with the plain LSTM's RMSE of 0.9423 and MAPE of 85.73%. These results clearly show the superiority of the developed hybrid model in predicting short-answer grades. The model's performance could be further improved by increasing the sample size and by using other optimization algorithms, such as genetic algorithms or ant colony optimization. Further research could also investigate the effect of other variables, such as question complexity and student writing style, on the model's performance.
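The optimization scheme described above can be sketched as a standard PSO loop in which each particle encodes the two tuned hyperparameters, the learning rate and the number of LSTM neurons. The abstract does not give the authors' implementation, so the sketch below uses a cheap surrogate fitness function in place of "train the LSTM and return its validation RMSE"; the search ranges, inertia weight, and acceleration coefficients are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def surrogate_rmse(lr, n_neurons):
    # Stand-in for "train an LSTM with (lr, n_neurons) and return validation
    # RMSE" (assumed surrogate; the real fitness would require model training).
    return (np.log10(lr) + 2.5) ** 2 + ((n_neurons - 64.0) / 64.0) ** 2 + 0.7

def pso_optimize(n_particles=5, n_iters=30, seed=0):
    rng = np.random.default_rng(seed)
    # Assumed search bounds: learning rate in [1e-4, 1e-1], neurons in [8, 128].
    lo = np.array([1e-4, 8.0])
    hi = np.array([1e-1, 128.0])
    pos = rng.uniform(lo, hi, size=(n_particles, 2))   # particle positions
    vel = np.zeros_like(pos)                           # particle velocities
    pbest = pos.copy()                                 # personal bests
    pbest_f = np.array([surrogate_rmse(*p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()             # global best
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia / cognitive / social coefficients
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)               # keep particles in bounds
        f = np.array([surrogate_rmse(*p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best, best_f = pso_optimize(n_particles=5)
print(f"learning_rate={best[0]:.4g}, n_neurons={int(round(best[1]))}, "
      f"fitness={best_f:.4f}")
```

In the paper's setting, each fitness evaluation would train one LSTM, so the population size (5, 10, 15, or 20 in the four experiments) directly controls the training cost per PSO iteration.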

Keywords

Deep Learning, Automated Short Answer Grading, LSTM Recurrent Neural Network, Long Short-Term Memory, Particle Swarm Optimization.

How to Cite this Article?

Adebayo, B. S., Kutiriko, N. A. M., and Muhammad, A. I. (2023). Automated Short Answer Grading using Long Short-Term Memory Optimized with Particle Swarm Optimization. i-manager’s Journal on Data Science & Big Data Analytics, 1(2), 12-19. https://doi.org/10.26634/jds.1.2.20334
