Automated Essay Scoring System using Multi-Model Machine Learning


Wilson Zhu1 and Yu Sun2, 1USA, 2California State Polytechnic University, USA


Standardized tests such as the SAT often require students to write essays and employ large numbers of graders to evaluate them, which is both time-consuming and costly. Using natural language processing tools such as Global Vectors for Word Representation (GloVe) and several types of neural networks originally designed for image classification, we developed an automatic grading system that is more time- and cost-efficient than human graders. We applied our system to a set of manually graded essays from a 2012 Kaggle competition on automated essay grading and conducted a qualitative evaluation of the approach. The results show that the program correctly scores most of the essays and gives evaluations close to those of a human grader on the rest. The system proves effective across a variety of essay prompts and is suitable for real-world use, whether assisting a human grader or serving as a standalone grader.
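The overall pipeline described above (embed each essay's words with GloVe-style vectors, then map the resulting representation to a numeric score with a trained model) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny embedding table, the averaging step, and the linear scoring weights are all hypothetical stand-ins for real GloVe vectors and the trained neural network.

```python
# Hypothetical sketch of the scoring pipeline. The toy embeddings and
# weights below are illustrative, not the paper's actual GloVe vectors
# or trained model.

# Toy 3-dimensional "GloVe-style" word vectors. A real GloVe file maps
# each vocabulary word to a 50-300 dimensional vector, one word per line.
EMBEDDINGS = {
    "the": [0.1, 0.0, 0.2],
    "essay": [0.4, 0.3, 0.1],
    "argues": [0.2, 0.5, 0.3],
    "clearly": [0.3, 0.4, 0.4],
}

DIM = 3  # embedding dimensionality of the toy table above


def embed_essay(text):
    """Average the vectors of known words into one fixed-length vector,
    a simple essay representation a downstream model could consume."""
    vectors = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    if not vectors:
        return [0.0] * DIM
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(DIM)]


def score_essay(features, weights=(2.0, 3.0, 1.0), bias=1.0):
    """Stand-in for the trained scoring model: a single linear layer
    mapping the essay representation to a numeric score."""
    return bias + sum(w * f for w, f in zip(weights, features))


features = embed_essay("The essay argues clearly")
print(round(score_essay(features), 3))  # prints 2.65
```

In the actual system the averaging step would be replaced by the neural network mentioned in the abstract, which consumes the per-word GloVe vectors directly rather than a single averaged vector.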


Automated Essay Scoring System, Natural Language Processing, Multi-Model Machine Learning.

Volume 10, Number 12