Thoughts on Testing


ChatGPT is changing the way tests and assignments are structured. It can generate nuanced, original-sounding content, along with information that may or may not be true. For general assignments that do not require high-level reasoning, ChatGPT can write a pretty reasonable essay (it still sucks at economic reasoning, though).

Long-form essays, which became especially popular during COVID, can no longer truly reflect a student's depth of knowledge about a subject. In the past, essays were the gold standard: students who had a good grasp of the tested material would find the assignment easier than those who needed to look up everything. But ChatGPT (and its ilk) changes all that: it writes a pretty realistic essay.

But perhaps this innovation will push education to change. If we want to teach critical thinking skills, should we be asking students to regurgitate facts? Outside areas of study like history, most information is easily accessible on the internet. Students don't need to learn more facts; they need to learn how to sort through information and differentiate the true from the false. ChatGPT sometimes hallucinates non-existent references and incorrect details when it writes an essay. Children would be far better equipped for the future if they learned how to fact-check and reason logically.

Perhaps teachers will adopt oral exams, which demonstrate the level of reasoning skill a student possesses. Oral tests would also take less time to administer and grade, as long as the class is reasonably sized: students generally write at 20-25 words per minute, while the average speaking rate is around 150 wpm, so a student can say in a few minutes what would take most of an hour to write.
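The back-of-the-envelope math can be sketched out explicitly. This is just an illustration using the rates quoted above; the essay length is a made-up assumption, not a figure from any study:

```python
# Rough comparison of written vs. spoken output, using the rates
# quoted above: ~20-25 words/min handwritten, ~150 words/min spoken.
WRITE_WPM = 22   # midpoint of the 20-25 wpm handwriting range
SPEAK_WPM = 150  # average conversational speaking rate

def minutes_to_produce(words: int, wpm: float) -> float:
    """Minutes needed to produce `words` words at `wpm` words per minute."""
    return words / wpm

essay_words = 1000  # hypothetical essay length, chosen for illustration

written = minutes_to_produce(essay_words, WRITE_WPM)
spoken = minutes_to_produce(essay_words, SPEAK_WPM)

print(f"Writing {essay_words} words: ~{written:.0f} min")   # ~45 min
print(f"Speaking {essay_words} words: ~{spoken:.1f} min")   # ~6.7 min
print(f"Speaking is ~{written / spoken:.1f}x faster")       # ~6.8x
```

At these rates, the same thousand words take roughly 45 minutes to write but under 7 minutes to say, which is the core of the time-savings argument.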