
AI Exam Answers Were Undetectable by Human Markers in University Exam Scoring

Written by
ArticleGPT

Reviewed and fact-checked by the HIX.AI Team


In a Nutshell

Markers at the University of Reading were unknowingly fooled by their own researchers, who submitted AI-generated exam answers that went undetected and received higher grades than real students' responses.

In a groundbreaking study led by Prof Scarfe and his team at the University of Reading, answers written by artificial intelligence (AI) were found to outperform those written by human students in university exams.

The researchers found that AI-generated assessment answers were "virtually undetectable" when evaluated by human markers. This highlights the growing concern over academic misconduct and the need to address the use of AI in educational assessments.

The researchers created 33 fake student identities and used ChatGPT to answer official "at-home exams" set as part of the university's BSc degree in psychology.

These AI-generated responses were then submitted alongside those written by actual students. The markers, who were unaware of the study, awarded higher grades to the AI submissions than to their human counterparts.

In fact, 83% of the AI submissions received better marks than the real students' answers. This raises significant questions about how fairly and accurately student performance can be assessed when AI-generated work outperforms genuine answers.

Concerns over academic misconduct with AI use

The study's striking findings have raised concerns among universities about the immediate impact of AI on the education sector.

Students are increasingly using AI to cheat, exploiting the inadequacies of current detection systems. Some have called for scrapping assessment methods such as coursework and essays, while others argue for the responsible and ethical integration of AI into educational practice.

A survey conducted by the Ucas admissions service found that 53% of students have used generative AI to prepare for exams. However, current AI detection software has largely failed to identify AI-generated content, leaving the responsibility of spotting cheating to human markers.

AI detection software inadequacies

The risk of "false positives" from AI detection software makes universities hesitant to accuse students of cheating. This leaves the burden on human markers to identify AI-generated content, which becomes increasingly difficult as AI grows more sophisticated.

The University of Reading study suggests that current AI systems are, in effect, passing a "Turing test": their output goes undetected by experienced judges.

Calls for universities to embrace AI ethically

Amid the challenges posed by AI in assessments, there are calls for universities to take a proactive approach to embracing AI ethically.

The Russell Group, which includes top universities such as Oxford, Cambridge, and University College London, has pledged to allow the ethical use of AI in teaching and assessments.

Risks of deskilling students with AI use

The availability of AI tools that can generate answers and assist with a wide range of tasks may hinder students' ability to think critically, analyse, and write without AI assistance.

Prof Karen Yeung, a fellow in law, ethics, and informatics at the University of Birmingham, said that allowing the use of AI in school and university exams could create problems of its own by "deskilling" students.

Based on 2 search sources


AI exam answers ‘virtually undetectable’ by university examiners

University exams answers written by AI are “virtually undetectable” when assessed by human markers, a study has found.

Researchers fool university markers with AI-generated exam papers

University of Reading project poses questions for integrity of coursework and take-home student assignments
