Out-Law News

Universities urged to protect assessment integrity from AI interference

Higher education institutions (HEIs) should take action now to mitigate the risks posed to the integrity of their academic assessments by ChatGPT and other forms of artificial intelligence (AI), according to two legal experts. 

Julian Sladdin, higher education disputes expert at Pinsent Masons, warned that AI could be exploited by students to gain unfair advantages in assessments and make it difficult for HEI officials to identify cases of academic misconduct. He said: “Since the launch of ChatGPT in November 2022 there has been significant discussion and alarm at the development of the generative chatbot technology and how it has changed the landscape of academic misconduct. Cheats no longer have to rely on traditional plagiarism, essay mills and tutoring websites.”

In recent months, the ChatGPT chatbot has demonstrated the ability to summarise research studies and answer logical questions, and has been shown to answer business school and medical exam questions accurately. At the same time, however, experts have warned that the AI can also give plausible-sounding but incorrect answers containing serious errors.

Last month, Tokyo’s Sophia University announced an outright ban on the use of ChatGPT by its students. In a statement, the university said the chatbot cannot be used to write assignments, essays, reports, and theses. The University of Tokyo has also published updated guidelines on the use of AI by its students and staff. While it conceded that it was “not realistic” to eliminate use of the technology completely, the university told students that all assessments “must be created by students themselves and cannot be created solely with the help of AI.”

Sladdin said that similar efforts to curtail the use of AI by students were likely at HEIs around the world. “There are significant concerns about how to identify the use of this technology in assessments and examinations. So far, however, there has been slow progress in developing new modes of testing academic competencies, as well as sufficient IT solutions for investigating and determining inappropriate use of AI,” he said.

“While the academic community accepts that there is potential to harness benefits from the technology, and the need to train students to use AI appropriately so that they have the skills required for future careers, it is rightly concerned that the potential for good is carefully balanced against the risks that the technology increases the propensity for students to cheat the system. The response to the challenges posed will, as the Quality Assurance Agency has advised, need to be wide-ranging, targeting a number of areas within an institution,” Sladdin added.

Stephanie Badrock of Pinsent Masons said: “Staff and student education and the promotion of academic integrity will be key, which should act as preventative tools to accompany up-to-date AI systems that can detect cheating. Policies and procedures, including disciplinary processes, might need to be revised and updated in order to deal with the emergence of this AI technology.”

“Institutions should ensure that academic misconduct cases are dealt with efficiently, transparently and with proportionate sanctions if students are found to have cheated. Universities and colleges could implement the use of different assessment techniques to remove the risk of ChatGPT being used by students, making use of video, presentations or oral examinations as alternatives to written assignments. Universities could also consider including declarations on submission of assignments as a useful deterrent,” Badrock added.
