Can students use AI at school or university? The rules around using tools like ChatGPT for assessments

Students using ChatGPT or similar tools to help with assignments need to know the rules 💻
  • Generative AI programmes like ChatGPT and Gemini are now easily accessible to almost anyone, on a range of devices.
  • Ofcom found that four out of five teens use generative AI tools, as do 40% of younger children aged seven to 12.
  • More than four out of five UK universities that responded to a freedom of information request have also investigated students for using AI to cheat.
  • The most recent guidance from exam boards says that learning about AI and its uses is both important for pupils and helpful to teachers - but both must be careful not to misuse it.

Today’s students will no doubt need to learn about artificial intelligence at school, including how to use it.


AI and similar algorithmic systems will likely play a big part in our lives and careers in the future, and young people need a robust understanding of their strengths, potential uses, and limitations. These tools can usefully support learning. But at the same time, using easily available generative AI software to write essays or answer test questions means students miss out on valuable learning - and dodge the assessments that give their teachers a better idea of where they could use more support.

With the rapid development of AI programmes in recent years, policymakers have been left scrambling to keep up. The new Labour government has pledged to take action by creating “binding regulation” on the companies behind some of the most powerful and widely-used AI models, while also promising a “pro-innovation” approach, Time reports.

But is generative AI actually causing a problem in schools and universities? And what does the government’s guidance say they should do about this? Here’s what you need to know:


Are students actually using AI to cheat?

Unfortunately, anecdotal reports suggest this is becoming an issue for students across a wide age range - and emerging data appears to back that up. When education news service Schools Week investigated earlier this year, reporters found numerous posts apparently from British schoolchildren on platforms like Snapchat and TikTok admitting to using generative AI programmes to help with their homework - and sometimes, to cheat.

A recent report by Ofcom - the UK’s communications regulator - found that young people are frequent users of generative AI tools. A whopping 79% of 13 to 17-year-olds said they had used them, as did 40% of children aged seven to 12. At the same time, during the 2023 exam season the number of secondary school pupils caught breaking the rules in their GCSE and A Level exams rose by nearly one-fifth on the year before - an increase of more than 50% on the 2019 exam season. One of the most common forms of cheating was students sneaking phones or other devices into exams, the Daily Mail reports.

Data suggests most UK teens who use the internet also use generative AI tools (Image: National World/Adobe Stock)

AI platform AIPRM sent more than 150 freedom of information requests to the UK’s universities earlier this year, to discover just how many students each had caught using AI to cheat. More than four out of five (82.5%) of the 80 that provided data had investigated students for AI-related cheating, while seven universities had issued penalties to more than a hundred students each for AI use since 2022. Birmingham City University had penalised the most students - 402 over the past two years.


A lack of consistent regulations and difficulties in accurately detecting AI-generated work seem to plague tertiary institutions too. An investigation by student accommodation platform StuRents found that nearly half (49%) of the UK’s 160 universities didn’t have any clear policies or guidance on AI use easily available on their official websites.

The University of Reading also published a study this year in which researchers set up 33 fake student profiles. The ‘students’ submitted unedited answers written by OpenAI’s GPT-4 model to a range of online tests and exams, with 94% of these AI submissions slipping through unnoticed - and on top of that, most received slightly better grades than real students.

What is the current guidance on AI in schools?

The government’s official guidance on AI in schools says that generative AI can have many uses in the education sector, like reducing teachers’ workloads. Today’s students also need to learn about AI for their own futures, “to become well-informed users of technology and understand its impact on society”.


But it warns that teachers and students alike need to use AI tools with care. When it comes to cheating, the government points educators to the Joint Council for Qualifications (JCQ) - a group which includes the major school exam boards across England, Wales, Scotland, and Northern Ireland. It last updated its guidance on AI use in assessments in February this year.

AI tools are easily accessible to almost anyone with an internet-capable device (Photo: Adobe Stock)

The 21-page document goes into some detail, but notes that it has always been the case that teachers should only accept work actually written by the student rather than copied from somewhere else - and using AI is no exception. “Students who misuse AI such that the work they submit for assessment is not their own will have committed malpractice,” JCQ writes, and they may receive “severe sanctions”. These can include losing some or all of their marks in an assessment, or even being disqualified from an A Level or GCSE altogether.

The JCQ describes misuse as using one or more AI tools in schoolwork or assessments without acknowledging it. This can include copying or paraphrasing sections of a response - or whole responses - from an AI generator, as this means the work submitted for assessment “is no longer the student’s own”. It can also include failing to acknowledge the use of AI tools correctly, as well as deliberately submitting work with misleading references.


Students should all be made aware of the risks of using AI for their schoolwork and assignments, the guide continues, and of what their school or exam board counts as malpractice. Teachers have to investigate if they suspect AI is being used, and students should be able to demonstrate that their work is their own - using methods like the version history tool in their word processor, which shows edits and progress over time.

If any part of their work has been reproduced directly from an AI-generated response, students need to identify and reference it, just as they would any other source of information. They should also fact-check any information generated by AI, as it might be inaccurate, out of date, from a biased source, or taken out of context.

But referencing the AI they have used doesn’t put students entirely off the hook. Assessors marking their work will check that they have independently met the marking criteria, and won’t award marks where they have not. Schools are also advised to restrict access to AI tools on any of their devices used for student assessments, and to set aside class time for students to work on big projects under their teacher’s supervision.


There is also guidance for teachers using AI programmes to help with tasks like grading - including a warning that AI should not be the sole marker of a student’s work. JCQ recommends a number of AI-detection programmes which can help to pick up the use of AI in assessments, like Turnitin and GPTZero, but says they are not always accurate - so teachers should also learn the signs to look out for themselves.

The Department for Education says that teachers using AI tools on student work also need to be aware of their pupils’ data privacy and intellectual property rights. They should make sure none of their students’ work is used to further train an AI system unless the students or their parents have given permission.

Do you think we need to be doing more to make sure AI isn’t being used to cheat at school, and if so, what? Have your say and make your voice heard by leaving a comment below.
