AI makes plagiarism harder to detect, academics argue – in article written by chatbot

An academic article titled Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT was published this month in an educational journal, describing how artificial intelligence (AI) tools “raise a number of challenges and concerns, particularly around academic honesty and plagiarism”.

What readers – and indeed the reviewers who cleared it for publication – didn’t know was that the article itself was written by the controversial AI chatbot ChatGPT.

“We wanted to show that ChatGPT is writing at a very high level,” said Professor Debby Cotton, director of academic practice at Plymouth Marjon University, who pretended to be the paper’s lead author. “This is an arms race,” she said. “The technology is improving very quickly and it will be difficult for universities to catch up.”

Cotton, along with two colleagues at the University of Plymouth who also claimed to be co-authors, tipped off the editors of the journal Innovations in Education and Teaching International. But the four academics who peer-reviewed the paper assumed it was written by these three scholars.

For years, universities have tried to banish the plague of essay mills that sell pre-written essays and other academic work to any student tempted to game the system. But now academics suspect that even the essay mills are using ChatGPT, and institutions admit they are racing to catch up with – and catch out – anyone passing off the popular chatbot’s work as their own.

The Observer spoke to a number of universities that say they plan to expel students caught using the software.

The peer-reviewed academic paper written by a chatbot appeared this month in Innovations in Education and Teaching International. Photograph: Debby RE Cotton

Thomas Lancaster, a computer scientist and expert on contract cheating at Imperial College London, said many universities are “in a panic”.

“If all you have in front of you is a written document, it’s incredibly difficult to prove that it was written by a machine, because the writing pattern is usually good,” he said. “The use of English and the quality of the grammar is usually better than that of a student.”

Lancaster warned that the latest version of the AI model, ChatGPT-4, released last week, is said to be much better and able to write in a way that feels “more human”.

However, he said academics can still look for clues that a student has used ChatGPT. Perhaps the biggest is that the chatbot doesn’t properly understand academic referencing – a vital part of written university work – and often uses “suspect” references or makes them up entirely.

Cotton said that to ensure her academic paper fooled the reviewers, references had to be corrected and added.

Lancaster thought ChatGPT, created by San Francisco-based technology company OpenAI, would “probably do a good job” with earlier assignments on an undergraduate course, but warned it would let students down later. “As your course becomes more specialised, it will become much harder to outsource work to a machine,” he said. “I don’t think it could write a whole dissertation.”

Bristol University is one of several academic institutions that has issued new guidance to staff on how to detect that a student has used ChatGPT to cheat. This can lead to the expulsion of repeat offenders.

Professor Kate Whittington, associate vice-chancellor at the university, said: “It’s not a case of one offense and you’re out. But we made it very clear that we will not accept cheating because we need to maintain standards.”

Professor Debby Cotton of Plymouth Marjon University highlighted the risks of AI chatbots helping students to cheat. Photograph: Karen Robinson/The Observer

She added: “If you cheat your way through, you might get an entry-level job, but you won’t do well and your career won’t progress the way you want.”

Irene Glendinning, Head of Academic Integrity at Coventry University, said: “We are redoubling our efforts to get the message across to students that if they use these tools to cheat, they can be removed.”

Anyone caught would have to undergo training on the proper use of AI. If they continued to cheat, the university would expel them. “My colleagues are already finding cases and dealing with them. We don’t know how many we’re missing, but we are picking up cases,” she said.

Glendinning urged academics to be on the lookout for language a student would not normally use. “If you can’t hear your student’s voice, that’s a warning sign,” she said. Another is content with “lots of facts and little critique”.

She said students who can’t identify weaknesses in what the bot is producing can slip up. “In my computer science class, AI tools can generate code, but they often contain bugs,” she explained. “You cannot debug a computer program unless you understand the fundamentals of programming.”

With fees at £9,250 a year, students were only cheating themselves, Glendinning said. “They are wasting their money and their time if they aren’t using university to learn.”
