Tuning Up Turnitin et al. to Identify ChatGPT Output?

Is ChatGPT’s rephrasing and summarizing of internet content the new normal in academia? Will the service determine which sources university students use, what they say, and how they say it? Professors and tutors should remain able to gauge how much effort their students have invested in their written assignments.

So far, no software or application can identify to what extent ChatGPT has been used in students’ research and writing tasks and theses. It would, however, be expedient to create a tool for university staff that tracks the output of ChatGPT. On the technical side, this would require large server farms and CPU power, independent of the core functions of ChatGPT. Yet such a tool would be essential for checking student-submitted content within a relatively short time.

What is more, there are already several alternatives to ChatGPT, and their number is growing, which makes things more confusing. Broadly, on the economic side, one would have to work alongside all operators of ChatGPT-like tools, who would certainly charge fees to cover their huge costs, while staying true to privacy considerations, since most users of generative AI are likely private individuals.

There are three basic solutions:

  • Plagiarism checkers such as Turnitin, Grammarly, and other proven anti-plagiarism brands could be fitted with interfaces to ChatGPT, and its many imitators, to sniff out passages of text created by automated services (a minimal sketch of one such detection heuristic follows this list). Academic supervisors might have to wait a little longer for results, but this would be worthwhile. In any case, such a comparative service would prompt students to do their research themselves and to rephrase internet texts as well as generative tools’ results in order to earn good grades. In short, proper initiative would be encouraged.
  • The second viable way would be to create standalone software to identify generative artificial intelligence output passed off as original ‘human’ content. But as economics teaches, a standalone product would forgo synergy effects, which puts established anti-plagiarism checkers in a good starting position. Moreover, an all-in-one tool would make it easier for supervisors to weigh yellow and red flags concerning both classical sources and AI output.
  • Finally, in theory, there is a third way: accepting tasks, assignments, and theses while merely comparing them to static web content, thereby ignoring generative AI. This option should not be chosen, because it would level the students’ playing field at a very low academic standard, leaving no way to determine what genuine performance candidates for graduation have delivered, or what actual achievement young scholars have made in producing their, so far properly authored, texts and documents.
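
To make the first option more concrete, here is a minimal sketch of one heuristic such an interface might apply: scoring a passage’s perplexity under a reference language model, since machine-generated prose tends to be unusually predictable to such a model. The model choice (GPT-2), the threshold value, and the function names are illustrative assumptions, not a description of Turnitin’s or any other vendor’s actual product.

```python
# Minimal sketch: perplexity-based flagging of possibly machine-generated text.
# Assumptions: GPT-2 as the reference model; the threshold is purely illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for the text (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the average
        # cross-entropy loss over the sequence; exp(loss) is the perplexity.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

PERPLEXITY_THRESHOLD = 50.0  # hypothetical cut-off; would need calibration

def flag_passage(text: str) -> bool:
    """Weak heuristic: flag text the reference model finds very predictable."""
    return perplexity(text) < PERPLEXITY_THRESHOLD
```

A production checker would combine many such signals, for instance burstiness, watermark checks, or comparison against known model outputs, rather than rely on a single score; false positives remain a serious concern for any of the routes above.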

Thorsten Koch, MA, PgDip
Policyinstitute.net
2 March 2023
