Students submitted more than 22 million papers that may have used generative AI last year, according to new data from plagiarism detection company Turnitin.
A year ago, Turnitin introduced an AI writing detection tool trained on papers written by students as well as on AI-generated text. Since then, the detector has reviewed more than 200 million papers, mostly written by high school and college students. Turnitin found that 11 percent may contain AI-written language, with 3 percent of the total papers reviewed flagged as containing 80 percent or more AI writing. (Turnitin is owned by Advance, which also owns Condé Nast, publisher of WIRED.) Turnitin says its detector has a false-positive rate of less than 1 percent when analyzing entire documents.
The launch of ChatGPT was met with fears that the English class essay would disappear. The chatbot can synthesize and distill information almost instantly, but that doesn't mean it's always right. Generative AI is known to hallucinate, making up its own facts and citing academic references that don't exist. Generative AI chatbots have also been caught spitting out biased text about sex and race. Despite these shortcomings, students have used chatbots for research, for organizing ideas, and as ghostwriters. Traces of chatbots have even been found in peer-reviewed, published academic writing.
Teachers understandably want to hold students accountable for using generative AI without permission or disclosure. But that requires a reliable way to prove AI was used in a given assignment. Instructors have sometimes tried to find their own solutions for detecting AI in writing, using messy, untested methods to enforce rules, and distressing students in the process. Further complicating the matter, some teachers are themselves using generative AI in their grading processes.
Detecting the use of gen AI is difficult. It's not as simple as flagging plagiarism, because the generated text is still original text. Moreover, there's nuance to how students use gen AI; some might ask chatbots to write their papers for them in large chunks or in full, while others may use the tools as an aid or a brainstorming partner.
Students also aren't tempted only by ChatGPT and similar large language models. So-called word spinners are another type of AI software that rewrites text, and they can make it less obvious to a teacher that work has been plagiarized or AI-generated. Turnitin's AI detector has also been updated to detect word spinners, says Annie Chechitelli, the company's chief product officer. It can also flag work that has been rewritten by services like the spell checker Grammarly, which now has its own generative AI tool. As trusted software adds more and more generative AI components, what students can and can't use becomes increasingly muddled.
Detection tools themselves carry the risk of bias. Students who don't speak English as a first language are more likely to be falsely flagged; a 2023 study found a false-positive rate of 61.3 percent when evaluating Test of English as a Foreign Language (TOEFL) exams with seven different AI detectors. The study did not examine Turnitin's tool. The company says it has trained its detector on writing from both native and non-native English speakers. A study published in October found Turnitin to be among the most accurate of 16 AI detectors tested against undergraduate papers and AI-generated papers.