A student created an application to detect text written by AI: NPR
Teachers concerned about students submitting essays written by a popular artificial intelligence chatbot now have a new tool of their own.
Edward Tian, a 22-year-old senior at Princeton University, created an app to detect whether text was written by ChatGPT, the viral chatbot that has sparked fears of potential misuse and ethical concerns in academia.
Tian, a computer science student specializing in journalism, spent part of his winter break building GPTZero, which he says can “quickly and efficiently” determine whether an essay was written by a human or by ChatGPT.
His motivation for creating the bot was to combat what he sees as a rise in AI plagiarism. Since ChatGPT was released in late November, there have been reports of students using the revolutionary language model to pass off AI-written assignments as their own.
“there is so much hype around chatgpt. is this and that written by AI? we as humans deserve to know!” Tian wrote in a tweet introducing GPTZero.
Tian said many teachers contacted him after he put his bot online on Jan. 2 and told him about the positive results they saw while testing it.
Within a week of its launch, more than 30,000 people had tried GPTZero. It was so popular that the app crashed. Streamlit, the free platform hosting GPTZero, has since stepped in to support Tian with more memory and resources to handle web traffic.
How GPTZero works
To determine whether a passage was written by a bot, GPTZero uses two indicators: “perplexity” and “burstiness.” Perplexity measures how complex the text is; if GPTZero is perplexed by the text, it has high complexity and is more likely to have been written by a human. However, if the text is more familiar to the bot – because it has been trained on such data – it will have low perplexity and is therefore more likely to be AI-generated.
Separately, burstiness compares the variation between sentences. Humans tend to write with greater burstiness, for example mixing longer or more complex sentences with shorter ones. AI-generated sentences tend to be more uniform.
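GPTZero’s actual implementation is not public, so the following is only a minimal sketch of the two ideas the article describes. The function names (`perplexity`, `burstiness`) and the use of sentence length as the burstiness signal are illustrative assumptions, not Tian’s code: perplexity here is the standard exponential of the average negative log-probability per token (the log-probabilities would come from a language model), and burstiness is approximated as the spread of sentence lengths.

```python
import math

def perplexity(token_logprobs):
    # Hypothetical helper: token_logprobs are natural-log probabilities
    # assigned to each token by some language model (not provided here).
    # Perplexity = exp(average negative log-probability per token).
    # Lower perplexity means the text is more predictable to the model,
    # which the article associates with AI-generated text.
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

def burstiness(sentences):
    # Illustrative proxy: population standard deviation of sentence
    # lengths in words. Mixing long and short sentences (human-like
    # writing, per the article) yields a higher score; uniform sentence
    # lengths (more AI-like) yield a lower one.
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    return math.sqrt(sum((l - mean) ** 2 for l in lengths) / len(lengths))

# A model that assigns every token probability 0.5 has perplexity 2.
print(perplexity([math.log(0.5)] * 4))   # 2.0

# Uniform sentence lengths score 0; varied lengths score higher.
print(burstiness(["a b c", "d e f"]))                    # 0.0
print(burstiness(["short", "a much longer sentence here"]))
```

In a real detector, both signals would be computed from an actual language model and combined into a classification threshold; this sketch only shows the shape of each measurement.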
In a demo video, Tian compared the app’s analysis of a story in The New Yorker with a LinkedIn post written by ChatGPT. The app correctly distinguished the human writing from the AI’s.
Tian acknowledged that his bot is not foolproof, as some users have reported when they put it to the test. He said he was still working to improve the accuracy of the model.
But by designing an app that sheds light on what sets humans apart from AI, the tool serves a mission at the core of Tian’s work: bringing transparency to AI.
“For so long, AI has been a black box that we really don’t know what’s going on inside,” he said. “And with GPTZero I wanted to start pushing back and fighting against that.”
The quest to fight AI plagiarism
The college senior is not alone in the race to curb AI plagiarism and misuse. OpenAI, the developer of ChatGPT, has committed to preventing AI plagiarism and other malicious applications. Last month, Scott Aaronson, a researcher currently focused on AI safety at OpenAI, revealed that the company was working on a way to “watermark” GPT-generated text with an “unnoticeable secret signal” to identify its source.
The open-source AI community Hugging Face has developed a tool to detect if text was created by GPT-2, an earlier version of the AI model used to create ChatGPT. A South Carolina philosophy professor who knew about the tool said he used it to catch a student submitting an AI-written assignment.
The New York City Department of Education said Thursday it is blocking access to ChatGPT on school networks and devices over concerns about its “negative impact on student learning and concerns about content security and accuracy.”
Tian is not against using AI tools like ChatGPT.
GPTZero is “not intended as a tool to prevent the use of these technologies,” he said. “But with any new technology, we need to be able to use it responsibly, and we need to have precautions.”