Content generated by AI systems such as ChatGPT still contains common-sense and reasoning errors, and those errors can be used to identify it.
Results of a new study presented at the Association for the Advancement of Artificial Intelligence conference in February showed that it is possible to tell whether content was generated by ChatGPT or written by a human.
A team from the University of Pennsylvania’s School of Engineering and Applied Science ran the largest study of its kind to date, using Real or Fake Text?, a web-based training game created by the university, to collect training data so that an AI could determine whether a passage was generated by ChatGPT or written by a human.
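The article describes training a classifier on collected examples of human-written and machine-generated text. As a purely illustrative sketch (this is not the Penn team’s actual model, and the toy data and labels below are hypothetical), a minimal bag-of-words Naive Bayes classifier over such labeled examples might look like this:

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    # examples: list of (text, label) pairs; returns per-label word counts and label priors
    counts = defaultdict(Counter)
    labels = Counter()
    for text, label in examples:
        labels[label] += 1
        counts[label].update(tokenize(text))
    return counts, labels

def predict(counts, labels, text):
    # Naive Bayes with Laplace smoothing over a shared vocabulary
    vocab = {w for c in counts.values() for w in c}
    total = sum(labels.values())
    best, best_lp = None, float("-inf")
    for label in labels:
        lp = math.log(labels[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for w in tokenize(text):
            lp += math.log((counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy labeled data standing in for the annotations a game like
# Real or Fake Text? might collect (entirely made up for illustration).
examples = [
    ("the cat sat quietly on the old mat", "human"),
    ("as an ai language model i cannot provide that", "machine"),
]
counts, labels = train(examples)
```

A real detector would of course use far richer features and far more data; the sketch only shows the overall shape of the task, mapping a passage of text to a "human" or "machine" label.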
Dr. Liam Dugan, a co-author of the study, explains (as translated by Igeekphone):
Today’s artificial intelligence can already produce very fluent, grammatical text. But it still makes mistakes.
We have shown that machines make identifiable errors, such as common-sense errors, correlation errors, reasoning errors, and logical errors, and we have figured out how to find them.
People are anxious about artificial intelligence, and our research may alleviate some of that anxiety. I think right now AI is best suited for creative collaboration, helping us write more imaginative, interesting texts. But when AI is used for news reports, academic papers, or legal advice, the problem is that we cannot be sure what it produces is true.