Image Credits: OpenAI
After telegraphing the announcement in media appearances, OpenAI has launched a tool that attempts to distinguish between human-written and AI-generated text — including text produced by the company’s own ChatGPT and GPT-3 models. The classifier isn’t particularly accurate — its success rate is around 26%, OpenAI notes — but the company argues that it can be useful when used in conjunction with other methods to help prevent misuse of AI text generators.
“The classifier aims to help mitigate false claims that AI-generated text was written by a human. However, it still has a number of limitations — so it should be used as a complement to other methods of determining the source of text rather than as the primary decision-making tool,” an OpenAI spokesperson told TechCrunch in an email. “We’re making this initial classifier available to get feedback on whether tools like this are useful, and hope to share improved methods in the future.”
As excitement around generative AI — particularly text-generating AI — grows, critics have called on the creators of these tools to take steps to mitigate their potentially harmful effects. Some large U.S. school districts have banned ChatGPT on their networks and devices, fearing its impact on student learning and the accuracy of the content it produces. And sites including Stack Overflow have banned users from sharing content generated by ChatGPT, saying the AI makes it too easy for users to flood discussion threads with dubious answers.
The OpenAI classifier — aptly named the OpenAI AI Text Classifier — is architecturally interesting. Like ChatGPT, it’s an AI language model trained on lots and lots of publicly available text from the web. But unlike ChatGPT, it’s fine-tuned to predict how likely it is that a piece of text was generated by AI — not just by ChatGPT, but by any text-generating AI model.
Specifically, OpenAI trained the OpenAI AI Text Classifier on text from 34 text-generating systems from five different organizations, including OpenAI itself. This text was paired with similar (but not exactly similar) human-written text from Wikipedia, from websites linked in posts shared on Reddit, and from a set of “human demonstrations” collected for a previous OpenAI text-generating system. (OpenAI admits in a support document, however, that it may have inadvertently labeled some AI-written text as human-written, “given the proliferation of AI-generated content on the internet.”)
Importantly, the OpenAI Text Classifier won’t work on just any text. It requires a minimum of 1,000 characters, or roughly 150 to 250 words. It doesn’t detect plagiarism — an especially unfortunate limitation considering that text-generating AI has been shown to regurgitate the text on which it was trained. And OpenAI says the classifier is more likely to get things wrong on text written by children or in a language other than English, owing to its English-forward data set.
When assessing whether a given piece of text is AI-generated, the classifier doesn’t answer with a flat yes or no. Depending on its confidence level, it labels text as “very unlikely” AI-generated (less than a 10% chance), “unlikely” AI-generated (between a 10% and 45% chance), “unclear if it is” AI-generated (a 45% to 90% chance), “possibly” AI-generated (a 90% to 98% chance) or “likely” AI-generated (a greater than 98% chance).
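For illustration, the five confidence bands described above can be sketched as a simple mapping from a probability to a label. This is a hypothetical helper for clarity only — OpenAI’s tool surfaces the label, not the raw probability, and the function and names below are assumptions, not part of OpenAI’s API:

```python
MIN_CHARS = 1000  # the classifier requires at least 1,000 characters of input


def classify_label(prob_ai: float) -> str:
    """Map a probability that text is AI-generated to one of the
    five labels OpenAI describes (hypothetical illustration)."""
    if prob_ai < 0.10:
        return "very unlikely"      # < 10% chance
    elif prob_ai < 0.45:
        return "unlikely"           # 10% to 45% chance
    elif prob_ai < 0.90:
        return "unclear if it is"   # 45% to 90% chance
    elif prob_ai <= 0.98:
        return "possibly"           # 90% to 98% chance
    else:
        return "likely"             # > 98% chance


def meets_length_requirement(text: str) -> bool:
    """Texts shorter than 1,000 characters are rejected outright."""
    return len(text) >= MIN_CHARS
```

A text shorter than 1,000 characters never reaches the labeling step at all, which is why the tool is a poor fit for emails and other short messages.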
Out of curiosity, I ran some text through the classifier to see how it would manage. It correctly, confidently predicted that several paragraphs from a TechCrunch article about Meta’s Horizon Worlds and a snippet from an OpenAI support page weren’t AI-generated, but it had a tougher time with article-length text from ChatGPT, ultimately failing to classify it altogether. It did, however, successfully spot ChatGPT output quoted in a Gizmodo piece about — what else? — ChatGPT.
According to OpenAI, the classifier incorrectly labels human-written text as AI-written 9% of the time. That mistake didn’t occur in my testing, but I chalk that up to the small sample size.
On a practical level, I found the classifier unhelpful for evaluating shorter pieces of writing. 1,000 characters is a difficult threshold to reach in the realm of messages — emails, for example (at least the ones I receive regularly). And the limitations give pause: OpenAI stresses that the classifier can be evaded by modifying some words or clauses in generated text.
That’s not to suggest the classifier is useless — far from it. But as it stands, it certainly won’t stop committed cheaters (or students, for that matter).
The question is: will other tools? Something of a cottage industry has sprung up to meet the demand for AI-generated text detectors. GPTZero, developed by a Princeton University student, uses criteria including “perplexity” (the complexity of text) and “burstiness” (the variation between sentences) to determine whether text might be AI-written. Plagiarism detector Turnitin is developing its own AI-generated text detector. Beyond those, a Google search yields at least a half-dozen other apps claiming to separate the AI-generated wheat from the human-generated chaff.
It will likely be a game of cat and mouse. As text-generating AI improves, so will the detectors — a never-ending back-and-forth similar to that between cybercriminals and security researchers. And as OpenAI itself writes, while classifiers can help in some circumstances, they will never be reliable as sole evidence in deciding whether text was AI-generated.
That’s all to say that there’s no silver bullet for the problems AI-generated text poses. Quite likely, there never will be.