OpenAI has published the text-generating AI that was too dangerous to share

The OpenAI research laboratory has released the full version of a text-generating AI system that experts warned could be used for malicious purposes.

The institute originally announced the system, GPT-2, in February of this year, but withheld the full version of the program for fear that it would be used to spread fake news, spam, and disinformation. Since then, it has released smaller, less complex versions of GPT-2 and studied how they were received. Others have also replicated the work. In a blog post this week, OpenAI says it has "not seen strong evidence of abuse" and has now released the model in full.

GPT-2 is part of a new breed of text-generation systems that have impressed experts with their ability to produce coherent text from minimal prompts. The system was trained on eight million text documents scraped from the web and responds to text snippets supplied by users. Feed it a fake headline, for example, and it will write a news story; give it the first line of a poem and it will supply a whole verse.

It is difficult to convey exactly how good GPT-2's output is, but the model frequently produces eerily persuasive writing that can often give the impression of intelligence (though that is not to say that what GPT-2 does involves anything we would recognize as cognition). Play with the system long enough, though, and its limitations become clear. Chief among them is the challenge of long-term coherence: consistently using the names and attributes of characters in a story, for example, or sticking to a single topic in a news article.

The best way to get a feel for GPT-2's capabilities is to try it out yourself. You can use a web version at TalkToTransformer.com and enter your own prompts. (A "transformer" is a component of the machine learning architecture used to create GPT-2 and its peers.)
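
The article itself shows no code, but for readers who want to experiment locally rather than through the web demo, a minimal sketch is possible using the Hugging Face transformers library (an assumption on our part; it hosts the released GPT-2 weights under names such as "gpt2" for the small model and "gpt2-xl" for the full 1.5-billion-parameter version):

```python
# Minimal sketch: generate a continuation of a prompt with the released GPT-2 weights.
# Assumes the Hugging Face "transformers" library, which is not mentioned in the article.
from transformers import pipeline

# "gpt2" is the small model; swap in "gpt2-xl" for the fully released version.
generator = pipeline("text-generation", model="gpt2")

prompt = "Give it a fake headline and it will write a news story:"
outputs = generator(prompt, max_length=80, num_return_sequences=1, do_sample=True)

print(outputs[0]["generated_text"])
```

Because sampling is enabled, each run produces a different continuation, which is roughly the experience the web demo offers.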

Apart from GPT-2's raw capabilities, the model's release is notable as part of an ongoing debate about the responsibility of AI researchers to limit the harm caused by their work. Experts have pointed out that easy access to cutting-edge research can empower malicious actors, a dynamic seen, for example, in the use of deepfakes to generate revenge porn. OpenAI limited the release of its model for these reasons.

Not everyone welcomed this approach. Many experts criticized the decision, saying it limited the amount of research others could do to mitigate the model's harms, and that it created unnecessary hype about the dangers of artificial intelligence. "The words 'too dangerous' were casually thrown out without much thought or experimentation," researcher Delip Rao told The Verge back in February. "I don't think [OpenAI] spent enough time to prove that it was really dangerous."

In its announcement of the full model this week, OpenAI noted that GPT-2 could be abused, citing third-party research showing the system could be used to generate "synthetic propaganda" for extreme ideological positions. But it also admitted that its fear that the system would be used to pump out large volumes of coherent spam, overwhelming online information systems such as social media, has not materialized.

The laboratory also noted that while its in-house researchers had created automatic systems capable of detecting GPT-2 output with roughly 95 percent accuracy, this figure was not high enough for standalone detection, meaning that any system for spotting fake text would need to be paired with human judges. That is not particularly unusual for such moderation systems, though, which already rely on people to spot fake images and videos.
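
OpenAI published a RoBERTa-based detector fine-tuned on GPT-2 output alongside the full model. The article does not show how such a detector is used, but a rough sketch might look like the following, assuming the Hugging Face transformers library and assuming the detector weights are mirrored on its hub under the name "roberta-base-openai-detector" (that name is our assumption, not something the article specifies):

```python
# Illustrative sketch only: scoring a passage as human- or machine-written.
# Assumes the Hugging Face "transformers" library and the hub model name
# "roberta-base-openai-detector" (an assumption; verify availability before use).
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

passage = "Paste a paragraph of suspect text here."
result = detector(passage)[0]

# The pipeline returns a predicted label and a confidence score; per the article,
# scores like these would still need review by human judges in practice.
print(result["label"], round(result["score"], 3))
```

A score in the mid-90s for accuracy, as the article notes, is useful for triage but not reliable enough to act on without a person in the loop.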

OpenAI says it will continue to look at how GPT-2 is used by the community and the public and will further develop its policy for the responsible publication of AI research.