Elon Musk-backed AI company releases its text-generating bot, despite concerns that it could be used to create fake news and spam
- The tool can generate human-like text from a single prompt
- Some researchers fear it could be used to generate fake news and spam
- The tool had been withheld from the public since February
The creators of a text-generating bot have released the tool to the public, despite earlier fears that bad actors might misuse it.
OpenAI – an Elon Musk-backed artificial intelligence research company – detailed its system, called GPT-2, in February, but withheld the full version over concerns that it could be used to spread spam and fake news.
The AI is able to take a short piece of text and extrapolate that small amount of information into a longer document.
Above is an example of a prompt input by MailOnline. The portion in bold is the original entry and the text that follows was generated by the computer
For example, if it is given a fake headline, the bot can produce a reasonably convincing fake news story based on the prompt.
That ability also extends to more creative forms of writing, such as poetry.
GPT-2 has been trained on 8 million documents and can generate surprisingly coherent and convincing results.
To see for yourself how skilled the bot is at generating lucid text, you can try a facsimile of OpenAI's technology at TalkToTransformer.com.
According to The Verge, OpenAI says there is 'no strong evidence of abuse' of GPT-2 and has therefore published the full model of its program.
The AI can imitate poetry as well as prose. Above is an excerpt riffing on Robert Frost's famous poem 'The Road Not Taken'
Above is an example of what happens when you enter a real headline – in this case, one taken from the science section of the Daily Mail. The bold text is the original prompt, while the text that follows was produced by GPT-2
Despite OpenAI's claims that its artificial intelligence has not been abused, researchers have warned against releasing systems like it.
In February, after the company announced that it had developed GPT-2, a debate over AI ethics broke out in the research community, echoing similar debates over video-manipulation tools commonly known as deepfakes.
As the world of AI becomes more sophisticated, researchers and ethicists are struggling to develop guidelines for when and how the technology should be used.
Deepfake tools are capable of fabricating both audio and video to a degree that is almost indistinguishable from source material, and are regarded as a potential vehicle for misinformation.
OpenAI says it has also developed a tool that can recognize text generated by its system with 95 percent accuracy, but as The Verge notes, that effort would still need to be paired with human judges.
The company says it will continue to monitor how GPT-2 is used and how it spreads through the community.