An artificial intelligence project that could write fake news deemed 'too dangerous' to release to the public has been recreated by two university students.

OpenAI, a project founded with the support of Elon Musk, is capable of generating news items from a headline or first line of text.

In February, the company released a limited version of its software that other developers could use to explore its potential.

The company, in which Musk is no longer involved, has since launched an updated version of the software with half the power of the full AI.

Now computer science master's students Aaron Gokaslan and Vanya Cohen have shared code for what they believe is the full version.

An artificial intelligence project that could write fake news deemed 'too dangerous' to release to the public has been recreated by two university students (stock image)

OpenAI has released a watered-down version of its GPT-2 text-generating AI, which it believes is about half as powerful as the full model

The pair say they are not hoping to cause chaos by releasing the code, but want to show that creating this type of software is feasible without the resources of someone like Elon Musk.

They used free cloud computing time provided by Google to academic institutions to complete the project.

Speaking to Wired, Mr Cohen said: 'This allows everyone to have an important conversation about security, and researchers to help guard against future potential abuses.

'I have received dozens of messages, and most of them were like: "Way to go."'

The software, called GPT-2, was trained on eight million web pages and adapts the style and content of what it produces to the input it is given.

Far from the dystopian 'fake news' generator its makers warned about, the text it generates is often incoherent and clearly not the work of a talented author.

When given the headline 'Donald Trump declares that he is president for life', the latest OpenAI version produced the following output: 'The announcement, which was made on Twitter on Tuesday evening, came at a time when the Republican billionaire is still seeking to be elected president.

'But if he wins the presidency, Trump has promised "large tax cuts and huge infrastructure spending", and this is probably a top priority for his administration.

'So what would Trump's tax plan, revealed during his campaign, look like?

'"There's a little more detail than there is, but there's a lot of information there," said Matt Kibbe, a spokesperson for the House GOP campaign arm. "There are many details that will be released in the coming weeks."'

OpenAI, a project founded with the support of Elon Musk, is capable of generating news items from a headline or first line of text

However, experts say we must still be cautious about the development of such software.

Speaking to the BBC, Dave Coplin, founder of AI consultancy The Envisioners, said: 'Once the initial – and understandable – concern subsides, a fundamentally crucial debate remains for our society, which is about how we should think about a world where the line between human-generated content and computer-generated content is becoming increasingly difficult to distinguish.'

OpenAI is a group founded by Elon Musk and supported by heavyweights from Silicon Valley, including Reid Hoffman from LinkedIn.

Musk is famously an outspoken critic of AI, having called it the greatest existential threat to humanity and warned that we could create 'an immortal dictator from which we would never escape'.

The researchers said: 'Due to our concerns about malicious applications of the technology, we are not releasing the trained model.

'As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with.

'We are not yet at a stage where we are saying this is a danger. We are trying to make people aware of these issues and to start a conversation.'

A TIMELINE OF ELON MUSK'S COMMENTS ON AI

Musk is a long-standing, and very vocal, critic of AI technology and the precautions people need to take with it

Elon Musk is one of the most prominent names and faces in the development of new technologies.

The billionaire entrepreneur leads SpaceX, Tesla and The Boring Company.

But while he is at the forefront of creating AI technologies, he is also keenly aware of the dangers.

Here is an extensive timeline of all Musk's predictions, thoughts and warnings about AI so far.

August 2014 – 'We must be super careful with AI. Potentially more dangerous than nuclear weapons.'

October 2014 – 'I think we should be very careful with artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence.'

October 2014 – 'With artificial intelligence we are summoning the demon.'

June 2016 – 'The benign situation with ultra-intelligent AI is that we would be so far below it in intelligence that we would be like a pet, or a house cat.'

July 2017 – 'I think AI is something that is risky at the civilisation level, not merely at the individual risk level, and that's why it really demands a lot of safety research.'

July 2017 – 'I have exposure to the very latest AI and I think people should be really concerned about it.'

July 2017 – 'I keep sounding the alarm, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal.'

August 2017 – 'If you're not concerned about AI safety, you should be. Vastly more risk than North Korea.'

November 2017 – 'Maybe there is a 5 to 10 per cent chance of success (of making AI safe).'

March 2018 – 'AI is much more dangerous than nuclear weapons. So why don't we have regulatory oversight?'

April 2018 – '(AI is) a very important topic. It will affect our lives in ways that we cannot even imagine.'

April 2018 – '(We could create) an immortal dictator from which we would never escape.'
