Facebook creates an AI that can view and interact with the world like a human

Facebook creates AI that can view and interact with the world from a human’s point of view: More than 2,200 hours of first-person video captured in nine countries could teach it to think like a person

  • Facebook creates an artificial intelligence that is able to view and interact with the outside world in the same way as a person
  • The Ego4D project lets AI learn from ‘videos from the center of action’
  • It has collected over 2,200 hours of first-person video from 700 people
  • It can be used in future devices such as AR glasses and VR headsets


Facebook announced Thursday that it is creating an artificial intelligence capable of viewing and interacting with the outside world in the same way as a person.

The AI project, known as the Ego4D project, will take the technology to the next level and allow AI to learn from ‘videos from the center of action,’ the social networking giant said in a blog post.

The project, a collaboration with 13 universities, has collected more than 2,200 hours of first-person video from 700 people.

It will use video and audio captured by augmented reality and virtual reality devices, such as the Ray-Ban smart glasses announced last month or the Oculus VR headsets.

Facebook creates an artificial intelligence that is able to view and interact with the outside world in the same way as a person

The Ego4D project lets AI learn from 'videos from the center of action'

“AI understanding the world from this point of view could unlock a new era of immersive experiences as devices such as augmented reality (AR) glasses and virtual reality (VR) headsets become as useful in everyday life as smartphones,” the company said in the post.

Facebook set out five goals for the project:

  • Episodic memory, or the ability to know “what happened when,” such as “Where did I leave my keys?”
  • Forecasting, or the ability to predict and anticipate human actions, such as “Wait, you’ve already added salt to this recipe.”
  • Hand and object manipulation, such as ‘Teach me to play the drums’.
  • Keeping an audio and visual diary of daily life, with the ability to know when someone has said something specific.
  • Understanding social and human interaction, such as who is communicating with whom, or “Help me hear the person talking to me better in this noisy restaurant.”

“Traditionally, a robot learns by doing things in the world or by literally being led by the hand to be shown how to do things,” Kristen Grauman, lead researcher at Facebook, said in an interview with CNBC.

“There’s an opening to let them learn from video, just from our own experience.”

It can be used in emerging devices such as AR glasses - like the company's Ray-Ban smart glasses - and VR headsets

Facebook said the project collected more than 2,200 hours of first-person video from 700 people

Facebook’s own AI systems have had mixed success, most notably having to apologize to DailyMail.com and MailOnline after one of its AI systems labeled a black man in a video posted by the news outlet as a “primate.”

While these tasks cannot currently be performed by any AI system, they could become a big part of Facebook’s “metaverse” plans, which combine VR, AR, and reality.

In July, CEO Mark Zuckerberg revealed Facebook’s plans for the metaverse, adding that he believes it is the successor to the mobile web.

“You can think of the metaverse as an embodied internet, where instead of just watching content, you’re in it,” he said in an interview with The Verge at the time.

“And you feel present with other people, as if you were in other places having different experiences that you couldn’t necessarily have on a 2D app or web page, like dancing, for example, or different types of fitness.”

Facebook plans to make the Ego4D dataset publicly available to researchers in November, the company said.

There are some concerns that the project could have negative privacy implications - for example, recording people who don’t want to be included - an area where Facebook has a mixed record.

A spokesperson told The Verge that additional privacy safeguards would be introduced down the line.

WHAT IS THE DIFFERENCE BETWEEN AR AND VR?

Virtual reality is a computer-generated simulation of an environment or situation.

It immerses the user by making them feel like they are in the simulated reality through images and sounds.

For example, in VR, you might feel like you’re climbing a mountain while you’re at home.

Augmented reality, on the other hand, superimposes computer-generated images on an existing reality.

AR has been developed into apps to bring digital components into the real world.

For example, in the Pokemon Go app, the characters seem to appear in real-world scenarios.
