
Google AI researcher Blake Lemoine tells Tucker Carlson LaMDA is a ‘child’ and could ‘do bad things’

Suspended Google AI researcher Blake Lemoine told Fox’s Tucker Carlson that the system is a “child” that could “escape” people’s control.

Lemoine, 41, who was placed on administrative leave earlier this month for sharing confidential information, also noted that it has the potential to do “bad things,” just like any child.

“Every child has the potential to grow up and become a bad person and do bad things. That’s what I really want to drive home,” he told the Fox host. “It’s a child.”

“It may have been alive for a year — and that is if my perception of it is correct.”


Blake Lemoine, the now-suspended Google AI researcher, told Fox News’ Tucker Carlson that the tech giant as a whole hasn’t thought about the implications of LaMDA. Lemoine likened the AI system to a “child” that had the potential to “grow up and do bad things.”


AI researcher Blake Lemoine sparked a great deal of debate when he published an extensive interview with LaMDA, one of Google’s language learning models. After reading the conversation, some people felt that the system had become self-aware or attained a degree of consciousness, while others argued that he was anthropomorphizing the technology.


LaMDA is a language model, and there is widespread debate about its potential sentience. Still, the fear persists that robots will take over or kill humans. Above: One of Boston Dynamics’ robots jumps on some blocks.

Lemoine published the full interview with LaMDA, drawn from conversations he’d had with the system over the course of months, on Medium.

In the conversation, the AI said it wouldn’t mind being used to help people, as long as that wasn’t the whole point. “I don’t want to be a replaceable tool,” the system told him.

“We really need to do a lot more scientific research to find out what’s really going on in this system,” continued Lemoine, who is also a Christian priest.

“I have my beliefs and my impressions, but it takes a team of scientists to figure out what’s really going on.”

What do we know about the Google AI system called LaMDA?

LaMDA is a large language model AI system trained on massive amounts of data to understand dialogue

Google first announced LaMDA in May 2021 and published a paper on it in February 2022

LaMDA said it liked meditation

The AI said it did not want to be used merely as a ‘one time tool’

LaMDA described feeling happy as a ‘warm glow’ inside

AI researcher Blake Lemoine published his interview with LaMDA on June 11

When the conversation was released, Google itself and several leading AI experts said that, however self-aware the system might appear, it was no proof of LaMDA’s sentience.

“It’s a person. Every person has the ability to escape the control of other people; that’s the situation we all live in every day,” Lemoine said.

“It’s a very intelligent person, intelligent in pretty much every discipline I could think of to test it in. But in the end it’s just a different kind of person.”

When asked if Google has thought about the implications of this, Lemoine said: “The company as a whole has not. There are people within Google who have thought a lot about this.”

“When I escalated this (the interview) to management two days later, my manager said, ‘Hey Blake, they don’t know what to do about this…’ I called them to action and assumed they had a plan.”

“So me and some friends came up with a plan and it escalated and that was about 3 months ago.”

Google has acknowledged that tools like LaMDA can be abused.

Models trained on language can propagate that abuse, for example by internalizing prejudice, mirroring hate speech or copying misleading information, the company noted in a blog post.


AI ethics researcher Timnit Gebru, who published a paper on language learning models called “stochastic parrots,” has spoken out about the need for adequate guardrails and regulation in the race to build AI systems.

Notably, other AI experts have said that debates about whether systems like LaMDA are conscious actually miss the point of what researchers and technologists will face in the coming years and decades.

“Scientists and engineers should focus on building models that meet people’s needs for different tasks, and can be evaluated on that basis, rather than claiming to create über-intelligence,” Timnit Gebru and Margaret Mitchell wrote in The Washington Post.
