
Google CEO calls for AI regulation to protect against deepfakes and the misuse of facial recognition

The CEO of Google has called for international cooperation to regulate artificial intelligence technology and ensure that it is “harnessed for good.”

Sundar Pichai said that while regulation by individual governments and existing rules such as GDPR can provide a “solid basis” for AI regulation, a more coordinated international effort is “critical” for global standards to work.

The CEO said that history is full of examples of how “the virtues of technology are not guaranteed” and that with technological innovations come side effects.

These range from the internal combustion engine, which allowed people to travel beyond their own areas but also caused more accidents, to the internet, which helped people connect but also made it easier for misinformation to spread.

These lessons teach us that “we must have a clear idea of what could go wrong” in developing AI-based technologies, he said.

He referred to the nefarious uses of facial recognition and the spread of misinformation online in the form of deepfakes as examples of the potential negative consequences of AI.

Google CEO Sundar Pichai (pictured) has asked governments to take a step forward in how they regulate AI

“Companies like ours cannot simply build promising new technologies and let market forces decide how they will be used,” he said, writing in the Financial Times.

‘It is equally important for us to ensure that technology is used for good and available to all.

‘Now there is no doubt in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.

Pichai pointed to Google’s AI Principles, a framework by which the company evaluates its own research and applications of the technology.

The list of seven principles is intended to help Google avoid bias, test for safety and keep its technology accountable to people, including consumers.

It also promises not to design or deploy technologies that cause harm, such as autonomous weapons or surveillance that violates internationally accepted norms.

To put these principles into practice, the company tests AI decisions for fairness and conducts independent human rights assessments of new products.

Last year, Google announced a large dataset of deepfakes to help researchers develop detection methods.


Pichai, who also became CEO of Google’s parent company, Alphabet, last month, said that international alignment will be critical to keeping AI safe for humanity.

‘We want to be a useful and committed partner with regulators while dealing with the inevitable tensions and trade-offs.

WHAT IS A DEEPFAKE?

Deepfakes are so called because they are made using deep learning, a form of artificial intelligence, to create fake videos of a target individual.

They are made by feeding the computer an algorithm or a set of instructions, as well as many images and audio of the target person.

Then, the computer program learns to imitate the person's facial expressions, mannerisms, voice and inflections.

With enough video and audio of someone, a fake video of the person can be combined with fake audio to make them appear to say whatever you want.
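For readers curious about the mechanics, below is a minimal, purely illustrative sketch in PyTorch of the widely described shared-encoder, per-person-decoder idea behind classic face-swap deepfakes. It is not Google's method or any specific tool; the network sizes, variable names and random stand-in data are all assumptions for illustration, and a toy model like this cannot produce a convincing fake.

```python
# Illustrative sketch only: a shared encoder with one decoder per person.
# All shapes, names and the random stand-in "face" tensors are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared latent code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One encoder shared by both people; each decoder learns to reconstruct
# only its own person's faces from the shared latent representation.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

faces_a = torch.rand(8, 3, 64, 64)  # stand-in for cropped frames of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for cropped frames of person B

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):  # a real system trains far longer on real face crops
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The "swap": encode person A's expression, decode with person B's decoder,
# which (in a fully trained system) renders person B making A's expressions.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

The same general principle is what detection research, including the dataset Google released, tries to counter: classifiers are trained to spot the statistical artefacts such generators leave behind.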

“While work is already being done to address these concerns, there will inevitably be more challenges ahead than any company or industry can solve on its own,” Pichai wrote.

“We offer our expertise, experience and tools as we navigate these issues together.”

Existing rules, such as the General Data Protection Regulation, can also serve as a solid basis for individual governments to enforce technology regulation, he said.

However, Pichai's company does not have an entirely clean record in this regard, and the first step for Google will be to heed its own advice.

In the last year, the French data regulator CNIL imposed a record fine of 50 million euros on Google for violating the GDPR.

The company also had to suspend its own facial recognition research program after reports emerged that its workers had been taking photos of homeless black people to build their image database.

The burden of responsibility ultimately rests with companies such as Google, and with how far they are willing to go to ensure that AI technologies do not violate privacy laws, spread misinformation or usher in an era of independently thinking killer robots.

Google's search engine uses artificial intelligence and machine learning technologies to return search results


Last year, a former Google software engineer expressed fears about a new generation of robots that could carry out “atrocities and illegal killings.”

Laura Nolan, who previously worked on the tech giant's military drone initiative, Project Maven, called for a ban on all autonomous war drones, arguing that these machines do not have the same common sense or insight as humans.

“What you are seeing are possible atrocities and illegal killings even under the laws of war, especially if hundreds or thousands of these machines are deployed,” said Nolan, who is now a member of the International Robot Weapons Control Committee.

“There could be large-scale accidents because these things will start to behave in unexpected ways,” she told the Guardian.

While many of today’s drones, missiles, tanks and submarines are semi-autonomous, and have been for decades, they all have human supervision.

The former Google engineer Laura Nolan expressed fears about a new generation of robots that could carry out “atrocities and illegal killings.” Pictured is the MQ-9 Reaper war drone, an unmanned aircraft capable of remotely controlled or autonomous flight operations, used by the US Air Force.

However, a new crop of weapons developed by nations such as the US, Russia and Israel, known as lethal autonomous weapons systems (LAWS), can identify, attack and kill a person on their own, even though there are no international laws governing their use.

Consumers, companies and independent groups alike fear that artificial intelligence could become so sophisticated that it can outwit humans or become physically dangerous to humanity, whether it is programmed to do so or not.

National and global AI regulations have been unsystematic and slow to take effect, although some progress is being made.

Last May, 42 countries, including members of the Organisation for Economic Co-operation and Development (OECD) such as the United Kingdom, the United States, Australia, Japan and Korea, adopted the first set of intergovernmental policy guidelines on AI.

The OECD Principles on Artificial Intelligence comprise five principles for the “responsible stewardship of trustworthy AI” and recommendations for public policy and international cooperation.

But the Principles do not have the force of law, and the United Kingdom has yet to put in place a specific legal regime regulating the use of AI.

A report by Drone Wars UK also states that the Ministry of Defence is funding multiple AI weapons systems, despite claiming not to be developing them.

As for the US, the Pentagon released a set of recommendations on the ethical use of AI by the Department of Defense last November.

However, both the United Kingdom and the United States were reportedly among a group of states, also including Australia, Israel and Russia, that spoke out against the legal regulation of killer robots at the UN last March.

Will robots one day get away with war crimes?

If a robot illegally kills someone in the heat of battle, who is responsible for the death?

In a 2017 report, Human Rights Watch highlighted the rather disturbing answer: nobody.

The organization says that something must be done about this lack of accountability, and calls for a ban on the development and use of ‘killer robots’.

Titled ‘Mind the Gap: The Lack of Accountability for Killer Robots’, its report details the obstacles to holding anyone accountable when robots are allowed to kill without human control.

“No accountability means no deterrence of future crimes, no retribution for victims, and no social condemnation of the responsible party,” said Bonnie Docherty, senior researcher in the HRW arms division and lead author of the report.
