Human misuse will make artificial intelligence more dangerous

Sam Altman, CEO of OpenAI, expects AGI, or artificial general intelligence (AI that outperforms humans at most tasks), around 2027 or 2028. Elon Musk's prediction is 2025 or 2026, and he has claimed that he was "losing sleep over the threat of AI danger." Such predictions are wrong. As the limitations of today's AI become increasingly clear, most AI researchers have come to the conclusion that simply building bigger, more powerful chatbots will not lead to AGI.

However, in 2025, AI will still pose an enormous risk: not from artificial superintelligence, but from human misuse.

Some of this misuse is unintentional, such as lawyers relying too heavily on AI. Since the launch of ChatGPT, for example, several lawyers have been sanctioned for using AI to generate erroneous court filings, apparently unaware of chatbots' tendency to make things up. In British Columbia, lawyer Chong Ke was ordered to pay opposing counsel's costs after she included fictitious AI-generated cases in a court filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for submitting fictitious citations. In Colorado, Zachariah Crabill was suspended for a year for using fictitious court cases generated by ChatGPT and blaming a "legal intern" for the errors. The list is growing rapidly.

Other misuse is intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. The images were created using Microsoft's "Designer" AI tool. While the company had guardrails to prevent generating images of real people, misspelling Swift's name was enough to bypass them. Microsoft has since fixed the error. But Taylor Swift is only the tip of the iceberg, and non-consensual deepfakes are proliferating widely, in part because open-source tools for creating them are publicly available. Legislation around the world seeks to combat deepfakes in the hope of limiting the damage. Whether it will be effective remains to be seen.

In 2025, it will become even harder to distinguish what is real from what is fabricated. The fidelity of AI-generated audio, text, and images is remarkable, and video will be next. This could lead to the "liar's dividend": those in positions of power repudiating evidence of their misconduct by claiming it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk could have been a deepfake, in response to accusations that the CEO had exaggerated the safety of Tesla's Autopilot and caused an accident. An Indian politician claimed that audio clips in which he acknowledged corruption in his political party had been manipulated (the audio in at least one of the clips was verified as real by a news outlet). And two defendants in the January 6 riots claimed that the videos they appeared in were deepfakes. Both were found guilty.

Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products under the "AI" label. This can go very wrong when such tools are used to classify people and make consequential decisions about them. The hiring company Retorio, for example, claims that its AI predicts candidates' job suitability from video interviews, but a study found that the system can be fooled simply by the presence of glasses or by replacing a plain background with a bookshelf, showing that it relies on superficial correlations.

There are also dozens of applications in healthcare, education, finance, criminal justice, and insurance where AI is currently being used to deny people important life opportunities. In the Netherlands, the Dutch tax authority used an AI algorithm to identify people who committed child welfare fraud. It wrongfully accused thousands of parents, often demanding the repayment of tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.

In 2025, we expect AI risks to arise not because AI acts on its own, but because of what people do with it. That includes cases where it seems to work well and is relied on too heavily (lawyers using ChatGPT); cases where it works well and is misused (non-consensual deepfakes and the liar's dividend); and cases where it is simply not fit for purpose (denying people their rights). Mitigating these risks is a gigantic task for companies, governments, and society. It will be hard enough without being distracted by science-fiction concerns.
