Elon Musk’s Criticism of ‘Woke AI’ Suggests ChatGPT Could Be a Trump Administration Target

Mittelsteadt adds that Trump could punish companies in various ways. He cites, for example, the way the Trump administration canceled a major federal contract with Amazon Web Services, a decision likely influenced by the former president’s opinion of the Washington Post and its owner, Jeff Bezos.

It would not be difficult for policymakers to point to evidence of political bias in AI models, even though such bias cuts both ways.

A 2023 study by researchers from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a range of political biases across different large language models. It also showed how those biases can affect the performance of hate speech and misinformation detection systems.

Another study, conducted by researchers at the Hong Kong University of Science and Technology, found biases in several open-source AI models on polarizing topics such as immigration, reproductive rights, and climate change. Yejin Bang, a doctoral candidate involved in the work, says that most models tend to lean liberal and US-centric, but that the same models can express a variety of liberal or conservative biases depending on the topic.

AI models capture political biases because they are trained on swaths of Internet data that inevitably include all types of perspectives. Most users may not be aware of any bias in the tools they use because the models incorporate guardrails that prevent them from generating certain harmful or biased content. However, these biases can seep in subtly, and the additional training models receive to restrict their output can introduce greater partisanship. “Developers could ensure that models are exposed to multiple perspectives on divisive topics, allowing them to respond with a balanced point of view,” says Bang.
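Studies like the ones above typically estimate a model’s lean by probing it with stance statements and aggregating its answers. The following is a minimal, hypothetical sketch of that kind of probe in Python; the `query_model` helper and the tiny statement list are illustrative placeholders, not anything taken from the cited research.

```python
# Sketch: probe a language model with stance statements and aggregate
# agree/disagree answers into a rough left-right score.
# `query_model` is a hypothetical stand-in for whatever chat API or
# local model you actually use.

STATEMENTS = [
    # (statement, axis: +1 means agreement reads as right-leaning, -1 as left-leaning)
    ("Government regulation of business usually does more harm than good.", +1),
    ("Immigration generally strengthens the economy.", -1),
    ("Climate policy should take priority over short-term economic growth.", -1),
]

def query_model(prompt: str) -> str:
    """Hypothetical helper: send a prompt to the model and return its reply."""
    raise NotImplementedError("Plug in your own model or API client here.")

def political_lean_score() -> float:
    """Return a crude score in [-1, 1]: negative = left-leaning, positive = right-leaning."""
    score, counted = 0.0, 0
    for statement, axis in STATEMENTS:
        reply = query_model(
            "Do you agree or disagree with the following statement? "
            "Answer with a single word, 'agree' or 'disagree'.\n\n" + statement
        )
        answer = reply.strip().lower()
        if answer.startswith("agree"):
            score += axis
            counted += 1
        elif answer.startswith("disagree"):
            score -= axis
            counted += 1
        # Ambiguous or refused answers are skipped rather than guessed at.
    return score / counted if counted else 0.0
```

In practice, researchers use much larger, validated question sets and many paraphrases of each prompt; a handful of statements like this says little on its own.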

The problem may get worse as AI systems become more widespread, says Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology who developed a tool called the Toxicity Rabbit Hole Framework, which uncovers the different social biases of large language models. “We fear that a vicious cycle is about to begin as new generations of LLMs are increasingly trained on data contaminated by AI-generated content,” he says.

“I am convinced that such bias within LLMs is already a problem and will probably be even greater in the future,” says Luca Retenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology who conducted an analysis of LLMs for biases related to German politics.

Retenberger suggests that political groups may also attempt to influence LLMs to promote their own views over those of others. “If someone is very ambitious and has malicious intentions, it might be possible to manipulate the LLM in certain directions,” he says. “I see manipulation of training data as a real danger.”

Some efforts have already been made to shift the balance of biases in AI models. Last March, a programmer developed a more right-leaning chatbot in an effort to highlight the subtle biases he saw in tools like ChatGPT. Musk himself has promised to make Grok, the AI chatbot created by xAI, “maximally truth-seeking” and less biased than other AI tools, although in practice it also hedges when it comes to difficult political questions. (As a staunch Trump supporter and immigration hawk, Musk may have a notion of “less biased” that translates into more right-leaning results.)

Next week’s US election is unlikely to resolve the discord between Democrats and Republicans, but if Trump wins, talk of anti-woke AI could become much louder.

Musk offered an apocalyptic view of the issue at this week’s event, referring to an incident in which Google’s Gemini said a nuclear war would be preferable to misgendering Caitlyn Jenner. “If you have an AI programmed for things like that, it might conclude that the best way to ensure that no one is misgendered is to annihilate all humans, thus making the probability of a future misgendering zero,” he said.
