1. Jobs
In a section on “labor market risks”, the report warns that the impact on jobs “will probably be profound”, particularly if AI agents, tools that can carry out tasks without human intervention, become highly capable.
“General-purpose AI, especially if it continues to advance rapidly, has the potential to automate a very wide range of tasks, which could have a significant effect on the labor market. This means that many people could lose their current jobs,” the report says.
The report adds that many economists believe job losses could be offset by the creation of new jobs or by demand in sectors untouched by automation.
According to the International Monetary Fund, about 60% of jobs in advanced economies such as the United States and the United Kingdom are exposed to AI, and half of these jobs may be negatively affected. The Tony Blair Institute has said that AI could displace up to 3 million private-sector jobs in the United Kingdom, although the eventual rise in unemployment will be in the low hundreds of thousands, because growth in the technology will create new roles in an economy transformed by AI.
“These disruptions could be particularly serious if autonomous agents become able to complete longer sequences of tasks without human supervision,” the report says.
It adds that some experts have pointed to scenarios where work is “largely” eliminated. In 2023, Elon Musk, the world’s richest person, told the former UK prime minister Rishi Sunak that AI could replace all human jobs. However, the report says such views are controversial and there is “considerable uncertainty” about how AI could affect labor markets.
2. The environment
The report describes AI’s impact on the environment as a “moderate but rapidly growing contributor”, since data centers, the central nervous systems of AI models, consume electricity to train and run the technology.
Data centers and data transmission account for approximately 1% of energy-related greenhouse gas emissions, the report says, with AI making up as much as 28% of data centers’ energy consumption.
It adds that models are using more energy as they become more advanced and warns that a “significant portion” of global model training relies on high-carbon energy sources such as coal or natural gas. AI companies’ use of renewable energy and efficiency improvements have not kept pace with growing energy demand, the report says, and it points to technology companies admitting that AI development is harming their ability to meet environmental targets.
The report also warns that AI’s water consumption, used to cool equipment in data centers, could pose a “substantial threat to the environment and the human right to water.” However, it adds that there is a shortage of data on AI’s environmental impact.
3. Loss of control
An all-powerful AI system that evades human control is the central concern of experts who fear the technology could wipe out humanity. The report acknowledges those fears, but says that opinion varies “enormously.”
“Some consider it unlikely, some consider it likely to occur, and others see it as a risk of modest probability that deserves attention due to its high severity,” it says.
Bengio told The Guardian that AI agents, which perform tasks autonomously, are still being developed and so far cannot carry out the long-term planning needed for these systems to eliminate jobs wholesale, or to evade safety guardrails. “If an AI cannot plan over a long horizon, it can hardly escape our control,” he said.
4. Bioweapons
The report states that new models can create step-by-step guides for making pathogens and toxins that surpass PhD-level expertise. However, it warns there is uncertainty about whether novices could act on them.
Experts say this is evidence of progress since an interim safety report last year, with OpenAI producing a model that could “significantly help experts in operational planning to reproduce known biological threats.”
5. Cybersecurity
A fast-growing AI threat in terms of cyber-attacks is that autonomous bots can find vulnerabilities in open-source software, the term for code that is free to download and adapt. However, relative shortcomings in AI agents mean the technology cannot plan and carry out attacks autonomously.
6. Deepfakes
The report lists a range of known examples of deepfakes being used maliciously, including tricking companies into handing over money and creating pornographic images of people. However, the report says there is not enough data to fully measure the number of deepfake incidents.
“Reluctance to report may be contributing to these challenges in understanding the full impact of AI-generated content intended to harm people,” the report says. “For example, institutions often hesitate to disclose their struggles with AI-enabled fraud. Similarly, individuals targeted with compromising AI-generated material of themselves may stay silent out of shame and to avoid further harm.”
The report also warns that there are “fundamental challenges” in addressing deepfake content, such as the ability to remove digital watermarks that flag AI-generated content.