AI is often seen as a threat to democracies and a boon to dictators. In 2025, algorithms are likely to keep undermining the democratic conversation by spreading outrage, fake news, and conspiracy theories, and to keep accelerating the creation of total surveillance regimes in which entire populations are monitored 24 hours a day.
Most importantly, AI makes it easier to concentrate all information and power in a single hub. In the 20th century, distributed information networks like that of the United States worked better than centralized ones like the USSR's, because the human apparatchiks at the center simply couldn't analyze all the information efficiently. Replacing those apparatchiks with AI could make Soviet-style centralized networks superior.
However, AI is not all good news for dictators. First, there is the notorious problem of control. Dictatorial control is based on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is officially defined as a "special military operation," and referring to it as a "war" is a crime punishable by up to three years in prison. If a chatbot on the Russian internet calls it a "war" or mentions war crimes committed by Russian troops, how could the regime punish that chatbot? The government could block it and punish its human creators, but this is much harder than disciplining human users. Moreover, bots could develop dissenting opinions on their own, simply by detecting patterns in the Russian information sphere. That's the alignment problem, Russia-style. Russia's human engineers can do their best to create AIs fully aligned with the regime, but given AI's ability to learn and change on its own, how can those engineers ensure that an AI that earned the regime's seal of approval in 2024 won't venture into illicit territory in 2025?
The Russian constitution makes grandiose promises that "everyone shall be guaranteed freedom of thought and speech" (Article 29.1) and that "censorship shall be prohibited" (Article 29.5). Hardly any Russian citizen is naive enough to take these promises seriously. But bots don't understand doublespeak. A chatbot instructed to respect Russian law and values could read that constitution, conclude that freedom of speech is a core Russian value, and criticize Putin's regime for violating it. How could Russian engineers explain to the chatbot that although the constitution guarantees freedom of speech, it should not actually believe the constitution, nor ever mention the gap between theory and reality?
In the long term, authoritarian regimes are likely to face an even greater danger: instead of criticizing them, AIs could come to control them. Throughout history, the greatest threat to autocrats has generally come from their own subordinates. No Roman emperor or Soviet premier was overthrown by a democratic revolution, but all of them lived in danger of being toppled or turned into puppets by their own underlings. A dictator who grants too much authority to AIs in 2025 could become their puppet in the future.
Dictatorships are far more vulnerable than democracies to such an algorithmic takeover. It would be difficult even for a super-Machiavellian AI to amass power in a decentralized democratic system like that of the United States. Even if the AI learned to manipulate the US president, it could face opposition from Congress, the Supreme Court, state governors, the media, large corporations, and various NGOs. How would the algorithm deal with, say, a Senate filibuster? Seizing power in a highly centralized system is much easier: to hack an authoritarian network, an AI needs to manipulate only a single paranoid individual.