As public concern about the ethical and social implications of artificial intelligence continues to grow, it might seem like it's time to slow down. But within technology companies themselves, the sentiment is quite the opposite. As Big Tech's AI race heats up, it would be an "absolutely fatal mistake at this point to worry about things that can be fixed later," a Microsoft executive wrote in an internal email about generative AI, as The New York Times reported.
In other words, it's time to "move fast and break things," to quote Mark Zuckerberg's old motto. Of course, if you break things, you might have to fix them later – at a cost.
In software development, the term "technical debt" refers to the implied cost of future fixes that comes from choosing a faster, less careful solution now. Rushing to market can mean releasing software that isn't ready, knowing that once it hits the market, you'll find out what the bugs are and can hopefully fix them then.
However, negative news stories about generative AI are usually not about these kinds of bugs. Instead, much of the concern is about AI systems amplifying harmful biases and stereotypes, and about students using AI deceptively. We hear about privacy concerns, people being misled by disinformation, labor exploitation and fears about how quickly human jobs may be replaced, to name a few. These problems are not software glitches. Realizing that a technology reinforces oppression or bias is very different from learning that a button on a website doesn't work.
As a technology ethics educator and researcher, I have thought a lot about these kinds of "bugs." What accrues here is not just technical debt, but ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt can result from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who end up paying for it.
Off to the races
When OpenAI released ChatGPT in November 2022, the starting gun for today's AI race, I imagined the debt ledger starting to fill up.
Within months, Google and Microsoft released their own generative AI programs, seemingly rushed to market in an effort to keep up. Google's stock price plummeted when its chatbot Bard confidently gave an incorrect answer during the company's own demo. You might expect Microsoft to be particularly cautious when it comes to chatbots, given Tay, its Twitter-based bot that was shut down almost immediately in 2016 after spouting misogynist and white supremacist talking points. Still, early conversations with the AI-powered Bing left some users unsettled, and it has repeated known misinformation.
When the ethical debt from these hasty releases comes due, I expect we'll hear talk of unintended or unanticipated consequences. After all, even with ethical guidelines in place, it's not as if OpenAI, Microsoft or Google can see the future. How can anyone know what societal problems might emerge before the technology is even fully developed?
At the root of this dilemma is uncertainty, a common side effect of many technological revolutions, but one amplified in the case of artificial intelligence. After all, part of the point of AI is that its actions are not known in advance. AI may not be designed to produce negative consequences, but it is designed to produce the unforeseen.
However, it is disingenuous to suggest that technologists cannot speculate accurately about what many of these consequences might be. By now there are countless examples of how AI can reproduce bias and exacerbate social inequities, yet these problems are rarely publicly identified by the tech companies themselves. It was outside researchers who found racial bias in widely used commercial facial analysis systems, for example, and in a medical risk prediction algorithm that was applied to around 200 million Americans. Academics and advocacy or research organizations such as the Algorithmic Justice League and the Distributed AI Research Institute do much of this work: assessing harms after the fact. And this pattern doesn't seem likely to change if companies keep firing their ethicists.
Speculate – responsibly
I sometimes describe myself as a technology optimist who thinks and prepares like a pessimist. The only way to decrease ethical debt is to take the time to think ahead about things that could go wrong – but this is not something technologists are necessarily taught to do.
Scientist and iconic science fiction writer Isaac Asimov once said that sci-fi authors “foresee the inevitable, and while troubles and catastrophes are inevitable, solutions are not.” Of course, science fiction writers aren’t usually tasked with developing these solutions, but right now, the technologists developing AI are.
So how can AI designers learn to think more like science fiction writers? One of my current research projects focuses on developing ways to support this process of ethical speculation. I don’t mean designing with distant robot wars in mind; I mean the ability to consider future consequences at all, even in the very near future.

This is a topic I've been exploring for some time in my teaching, encouraging students to consider the ethical implications of science fiction technologies to prepare them to do the same with technologies they might create. One exercise I developed is called the Black Mirror Writers' Room, in which students speculate about possible negative consequences of technologies such as social media algorithms and self-driving cars. Their discussions are often informed by past patterns or the potential for bad actors.
Ph.D. candidate Shamika Klassen and I evaluated this teaching assignment in a research study and found that there are pedagogical benefits to encouraging computing students to imagine what might go wrong in the future – and then brainstorm ways to avoid that future in the first place.
However, the goal is not to prepare students for that distant future; it is to teach speculation as a skill that can be applied immediately. This skill is especially important for helping students imagine impacts on people other than themselves, since technological harms often disproportionately affect marginalized groups that are underrepresented in computing professions. The next steps for my research are to translate these ethical speculation strategies to real-world technology design teams.
Time to hit pause?
In March 2023, an open letter with thousands of signatures advocated pausing the training of AI systems more powerful than GPT-4. Left unchecked, AI development could "eventually outnumber, outsmart, obsolete and replace us," or even cause a "loss of control of our civilization," the writers warned.
Critics of the letter pointed out that this focus on hypothetical risks ignores the actual harms happening today. Nevertheless, I think there is little disagreement among AI ethicists that AI development needs to slow down – that developers throwing up their hands and citing "unintended consequences" is not going to cut it.
We're only a few months into an "AI race" that is picking up speed considerably, and I think it's already clear that ethical considerations are being left in the dust. But the debt will come due eventually – and history suggests that Big Tech executives and investors may not be the ones who pay for it.