There is no doubt about it: Nadella’s Microsoft is a triumph. Finally, in the 2020s, Microsoft has focused on the most innovative technology since the PC itself. And while revenue from AI products hasn’t begun to offset Microsoft’s huge investments, it has the confidence (and resources) to wait until the products improve and users find them useful.
But can Microsoft really avoid the arrogance that set it back so far? Consider what happened in May of this year with a product called Recall.
The feature was supposed to epitomize Microsoft’s integration of AI into its hardware, software, and infrastructure. The idea was to offer users something like a personal version of the Internet Archive. Recall would constantly capture everything that happened on your machine: what you read, what you typed, the images and videos you watched, the sites you visited. You could simply describe to your machine what you were looking for: What were those rug samples I was considering for my living room? Where is that report on the ecology of the Amazon? When did I go to Paris? Those moments would resurface as if by magic, as if you had a homunculus who knew everything about you. It sounds scary, sort of like a built-in Big Brother, but Microsoft insisted that users could feel safe. Everything stays on your computer!
Almost immediately, critics slammed it as a privacy nightmare. For one thing, they noted, Recall was on by default and gobbled up your personal information, no matter how sensitive, without asking permission. And while Microsoft emphasized that Recall’s data could be accessed only by the user, security researchers found, as one evaluator put it, “spaces you could fly an airplane through.”
“In about 48 hours, we went from ‘Wow, this is extraordinarily exciting!’ to people expressing some serious reservations,” says Brad Smith. As the criticism poured in, Smith was on a plane to meet Nadella in Washington, DC. By the time he landed, he had concluded that Recall should work only if users opted in; Nadella agreed. Meanwhile, in Redmond, Microsoft’s top executives were crowding into meeting rooms to figure out how to rein in the product. Fortunately, since the feature hadn’t shipped yet, they didn’t have to pull Recall; they simply postponed the launch. And they would add security features, such as “just-in-time” encryption.
“People pointed out some obvious things that we should have done and that we should have caught,” Nadella says. Microsoft’s own Responsible AI team missed them too. A “know-it-all” move had led to a botched product announcement, suggesting that even under a supposed empath, Microsoft retains many of its old character flaws. Only now it’s a $3 trillion company with locked-in access to the operation’s cutting-edge AI products.
“You can think about it in two ways,” says Brad Smith. “One is, ‘Wow, I wish we’d thought of this sooner.’ Hindsight is a great thing. Or two, ‘Hey, it’s good that we’re using this to make this change; let’s be explicit about why.’ It was really a learning moment for the entire company.”
All right. After 50 years, though, it’s a lesson Microsoft (and Nadella) should have learned a long time ago.