Apple’s otherworldly, flying-saucer headquarters in Cupertino, California, seemed like a fitting location this week for a bold, futuristic revamp of the company’s most prized products. With iPhone sales slowing and rivals gaining ground thanks to the rise of tools like ChatGPT, Apple offered its own vision of generative artificial intelligence at its Worldwide Developers Conference (WWDC).
Apple has lately been perceived as a laggard when it comes to generative AI. Its offerings failed to persuade some critics, who called the announcements downright boring. But with the focus on infusing existing apps and operating system features with what the company calls “Apple Intelligence,” the big takeaway is that generative AI is a feature rather than a product in itself.
The dazzling capabilities demonstrated by ChatGPT have inspired some startups to build fully dedicated AI hardware, such as the Rabbit R1 and the Humane Ai Pin. Unfortunately, these devices have proven disappointing and frustrating in practice. In contrast, Apple’s vertical integration of generative AI across so many products and so much software seems a much more likely indication of where AI is headed.
Instead of a standalone device or experience, Apple has focused on how generative AI can improve apps and operating system features in small but meaningful ways. Early adopters have certainly flocked to generative AI programs like ChatGPT for help redrafting emails, summarizing documents, and generating images, but this has usually meant opening another browser window or application, cutting and pasting, and trying to understand the sometimes feverish ramblings of a chatbot. To be truly useful, generative AI will need to infiltrate the technology we already use in ways we can better understand and trust.
After the WWDC keynote, Apple gave WIRED a demo of what it calls Apple Intelligence, an umbrella name for AI running across various applications. The capabilities hardly push the boundaries of generative AI, but they are carefully integrated and perhaps even limited in ways that encourage users to rely more on them.
A feature called Writing Tools will let iOS and MacOS users rewrite or summarize text, and Image Playground will turn sketches and text messages into stylized illustrations. The company’s new Genmoji tool, which uses generative AI to come up with new emoji from a text prompt, may prove to be a surprisingly popular integration given how often people throw emoji at each other.
Apple is also giving Siri a much-needed upgrade with generative AI that helps the assistant better understand speech, including pauses and corrections, remember previous chats for better context awareness, and leverage data stored in a device’s apps to be more useful. Apple said that Siri will use App Intents, a framework that lets developers expose actions that involve opening and operating their applications. When asked “show me pictures of my cat chasing a toy,” for example, a language model will parse the command and then use the framework to access Photos.
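Apple hasn’t detailed exactly how Siri will invoke apps, but an action exposed through the existing App Intents framework looks roughly like this minimal sketch (the intent and its parameter are hypothetical examples, not ones Apple has announced):

```swift
import AppIntents

// Hypothetical intent a photo app might expose so Siri can fulfill
// "show me pictures of my cat chasing a toy" by searching the library.
struct SearchPhotosIntent: AppIntent {
    static var title: LocalizedStringResource = "Search Photos"

    // Free-text query that Siri's language model would extract
    // from the user's spoken request.
    @Parameter(title: "Search Query")
    var query: String

    func perform() async throws -> some IntentResult {
        // A real app would search its photo library for `query`
        // here and present the matching results.
        return .result()
    }
}
```

Because intents like this declare their parameters in code, an assistant can map a parsed natural-language request onto a concrete app action rather than guessing at screen taps.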
Apple’s generative AI will primarily run locally on its devices, although the company has developed a technique called Private Cloud Compute to securely send queries to the cloud when necessary. Running AI on a device means it will be less capable than the latest cloud-based chatbot. But this may be a feature rather than a bug, as it also means that a program like Siri is less likely to overextend itself and break. Apple is quite cleverly handing over its most challenging queries to OpenAI’s ChatGPT, with user permission.