However, Google and its hardware partners maintain that privacy and security are central to Android’s AI approach. Justin Choi, vice president and head of the security team at Samsung Electronics’ mobile eXperience business unit, says Samsung’s hybrid AI gives users “control over their data and absolute privacy.”
Choi describes how cloud-processed functions are protected by servers governed by strict policies. “Our on-device AI functions provide another element of security by performing tasks locally on the device without relying on cloud servers, storing data on the device, or uploading it to the cloud,” Choi says.
Google says its data centers are designed with robust security measures, including physical security, access controls, and data encryption. When processing AI requests in the cloud, the company says the data stays within Google’s secure data center architecture and the company does not send your information to third parties.
Meanwhile, Galaxy’s AI engines aren’t trained with user data from device features, Choi says. Samsung “clearly indicates” which AI features run on the device with its Galaxy AI symbol, and the smartphone maker adds a watermark to flag content created with generative AI.
The firm has also introduced a new security and privacy option called Advanced Intelligence Settings, which lets users disable cloud-based AI capabilities.
Google says it “has a long history of protecting the privacy of user data,” adding that this applies to its on-device and cloud-based AI features. “We use on-device models, where the data never leaves the phone, for sensitive cases like phone call screening,” Suzanne Frey, Google’s vice president of product trust, tells WIRED.
Frey describes how Google’s products are built on its cloud-based models, which she says ensures that “consumer information, such as sensitive information you want to summarize, is never sent to a third party for processing.”
“We remain committed to building AI-powered features that people can trust because they are secure by default and private by design, and most importantly, follow Google’s Responsible AI Principles that were the first to be championed in the industry,” Frey says.
Apple changes the conversation
Experts say Apple’s AI strategy, rather than simply taking the “hybrid” approach to data processing, has changed the nature of the conversation. “Everyone was expecting this on-device, privacy-first push, but what Apple really did was say it doesn’t matter what you do in AI (or where), it matters how you do it,” Doffman says. He believes this will “probably define best practices in the smartphone AI space.”
Still, Apple hasn’t won the AI privacy battle yet: The OpenAI deal (which, unusually for Apple, opens up its iOS ecosystem to a third-party vendor) could put a dent in its privacy claims.
Apple disputes Musk’s claims that the OpenAI partnership compromises iPhone security, citing “built-in privacy protections for users accessing ChatGPT.” The company says you will be asked for permission before your query is shared with ChatGPT, your IP address is hidden, and OpenAI will not store requests, though ChatGPT’s data usage policies still apply.
Partnering with another company is a “strange move” for Apple, but the decision “would not have been taken lightly,” says Jake Moore, global cybersecurity advisor at security firm ESET. While the exact privacy implications are still unclear, he admits that “some personal data may be collected by both parties and potentially analyzed by OpenAI.”