Google smartphones will no longer require an outstretched arm to fit everyone into a group photo.
Instead, users will now be able to take a photo from behind the camera and simply add themselves using AI.
It’s one of a series of powerful AI-enabled tools that Google announced Tuesday it would include in its latest Pixel 9 range of smartphones.
To use the new "Add Me" tool, one member of the group first takes a photo of the others. They then pass the phone to someone else in the group, who takes a second photo of the same scene, this time with the original photographer in it.
Google’s AI then overlays the two images and stitches them together to make it look like everyone was in it from the beginning.
“There is usually a designated photographer who is left out of group photos,” Google said at its Made By Google event in California.
“With Add Me you get a photo with everyone who was there, including the photographer, without having to carry a tripod or ask a stranger for help.”
Google has been increasingly using artificial intelligence to help users edit their photos, such as the Magic Eraser tool introduced last year that allows users to instantly remove people and objects from images.
Another, called Best Take, lets users mix and match people’s expressions in group photos for those who are caught blinking or not smiling.
On Tuesday, Google unveiled four new Pixel smartphones and announced that it will continue to “infuse AI into everything we do.”
With a rise in AI-generated images fueling misinformation in recent years, concerns have been raised that new tools could pose a danger to the public’s already fragile trust in online content.
Google also revealed that it has rebuilt its Android operating system to put its Gemini chatbot at its core.
Users will now be able to rely on the chatbot as a human-like personal assistant, the company said, capable of “understanding their intent, following their train of thought and completing complex tasks.”
The chatbot’s responses will be based on information extracted from personal data on the user’s phone, such as documents or emails.
A new feature, “Gemini Live,” will also allow users to have “freewheeling” conversations with the chatbot throughout the day about “whatever’s on their mind.”
Google said: “You can even interrupt mid-answer to dig deeper into a particular point or pause a conversation and come back to it later.
“It’s like having a buddy in your pocket with whom you can discuss new ideas or practice an important conversation.”
Users have the option to leave it on constantly in the background, allowing them to chat hands-free “just like they would on a normal phone call,” the company added.
Google said the security of this data was paramount and that because of its “all-in-one approach” no third-party AI vendor would have access to the data.
The company said: “Whether your data is processed in the cloud or on-device, it resides within Google’s secure end-to-end architecture, keeping your information safe and private.”
Gemini Live begins rolling out today (Tuesday) to Gemini Advanced subscribers on Android phones, and will expand to iOS in the coming weeks.