A report from the Belgian public broadcaster VRT NWS has revealed that contractors paid to transcribe audio clips collected by Google Assistant can end up listening to sensitive information about users, such as names, addresses, and details of their personal lives.
It is the latest story showing that our interactions with AI assistants are not as private as we might like to believe. Earlier this year, a report from Bloomberg revealed similar details about Amazon's Alexa, explaining how audio clips recorded by Echo devices are sent, without users' knowledge, to human contractors, who transcribe what is said in order to improve the company's AI systems.
Worse, these audio clips are often recorded entirely by accident. AI assistants such as Alexa and Google Assistant are supposed to start recording audio only when they hear their wake word (e.g. "OK Google"), but these reports show that the devices frequently begin recording unintentionally.
In the VRT NWS story, which focuses on Dutch- and Flemish-speaking users of Google Assistant, the broadcaster reviewed around 1,000 recordings, of which 153 had been captured by accident. A contractor told the publication that he transcribes about 1,000 audio clips from Google Assistant every week. In one of the clips he listened to, he heard a female voice in distress, and he said it sounded as though physical violence was involved. "And then they become real people you're listening to, not just voices," the contractor said.
You can see more in the video report below:
Tech companies say that sending audio clips to humans for transcription is an essential part of improving their speech recognition technology. They also stress that only a small percentage of recordings are shared this way. A Google spokesperson told Wired that just 0.2 percent of all recordings are transcribed by humans, and that these audio clips are never accompanied by identifying information about the user.
These obfuscations could cause legal problems for the company, says Michael Veale, a technology privacy researcher at the Alan Turing Institute in London. He told Wired that this level of disclosure may not meet the standards set out by the EU's GDPR. "You have to be very specific about what you're implementing and how," Veale said. "I think Google has not done that because it would look creepy."
We have contacted Google for comment and will update this story when we hear more.