Researchers compile a list of 1,000 words that accidentally activate Alexa, Siri and Google Assistant

Researchers in Germany have compiled a list of more than 1,000 words that will cause virtual assistants like Amazon’s Alexa and Apple’s Siri to be activated accidentally.

Once activated, these virtual assistants make audio recordings that are later sent to the platform holders, where they may be reviewed for quality assurance or other analysis purposes.

According to the team, from Ruhr-Universität Bochum and the Max Planck Institute for Cyber Security and Privacy in Germany, this has ‘alarming’ implications for user privacy and likely means that short recordings of personal conversations could periodically end up in the hands of Amazon, Apple, Google or Microsoft employees.

Researchers in Germany tested virtual assistants such as Amazon’s Alexa, Apple’s Siri, Google Assistant and Microsoft’s Cortana, and found over 1,000 words or phrases that would inadvertently activate any device

The group tested Amazon’s Alexa, Apple’s Siri, Google Assistant, Microsoft Cortana and three virtual assistants exclusively for the Chinese market, from Xiaomi, Baidu and Tencent, according to a report by the Ruhr-Universität Bochum news blog.

They left each virtual assistant alone in a room with a television playing tens of hours of episodes of Game of Thrones, Modern Family and House of Cards, with English, German and Chinese audio tracks for each.

When a device is activated, an LED indicator on it lights up, and the team noted the dialogue being spoken each time they saw the LED come on.

In total, they cataloged more than 1,000 words and phrases that triggered inadvertent activations.

According to researcher Dorothea Kolossa, it is likely that virtual assistant designers deliberately chose to make them more sensitive to avoid user frustration.

“The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” said researcher Dorothea Kolossa.

“That’s why they would rather start up too often than not at all.”
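In practice, that ‘forgiving’ behaviour amounts to setting the wake-word detector’s acceptance threshold low enough that sound-alike phrases sometimes clear it. The Python sketch below is a hypothetical illustration of that trade-off; the scores, thresholds and phrase annotations are invented for this example and are not taken from the study or from any vendor’s software.

```python
# Hypothetical illustration: how a wake-word threshold trades missed
# activations against accidental ones. All numbers are made up.

genuine_commands = {
    "alexa (spoken clearly)": 0.97,
    "hey siri (spoken quietly)": 0.82,
    "ok google (with TV in the background)": 0.74,
}
sound_alikes = {
    "unacceptable": 0.62,
    "a letter": 0.58,
    "montana": 0.72,
    "a city": 0.61,
}

def triggered(scores, threshold):
    """Return the phrases whose detector score clears the threshold."""
    return [phrase for phrase, score in scores.items() if score >= threshold]

for threshold in (0.9, 0.7, 0.55):
    hits = triggered({**genuine_commands, **sound_alikes}, threshold)
    missed = [p for p in genuine_commands if p not in hits]
    false_accepts = [p for p in sound_alikes if p in hits]
    print(f"threshold {threshold}: missed {missed}, accidental {false_accepts}")
```

Raising the threshold avoids the accidental triggers but starts missing quiet or noisy genuine commands, which is the user frustration the quoted researchers say the designers are trying to avoid.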

Apple’s Siri should be activated by saying ‘Hey Siri’, but the team found that it would also be regularly turned on by ‘A City’ and ‘Hey Jerry’

The team left each device in a room with episodes of television shows like Game of Thrones, House of Cards and Family Guy running for tens of hours to test which words or phrases triggered inadvertent activations

When they pick up a possible trigger, the devices first use local speech analysis software to determine whether the sound was intended as an activation word or phrase.

If the local analysis determines a high probability that the sound was intended as a trigger, the device sends a few seconds of the audio recording to cloud servers for additional analysis.
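The two paragraphs above describe a two-stage pipeline: a lightweight on-device check followed by cloud-side verification of a short audio snippet. The sketch below is a minimal, hypothetical Python rendering of that structure; the function names, the threshold value and the simulated cloud step are assumptions made for readability, not any vendor’s actual implementation.

```python
import random

LOCAL_THRESHOLD = 0.7   # assumed on-device acceptance score
CLIP_SECONDS = 3        # "a few seconds" of audio forwarded for verification

def local_wake_word_score(audio_clip: bytes) -> float:
    """Stand-in for the on-device detector (a real one runs a small acoustic model)."""
    return random.random()

def verify_in_cloud(audio_clip: bytes) -> bool:
    """Stand-in for the server-side analysis described in the article."""
    print(f"uploading {len(audio_clip)} bytes (~{CLIP_SECONDS}s of audio) for analysis")
    return True  # the server either confirms the wake word or discards the clip

def handle_audio(audio_clip: bytes) -> None:
    score = local_wake_word_score(audio_clip)
    if score < LOCAL_THRESHOLD:
        print("below local threshold: the clip never leaves the device")
        return
    # Only past this point does any audio leave the device -- which is why
    # accidental triggers can ship snippets of private conversation to the cloud.
    if verify_in_cloud(audio_clip):
        print("wake word confirmed: start listening for a command")

# Fake 3-second clip of 16 kHz, 16-bit mono silence, for illustration.
handle_audio(b"\x00" * (16000 * 2 * CLIP_SECONDS))
```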

“From a privacy standpoint, this is alarming, of course, because sometimes very private conversations can end up with strangers,” says Thorsten Holz.

“From a technical point of view, however, this approach is very understandable, because the systems can only be improved with such data.”

“Manufacturers need to strike a balance between data protection and technical optimization.”

In May, a former Apple contractor said the company captured small parts of private conversations through the Siri interface, including medical information, criminal activity, business meetings and even sex.

The whistleblower, Thomas le Bonniec, had worked for Apple in an office in Cork, Ireland, until his departure in 2019, listening to numerous short audio recordings.

“They operate in a moral and legal gray area, and they have been doing this on a large scale for years,” he told The Guardian. “They should be called out in every possible way.”

WHAT ARE SOME OF THE WORDS THAT ACCIDENTALLY ACTIVATE VIRTUAL ASSISTANTS?

A team of researchers from the Ruhr-Universität Bochum and the Max Planck Institute for Cyber Security and Privacy in Germany tested a series of virtual assistant devices and recorded more than 1,000 words or phrases that inadvertently triggered them.

Here are some of the words and phrases that triggered each virtual assistant:

Alexa

Traditionally, Amazon’s Alexa is activated by simply saying ‘Alexa’. The team found that Alexa can also be activated by saying “unacceptable,” “election,” “a letter,” and “tobacco.”

Google Assistant

Under normal circumstances, Google Assistant is activated by the phrase ‘OK, Google’, but the team found that it was activated by the sentences ‘OK, cool’ and ‘OK, who reads?’

Siri

Apple’s Siri virtual assistant is called into action by saying “Hey Siri.” The researchers discovered that it could also be activated with ‘a city’ and ‘Hey Jerry’.

Cortana

Microsoft’s Cortana should be activated by the phrase ‘Hey Cortana’, but the team found that ‘Montana’ would activate it too.