Google AI flags dad who had photos of his child’s groin infection on his phone to share with doctors

A father says his life was ‘ruined’ by Google, which banned him from all his accounts after he took a picture of his sick son’s genitals to send to a doctor during the pandemic.

Mark, a software engineer from San Francisco, took pictures of his son’s genitals last February to track the progression of a rash.

Because some doctors’ offices were still closed due to the COVID-19 pandemic, consultations were taking place virtually, and the family was asked to send photos so that a doctor could review them over a video link ahead of the emergency consultation.

Mark, who wishes to remain anonymous, uploaded the images from his Android phone to the healthcare provider’s messaging system.

The photos were flagged by an artificial intelligence (AI) system as possible child sexual abuse material, triggering a police investigation, according to The New York Times.

Two days after taking the photo of his son, Mark was notified that he could no longer access his account, leaving the stay-at-home dad confused.

He said he thought, ‘Oh God, Google probably thinks that was child porn,’ before hoping that a human in the loop of the automated system could help him.

Google scans images and videos uploaded to Google Photos using the Content Safety API AI toolkit, released in 2018. This AI is trained to recognize ‘hashes’, or unique digital fingerprints, of child sexual abuse material (stock image)

Mark tried to appeal the decision, but Google rejected the request, barring him from accessing his data and blocking access to his mobile operator Google Fi. It wasn’t until months later that he was informed that the San Francisco police had closed the case against him (stock image)

The photos were automatically backed up to his Google cloud storage, and his son was prescribed antibiotics to help with the swelling.

Google cited the presence of “harmful content” that was “a serious violation of Google’s policy and may be illegal.”

Mark tried to appeal the decision, but Google rejected the request, barring him from accessing his data and cutting off his Google Fi mobile service.

It wasn’t until months later that he was informed that the San Francisco police had opened a case against him, partly because he no longer had access to his phone number and could not be reached.

In December 2021, he received an envelope from the police containing documents showing that he had been investigated.

Copies of the search warrants issued on Google and its Internet service provider were also included.

The investigator on the case said it had been closed and that they had concluded that “no crime had occurred.”

Mark then asked if the officers could help him get his account back, again appealing to Google with the police documents, but was again denied.

What are the ‘hashes’ that Apple, Facebook, Google and Twitter use to track down child abusers?

The technology works by creating a unique fingerprint, called a ‘hash’, for each abuse image reported to child protection organisations.

These fingerprints are then passed on to internet companies so that matching images can be automatically removed from their platforms.

After an image is flagged, an employee reviews the contents of the file to determine whether it should be turned over to the appropriate authorities.

The same hash-matching technology is used by Facebook, Twitter and Google to track down child abusers.
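
As a rough sketch of the ‘fingerprint and compare’ idea described above (and not any company’s actual system), the Python snippet below hashes a file and checks it against a list of previously reported fingerprints. Production tools such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-compression rather than the plain cryptographic hash used here, and the file name and hash list are hypothetical.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return a fingerprint of the file's exact bytes.

    Real systems use perceptual hashes so that resized or re-encoded
    copies of the same image still match.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_known_material(path: Path, known_hashes: set[str]) -> bool:
    """Check an upload's fingerprint against a database of reported hashes."""
    return fingerprint(path) in known_hashes

# Hypothetical hash list; real lists are supplied by reporting bodies.
KNOWN_HASHES = {"0" * 64}

upload = Path("upload.jpg")  # hypothetical uploaded file
if upload.exists() and matches_known_material(upload, KNOWN_HASHES):
    print("Match found: escalate to a human reviewer")
```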

He was told his account would be permanently deleted; he considered suing the company, but decided it would end up being too expensive.

The tech giant also flagged a video among his content that it said was problematic, featuring a young child lying in bed with a naked woman.

Mark claims he can’t remember or access the video, but said it sounds like a private moment he wanted to capture with his wife and son.

He said, ‘I can imagine. We woke up one morning. It was a beautiful day with my wife and son, and I wanted to capture the moment.

“If only we’d slept with pajamas on, all this could have been prevented.”

Another Texas parent had a similar problem with Google after he photographed his toddler’s “intimate parts” to send to a doctor.

The parent, known only as Cassio, used an old Android to take photos that were automatically backed up to Google Photos and then sent them to his wife via Google’s chat service.

He was in the middle of buying a house and signing documents online when the account was disabled, and having to ask to switch email addresses gave him a ‘headache’.

Cassio was also under police investigation in 2021, with Houston police clearing him of any wrongdoing after he showed them his communication with the doctor.

He has also been unable to access his 10-year-old Google account despite being a paying user of Google’s web services.

Google did not immediately respond to a request from DailyMail.com for comment.

Google cited the presence of “harmful content” that was “a serious violation of Google’s policy and may be illegal.” (stock photo)

This highlights the complications of using AI technology to identify abusive digital material, which is currently being implemented by Google, Facebook, Twitter and Reddit.

Google scans images and videos uploaded to Google Photos using the Content Safety API AI toolkit, released in 2018.

This AI is trained to recognize ‘hashes’, or unique digital fingerprints, of child sexual abuse material (CSAM).

In addition to matching hashes to known CSAM in a database, it is able to classify previously unseen images.

The tool then prioritizes those it thinks are most likely to be considered malicious and flags them to human moderators.

Any illegal material will be reported to the National Center for Missing and Exploited Children (NCMEC), which liaises with the appropriate law enforcement agency, and it will be removed from the platform.
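
To make that flow concrete, here is a minimal, hypothetical sketch in Python, not Google’s actual Content Safety API: uploads matching a known hash score highest, unseen images go through a stand-in classifier, and anything above an illustrative threshold is queued so human moderators see the most likely matches first. The classifier stub, the threshold and all names are assumptions for illustration only.

```python
import hashlib
import heapq

def fingerprint(image_bytes: bytes) -> str:
    """Stand-in fingerprint; production systems use perceptual hashing."""
    return hashlib.sha256(image_bytes).hexdigest()

def classifier_score(image_bytes: bytes) -> float:
    """Hypothetical model that scores previously unseen images (stubbed)."""
    return 0.0

def score_upload(image_bytes: bytes, known_hashes: set[str]) -> float:
    """Return 1.0 for a known-hash match, otherwise the classifier's estimate."""
    if fingerprint(image_bytes) in known_hashes:
        return 1.0
    return classifier_score(image_bytes)

# Priority queue of (negated score, upload id): highest scores pop first.
review_queue: list[tuple[float, str]] = []

def flag_if_suspicious(upload_id: str, image_bytes: bytes,
                       known_hashes: set[str], threshold: float = 0.8) -> None:
    """Queue an upload for human review if it scores above the threshold."""
    score = score_upload(image_bytes, known_hashes)
    if score >= threshold:  # illustrative threshold, not a real policy value
        heapq.heappush(review_queue, (-score, upload_id))

def next_for_human_review() -> str | None:
    """Give moderators the most suspicious upload first; confirmed material
    would then be reported to NCMEC and removed from the platform."""
    return heapq.heappop(review_queue)[1] if review_queue else None
```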

Google spokesperson Christa Muldoon told The Verge: “Our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to ensure we can identify instances where users may seek medical advice.”

In 2021, Google reported 621,583 cases of CSAM to the NCMEC’s CyberTipLine, which then warned authorities about more than 4,260 potential new child victims.

A Google spokesperson told The New York Times that the company will only scan personal images after the user has taken “affirmative action,” including backing up their material to Google Photos.

Named only as Mark, the concerned parent eventually lost access to his emails, contacts, photos, and even his phone number, and his appeal was denied (stock image)

The incident is an example of why critics view scanning data stored on personal devices or in the cloud for CSAM as an invasion of privacy.

Jon Callas, a director of technology projects at the Electronic Frontier Foundation, called Google’s practices “intrusive” in a statement to The New York Times.

He said: ‘This is exactly the nightmare we are all worried about.

“They’re going to scan my family album and I’ll be in trouble.”

In April, Apple announced that it was rolling out its Communication Safety tool in the UK.

The tool, which parents can choose to turn on or off, scans images sent and received by children in Messages for nudity and blurs them automatically.

The feature initially raised privacy concerns when it was announced in 2021, but Apple has since reassured users that it cannot access the photos or messages.

“Messages uses on-device machine learning to analyze image attachments and determine if a photo appears to contain nudity,” it explains.

“The feature is designed to prevent Apple from accessing the photos.”
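
The sketch below, again hypothetical rather than Apple’s implementation, shows the general shape of such an on-device check using Pillow: a local stand-in classifier scores the attachment, anything above a threshold is blurred before it is displayed, and nothing leaves the device in the process.

```python
from PIL import Image, ImageFilter  # Pillow imaging library

def nudity_score(image: Image.Image) -> float:
    """Hypothetical on-device classifier returning a score between 0 and 1."""
    return 0.0  # stub; a real implementation would run a local ML model

def prepare_attachment_for_display(path: str, threshold: float = 0.5) -> Image.Image:
    """Blur the attachment locally if the on-device model thinks it shows nudity."""
    image = Image.open(path)
    if nudity_score(image) >= threshold:
        return image.filter(ImageFilter.GaussianBlur(radius=25))
    return image
```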

Tech giants risk hefty fines for online child abuse if they don’t take action

Tech giants that fail to develop ways to scan for online child abuse face billions of pounds in fines.

Ofcom has been given powers to require companies to demonstrate how they “prevent, identify and remove” illegal content.

If they do not follow the rules, they could be punished with fines of up to 18 million pounds or 10 percent of their annual worldwide turnover.

It came as a children’s charity revealed a horrendous 80 percent increase in online grooming crimes reported to police over the past four years.

The NSPCC said it had recorded hundreds of cases on Meta’s social media platforms – Facebook, Instagram and WhatsApp – last year.
