Microsoft launches ‘Artemis’ anti-grooming tool to tackle child sexual exploitation and prevent pedophiles from targeting young people online

  • Tool is free for ‘qualified online service companies that offer a chat function’
  • Scans messages in an app; the software gives each conversation an ‘assessment’ rating
  • Apps determine the rating threshold at which an alert is triggered for human review
  • Child protection charity workers can then review the conversation and report it to law enforcement if necessary

Microsoft has released its ‘Artemis’ tool, which scans messaging apps for signs of grooming activity, in an effort to tackle pedophiles and sex offenders online.

The tool, which Microsoft has made free to use, has been tested live on Xbox and can be added to any messaging platform or app.

Artemis automatically scans and assesses the content of conversations, then notifies human moderators at the US National Center for Missing and Exploited Children (NCMEC) if something seems wrong.

According to Microsoft, moderators will have access to flagged private conversations and, if there is good reason to believe that a child is in danger, the authorities will be alerted.


Microsoft has released its ‘Artemis’ tool, which scans messaging apps for signs of grooming activity, in an effort to tackle online pedophiles and sex offenders. Microsoft has made the tool free to use

HOW DOES ARTEMIS WORK?

Chat platforms collaborate with Microsoft and obtain the free software through Thorn – a non-profit organization that protects children from sexual abuse.

Artemis will then automatically scan and ‘assess’ the content of conversations.

Microsoft has not revealed how the software detects or assesses conversations, for fear that predators may find ways to bypass Artemis.

When a conversation’s assessment reaches a certain level set by the individual app, an alert is sent to a human moderator for review.

According to Microsoft, moderators will have access to private conversations and, if there is good reason to believe that a child is at risk, they will alert the police and other appropriate agencies, including child protection services.

Users cannot download the software themselves. Instead, it is up to companies to adopt the protective technology.

Microsoft says that only “qualified online service companies that offer a chat function” can use the tool.

Each messaging platform that works with Microsoft and uses Artemis is then responsible for how it uses the program.

They will also determine which ‘rating’ leads to human intervention.

Incidents of suspected sexual exploitation of children will be submitted to the NCMEC, as well as to the local law enforcement agency for review.
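
Microsoft has not disclosed how Artemis works internally, but the flow described above – score a conversation, compare the rating against a platform-chosen threshold, and queue flagged chats for a human moderator – can be sketched in outline. The Python below is purely illustrative: every name in it (Conversation, assess, REVIEW_THRESHOLD and so on) is invented for this sketch and is not part of any Microsoft API.

```python
# Purely illustrative sketch of the flagging flow described above;
# none of these names correspond to a real Microsoft or Artemis API.
from dataclasses import dataclass
from typing import List


@dataclass
class Conversation:
    conversation_id: str
    messages: List[str]


# Each platform chooses its own rating threshold for human review.
REVIEW_THRESHOLD = 0.8


def assess(conv: Conversation) -> float:
    """Stand-in for Artemis's undisclosed scoring model: returns a
    risk rating between 0.0 (benign) and 1.0 (high risk)."""
    return 0.0  # stub: the real scoring logic is not public


def send_to_human_moderator(conv: Conversation, rating: float) -> None:
    # A human moderator reviews the flagged conversation; only
    # confirmed incidents go on to NCMEC and local law enforcement.
    print(f"Flagged {conv.conversation_id} (rating {rating:.2f})")


def handle_conversation(conv: Conversation) -> None:
    rating = assess(conv)
    if rating >= REVIEW_THRESHOLD:
        # Participants are deliberately not notified of the flag.
        send_to_human_moderator(conv, rating)


if __name__ == "__main__":
    handle_conversation(Conversation("demo-1", ["hello", "how are you?"]))
```

The one design point the article does confirm is that each platform, not Microsoft, picks the rating threshold at which human review kicks in.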

Although the tool will run across all accounts on a platform, it is designed to pick up only attempts to groom and manipulate children, not explicit conversations between consenting adults.

Microsoft has kept quiet about how Artemis works to ensure that pedophiles do not find a workaround, but the triggers are thought to involve different combinations of certain messages, phrases and words.

These are calibrated based on known techniques used by sex offenders to groom vulnerable children.
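
The article only speculates that the triggers combine particular words and phrases. Purely as a hypothetical illustration of that idea – not Microsoft’s actual method, which is undisclosed – such a scorer might weight co-occurring phrase patterns:

```python
# Hypothetical illustration only: the patterns and weights below are
# invented placeholders, not anything Artemis actually uses.
from typing import Dict, List, Tuple

PATTERNS: Dict[Tuple[str, ...], float] = {
    ("placeholder phrase a", "placeholder phrase b"): 0.4,
    ("placeholder phrase c",): 0.2,
}


def combination_score(messages: List[str]) -> float:
    """Score a conversation by which phrase combinations co-occur in it."""
    text = " ".join(messages).lower()
    score = 0.0
    for phrases, weight in PATTERNS.items():
        # A pattern contributes only if all of its phrases appear.
        if all(p in text for p in phrases):
            score += weight
    return min(score, 1.0)  # clamp to the 0.0-1.0 rating range
```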

Microsoft says it is unlikely that an innocent conversation will trigger an alert for a human moderator.

If an innocent conversation does slip through the net and is flagged, the human moderator can dismiss it and no authorities are involved.

If a conversation is flagged for human assessment, the participants are not notified, to prevent a potential pedophile from being tipped off.

Artemis automatically scans and evaluates the content of conversations, then flags them for human moderators to review. According to Microsoft, moderators will have access to private conversations and, if there is good reason to believe that a child is in danger, the authorities will be alerted

Courtney Gregoire, Chief Digital Safety Officer of Microsoft, said in a statement: “Microsoft has long been committed to protecting children online.

“First and foremost, as a technology company, we have a responsibility to make software, devices and services with safety features built in from the start.

“We use technology in all our services to detect, disrupt and report illegal content, including sexual exploitation of children.

“And we innovate and invest in tools, technology and partnerships to support the global fight needed to tackle online child sexual exploitation.”

Mrs Gregoire says that Microsoft has been using the technology on its own Xbox platform for several years and has tried it on Skype.

The tool will be made available for third-party use from tomorrow, with interested companies going through Thorn, a non-profit organization dedicated to preventing child sexual abuse.

Mrs Gregoire says: ‘Project Artemis is an important step forward, but it is by no means a panacea.

‘Online child sexual exploitation and abuse, and the detection of online child grooming, are weighty problems.

“But we are not deterred by the complexity and enormity of such problems.”
