Tech giants face scrutiny as eSafety demands action against child abuse

Australia’s eSafety Commissioner has issued legal notices to tech giants including Apple, Google, Meta and Microsoft, requiring the companies to report to the regulator every six months on the measures they have put in place to combat online child sexual abuse.

Issued under Australia’s Online Safety Act, notices have also been sent to Discord, Snap, Skype and WhatsApp, asking all recipients to explain how they are combating child abuse material, live-streamed abuse, online grooming, sexual extortion and, where applicable, the production of “synthetic” or fake child abuse material created using generative AI.

For the first time, these notices will require the tech companies to report periodically to eSafety over the next two years, with eSafety regularly publishing summaries of the findings to improve transparency, highlight safety weaknesses and encourage improvements.

eSafety Commissioner Julie Inman Grant said the companies were chosen partly based on answers many of them provided to eSafety in 2022 and 2023, which exposed a series of safety shortcomings in protecting children from abuse.

“We’re increasing the pressure on these companies to improve their performance,” Inman Grant said. “They’re going to have to report back to us every six months and show us that they’re making progress.

“When we sent notices to these companies in 2022/23, some of their responses were alarming but not surprising, as we had long suspected significant gaps and wide differences between the services’ practices. In our subsequent conversations with these companies, we have yet to see any significant changes or improvements in these identified safety gaps.

“Apple and Microsoft said in 2022 that they do not attempt to proactively detect child sexual abuse material stored in their widely used iCloud and OneDrive services. This is despite these file storage services being well known to serve as a haven where child sexual abuse and pro-terror content can be stored and shared.

“We also learned that Skype, Microsoft Teams, FaceTime, and Discord do not use any technology to detect livestreaming of child sexual abuse in video conversations. This is despite evidence of the widespread use of Skype, in particular, for this long-standing and rapidly growing crime.

“Meta also admitted that it does not always share information across its services when an account is banned for child abuse, meaning that offenders banned on Facebook may continue to perpetrate abuse through their Instagram accounts, and offenders banned on WhatsApp may not be banned on Facebook or Instagram.”

eSafety also found that eight different Google services, including YouTube, do not block links to websites known to contain child sexual abuse material. This is despite the availability of databases of such known abusive websites, which many other services use for blocking.

Although eSafety investigators regularly observe the use of Snapchat for grooming and sexual extortion, eSafety found that the service does not use any tools to detect grooming in chats.

The report also found wide disparities in how quickly companies respond to user reports of child sexual exploitation and abuse on their services. In 2022, Microsoft said it took an average of two days to respond, or up to 19 days when those reports required further review, which was the longest of any provider. Snap, by contrast, said it responded within four minutes.

“Speed is not everything, but every minute counts when a child is in danger,” Inman Grant said.

“These notices will help us understand whether these companies have made improvements to online safety since 2022/23 and ensure they remain accountable for any harm still occurring against children on their services.

“We know that some of these companies have made progress in certain areas. This is an opportunity to show us their progress across the board.”

Key potential safety risks addressed in this round of notices include the ability for adults to contact children on a platform, risks of sexual extortion, and features such as livestreaming, end-to-end encryption, generative AI and recommender systems.

Transparency notices issued under Australia’s Basic Online Safety Expectations are designed to work hand in hand with eSafety’s industry codes and standards, which require the online industry to take meaningful steps to combat child abuse material and other Class 1 content on their services.

Compliance with a notice is mandatory, and financial penalties of up to $782,500 a day may apply to services that do not respond.

Companies will have until February 15, 2025 to provide their first round of responses.

Public dissemination. This content from the original organization/authors may be of a timely nature and edited for clarity, style, and length. Mirage.News takes no institutional position or bias; all views, positions, and conclusions expressed herein are solely those of the author(s).
