eSafety increases pressure on tech giants over child sexual abuse content

Apple, Meta and other tech giants have been ordered to report twice a year on the steps they are taking to tackle child sexual abuse material on their platforms, as part of an escalation of Australia’s online safety compliance regime.

eSafety Commissioner Julie Inman Grant on Wednesday issued legal notices to six companies covering eight services, requiring them to report on their efforts to combat child sexual abuse material in Australia every six months for the next two years.

Apple, Google, Meta (including WhatsApp) and Microsoft (including Skype), as well as the owners of chat platforms Discord and Snapchat, are the targets of the new reporting regime, driven partly by the companies' answers to a previous round of legal notices.


Ms Inman Grant said the “alarming but not surprising” responses confirmed what the online safety regulator had long suspected – that there were “significant gaps and differences between the practices of the services”.

“In our subsequent conversations with these companies, we have yet to see any significant changes or improvements to address these identified safety deficiencies,” she said in a statement on Wednesday.

Citing the first Basic Online Safety Expectations (BOSE) transparency report from December 2022, Ms Inman Grant said Apple and Microsoft made no attempt to proactively detect child sexual abuse material on their iCloud and OneDrive platforms.

Although eSafety has since introduced mandatory standards, cloud and messaging service operators will not be required to detect and remove known child sexual abuse material until December.

Ms Inman Grant is also concerned that Skype, Microsoft Teams, FaceTime and Discord still do not use technology to detect live streaming of child sexual abuse in video chats.

Information sharing between Meta services is another source of concern, with offenders banned from one service, such as Instagram, in some cases continuing to perpetrate abuse on the parent company’s other platforms, WhatsApp or Facebook.

The legal notices ask the companies to explain how they are tackling child sexual abuse material, live-streamed abuse, online grooming, sexual extortion and deepfake child abuse material created using generative artificial intelligence.

On Tuesday, Ms Inman Grant said she was in discussions with Delia Rickard, who is reviewing the Online Safety Act, about the need to close “gaps” in existing legislation and codes that currently only extend to abhorrent content and pornography.

“There is a clear gap there and it is now time to think about what kinds of powers we might need to make ourselves more effective at a systemic level,” Ms Inman Grant said.

Another concern is how quickly companies respond to user reports of child sexual exploitation; Microsoft, for example, took an average of two days to respond in 2022.

Ms Inman Grant said the new reporting obligations would force companies to “improve their performance” and show they were “making improvements”, with the regulator expected to report regularly on the findings of the reviews.

This mechanism is one of three means available under BOSE to help “lift the veil” on online safety initiatives carried out by social media, messaging and gaming service providers.

“These reviews will help us understand whether these companies have made improvements to online safety since 2022-23 and ensure that they remain accountable for any harm still occurring against children on their services,” Ms Inman Grant said.

“We know that some of these companies have made progress in certain areas. This is an opportunity to show us their progress across the board.”

The six companies will have to provide their first round of responses by February 15, or face penalties of up to $782,500 per day.

Do you know more? Contact James Riley by email.