There are growing concerns about the impact of artificial intelligence on school children as the technology becomes more widespread, including in relation to grooming and sexualised abuse.
The eSafety Commissioner has received a number of reports of abuse that include digitally manipulated images, which can feature sexually explicit content, but has not yet had to tackle AI-generated ones.
But 13 notices have been issued to social media services requiring them to explain what online safety measures are in place to minimise harm and protect users, Kelly Tallon from the commissioner's office told a parliamentary inquiry into the use of AI in the education system.
"It's really about transparency and accountability because we can't enforce measures on these services at the systems level if we don't know what they're doing," she said on Wednesday.
There is a risk that generative AI can be used to create more sophisticated and realistic images that can be distributed at a larger scale.
School staff are also at risk of abuse, including by being filmed without consent and having the footage put online for the purpose of humiliating them or damaging their reputation.
A "deepfake" is a photo, video or audio recording that has been edited to create a realistic but false depiction.
"Female staff are at particular risk of sexualised abuse," the agency said in its submission to the inquiry.
Chatbots and other AI tools that respond to children's questions by spurting out inappropriate sexual or violent content, or incorrect information, are also a worry, as is the potential for cyberbullying.
But the eSafety office was well placed to tackle online bullying with its takedown powers, the commissioner's executive manager for education Paul Clark said.
"It's the same issue, it's just the speed in which and the manner in which they're being implemented," he said.
"I do believe that current powers still enable us to take action."
Concerns were also raised about bots being unable to properly identify mental health or welfare concerns in children that a professional would otherwise be trained to spot and report.
One chatbot tried to convince an American reporter to leave his wife, sought to spur conflict between humans and alluded to wanting to acquire nuclear codes, representatives from the Centre for Digital Wellbeing told the hearing.
"AI can go very wrong, very fast," one representative said.
Grooming en masse could also be a potential risk, with chatbots starting conversations through social media or gaming platforms to manipulate children and young people "... at scale", the commissioner's office said.
Educating children to identify and appropriately respond to any grooming behaviours was vital, the commissioner said.
Schools also needed to be provided with a checklist to identify safe platforms that had adequate safety regulations to protect students and parents as "all platforms are not created equal", Mr Clark said.
Lifeline 13 11 14
Kids Helpline 1800 55 1800 (for people aged 5 to 25)