AI-assisted image-based abuse: navigating deepfakes
Commonly available AI tools are being used to cause harm online.
Deepfakes and synthetic media are becoming more common as the AI tools used to create them become readily available.
Some of these tools are being used to create harmful content, including image-based abuse, with young people involved both as targets and as creators.
This webinar from the eSafety Commissioner will cover:
- how young people are experiencing AI-assisted image-based abuse – the tools, the behaviours, the impacts
- the technological, social and cultural contexts that are driving the use of deepfakes
- support strategies for professionals working with young people to both prevent and respond to AI-assisted image-based abuse.
More information
- New advisory: Deepfake damage in schools – the eSafety Commissioner has issued a new Online Safety Advisory to alert schools, parents/carers and young people to the emerging risks of deepfake technology. The advisory explains how these harms are happening, what actions schools and families can take, and where those affected can turn for help.
- Guide to responding to image-based abuse involving AI deepfakes – eSafety Toolkit for Schools