Australia’s Deepfake Law: Why Your AI Service Needs Content Moderation

    Australia is taking a bold step in the fight against AI-generated sexual exploitation. With new legislation set to introduce up to six years of jail time for creating or sharing non-consensual deepfake pornography, the country is signaling a zero-tolerance policy on this form of digital abuse.

    This move isn’t just a wake-up call for potential offenders; it’s a clarion call for companies in the AI sector to step up their content moderation game.

    Concept image of a statue where the subject is nude.

    The new legislation

    Attorney-General Mark Dreyfus has made it clear: the sharing of AI-generated sexually explicit images without consent will soon be a criminal offense punishable by severe penalties. This legislative push comes in response to the alarming rise in deepfake technology misuse, which has skyrocketed by over 550% in recent years.

    While the legislation is new, the problem is far from new. Sexually graphic “deepfake” images of Taylor Swift went viral on social media in early 2024, fuelling widespread condemnation from fans, the general public, and even the White House.

    The role of content moderation services

    These new regulations underscore the critical importance of robust content moderation for companies working with AI-generated content. Simply filtering out inappropriate images after they’re created is not enough.

    The key lies in monitoring and managing the prompts—the very inputs that drive the AI to generate content.

    Why prompt moderation matters

    Imagine your AI tool as a powerful but potentially dangerous machine. “With great power comes great responsibility” is not just a proverb popularized by Spider-Man; it has real legal consequences now.

    The outputs are only as safe as the inputs you allow. By actively monitoring prompts, you can catch and prevent harmful requests before they result in illegal or damaging content. This proactive approach supports legal compliance and builds trust and safety into your service from the ground up.

    Moreover, prompt monitoring can lead to significant cost savings. Generating AI images, especially complex ones, consumes substantial server resources.

    By intercepting inappropriate prompts early, you can prevent unnecessary server load, reducing operational costs. This efficiency not only saves money but also improves the overall performance of your AI service.
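    To make this concrete, here is a minimal sketch of a pre-generation prompt gate. Everything in it is illustrative: the blocklist terms, function names, and return values are placeholders, and production systems typically rely on trained classifiers rather than keyword lists.

```python
# Minimal sketch of prompt moderation before image generation.
# BLOCKED_TERMS and the function names below are illustrative
# placeholders, not a real moderation API.

BLOCKED_TERMS = {"nude", "deepfake", "undress"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing any blocked term (case-insensitive)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def handle_request(prompt: str) -> str:
    """Gate the prompt BEFORE any expensive generation work begins."""
    if not is_prompt_allowed(prompt):
        # Rejected here: no GPU time or server resources are spent.
        return "rejected"
    # Only safe prompts reach the (costly) image-generation step.
    return "queued-for-generation"
```

    The key design point is ordering: the check runs before generation, so a rejected prompt never consumes compute, which is where the cost savings described above come from.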

    Education and accountability

    Experts like Professor Nicola Henry of RMIT University emphasize the importance of educating users, particularly young people, about the legal and ethical ramifications of deepfake technology.

    “We need education that teaches about an ‘affirmative’ or ‘positive’ model of consent. Essentially, it goes a step beyond ‘ordinary consent,’” she says.

    Beyond user education, there’s a pressing need to hold social media platforms accountable for the content they host. This approach is not unique to Australia; the European Union’s Digital Services Act also mandates platforms to monitor and remove harmful content, underscoring a global trend towards greater accountability in digital spaces.

    Final thoughts

    Companies in the AI sector must prioritize ethical practices and comprehensive content moderation strategies. This isn’t just about staying on the right side of the law; it’s about cultivating a safer, more respectful online environment.

    Check out some of our use cases for even more insights into how companies like Besedo can help with unwanted nudity and NSFW content.

    We are always available for a chat, no strings attached.

    Ahem… tap, tap… is this thing on? 🎙️

    We’re Besedo and we provide content moderation tools and services to companies all over the world. Often behind the scenes.

    Want to learn more? Check out our homepage and use cases.

    And above all, don’t hesitate to contact us if you have questions or want a demo.