PUBLICATIONS | 17 Sep 2024

The rollercoaster of AI, now with guardrails: Australia’s proposed AI regulations aim to tackle high-risk tech

By Mathisha Panagoda, Katherine Jones and Kate Bowes

As AI evolves, legal frameworks lag behind. Existing laws, designed for human decision-making, often fail to address AI’s unique risks. On 5 September 2024, Australia released a voluntary AI Safety Standard and a proposed framework of mandatory guardrails for AI in high-risk settings, aiming to address risks proactively throughout the AI lifecycle.


In brief

As AI evolves, current legal frameworks are struggling to keep pace. Most existing laws, including those governing privacy, anti-discrimination, and consumer protection, were designed with human decision-making in mind and often fail to address the unique challenges posed by autonomous systems. This disconnect becomes particularly problematic during the development and deployment stages of AI, where risks can escalate quickly and are inadequately regulated. Recognising these gaps, the Australian Government released two publications on 5 September 2024: the first is a voluntary AI Safety Standard, and the second a proposed risk-based regulatory framework of mandatory safeguards for high-risk AI settings, aiming to address AI’s risks proactively before they manifest as harm.

The voluntary safety standard is intended to apply to all organisations throughout the AI life cycle. The proposed mandatory guardrails, which mirror the ten voluntary standards, would apply to developers and deployers of AI systems in high-risk settings.

Definition of high-risk AI

The proposed definition of high-risk AI focuses on applications with the potential to cause significant harm, and is particularly relevant in legal, educational, and healthcare settings.

The principles-based framework is aimed at assessing foreseeable risks that:

  1. Impact human rights, specifically any system that may infringe on rights protected under Australian law, such as anti-discrimination statutes. The framework seeks to regulate or eliminate, for example, predictive policing tools that disproportionately target minority communities.

  2. Pose health and safety concerns, particularly AI applications in healthcare, such as diagnostic tools, which pose high risk if they misinterpret data due to biased training sets.

  3. Create biased outcomes, such as those found in educational settings, where AI-driven grading systems can unfairly penalise students from non-English speaking backgrounds.

  4. Pose legal and defamation risks, such as those arising from AI used in legal settings, including tools for predicting case outcomes or assessing risk, which must be closely monitored to prevent unfair legal outcomes or reputational harm.

  5. Create systemic and societal risks. AI models, particularly general-purpose AI (GPAI), can affect entire groups or society at large. The broad capabilities of GPAI models, like GPT-4, can lead to unforeseen uses, such as generating deepfakes, automating scams, and spreading misinformation to the general public, necessitating a stringent regulatory approach.

Ten guardrails

The Australian Government has outlined ten proposed guardrails designed to ensure AI is used responsibly. They are:

  1. Testing and Validation: AI systems will undergo rigorous testing to ensure they meet safety and performance standards. For example, autonomous vehicles will be thoroughly tested in diverse conditions to prevent accidents.

  2. Transparency Requirements: Developers will be required to publish clear information about how AI systems are trained and operate.

  3. Accountability Measures: Clear lines of responsibility will be established, ensuring that developers and deployers are accountable for managing AI-related risks.

  4. Monitoring and Auditing: Continuous oversight will be mandated, particularly in legal and educational applications, where AI decisions can directly impact lives and careers.

  5. Data Governance: Robust standards for managing AI training data will be introduced to prevent biases that could harm vulnerable populations, including marginalised students or job seekers.

  6. Risk Management Frameworks: Comprehensive guidelines for identifying, assessing, and mitigating AI risks at all stages of development will be introduced.

  7. User Awareness and Consent: Users will be informed when interacting with AI, especially in contexts where decisions affect their legal rights or educational opportunities.

  8. Security Safeguards: Protecting AI systems from cybersecurity threats, such as hacking or tampering, will be essential to maintain public trust and prevent malicious misuse.

  9. Adaptability to Technological Change: Regulations will be flexible to accommodate future AI advancements, such as agentic AI with the potential for autonomous actions.

  10. Compliance and Enforcement Mechanisms: Effective penalties and enforcement strategies will be implemented to support safe AI deployment.

In Australia, we do not have one overarching AI law; instead, AI-related regulation is shaped by a combination of existing laws and principles, such as the Privacy Act 1988 (Cth), the Australian Human Rights Commission (AHRC) AI principles, and the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. The voluntary safety standard and the proposed mandatory guardrails are framed as part of this broader regulatory approach to AI governance rather than as a single, specific law.

If the mandatory guardrails are adopted, the Australian Government is considering three main options for implementing them:

Option 1: Domain-Specific Adaptation: AI-specific guardrails would be integrated into existing regulations. While this tailored approach aims to address specific industry needs, it may lead to inconsistencies across different sectors.

Option 2: Framework Legislation: New legislation would be introduced to apply across industries, offering unified standards while requiring careful calibration to suit the diverse needs of various sectors.

Option 3: Whole-Economy AI Act: Comprehensive AI legislation, similar to the EU AI Act and Canada’s AIDA, would establish clear and consistent regulations across the entire economy, though it may face implementation challenges.

What is the impact on you?

AI’s decision-making capabilities introduce legal complexities that expose businesses to new forms of liability, creating challenges we’ve never faced before. 

Navigating this evolving terrain is not just about understanding the technology; it is about anticipating and mitigating risks that did not exist until now, a challenge that demands human insight, adaptability, and strategic foresight.

The Australian Government has called for submissions on the proposed guardrails by 4 October 2024. Governance of AI, the rules around it, how it applies to everyday life, and its impact on people will inevitably be a rapidly evolving landscape in the coming years, and all organisations will need to keep up to date with the changes.

This is commentary published by Colin Biggers & Paisley for general information purposes only. This should not be relied on as specific advice. You should seek your own legal and other advice for any question, or for any specific situation or proposal, before making any final decision. The content also is subject to change. A person listed may not be admitted as a lawyer in all States and Territories. Colin Biggers & Paisley, Australia 2024
