PUBLICATIONS | 23 November 2023

Australia signs Bletchley Declaration

By Michael Nurse and Kylie Trinh

In Brief

On 3 November 2023, Australia, along with 27 other countries and the EU, signed the Bletchley Declaration, following the AI Safety Summit (Summit) held on 1-2 November 2023 in the UK.

The Summit focussed specifically on Frontier AI, being "highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models". While the examples provided focussed on the risks associated with large language models, the Summit also highlighted the potential for other technologies to be involved, notably those deployed for natural sciences research and biotechnology. The risks associated with these technologies, described as 'dual-use science risks', were highlighted in the Summit's discussion paper as follows:

"Frontier AI systems have the potential to accelerate advances in the life sciences, from training new scientists to enabling faster scientific workflows. While these capabilities will have tremendous beneficial applications, there is a risk that they can be used for malicious purposes, such as for the development of biological or chemical weapons. Experts are in disagreement about the magnitude of risk that AI advances will pose for biosecurity." 

Other potential harms of Frontier AI considered at the Summit include social and economic harms resulting from:

  1. Degradation of the information environment;

  2. Labour market disruption;

  3. Bias.

Australia, by being a signatory to the Bletchley Declaration, has agreed that it:

  1. Acknowledges the vast opportunities and potential, but also the risks, that arise from AI, particularly with respect to the development and implementation of Frontier AI;

  2. Will collaborate internationally to identify safety risks associated with AI, and will take steps to collaboratively conduct scientific research and development on Frontier AI to improve safety; 

  3. Will create domestic policies in response to these risks; and

  4. Understands how AI can be used as an opportunity to improve education for future generations.

The Bletchley Declaration is not binding on Australia; however, Australia's commitment to "create domestic policies in response to these risks" does signal potentially broad changes to the local legal environment in order to address the types of risks identified. 
 
Implementation:
 
Exactly what changes to the local legal landscape will be considered remains to be seen. As those developments arise, we will endeavour to update this article to capture the breadth of the changes. 
 
In the meantime, it is noteworthy that at around the same time as the Summit and Declaration, the United States took a similar stance in relation to the developments in AI technology.
 
On 30 October 2023, President Biden issued an Executive Order seeking to address issues associated with AI safety and regulation, including:

  1. Developing a set of standards and tests to ensure that AI models are tested and deemed safe before they are released to the public;

  2. Developing a framework by which AI-generated content may be detected and authenticated, to minimise the risk of fraud;

  3. Creating an advanced cybersecurity program to research and develop AI models that improve cybersecurity in software and networks;

  4. Measures to prevent AI developers from using discriminatory algorithms, and mechanisms by which the government may investigate and prosecute such violations; and

  5. The requirement that AI developers must notify the US government where there is a potential for serious risk to national security or health when testing AI models, and that any safety test results must be disclosed to the US government.

National Security and Defence Considerations:
 
Some of the risks identified at the Summit arguably already fall within the scope of existing Australian laws, at least to a limited extent.
 
While social and economic harm is clearly on the radar of both the UK and the US in their stances on AI, national security and defence implications are also clearly key considerations. Both countries identify the potential for AI to be weaponised, including in dual-use scenarios, potential biological and chemical weaponry, and cyber warfare.
 
Consider, for example, AI-facilitated drug discovery: a model initially intended to discover new compounds for therapeutic and pharmaceutical purposes may be re-targeted to produce lethal or toxic molecules.
 
Once defence or national security concerns are engaged, whether dual-use or otherwise, a number of Australian Acts become relevant, not least the Defence Trade Controls Act 2012 (Act), which regulates the "supply, publication and brokering of military and dual-use goods, software and technology".
 
Technologies covered by the Act are set out in the Defence and Strategic Goods List (DSGL). We note that under the category of 'dual-use goods' (being items that may be used for commercial purposes, but that may also be used in military systems or for weapons of mass destruction purposes), high-performance computer systems that are not in the public domain and that exceed a performance of "29 Weighted TeraFLOPS" are identified. While most likely intended to capture computer systems capable of high-performance cryptography and secure communications, the same specification could now also apply to high-performance computer systems used in AI research with an intrinsic military end-use. Similarly, systems and software capable of developing toxic chemical and biological agents may also be captured. 
 
Some robotic systems also fall within the scope of the DSGL, the majority of which rely upon AI technologies for core functions (including navigation).
 
While it is unlikely that public domain algorithms in AI research would be captured (since they are either in the public domain or created in the course of basic scientific research), trained models capable of identifying new molecules by therapeutic effect or toxicity, such as in the example above, may fall within the scope of the list.

In the case of biological or chemical weapons, the operation of either or both of section 112BA(3) of the Customs Act 1901 (Cth) (where a physical transfer of the technology is involved) and the Weapons of Mass Destruction (Prevention of Proliferation) Act 1995 (Cth) (which prohibits the supply of certain goods, including documents concerning biological and chemical weapons) may also be enlivened. 

In short, restrictions on the supply out of Australia (rather than within Australia) of high-risk AI technologies, such as those contemplated by the Declaration, may already be possible under the existing defence legislative framework. 

Recent Developments in National Security and Defence:
 
On 10 November 2023, the Defence Trade Controls Amendment Bill 2023 (Bill) was introduced to amend the Act. 
 
The proposed amendments pre-date and are not related to the Bletchley Declaration.
 
The Bill effectively proposes that a criminal offence will be committed where a person:

  1. Transfers certain technology listed on the DSGL to a foreign person or entity within Australia without a permit;

  2. Provides, from a foreign country, access to certain DSGL technology that was previously supplied out of Australia, to a person outside Australia or to a foreign person; and

  3. Provides services, such as assistance or training, to foreign persons whether located in Australia or in a foreign country, in the "design, development, engineering, manufacture, production, assembly, testing, repair, maintenance, modification, operation, demilitarisation, destruction, processing or use" of certain DSGL technology without a permit.

The Bill carves out an exception such that a permit is not required where certain DSGL technology is supplied, or related services are provided, to the UK or the US.
 
While it is contemplated that trade controls between Australia, the UK and the US will be relaxed under the proposed changes, the legislation nevertheless introduces the possibility of contraventions occurring even without the goods or technologies being supplied out of Australia. To the extent that the DSGL applies to AI technologies (which, as noted above, is arguable in some instances), the proposed amendments could have serious implications for how multinational research projects, and international collaborations in fields involving AI research, are conducted within Australia.
 
While we are not aware of any prosecutions under the current Act involving AI-specific contraventions, time will tell whether more regulation of AI research and technologies will be effected through the Act if the Bill passes. In light of Australia's commitment to AI safety research both domestically and internationally, it is anticipated that an additional legislative framework will be introduced to facilitate this commitment.
 
Future Steps:
 
What direct steps Australia ultimately takes to give effect to the Bletchley Declaration remains to be seen. The possibility of further relevant amendments to the DSGL cannot be ruled out, and we expect that aspects of privacy and consumer protection legislation will also be considered in the context of the Declaration.
 
We will aim to update this article as new developments and implementations of the Bletchley Declaration are announced.
 

This is commentary published by Colin Biggers & Paisley for general information purposes only. This should not be relied on as specific advice. You should seek your own legal and other advice for any question, or for any specific situation or proposal, before making any final decision. The content also is subject to change. A person listed may not be admitted as a lawyer in all States and Territories. Colin Biggers & Paisley, Australia 2024
