
AI Security Guidelines agreed on by Australia, USA, UK and 15 other countries

30 November 2023
Jesse Ashburn and Mark Metzeling
5 mins reading time

On Monday 27 November, 18 nations signed an agreement on AI safety based on the AI Security Guidelines. Led by the UK’s National Cyber Security Centre and the U.S. Cybersecurity and Infrastructure Security Agency, it was the first agreement of its kind to address the secure development of AI systems. The full set of guidelines can be accessed from the Australian Signals Directorate.

Aim of the guidelines

The overall aim of these guidelines is to ensure the secure development of AI technology. That means creating an environment where cybersecurity becomes an essential component in the development of AI systems. The US Secretary of Homeland Security, Alejandro Mayorkas, described the guidelines as a pathway for AI providers to design, develop, deploy and operate AI in a manner that prioritises cybersecurity at its core. Implementing these guidelines will help providers realise the opportunities of AI without revealing sensitive data to unauthorised parties.

What are the guidelines?

The guidelines are broken down into four key areas:

  • Secure design
  • Secure development
  • Secure deployment
  • Secure operation and maintenance

These areas provide specific, applicable guidance across all four stages of the AI development life cycle:

  • Design
  • Development
  • Deployment
  • Operation and maintenance

The introduction of these guidelines is a response to the novel security vulnerabilities and standard cybersecurity threats to which AI systems are constantly subject. As the pace of AI development continues to advance rapidly, threats and vulnerabilities multiply, while safety considerations risk becoming secondary.

Key Takeaways for Australian businesses

The AI security guidelines are another form of regulation that will impact key industries that provide AI systems. To ensure compliance, AI providers will need to review their systems and apply the relevant guidelines across them, and to develop internal AI policies that implement these guidelines and any ensuing legislation implemented by the Federal Government in due course.

We’re here to help

The Macpherson Kelley IP, Trade and Technology team has significant experience in data security and privacy matters, and can assist with compliance and implementation of the AI guidelines in your business. In addition to drafting relevant AI policies for your business’ use, we are also here to advise you on how the guidelines will affect your business.
