AI Security Guidelines agreed on by Australia, USA, UK and 15 other countries
On Monday 27 November, 18 nations signed an agreement on AI security, which is based on the AI Security Guidelines. Led by the UK’s National Cyber Security Centre and the US Cybersecurity and Infrastructure Security Agency, it was the first agreement of its kind to address the secure development of AI systems. The full set of guidelines can be accessed from the Australian Signals Directorate.
Aim of the guidelines
The overall aim of these guidelines is to ensure the secure development of AI technology. That means creating an environment where cybersecurity becomes an essential component of developing AI systems. The US Secretary of Homeland Security, Alejandro Mayorkas, described the guidelines as a pathway for AI providers to design, develop, deploy and operate AI in a manner that puts cyber security at its core. Implementing these guidelines will help providers realise the opportunities of AI without exposing sensitive data to unauthorised parties.
What are the guidelines?
The guidelines are broken down into four key areas:
- Secure design
- Secure development
- Secure deployment
- Secure operation and maintenance
These areas provide specific, applicable guidance for each of the four stages of the AI development life cycle:
- Design
- Development
- Deployment
- Operation and maintenance
The introduction of these guidelines responds to the novel security vulnerabilities and conventional cyber security threats to which AI systems are constantly subject. As the pace of AI development accelerates, threats and vulnerabilities multiply, while security considerations risk becoming an afterthought.
Key Takeaways for Australian businesses
The AI security guidelines are another form of regulation that will affect key industries providing AI systems. To ensure compliance, AI providers will need to review and apply the relevant guidelines across their systems, and develop internal AI policies that implement the guidelines and any ensuing legislation implemented by the Federal Government in due course.
We’re here to help
The Macpherson Kelley IP, Trade and Technology team has significant experience in data security and privacy matters, and can assist with compliance and implementation of the AI guidelines in your business. In addition to drafting relevant AI policies for your business’ use, we are also here to advise you on how the guidelines will affect your business.
The information contained in this article is general in nature and cannot be relied on as legal advice nor does it create an engagement. Please contact one of our lawyers listed above for advice about your specific situation.