Thinking of implementing generative AI? Think policy and training
In our previous article, we gave an overview of the definition of AI and Generative AI, but what sort of safety measures do you need to have in place if you are looking to implement AI into your day-to-day business practices?
The world of AI poses many opportunities but also potential issues for organisations. Imagine your staff are using ChatGPT to analyse data sets, write emails using client data and produce new product ideas. Who owns that information, and are you in breach of your privacy obligations? It is vital that you regulate the use of generative AI in your business by providing training and an AI policy.
Who owns AI-generated inputs and outputs?
This is the top question on everyone’s mind when considering the use of generative AI. While each country has its own legislation on this topic, recent decisions in Australia indicate that the identification of the author of the output is likely to determine whether the data, or computer-generated works, are protectable under intellectual property laws.
There are several schools of thought when it comes to identifying the author (and therefore the owner of the output); however, the question is not addressed in legislation, and the Australian judiciary is yet to provide definitive guidance.
Until then, it is likely that AI-generated works, including data created by AI, will be treated as follows:
- AI is not an inventor, nor an author under Australian law.
- AI cannot own property rights, such as trade marks, copyright, granted patents, or registered designs.
Rolling out generative AI in your business
The creation of policies and procedures around the use of AI in your business is complex due to the evolving nature of AI and its legal implications. You should consider and formalise what generative AI can and cannot be used for in your business.
It is important to remember that generative AI is not all knowing. It is simply a computer program solving a mathematical probability problem: given the text so far, it predicts which output is most likely to come next.
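By way of illustration only, the toy sketch below shows that idea in miniature: given the text so far, the program looks up probabilities for the next word and picks among the most likely options. The vocabulary and probabilities are invented for this example and do not reflect how any particular product is built.

```python
import random

# Toy illustration only: a generative model repeatedly asks "given the text
# so far, which next word is most probable?" and picks among likely options.
# The vocabulary and probabilities below are invented for this example.
next_word_probabilities = {
    "the quick brown": {"fox": 0.82, "dog": 0.11, "car": 0.07},
}

def generate_next(prompt: str) -> str:
    """Choose the next word in proportion to its estimated probability."""
    candidates = next_word_probabilities[prompt]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

print(generate_next("the quick brown"))  # most often prints "fox"
```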
If you intend to implement AI in your business, it is recommended that you also implement training around the use of AI, including:
- How to formulate quality user prompts
Teach your employees how to be good at problem formulation. It is important to identify clearly and precisely the fundamental cause that underlies a problem. A real example of this thought process is the subarctic oil problem caused by the Exxon Valdez oil spill: the problem to be solved was not the oil spill itself, but that, given the freezing environment, the recovered oil would freeze inside the pump.
Oguz A. Acar, writing in the Harvard Business Review, states: “Prompt engineering focuses on crafting the optimal textual input by selecting the appropriate words, phrases, sentence structures, and punctuation. In contrast, problem formulation emphasizes defining the problem by delineating its focus, scope, and boundaries.”
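A hypothetical illustration of that difference, using the oil spill example above (the wording and constraints below are invented for illustration only):

```python
# Hypothetical prompts for illustration only. The well formulated version
# states the underlying problem, then its focus, scope and boundaries,
# rather than simply asking for "ideas".
vague_prompt = "Give me ideas to fix our oil spill problem."

formulated_prompt = (
    "Problem: in subarctic conditions, recovered oil freezes inside the pumps "
    "and cannot be transferred.\n"
    "Focus: keeping the oil fluid enough to pump, not cleaning the shoreline.\n"
    "Scope: methods that work on site, in below-freezing temperatures.\n"
    "Boundaries: do not propose chemically altering the oil itself.\n"
    "Task: suggest three approaches and the main risk of each."
)
```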
- How to prevent and minimise bias
Almost every dataset has an inherent bias. By identifying what the bias is in the dataset and user prompt, you can take steps to address it – whether that be altering the training data set or adjusting the user prompt to cater for the known bias.
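As a simple, illustrative first step in identifying bias, a dataset can be profiled before it is used. The sketch below uses invented records and a made-up “region” attribute purely to show the idea:

```python
from collections import Counter

# Hypothetical records for illustration: before a dataset is used with (or to
# fine-tune) a generative AI tool, profile how it is distributed across an
# attribute you care about. Heavy skew is a sign of bias to address.
records = [
    {"region": "metro", "outcome": "approved"},
    {"region": "metro", "outcome": "approved"},
    {"region": "metro", "outcome": "declined"},
    {"region": "regional", "outcome": "declined"},
]

by_region = Counter(r["region"] for r in records)
total = sum(by_region.values())
for region, count in by_region.items():
    print(f"{region}: {count / total:.0%} of records")
# Skewed proportions suggest rebalancing the training data or noting the
# known bias when writing user prompts.
```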
- How to avoid breaches of confidentiality or loss of privilege
Professionals have varying degrees of privilege attached to their interactions with clients, and businesses owe obligations of confidence under the various contracts they have signed.
A review of the terms and conditions of many generative AI software programs reveals that any user input may be treated as a public disclosure. To maintain data privacy when using generative AI, some best practices include:
- Anonymisation of user inputs: ensure input prompts don’t contain any personally identifiable or sensitive information (a simple illustration follows this list). This helps prevent individuals being identified and sensitive details becoming public knowledge or part of the generative AI’s data set.
- Limit the type of access to generative AI: Implement controls around what generative AI can and cannot be used for in your business to minimise the risk of a breach of confidentiality or loss of privilege.
- Regular model audits: Conduct periodic audits of your generative AI models to assess their outputs for any unintentional disclosure of sensitive information.
- Data breach response plan: Have a well-defined data breach response plan in place. This plan should include steps to be taken in case of a confidentiality breach, including notifying affected parties, relevant authorities, and taking appropriate measures to mitigate the impact.
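To illustrate the anonymisation point above, the sketch below shows a minimal pre-submission redaction step. The patterns and the client list are simplified placeholders that would need to be extended and reviewed for real use.

```python
import re

# A minimal sketch of pre-submission anonymisation. The patterns and the
# client list below are simplified placeholders; a production tool would
# need far broader coverage and review.
KNOWN_CLIENT_NAMES = ["Acme Pty Ltd"]  # hypothetical example list

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b"), "[PHONE]"),
]

def redact(prompt: str) -> str:
    """Replace obvious identifiers before the prompt leaves the business."""
    for name in KNOWN_CLIENT_NAMES:
        prompt = prompt.replace(name, "[CLIENT]")
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Email Acme Pty Ltd at jane@acme.com.au or call 0412 345 678."))
# -> Email [CLIENT] at [EMAIL] or call [PHONE].
```

The same checks can be run periodically over stored prompts and outputs as part of the regular model audits mentioned above.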
- How to maximise your ownership of the output content
If you want to be able to monopolise and/or commercialise the outputs of generative AI, it is recommended you maximise the prospects of your business owning these outputs.
- Assert ownership of the AI-generated data on the outputs themselves (by contract, and by website, documentation, and other relevant notices, e.g. copyright notices).
- Insert IP assignment clauses in any contracts or agreements so that the outputs, and any related methodologies used, vest upon creation in your business’ legal entity (or, better still, its IP-owning entity).
- If you are using enterprise AI, ensure you enter into customised contracts with generative AI platforms that stipulate your business’ legal entity as the owner of the generative AI outputs.
- Selection of data – assert in written methodologies and specifications that the way in which the contents of database(s) and dataset(s) concerned are selected and arranged is the product of the author’s own intellectual creation to maximise the likelihood of database copyright availability.
- Ensure the contractual definitions of confidential information, derived data, and intellectual property include trade secrets as part of the definition of each, and are all included in any contract relating to, or which may utilise, generative AI.
Seek out advice from AI experts
Consulting with legal experts who specialise in AI and intellectual property law is crucial to safeguarding your ownership rights effectively and minimising any improper or illegal use of generative AI in your organisation.
For more information, please contact our expert team today.
The information contained in this article is general in nature and cannot be relied on as legal advice nor does it create an engagement. Please contact one of our lawyers listed above for advice about your specific situation.