
AI adoption in business: Unveiling the Senate’s blueprint for regulation

17 December 2024
Mark Metzeling, Brigid Clark, Sandy Nyunt, Cooper Zulli
Read time: 7 mins

At present, Australia lacks any legislation solely focused on regulating AI. Instead, the AI industry is governed by a mix of existing laws and regulations that were tailored to other specific industries. But it is likely that will soon change. On 26 November 2024, the Senate Select Committee on Adopting Artificial Intelligence (committee) delivered its final report (the Report), making 13 recommendations. If implemented, these will mark a pivotal moment in the regulation of AI adoption in Australia, with the likely introduction of the nation’s first whole-of-economy AI legislation.

To mitigate the “risks of AI while fostering its vast potential economic and social benefits”, the standalone legislation will be developed to specifically govern “high-risk” uses of AI.

“High-risk” uses of AI

So then, what is considered “high-risk” use of AI?

The committee has recommended that “high-risk” uses of AI be defined by a principles-based approach, which is likely to incorporate regulations relating to the use of AI in tracking, predicting, or identifying human behaviours through biometric information and other personally identifiable information. This high-risk legislation will be supplemented by an explicit, non-exhaustive list (which includes general-purpose AI models, such as large language models like ChatGPT). While the Australian Government can choose to depart from the recommended formulation, whatever it classifies as “high-risk” will be subject to stringent regulation.

AI Systems Management Plan

The best way for a business to minimise the risk of adverse events arising from AI adoption is to formulate a robust AI Systems Management Plan. Key elements of this plan are:

  1. AI Policy

The Report makes it evident that the transparency of AI systems to the end user is of paramount importance in the development and use of AI systems by businesses. It is ‘a key principle at the highest levels of international governance and for industry when it comes to responsible AI adoption’. As with the issue of bias and discrimination, the issue of transparency is particularly significant in the context of Automated Decision Making or AI-assisted decision making.

In essence, businesses are likely to be required to understand the nature of the data, connections, algorithms and computations that generate a system’s behaviour, including its techniques and logic: that is, how the AI system generates its output and what potential biases may be present. This will then need to be communicated to clients and customers, as part of transparency to the public, through an AI Policy.
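To illustrate what recording this information could look like in practice, the following is a minimal Python sketch of an internal record a business might keep for each AI system and render into a plain-language disclosure for an AI Policy. The class, field names and example system are hypothetical assumptions, not drawn from the Report.

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    """Hypothetical internal record of an AI system's key transparency details."""
    name: str
    purpose: str
    model_type: str
    data_sources: list[str]
    known_limitations: list[str]
    potential_biases: list[str]
    human_oversight: str

    def disclosure(self) -> str:
        """Render a plain-language summary suitable for an AI Policy or customer notice."""
        return (
            f"System: {self.name}\n"
            f"Purpose: {self.purpose}\n"
            f"Model type: {self.model_type}\n"
            f"Data sources: {', '.join(self.data_sources)}\n"
            f"Known limitations: {', '.join(self.known_limitations)}\n"
            f"Potential biases: {', '.join(self.potential_biases)}\n"
            f"Human oversight: {self.human_oversight}"
        )


# Example: a hypothetical loan triage assistant.
record = AISystemRecord(
    name="Loan triage assistant",
    purpose="Prioritise incoming loan applications for human review",
    model_type="Gradient-boosted classifier",
    data_sources=["Application form data", "Internal credit history"],
    known_limitations=["Lower accuracy for applicants with thin credit files"],
    potential_biases=["May under-score applicants with short employment history"],
    human_oversight="All declined applications are reviewed by a credit officer",
)
print(record.disclosure())
```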

  2. AI, privacy, and data breaches

The adoption of AI systems requires organisations to have robust data security policies and practices in place, both to protect data from external threats and to restrict access by internal employees who should not have access to the data stored and generated by these systems.

If not managed correctly, AI systems carry an increased risk of data breaches and privacy violations, data manipulation, and regulatory compliance failures. Accordingly, any business implementing AI Systems will need to ensure it has cybersecurity measures (including end-to-end encryption), a compliant Privacy Policy, and a Data Breach Response Plan in place to minimise these risks.
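As one narrow, concrete illustration of such measures, the sketch below encrypts a customer record before it is stored by, or passed between, AI system components, using the third-party cryptography package. The record and key handling are hypothetical, and symmetric encryption at rest is only one layer of a broader security posture, not full end-to-end encryption on its own.

```python
# Minimal sketch: encrypting a sensitive record at rest before it is stored by,
# or passed between, AI system components. Requires the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key would come from a secrets manager with strict access
# controls, not be generated inline alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

customer_record = b'{"name": "Jane Citizen", "dob": "1990-01-01"}'  # hypothetical data

token = cipher.encrypt(customer_record)   # ciphertext is safe to store
restored = cipher.decrypt(token)          # only holders of the key can read it

assert restored == customer_record
```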

Further, any IT contracts entered into by the business will also need to deal with these matters in their terms and conditions, with appropriate warranties and indemnities provided.

  3. Internal use protocols

While the above two policies/documents are integral to the implementation of AI Systems from a client/customer perspective, businesses also need to be aware of the use of AI Systems by employees and in relation to employees.

The former is best dealt with through an internal addendum to the AI Policy that focuses on procedures and protocols for employees in using, and proposing the use of, AI Systems. By establishing set parameters around these issues, businesses can responsibly guide employees in their use of AI.
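By way of illustration only, such parameters could be reduced to something as simple as an approved-tools and prohibited-data check before information is submitted to an external AI tool. The tool names and data categories below are hypothetical placeholders, not recommendations.

```python
# Hypothetical internal-use parameters an AI Policy addendum might translate
# into a simple pre-submission check for employees' use of external AI tools.
APPROVED_TOOLS = {"internal-chat-assistant", "approved-code-copilot"}
PROHIBITED_DATA = {"client personal information", "unreleased financials", "privileged advice"}


def may_submit(tool: str, data_categories: set[str]) -> bool:
    """Allow a submission only if the tool is approved and no prohibited data is included."""
    return tool in APPROVED_TOOLS and not (data_categories & PROHIBITED_DATA)


# Example: an employee proposing to paste privileged material into an unapproved chatbot.
print(may_submit("public-chatbot", {"privileged advice"}))        # False
print(may_submit("internal-chat-assistant", {"meeting notes"}))   # True
```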

The latter is dealt with in the next section.

AI adoption in the workplace

One area intrinsic to all businesses that received increased focus from the committee is the use of AI in the workplace. Three of the 13 recommendations in the committee’s report relate to this area and are set out below, along with our practical considerations for businesses in the process of adopting AI in the workplace, whether in relation to hiring, firing, disciplinary matters, workforce consultation or workplace surveillance.

Recommendation 5: That the Australian Government ensure that the final definition of high-risk AI clearly includes the use of AI that impacts on the rights of people at work, regardless of whether a principles-based approach to the definition is adopted.

Practical considerations for businesses

No matter what the proposed legislation might say, employers will be required to proactively identify and minimise potential risks posed by the use of AI, including impacts of AI on workers’ rights and working conditions. This is particularly important where AI systems are used for workforce planning, management and surveillance in the workplace.

Employers should also be aware of potential bias and other inaccuracies in AI design which could lead to discrimination where human decision-making is informed by AI. This applies not only to “recruitment and hiring, promotions, transfers, pay and termination”, but to all aspects of employment, even payroll processes.
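One simple way a business might begin to surface this kind of bias is a selection-rate comparison across candidate groups, sometimes described as the “four-fifths rule”. The sketch below is illustrative only: the group labels, figures and 0.8 threshold are assumptions, and such a check is no substitute for a proper audit or legal advice.

```python
# Illustrative selection-rate comparison across candidate groups (the
# "four-fifths rule"). Group labels, counts and the 0.8 threshold are
# assumptions for the example only.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0


def impact_ratios(group_rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    best = max(group_rates.values())
    return {group: rate / best for group, rate in group_rates.items()}


rates = {
    "group_a": selection_rate(selected=30, applicants=100),
    "group_b": selection_rate(selected=18, applicants=100),
}

for group, ratio in impact_ratios(rates).items():
    flag = "review for potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")
```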

Recommendation 6: That the Australian Government extend and apply the existing work health and safety legislative framework to the workplace risks posed by the adoption of AI.

Practical considerations for businesses

Risk management and safety are integral considerations for businesses looking to introduce or increase the use of AI in the workplace. Therefore, where employers are using AI to replace or take on a role a human was once tasked with, the regulation of this category of AI Systems should be approached within a risk management framework. Such a framework considers proactive control measures to eliminate or minimise risks associated with the use of AI in the workplace.

Recommendation 7: That the Australian Government ensure that workers, worker organisations, employers and employer organisations are thoroughly consulted on the need for, and best approach to, further regulatory responses to address the impact of AI on work and workplaces.

Currently, there is no nationally consistent framework for ensuring transparency, consultation and negotiation with workers in relation to the use and management of AI in the workplace. On this basis, the committee has identified a significant need for consultation with workforces in relation to proposed uses of AI in the workplace.

Practical considerations for businesses

Employers should maintain open lines of communication and prioritise consultation with workers about the use of AI. As part of the consultation process, employers should ensure transparency where the implementation of AI could lead to changes to existing roles or redundancies.

Additionally, it is recommended that employers’ policies are reviewed and updated to include robust provisions that ensure workers are consulted prior to any redundancies or restructures arising from the introduction of AI into the workforce.

While these recommendations are only proposed reforms, they will be highly influential in the process of creating AI-specific regulations in Australia, alongside the proposed Mandatory Guardrails and the Voluntary AI Safety Standard.

Accordingly, Australian organisations should be aware that the use of AI in, and with respect to, the workplace will not relieve them of their obligations. Rather, it seems employers will bear an increased responsibility to consult employees, and to monitor and control the use of AI that impacts the rights of people at work.

Staying ahead of the curve

To ensure future compliance, Australian organisations which use or intend to use AI should be aware of these recommendations and consider:

  • the potential future obligations arising from them, and
  • how to establish a plan to appropriately address these issues now, and into the future.

Macpherson Kelley’s Employment and Safety teams have extensive experience with employment and WHS obligations, and our IP, Trade & Technology team is ready to assist with developing personalised policies to support AI adoption and preparatory plans for organisations to improve their AI governance in light of upcoming changes.

If you are concerned about your organisation’s compliance with current law or its exposure under the impending regulations, or wish to discuss the above matters further, our teams are available to help.

The information contained in this article is general in nature and cannot be relied on as legal advice nor does it create an engagement. Please contact one of our lawyers listed above for advice about your specific situation.
