Government Regulation of AI in Australia

The Australian Government has published its interim response to proposals for regulating Artificial Intelligence (AI), aimed at ensuring that the development and deployment of AI are safe and responsible.

AI is defined as “an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming. AI systems are designed to operate with varying levels of automation.”

The Government’s response is targeted at the use of AI in high-risk settings, where harms could be difficult to reverse, while ensuring that the vast majority of low-risk AI use continues to flourish largely unimpeded.

The Government recognises that many applications of AI do not present risks that require a regulatory response. For example, AI can help monitor and measure biodiversity or help automate internal business processes.

For the purposes of discussion, ‘high-risk’ is defined by reference to impacts that are ‘systemic, irreversible or perpetual’. High-risk settings could include:
• certain critical infrastructure (water, gas, electricity)
• medical devices
• systems determining access to educational institutions or recruiting people
• systems used in law enforcement, border control and administration of justice
• biometric identification
• emotion recognition.

AI risks include:
• inaccuracies in model inputs and outputs
• biased or poor-quality model training data
• model drift, where model performance degrades over time
• discriminatory or biased outputs
• a lack of transparency about how and when AI systems are being used.

“Mandatory guardrails” to promote the safe design, development and deployment of AI systems will be considered, including possible requirements relating to:

  • Testing – testing of products to ensure safety before and after release.
  • Transparency – transparency regarding model design and data underpinning AI applications; labelling of AI systems in use and/or watermarking of AI generated content.
  • Accountability – training for developers and deployers of AI systems, possible forms of certification, and clearer expectations of accountability for organisations developing, deploying and relying on AI systems.


Author: David Jacobson
Principal, Bright Corporate Law
The information contained in this article is not legal advice. It is not to be relied upon as a full statement of the law. You should seek professional advice for your specific needs and circumstances before acting or relying on any of the content.

