The Australian Government has published a Proposals Paper, "Introducing mandatory guardrails for AI in high-risk settings". It has also published a Voluntary AI Safety Standard.
It is proposed that in designating an AI system as high-risk due to its use, regard must be given to:
a. The risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations;
b. The risk of adverse impacts to an individual’s physical or mental health or safety;
c. The risk of adverse legal effects, defamation or similarly significant effects on an individual;
d. The risk of adverse impacts to groups of individuals or collective rights of cultural groups;
e. The risk of adverse impacts to the broader Australian economy, society, environment and rule of law;
f. The severity and extent of those adverse impacts outlined in principles (a) to (e) above.
Proposed mandatory guardrails for high-risk AI
Under the proposals, organisations developing or deploying high-risk AI systems would be required to:
1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
2. Establish and implement a risk management process to identify and mitigate risks;
3. Protect AI systems, and implement data governance measures to manage data quality and provenance;
4. Test AI models and systems to evaluate model performance and monitor the system once deployed;
5. Enable human control or intervention in an AI system to achieve meaningful human oversight (a sketch of one such oversight gate appears after this list);
6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
7. Establish processes for people impacted by AI systems to challenge use or outcomes;
8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
9. Keep and maintain records to allow third parties to assess compliance with guardrails;
10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails.
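To make guardrails 5 and 9 more concrete, here is a minimal Python sketch of a human-oversight gate paired with an append-only decision log. Everything in it, including the `Decision` fields, the `ai_decisions.jsonl` log path and the 0.9 confidence threshold, is an illustrative assumption rather than anything prescribed by the Proposals Paper.

```python
# A minimal, hypothetical sketch of guardrails 5 and 9: low-confidence
# AI decisions are held for human review, and every decision is written
# to an append-only log that a third party could later inspect.
# The Decision fields, the log path and the 0.9 threshold are all
# illustrative assumptions, not requirements from the Proposals Paper.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Decision:
    subject_id: str                 # the person or case the decision affects
    outcome: str                    # what the AI system recommended
    confidence: float               # the model's own confidence score
    reviewed_by: str | None = None  # set once a human signs off

AUDIT_LOG_PATH = "ai_decisions.jsonl"   # guardrail 9: keep records
REVIEW_QUEUE: list[Decision] = []       # guardrail 5: human oversight

def record(decision: Decision) -> None:
    """Append a timestamped, machine-readable record of the decision."""
    entry = {"timestamp": time.time(), **asdict(decision)}
    with open(AUDIT_LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

def decide_with_oversight(decision: Decision, threshold: float = 0.9):
    """Act on confident decisions; park the rest for a human reviewer."""
    if decision.confidence < threshold:
        REVIEW_QUEUE.append(decision)   # held until a person approves it
        return None
    record(decision)
    return decision

def approve(decision: Decision, reviewer: str) -> Decision:
    """A named human accepts a queued decision, which is then logged."""
    decision.reviewed_by = reviewer
    record(decision)
    return decision
```

The design point is simply that the system cannot act on a marginal decision without a named person in the loop, while the log gives third-party assessors (guardrail 9) and affected individuals (guardrail 7) something concrete to examine.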
The Voluntary Standard consists of 10 voluntary guardrails that apply to all organisations throughout the AI supply chain. The first nine mirror the proposed mandatory guardrails; only the tenth differs:
1. Establish, implement, and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
2. Establish and implement a risk management process to identify and mitigate risks.
3. Protect AI systems, and implement data governance measures to manage data quality and provenance.
4. Test AI models and systems to evaluate model performance and monitor the system once deployed (a sketch of one such monitoring loop appears after this list).
5. Enable human control or intervention in an AI system to achieve meaningful human oversight.
6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
7. Establish processes for people impacted by AI systems to challenge use or outcomes.
8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
9. Keep and maintain records to allow third parties to assess compliance with guardrails.
10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.
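Several of these guardrails describe engineering practices as much as governance ones. As one illustration of the monitoring half of guardrail 4, here is a minimal Python sketch, on the assumption that the deployed system's predictions can eventually be compared against ground-truth outcomes; the class name, window size, baseline and tolerance are all invented for illustration.

```python
# A minimal, hypothetical sketch of the monitoring half of guardrail 4:
# compare the live accuracy of a deployed model, over a sliding window,
# against the accuracy measured before deployment. The window size,
# baseline and tolerance are invented for illustration.
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy: float,
                 tolerance: float = 0.05, window: int = 500):
        self.baseline = baseline_accuracy   # accuracy at sign-off
        self.tolerance = tolerance          # acceptable degradation
        self.outcomes: deque[bool] = deque(maxlen=window)

    def observe(self, prediction, actual) -> None:
        """Record whether a live prediction matched the real outcome."""
        self.outcomes.append(prediction == actual)

    def degraded(self) -> bool:
        """True once windowed accuracy falls below baseline - tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                    # not enough evidence yet
        live = sum(self.outcomes) / len(self.outcomes)
        return live < self.baseline - self.tolerance
```

The sliding window smooths out noise from individual cases, so escalation is triggered only by a sustained drop in performance, which is the kind of post-deployment evidence record-keeping under guardrail 9 would need to capture.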
Author: David Jacobson
Principal, Bright Corporate Law
The information contained in this article is not legal advice. It is not to be relied upon as a full statement of the law. You should seek professional advice for your specific needs and circumstances before acting or relying on any of the content.