Australia’s National Artificial Intelligence Centre has published the Guidance for AI Adoption, which sets out six essential practices for responsible AI governance and adoption.
It recommends identifying who is accountable, testing and monitoring AI systems, and maintaining human control.
The guidance includes an AI screening tool, an AI policy guide and template, and an AI register template.
The Guidance identifies the following risks:
- The AI system not being sufficiently secure;
- Misleading outputs or statements;
- Harmful outputs;
- Misuse of data or infringement of model or system;
- Biased, incorrect or poor-quality outputs;
- The AI system not being accessible to individuals or groups;
- Risks arising from engagement with others in the AI supply chain.

Author: David Jacobson
Principal, Bright Corporate Law
The information contained in this article is not legal advice. It is not to be relied upon as a full statement of the law. You should seek professional advice for your specific needs and circumstances before acting or relying on any of the content.
