The Privacy Commissioner has published a case study of what can go wrong when using publicly available GenAI tools like ChatGPT in the workplace.
In the fictional case study, a car insurance company, CarCover, permits its employees to use individual accounts on publicly available GenAI products to assist with routine tasks, such as summarising documents and producing simple reports.
Contrary to CarCover's policy, an employee uploaded a customer's financial hardship application, including information about the customer's health and family circumstances, to ChatGPT to generate a summary report.
By uploading the application to ChatGPT, CarCover disclosed the customer's sensitive information without consent; worse, the summary report that was generated downplayed relevant and key aspects of the customer's application.
Relying on that summary report, CarCover refused the customer's hardship application, causing the customer significant financial and emotional distress.
To reduce the likelihood of such a scenario, the Office of the Australian Information Commissioner (OAIC) recommends that businesses develop an internal policy on the use of GenAI tools and discuss it at staff training sessions.

Author: David Jacobson
Principal, Bright Corporate Law
The information contained in this article is not legal advice. It is not to be relied upon as a full statement of the law. You should seek professional advice for your specific needs and circumstances before acting or relying on any of the content.
