If you haven’t heard of ChatGPT then you are living under a rock. This artificial intelligence chatbot has taken the world by storm and is being experimented with by millions of users. Developed by OpenAI, ChatGPT is a natural language processing tool driven by AI technology and trained to hold human-like conversations.
When a user types a prompt, ChatGPT generates a detailed response to the question by drawing on data available across the internet. People have even used this writing tool to help draft their emails and essays.
Yet the risk of a landmark defamation lawsuit over ChatGPT is unavoidable. A primary difficulty with this technology is that the chatbot is not programmed to differentiate between truth and inaccurate data. OpenAI has cautioned users about the chatbot’s limitations with a disclaimer indicating that it may generate ‘plausible sounding but incorrect or nonsensical answers’.

Recently, ChatGPT erroneously identified the whistleblower Brian Hood as a perpetrator who was ‘involved in the payment of bribes to officials in Indonesia and Malaysia’ and sentenced to prison for bribery and corruption. To say that the conclusion drawn by the artificial intelligence was wrong is an understatement. It was not merely incorrect but the inverse of the truth: Hood was the person who reported the bribery, not the one who committed it.
Brian Hood, now a Victorian mayor, said the false claims by ChatGPT had a highly negative impact on his reputation as a prominent figure in the community. In response, his legal representatives sent a concerns notice on 21 March 2023, now an essential first step before a plaintiff can commence defamation proceedings.
Should this matter go to court, Hood would need to prove that OpenAI was the publisher of the defamatory material and that it caused or was likely to cause serious harm to his reputation.
This would, we expect, be a complex case, given that binding precedent from the High Court of Australia found that Google was not the publisher of the webpages it links to.[1] Google, the Court held, was simply providing access to the contents of another’s web page, which does not constitute participation in the communication process between that page and the user.[2]

It would be interesting to see whether this reasoning could similarly apply to ChatGPT, which generates responses from different information sources available on the internet. Is the legal situation different when the AI forms its own conclusions, including obviously nonsensical ones such as mistaking the whistleblower for the perpetrator? Does an artificial intelligence operating with ‘a mind’ of sorts change the legal nature of its role, compared with dissemination by a social media platform? The old ‘noticeboard’ understanding of publication, it could be argued, may not apply where the AI ‘publisher’ forms new content. The information is not merely being spread; it is being created anew.
If the matter were heard in court, judges would be compelled to evaluate whether the operators of AI chatbots can be held responsible for any potentially defamatory statements the technology produces.
Whether or not this case becomes Australia’s first landmark defamation case involving AI technology, similar issues are certain to arise. Significant reforms will likely be needed to regulate how AI chatbots are managed and controlled.
[1] *Google LLC v Defteros* [2022] HCA 27.
[2] Ibid [53] (Kiefel CJ and Gleeson J).
*Disclaimer: This is intended as general information only and is not to be construed as legal advice. The above information is subject to change over time. You should always seek professional advice before taking any course of action.*