Eugene Volokh, University of California, Los Angeles, School of Law, has published “Large Libel Models? Liability for AI Output” at 3 Journal of Free Speech Law 489 (2023). Here is the abstract.
“Large Language Model” AI programs routinely invent false and defamatory allegations, complete with invented quotes and invented newspaper articles. Indeed, lawsuits have already been filed over alleged libels created by ChatGPT and Bing. Should such AI programs’ creators be liable for defamation based on their programs’ output? In this article, I begin by analyzing this question under the current rules of U.S. defamation law. I tentatively argue that, when the “actual malice” standard applies, the standard might be satisfied if an AI company has received actual notice that its software is producing particular spurious information but has refused to act. In practice, this would require such companies to implement a “notice-and-blocking” system, loosely similar to the “notice-and-takedown” system the DMCA requires for copyright infringement. I also discuss the possibility of negligence liability, where libel law authorizes it, by analogy to negligent-design product liability. To be sure, allowing such liability could yield substantial costs, particularly because it may require lay judges and juries to evaluate complicated technical claims about which designs are feasible. (Such concerns of course mirror familiar concerns about legal liability as to other products, such as pharmaceuticals or cars, or as to services, such as surgical procedures.) Part II tentatively discusses some arguments for changing the law, whether by courts, legislatures, or administrative agencies. Finally, Part III offers some similarly tentative thoughts about how this analysis might apply to other claims, such as false light, disclosure of private facts, the right of publicity, or negligence.
Download the article from SSRN at the link.