Sandra Wachter, Brent Mittelstadt, and Chris Russell, all of the Oxford Internet Institute, have published Do large language models have a legal duty to tell the truth? Here is the abstract.
Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education, and shared social truth in democratic societies. LLMs produce responses that are plausible, helpful, and confident, but that contain factual inaccuracies, misleading references, and biased information. These subtle mistruths are poised to cumulatively degrade and homogenise knowledge over time. This article examines the existence and feasibility of a legal duty for LLM providers to create models that “tell the truth.” We argue that LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. We define careless speech against “ground truth” in LLMs and related risks including hallucinations, misinformation, and disinformation. We assess the existence of truth-related obligations in EU human rights law and the Artificial Intelligence Act, Digital Services Act, Product Liability Directive, and Artificial Intelligence Liability Directive. Current frameworks contain limited, sector-specific truth duties. Drawing on duties in science and academia, education, archives and libraries, and a German case in which Google was held liable for defamation caused by autocomplete, we propose a pathway to create a legal truth duty for providers of narrow- and general-purpose LLMs.
Download the article from SSRN at the link.