Mark A. Lemley and Bryan Casey, both of Stanford Law School, have published "You Might Be a Robot." Here is the abstract.
As robots and artificial intelligence (AI) increase their influence over society, policymakers are increasingly regulating them. But to regulate these technologies, we first need to know what they are. And here we come to a problem. No one has been able to offer a decent definition of robots and AI, not even experts. What's more, technological advances make it harder and harder each day to tell people from robots and robots from "dumb" machines. We've already seen legal definitions written with one target in mind inadvertently sweep in others, with disastrous results. In fact, if you're reading this, you're (probably) not a robot, but certain laws might already treat you as one.

Definitional challenges like these aren't exclusive to robots and AI. But today, all signs indicate we're approaching an inflection point. Whether it's citywide bans on "robot sex brothels" or nationwide efforts to crack down on "ticket-scalping bots," we're witnessing an explosion of interest in regulating robots, human enhancement technologies, and all things in between. And that, in turn, means that typological quandaries once confined to philosophy seminars can no longer be dismissed as academic.

Want, for example, to crack down on foreign "influence campaigns" by regulating social media bots? Be careful not to define "bot" too broadly (like the California legislature recently did), or the supercomputer nestled in your pocket might just make you one. Want, instead, to promote traffic safety by regulating drivers? Be careful not to presume that only humans can drive (as our Federal Motor Vehicle Safety Standards do), or you may soon exclude the best drivers on the road.

In this Article, we suggest that the problem isn't simply that we haven't hit upon the right definition. Instead, there may not be a "right" definition for the multifaceted, rapidly evolving technologies we call robots or AI. As we'll demonstrate, even the most thoughtful definitions risk being overbroad, underinclusive, or simply irrelevant in short order. Rather than trying in vain to find the perfect definition, we instead argue that policymakers should do as the great computer scientist Alan Turing did when confronted with the challenge of defining robots: embrace their ineffable nature.

We offer several strategies to do so. First, whenever possible, laws should regulate behavior, not things (or as we put it, regulate verbs, not nouns). Second, where we must distinguish robots from other entities, the law should apply what we call Turing's Razor, identifying robots on a case-by-case basis. Third, we offer six functional criteria for making these types of "I know it when I see it" determinations and argue that courts are generally better positioned than legislators to apply such standards. Finally, we argue that if we must have definitions rather than apply standards, they should be as short-term and contingent as possible. That, in turn, suggests regulators, not legislators, should play the defining role.
Download the article from SSRN at the link.