The field of personal robotics holds enormous promise, but robot-related injuries have already begun to occasion litigation. In 1999, for instance, a woman settled a lawsuit with her employer after a mail delivery robot allegedly pinned her against a wall, fracturing her toe and causing other injuries. Such incidents will multiply as personal robots reach the millions that United Nations and other forecasts anticipate within the next few years. Lawsuits will invariably name robotics manufacturers as defendants, and early plaintiffs are likely to include highly sympathetic populations such as the elderly.
As with personal computers, personal robots will be designed to be versatile. Many robots under development are essentially platforms that can be programmed, instructed, or remotely operated; others learn from their environment or respond to context; still others run on open source (i.e., multiple contributor) software. This blending of control means that there is no obvious way to apply standard concepts of tort law – such as foreseeability, misuse, disclosure, and intentionality – to some of the most likely scenarios involving robot-related harm. The sheer complexity of robotic systems also makes it possible that safety measures could prevent harm in some contexts but help cause it in others.
The resulting legal uncertainty could discourage the flow of capital into robotics or narrow robot functionality, placing the United States behind other countries with a higher bar to litigation (and a head start). This essay explores the legal infrastructure best suited to prioritize safety and compensate victims while preserving the conditions for innovation and investment. The essay tentatively proposes a system modeled on our experience with the Internet. The generalized immunity that websites enjoy under the Communications Decency Act for user content and filtering has permitted web services to proliferate and thrive. A “section 230 for personal robotics” could limit the manufacturer’s legal risk where the owner programmed, instructed, or “taught” the robot to take the action at issue, or where the harm resulted from a valid safety mechanism.