Autopilot Verdicts and the Question of Control
We have all seen the recent verdicts and settlements involving Tesla’s “Autopilot” feature. A natural question follows: how do those outcomes square with fully autonomous taxi services such as Waymo or Zoox (Amazon’s entry into the market)? The answer largely turns on how much assistance—or control—the vehicle’s systems exercise over the driving task.
The most recent verdict out of Miami arose from a 2023 accident and resulted in total damages of $240 million, with the jury allocating 67% of the fault to the driver and 33% to Tesla. Notably, although the crash occurred in 2023, the vehicle involved was a 2019 model. That distinction matters, because the level of “Autopilot” functionality available depends heavily on both software and hardware. Older Tesla models vary significantly in capability based on which upgrades have been installed. In fact, some older Teslas are sought after by enthusiasts precisely because they lack more advanced automation features.
As Autopilot evolves, it moves along a spectrum—from driver assistance toward increasingly autonomous operation. That progression is where the legal tension lies. (See NPR, Tesla Autopilot Crash Jury Awards $240 Million, Aug. 2, 2025.)
Driver Assistance vs. Autonomous Operation Under the Law
Most states still require that a motor vehicle be operated by a properly licensed human driver, not a machine. While several states have begun pilot programs for fully autonomous vehicles, they do so with carefully crafted statutory definitions.
For example, California Vehicle Code § 38750(a)(2)(B) expressly defines what an autonomous vehicle is not, stating that it does not include vehicles equipped solely with driver-assistance features such as blind spot monitoring, automated emergency braking, adaptive cruise control, lane-keeping assistance, or similar systems that enhance safety but are not capable—individually or collectively—of driving the vehicle without active human control or monitoring.
California law also defines an “operator” as “the person who is seated in the driver’s seat, or, if there is no person in the driver’s seat, causes the autonomous vehicle to engage.” Cal. Veh. Code § 38750(a)(4).
Fully Autonomous Vehicles and Emerging Liability Issues
Some autonomous taxi services are developing purpose-built vehicles that do not even include a driver’s seat. These vehicles are generally intended for low-speed, intra-city transportation. That raises a critical question: who is the “operator” when such a vehicle is engaged—the passenger requesting the ride, the service provider operating the fleet, or the manufacturer that programmed the vehicle?
In the recent Florida Tesla verdict, the vehicle owner was behind the wheel, and the Autopilot system was never intended to operate the vehicle without an attentive driver. Even so, Tesla’s exposure may be heightened by its choice to market the system as “Autopilot.” As Tesla continues to expand the system’s capabilities and assume greater control over vehicle operation, it is reasonable to expect that juries will allocate a larger share of fault to the manufacturer.
As for liability in the context of fully autonomous vehicles, the law remains unsettled. Under California’s statutory framework, one could argue that the customer who orders a ride “causes” the vehicle to engage. Yet the customer’s interaction is typically limited to a few taps on a smartphone. Meanwhile, autonomous vehicles still struggle with real-world complexities such as construction zones, accident scenes, and directions from law enforcement.
Ultimately, liability should follow control—who has it and who exercises it. For now, however, the case law simply does not exist, leaving these questions to be resolved in future litigation.
If you have questions about this article, please contact Ryan Keesee (rkeesee@setlifflaw.com) at (804) 377-1268 or Steve Setliff (ssetliff@setlifflaw.com) at (804) 377-1261.
© 2026 Setliff Law, P.C.