Insurance Technology Featured Article

Blame the Car: Self-Driving Cars Mean Liability Shift

September 09, 2015

There's an old saying that a poor workman blames his tools, and it's often adapted to other fields: a poor musician blames his instrument, a poor driver blames his car. The rise of the self-driving car, however, has made that blame stick a little better than it ever has, and that means a sea change is afoot in determining liability in accidents involving self-driving cars.

Admittedly, the idea of an accident involving a self-driving car sounds like a remote possibility at best. Self-driving cars are specifically set up to prevent accidents; the cars are designed to know where other vehicles are and how those positions relate to their own, so as not to try to occupy the same space at the same time. Speed is constant, acceleration smooth, and every traffic law, no matter how arcane, is part of the vehicle's basic operating system.
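To make that spacing idea concrete, here is a minimal sketch of a following-distance check; the braking rate, safety margin, and function name are illustrative assumptions, not how any manufacturer's software actually works.

# Toy illustration of the spacing idea described above: given the following
# car's speed and the gap to the car ahead, check whether its stopping
# distance (plus a margin) fits inside that gap. The braking rate and safety
# margin are assumptions for illustration only.

def safe_gap(follower_speed_mps: float,
             gap_m: float,
             max_decel_mps2: float = 6.0,   # assumed comfortable braking rate
             margin_m: float = 2.0) -> bool:
    """Return True if the follower could stop within the available gap."""
    stopping_distance = follower_speed_mps ** 2 / (2 * max_decel_mps2)
    return stopping_distance + margin_m <= gap_m

# Example: traveling 25 m/s (about 56 mph) with a 60 m gap to the car ahead.
print(safe_gap(25.0, 60.0))   # True: roughly 52 m to stop, plus margin, fits in 60 m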

But self-driving vehicles are subject to some unusual hazards: faults in the software, or even an outright hacking attempt, can leave the vehicle performing improperly, and that in turn opens up the possibility of accidents. Since the vehicle is doing the driving, it's difficult to pin liability on the human element, which is now basically just a passenger. So insurers are looking at new models for determining liability in the event of an accident.

For instance, some models would require manufacturers to monitor vehicles after the sale to help spot a hack in progress. Other models would continue to point to the car's owner as the ultimately liable party, though that may be hard to make stick in an environment where the owner can't access the software, let alone ensure its security. Increasing automation may also erode a driver's ability to maintain vigilance (why bother watching where the car is going when the car is doing the watching itself?) and thus to prevent accidents.

Insurers are responding with new programs that take advantage of the technology, like insurance priced on miles driven or on the display of positive driving behaviors. These programs require a certain degree of monitoring, however, and customers are proving somewhat resistant to that idea.
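As a purely illustrative example, here is a minimal sketch of how a pay-per-mile premium with simple behavior adjustments might be computed; the base fee, per-mile rate, surcharge, and discount figures are all assumptions, not any insurer's actual pricing model.

# Illustrative sketch of a usage-based premium calculation. All rates,
# thresholds, and field names here are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class MonthlyTelematics:
    miles_driven: float       # total miles reported by the vehicle
    hard_brake_events: int    # count of sudden decelerations
    night_miles: float        # miles driven between midnight and 5 a.m.

def monthly_premium(data: MonthlyTelematics,
                    base_fee: float = 25.0,      # assumed flat monthly fee
                    per_mile_rate: float = 0.06  # assumed per-mile rate in dollars
                    ) -> float:
    """Compute a pay-per-mile premium with simple behavior adjustments."""
    premium = base_fee + data.miles_driven * per_mile_rate

    # Hypothetical surcharge: 2% per hard-brake event, capped at 20%.
    surcharge = min(0.02 * data.hard_brake_events, 0.20)

    # Hypothetical discount when almost no late-night driving is recorded.
    discount = 0.05 if data.night_miles < 10 else 0.0

    return round(premium * (1 + surcharge - discount), 2)

# Example: 600 miles driven, 3 hard-brake events, 4 night miles -> 61.61
print(monthly_premium(MonthlyTelematics(600, 3, 4)))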

Indeed, half the point of a self-driving car is that the car is the one doing the driving, so blaming a human in this case is almost like blaming a baby in a car seat for a freeway pile-up. Some would retort, though, that the human is supposed to be the control mechanism of last resort, and that the human behind the wheel, even if the wheel is mainly moved by automated systems, is where the buck stops, so to speak. But if the human takes over for the car, has the human suddenly assumed liability in attempting to prevent a wreck? Or is the human better off with a hands-off approach, allowing the manufacturer to take the liability bullet instead?

These questions will all need to be answered, and in short order, before the arrival of the automated car on the market renders them moot. Big changes are likely coming to the insurance market, and soon.

Edited by Stefania Viscusi
