ChrisWeigant.com

Who Will Pay For Self-Driving Cars' Accidents?

[ Posted Thursday, January 25th, 2018 – 16:27 UTC ]

We're all on the brink of entering a brave new world of self-driving cars, but what few have bothered to point out is that we're going to have to come up with an equally brave new world of legal liability in order to do so. Because nobody's really got an answer to a very basic legal question: if a self-driving car causes an accident, who gets sued? Who pays for damages and injuries? These are basic questions, but the answers are going to get complicated pretty fast.

The word "automobile" is a linguistic mashup that has never quite lived up to its own definition. The word was coined to indicate a vehicle that was "self-moving" -- as opposed to relying on a horse or locomotive to get around. But two separate language bases were used, which must have bugged semantic purists of the day no end. If you go with a Greek base, the term really should be "autokineton," but if you go with Latin then you'd wind up with an "egomobile." [Feel free to insert your own joke about owning, say, a Jaguar as an example of an "egomobile."] All kidding aside, self-driving (or autonomous) cars will finally achieve the fullest definition of automobile, because they truly will move themselves.

But up until now, the entire legal and auto insurance system has been built on a principle that will no longer apply -- the driver who is found to be at fault in an accident is the one liable for damages and injury costs. Responsible drivers buy insurance so that when such an accident happens, the insurance company pays these costs. This simple equation, however, will not work when you take the driver (and the driver's responsibility) out of the picture. If your new car has no steering wheel or pedals, then the "driver" is nothing more than another passenger -- and passengers are almost never at fault in an accident. So who is going to pay for the bent fender?

The most obvious answer is the company that built and sold the car. If they are certifying that their car is fully autonomous, then they are assuming the responsibility when something goes wrong, right? But perhaps the company as a whole isn't responsible -- perhaps it is the project manager who was in charge of the design and production of that particular vehicle. Or the software programmers who wrote the code the car uses to drive. Or maybe even the software quality assurance engineer whose job it was to find all the bugs in the self-driving software. [Full disclosure: I used to work in Silicon Valley as a bug-hunting S.Q.A. engineer myself, in a former career.] What would happen if an accident was caused by a software bug that had been identified but not fixed before the software was released? That sort of decision -- shipping with known bugs -- gets made every day in the computer industry. As the saying goes, bug-free software does not exist.

The issue is in the news currently because of two very flawed test cases out of California. In one, a motorcycle and a self-driving car collided. In the other, a semi-autonomous Tesla smacked into a parked emergency vehicle while in "Autopilot" mode. But, as I said, both cases are pretty legally flawed. In the latter case, the Tesla is not supposed to be fully autonomous, even though some drivers treat it as if it had that capability. And in this particular case, the guy was drunk as a skunk. In the former case, the motorcyclist was actually cited for being at fault, although he is challenging the police report's conclusion. That accident occurred due to a bizarre law in California that allows motorcycles to do what is called "lane-splitting," which is illegal almost everywhere else (and for good reason). If freeway traffic comes to a halt (or even a slow crawl), then motorcyclists start weaving between the lines of cars -- putting three vehicles into the space legally allocated for two (two lanes, two cars, and a bike splitting the lanes in between the two cars). What happened recently was that the autonomous vehicle made a move to change lanes, the motorcycle sped up to take its place, but then the car decided the lane change was unsafe and swerved back into the lane it had been leaving. The bike and car collided, although neither was going over 20 miles per hour.

Neither of these is a good legal test case, due to extraneous factors muddying the waters of who should be liable for an accident. Being drunk or lane-splitting introduces tangents to the basic legal question of who is responsible for an accident, in other words. But sooner or later a true test case is going to come along without these distracting issues.

The purest test case would be if two self-driving cars (from different manufacturers) smacked into each other. This would likely generate an enormous lawsuit between two giant corporations with much to lose, meaning no legal effort would be spared in the fight. But until we get to that point, it really would behoove us all to think about how the law needs changing to accommodate self-driving cars into the legal liability system.

Proponents of self-driving cars always use generalized statements to push the idea. Self-driving cars would be better than people-driven cars, they claim, because software never gets sleepy, drunk, inattentive, or emotional, which makes it a better driver than a human, overall. Accident rates would go down, they argue, the roads would be safer, and lives would be saved, so let's hurry up and get a bunch of self-driving cars on the road. But "overall" doesn't mean that horrific accidents won't happen due to self-driving cars. "Overall road deaths declined last year due to self-driving cars" isn't much solace if your loved one was killed by a self-driving car, to put this as starkly as possible.

The issue isn't as clear-cut as the layman might believe, either. Computers only do what they're told to do, after all -- by humans. Claiming computers are better than humans ignores the fact that all computers are programmed in advance -- by less-than-perfect humans. Software designers and programmers have to anticipate every single situation that the software will ever face. They have to think about what the correct thing to do is, and then program the software to direct the hardware (the car itself) to perform these tasks. But what happens when a situation pops up that the programmer didn't think about? Again, this happens all the time in the software industry. It's very hard to anticipate each and every circumstance the software will ever face, even for software that isn't at risk of harming or killing people. Even software designed for very simple tasks can have bugs, and driving a car is an incredibly complex task. So what does the car do when it faces a situation that it has not been adequately programmed to deal with?
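
To make that concrete for any non-programmers reading along, here is a bare-bones sketch (in Python) of the kind of fallback logic a designer might write for the situation-nobody-thought-of case. Every class, function, and action name in it is invented purely for illustration -- no real self-driving system is anywhere near this simple:

    class DrivingSituation:
        def __init__(self, recognized, obstacle_ahead, speed_mph):
            self.recognized = recognized          # did the perception system classify the scene?
            self.obstacle_ahead = obstacle_ahead  # is something in the car's path?
            self.speed_mph = speed_mph

    def choose_action(situation):
        """Map the current situation to a driving action."""
        if not situation.recognized:
            # The case nobody anticipated: someone still had to decide, in advance,
            # what the car does here -- typically some "minimal risk" fallback.
            return "slow_down_and_pull_over"
        if situation.obstacle_ahead:
            return "brake"
        return "continue"

    # An unclassifiable scene at highway speed still has to map to *something*:
    print(choose_action(DrivingSituation(recognized=False, obstacle_ahead=False, speed_mph=65)))

The point is that even the "we have no idea what this is" branch is a decision some human made in advance, long before the car ever encountered it.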

There's an even more serious problem to contemplate as well. This is actually a classic ethics question, although it is usually described using a railroad (with a switch that only allows a choice between two tracks). Removing the tracks from the equation: what should a self-driving car do if it comes whipping around a curve and has only two viable options -- it can swerve to avoid running over a baby carriage, but making that swerve means it will run over a group of five adults standing on the sidewalk. Given those two choices and no others (hitting the brakes won't stop the car in time, in other words), which should the car choose? Or you can frame it slightly differently: if the car rounds a curve while driving next to a cliff and finds the only two choices are to hit a baby carriage or to drive over the cliff (probably killing the driver and all passengers), which should it choose?

These are not scenarios that haven't been considered by the programmer, I should point out. These are scenarios where the programmer has to make an ethical choice -- who to put at serious risk of injury or death? The baby? The adults? The driver? That choice must be made in advance and programmed into the car. So when this rare event actually happens, whoever dies can claim in court that the programmer was recklessly indifferent to human life. The only real way for the programmer to get around this conundrum would be to build in software preferences that each individual driver could adjust. That would put the onus of the legal liability on the owner of the car, not the programmer or the company that built it. But this would mean sitting down in your new car and having to go through a list of possible tragedies and setting preferences for who lives and who dies -- which isn't a very pleasant prospect.
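
Stripped of all the engineering detail, such owner-adjustable preferences might look something like the following sketch (again in Python, and again with entirely made-up settings and names), which is exactly why the prospect is so unpleasant:

    # A hypothetical sketch of owner-set "ethics preferences" -- not any automaker's
    # actual design. The point is that somebody has to pick these settings.
    ETHICS_PREFERENCES = {
        "protect_occupants_first": True,    # flip this and the car may sacrifice its occupants instead
        "minimize_total_casualties": True,
    }

    def choose_maneuver(swerve_casualties, stay_casualties, occupants_at_risk_if_swerving):
        """Pick between two bad outcomes using the owner's stored preferences."""
        if ETHICS_PREFERENCES["protect_occupants_first"] and occupants_at_risk_if_swerving:
            return "stay_in_lane"
        if ETHICS_PREFERENCES["minimize_total_casualties"]:
            return "swerve" if swerve_casualties < stay_casualties else "stay_in_lane"
        return "stay_in_lane"

    # The cliff version: staying in the lane hits the carriage (1 person at risk),
    # swerving sends the car and its 2 occupants over the cliff.
    print(choose_maneuver(swerve_casualties=2, stay_casualties=1, occupants_at_risk_if_swerving=True))

Whoever sets those flags -- the programmer, the company, or the owner scrolling through a settings menu -- is making exactly the life-and-death choice described above, just ahead of time.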

There are other risks to allowing autonomous vehicles on the road that also haven't been given nearly the attention they deserve. Some companies which have been doing on-road testing have done so in locales that don't have much in the way of weather -- San Francisco and Phoenix, for instance. Self-driving cars are completely dependent on a network of sensors to inform the computer what is going on around it. What happens when the car hits a blizzard, and the sensors get covered up with snow every 15 minutes? What happens when freezing rain distorts the data the sensors are monitoring (imagine a thin sheet of ice over a lens)? Even just driving through mud could knock out vital sensors' ability to collect data. And this is before even contemplating computer misbehavior -- what happens when the chip wigs out and needs to be rebooted, if the car is traveling at highway speeds?

Don't get me wrong, I am not arguing for a Luddite solution of never allowing these cars on the road. They're going to happen, whether the legal system is ready for them or not, that much seems certain at this point. But in their eagerness to accommodate the corporations who have been testing such vehicles, state lawmakers have all but ignored the fact that our liability laws are going to need some serious upgrades to answer these questions sooner or later. The official California DMV website already has a list of over 50 collisions involving self-driving cars, and that's just in the testing phase. Few of these accidents have gotten much media attention so far (most were minor, and most were actually the fault of the other car's driver), but it is inevitable that sooner or later one will, in a big way.

There are a number of categories that new laws really should be drafted to address. What happens when "an act of God" causes an accident between two self-driving cars (such as both of them hitting a sheet of black ice and losing all control of their motion)? What happens when a car is in a situation that the programmers didn't plan for and it does something unexpected? What happens when a car is in a situation that was foreseen, and someone dies as a result of an intentional choice a programmer made years earlier? What happens when the computer freezes or malfunctions, causing an accident? The bottom line for all of these questions is: who is going to pay damages? Who will be found "at fault"? Will the automotive insurance industry just cease to exist when all cars are self-driving, because there will be no actual drivers left to insure? Or will you have to buy insurance directly from the manufacturer of these cars, who will foot the costs for accidents? This could be incredibly disruptive to two rather large industries, in other words.

These are just some of the questions state lawmakers should be considering when deciding whether to allow self-driving cars on the road. But, unfortunately, most state legislators don't really understand computer programming or how the law should treat these issues. So, for the most part, the necessary laws will likely not be in place when legal test cases arise from such accidents. In the brave new world of self-driving cars, who is going to pay when things go wrong? That's the basic question that really needs addressing before such cars are sold to consumers, but to date I haven't seen it adequately discussed, if at all.

-- Chris Weigant

 

Follow Chris on Twitter: @ChrisWeigant

 

38 Comments on “Who Will Pay For Self-Driving Cars' Accidents?”

  1. [1] 
    Chris Weigant wrote:

    Credit where credit is due:

    The "autokineton/egomobile" mismatch was pointed out by Robert Anton Wilson, in The Historical Illuminatus Chronicles. No, I didn't think that one up on my own!

    :-)

    -CW

  2. [2] 
    John M wrote:

    1) In the scenarios with the baby carriage, I can see where a case can be made that the logical choice would be to program whatever option leads to the fewest deaths or injuries for the greatest number of people. In other words, a group of adults would be chosen to be saved from harm or injury over a single baby in a carriage every time. Or, to put it in Star Trek terms, the needs of the many outweigh the needs of the one.

    2) Similarly, in terms of liability -- who is held financially responsible -- I think the majority of that is increasingly going to fall on whoever actually owns and operates the car, probably a transport company. Actual individual car ownership is going to decline, and in fact already is declining among the younger generation, as more people turn to leasing cars, car or ride sharing, Uber or Lyft services, etc. People in the cars are going to be seen more and more as just passengers, contracting for the use of the car for limited times and specific purposes. More and more people will no longer actually own an individual car that merely sits in their driveway when not in use, especially in the big cities.

  3. [3] 
    John M wrote:

    Recently we have seen in many locations the major conflict being between Uber drivers and Taxi companies.

    3) But what happens if the major players start being the car rental companies like Hertz, with fleets of self driving cars available at a moment's notice from an app on your smartphone? They would also have the deep pockets to handle the liability issues through blanket insurance policies purchased at bulk discounts.

  4. [4] 
    C. R. Stucki wrote:

    Or, maybe the whole cockamamie idea will fade away, and common sense will prevail. Personally, I'd much prefer that all those bored high-tech guys would work on developing self-picking raspberries!

  5. [5] 
    John M wrote:

    [5] Don Harris

    "And what happens if whatever these self driving are linked to (like GPS or whatever) goes down and all the cars stop working with thousands or millions of people that have not driven for years or ever driven having to try to drive their cars home, assuming that would be an option?"

    I would imagine the same thing that happens now when say, a blizzard shuts down a major interstate highway. People abandon their cars and walk if they are able, or simply hunker in place.

    "In my opinion they should require that manufacturers of self driving vehicles should be required to pay for building their own roads and they should not be allowed on roads with driver controlled vehicles, just like there are roads that trucks can't drive on."

    I don't think that would be viable. And in many cases trucks are only restricted to certain lanes. Or, if they are not, there is still a public truck route alternative that trucks and cars share, along with the car-only option, all of which is paid for with the same public transportation tax dollars.

    I am also thinking that some kind of programming like Asimov's Three Laws of Robotics might be possible for self driving cars.

    1) The welfare of one individual may not take precedence over the welfare of two or more individuals.

    2) One individual may not be permitted to come to harm except where it conflicts with the First law.

    3) ??? Any suggestions?

  6. [6] 
    John M wrote:

    [8] Don Harris

    "He said our friend that is an aggressive driver was complaining that when he signaled to change lanes and his car decided that there was not enough room to change lanes that the car would not let him change lanes. Sounds good until you hear his solution.

    He no longer signals to change lanes so the car doesn't know in time to stop him."

    The obvious solution to that is to take away the ability of the person in the car to intervene entirely, or at least to severely restrict the ability to override the car's automatic programming except for very specific reasons.

  7. [7] 
    John M wrote:

    I just thought of a possible Law # 3

    3) An individual car's automatic programming may not be overridden by an individual occupant where doing so would conflict with either of the First Two Laws.

  8. [8] 
    John M wrote:

    [12] Don Harris

    "NO.
    The solution is to say fuck you to these stupid ideas and self driving cars.

    Expecting these things to work like the spin says they will is unrealistic and not viable."

    I totally understand the feeling. But it didn't work for the Luddites and it's not going to work now. At best we can mitigate and manage what's coming, but not stop it.

  9. [9] 
    John M wrote:

    [13] Don Harris

    "It is, after all, about money. The goal is to replace people that get paid to drive. Any other alleged reason is a bullshit cover story."

    No argument there. Which is exactly why we have conversations about the impact it will have and related things like a guaranteed national income replacing traditional welfare. What happens if all the Taxi drivers and all kinds of Truck drivers start losing their jobs to self driving vehicles?

  10. [10] 
    John M wrote:

    [15] Don Harris

    "Kind of like voting for Big Money Democrats?" :D

    Touche! :-)

  11. [11] 
    Kick wrote:

    CW

    This article has got to set the record for the most questions ever. While I don't have as many answers as you have questions, I did hear that Volvo is actually offering to pay for any damage to property or persons caused by its 100% autonomous vehicles, which it estimates will be on the road by the year 2020. Perhaps the other automakers will simply follow suit.

    If automakers choose to produce vehicles with computerized "drivers" with the ability to take humans completely out of the driving equation, then I'd say humans should therefore be entirely out of the chain of legal liability. :)

  12. [12] 
    Michale wrote:

    But perhaps the company as a whole isn't responsible -- perhaps it is the project manager who was in charge of the design and production of that particular vehicle. Or the software programmers who wrote the code the car uses to drive. Or maybe even the software quality assurance engineer whose job it was to find all the bugs in the self-driving software.

    Which would still make it the company's fault...

    It's simply a case of product liability.. Think Pinto and the gas tanks.. Even if it was the "Gas Tank Department"'s fault, it was still Ford's fault...

    "Overall road deaths declined last year due to self-driving cars" isn't much solace if your loved one was killed by a self-driving car, to put this as starkly as possible.

    So, you're saying that THAT argument has merit?? :D

    Michale tucks that away for future reference.. :D

    There's an even more serious problem to contemplate as well. This is actually a classic ethics question, although it is usually described using a railroad (with a switch that only allows two choices of tracks). Removing the tracks from the equation, what should a self-driving car do if it comes whipping around a curve and has only two viable options to choose from -- it can swerve to avoid running over a baby carriage, but making such a swerve means it will run over a group of five adults standing on the sidewalk.

    Dr. Rodney McKay: Let me ask you a question. Say there's a runaway train. It's hurtling out of control towards ten people standing in the middle of the tracks. The only way to save those people is to flip a switch - send the train down another set of tracks. The only problem is there is a baby in the middle of those tracks.

    Teyla Emmagan: Why would anyone leave a baby in harm's way like that?

    Dr. Rodney McKay: I don't know. That's not the point. Look, it's an ethical dilemma. Look, Katie Brown brought it up over dinner the other night. The question is: is it appropriate to divert the train and kill the one baby to save the ten people?

    Ronon Dex: Wouldn't the people just see the train coming and move?

    Dr. Rodney McKay: No. No, they wouldn't see it.

    Ronon Dex: Why not?

    Dr. Rodney McKay: Well... Look, I dunno. Say they're blind.

    Teyla Emmagan: *All* of them?

    Dr. Rodney McKay: Yes, all of them.

    Ronon Dex: Then why don't you just call out and tell them to move out of the way?

    Dr. Rodney McKay: Well, because they can't hear you.

    Lt. Colonel John Sheppard: What, they're deaf too?

    [Rodney throws him a look]

    Lt. Colonel John Sheppard: How fast is the train going?

    Dr. Rodney McKay: Look, the speed doesn't matter!

    Lt. Colonel John Sheppard: Well, sure it does. If it's goin' slow enough, you could outrun it and shove everyone to the side.

    Ronon Dex: Or better yet, go get the baby.

    Dr. Rodney McKay: For God's sake! I was just trying to...
    -STARGATE ATLANTIS, The Game

    :D

    That choice must be made in advance and programmed into the car.

    Not necessarily...

    We're not going to have autonomous cars until we have a decent AI..

    And a decent AI will be able to make such moral decisions without programming...

    Many of the "what if" questions you put forth would be answered by AI....

  13. [13] 
    Michale wrote:

    This is actually a classic ethics question, although it is usually described using a railroad (with a switch that only allows two choices of tracks).

    THE GOOD PLACE, The Trolley Dilemma
    https://youtu.be/lDnO4nDA3kM

    :D

  14. [14] 
    Michale wrote:

    I just thought of a possible Law # 3

    3) An individual car's automatic programming may not be overridden by an individual occupant where doing so would conflict with either of the First Two Laws.

    I doubt anyone would want to buy a product that might, one day, kill the owner and the owner's family..

    Although I clearly see the moral dilemma, I don't personally have a problem with an autonomous car, whose FIST DUTY is to protect the occupants of the vehicle...

  15. [15] 
    Michale wrote:

    Although I clearly see the moral dilemma, I don't personally have a problem with an autonomous car, whose FIST DUTY is to protect the occupants of the vehicle...

    Uh... FIRST duty... :^/

  16. [16] 
    ListenWhenYouHear wrote:

    In your question of should the car hit a baby carriage or go over a cliff, it will hit the baby carriage every time. A car’s sensors will detect an object, but it won’t recognize the object as being a baby carriage; it is just a small object. Nor would the car know whether there is a live baby in the carriage or just a doll; again, it’s just an object in the road.

    The car does know that going off of a cliff is something to avoid at all costs. If the only choices are a) go over a cliff, or b) hit a baby carriage then b) will be the car’s choice every time.

    As for who is responsible for a wreck in a lawsuit....it will all boil down to each state’s legal definition of “operator of a motor vehicle”.

    A couple of other questions: can a 12 year old take the car out for a ride? Can you get a DUI operating an autonomous vehicle?

  17. [17] 
    Michale wrote:

    In your question of should the car hit a baby carriage or go over a cliff, it will hit the baby carriage every time. A car’s sensors will detect an object, but it won’t recognize the object as being a baby carriage; it is just a small object. Nor would the car know whether there is a live baby in the carriage or just a doll; again, it’s just an object in the road.

    If a car is truly autonomous then it will be able to recognize and identify shapes, objects and life forms..

    As I said, AI answers everyone's question..

    A couple of other questions: can a 12 year old take the car out for a ride? Can you get a DUI operating an autonomous vehicle?

    As to the former, I would think it would depend on child welfare laws, akin to "can you leave a 12 year old child alone"....

    As to the latter.. DUI laws specifically state that you cannot operate a vehicle while impaired.. Since one is not operating the vehicle, it would be logical to conclude that all DUI laws are rendered moot..

  18. [18] 
    TheStig wrote:

    Very interesting column CW!

    As I see it, automotive risks are already partitioned between drivers, manufacturers (think exploding Pintos) and other actors. Now, as always, the insurance costs of using a self-driving vehicle (don't forget self-piloting light aircraft) are ultimately going to get passed on to the consumer...the only real change is going to be in the loading factors and who the consumer writes the checks to, and when. The biggest insurance cost of a brave new self-driving world is likely to come when you, the slow-witted, possibly impaired flesh driver, choose to navigate a normally autonomous vehicle.

    If self-driving vehicles reduce overall risk, and markets actually work, then automotive insurance rates should decline. Lawyers, politicians and CEOs will work it all out. That last sentence is not meant as blanket solace...oh my no! I'm simply saying there are a lot of precedents for what is about to go down.

  19. [19] 
    Michale wrote:

    If self-driving vehicles reduce overall risk, and markets actually work, then automotive insurance rates should decline.

    If autonomous cars become the standard, it's likely that automotive insurance will disappear or, at the very least, become a 'niche' or specialty market a la insuring someone's tits or legs or monkeys...

  20. [20] 
    dsws wrote:

    The operator of the vehicle will still be liable (and their liability will still be covered by their insurance), even when "operating" it consists of entering a destination and clicking ok -- and then not telling it to pull over and wait for better weather, if weather is the problem. You'll be agreeing to the EULA when you click ok, anyway.

    Laws will probably have to be changed some, but not much. If an accident is caused by a faulty vehicle, it will be governed by the same laws that now address malfunctioning brakes.

    Without tracks that the vehicle can't get off of, and a switch with only two options, the trolley dilemma can't happen. The car will try to avoid all accidents, with priority on the prior one, i.e. the one that would happen soonest.

  21. [21] 
    neilm wrote:

    If I were a car manufacturer, I'd be offering to assist Apple and Google with their self driving software - for example pre-purchasing 100,000 copies at $5,000 per copy.

    1. They will have a self driving car on the roads as soon as the other car companies do
    2. Google or Apple will provide the "driver" - they are just providing the "car"
    3. After a few years:
    - a. The legal costs and issues will have been paid by Apple or Google
    - b. The software will become a commodity, and so lots of companies will be providing alternatives (think chess s/w - it started with Deep Blue, and now app developers are selling better-than-most-human chess programs for a few dollars)

    The last thing I'd do is what Volvo seem to be doing - being the deep pocket company in the middle of a legal mess in the making.

  22. [22] 
    neilm wrote:

    As far as adoption is concerned, I think that it is inevitable that self-driving vehicles will become ubiquitous.

    If the U.S. has a blanket ban, Singapore or Korea or some other country will take the lead.

    If there isn't a blanket ban, some states will be more aggressive than others - I believe Arizona is particularly welcoming at the moment.

    The software will have bugs, and decision trees will need to be hashed out (e.g. kill the baby or the group of people?), but here is the key - once one automated driving program is improved, all instances are improved almost simultaneously, and if this is happening world wide the continuous ratcheting will improve the software.

    Like chess software, we need to remember that besting the best human is only a point on a continuum - 5 years after we conclude that autodriving cars are safer than even the best human driver, they will be so far past our driving abilities that driving might be unrecognizable.

    For example, if cars can communicate, why have traffic lights - they can simply drive through an intersection at full speed and be interspersed instead of batched into today's ad-hoc mini convoys controlled by a light.

  23. [23] 
    neilm wrote:

    OK, has anybody seen the movie "AlphaGo" (https://www.alphagomovie.com/)?

    AlphaGo is a neural network program that can beat the best Go player in the world (the movie is phenomenal and free on Netflix or Amazon Prime - I can't remember which).

    Basically it learns by playing itself. Since driving isn't a game, how can this help? Well, if a version of AlphaZero (AlphaGo's generic successor) is writing the self-driving software and is collecting all of the data from every self-driving car in real time, it is effectively learning as it goes - the human computer programmers are out of the loop. At this point, the improvement of the self-driving software accelerates exponentially, and AlphaZero will basically be asking lawmakers to rule on ever more esoteric questions, as the simple ones will have been decided already (e.g. the motorcycle accident CW mentions).

    The automated driving software companies are going to push for a single world body to decide how the vehicle behaves when given a choice with no happy outcome, rather than have different decision trees based on vehicle location; however, that isn't an insurmountable problem. Currently the company I work for has software that will show different data about a customer depending on where the user is sitting - e.g. in France they might be able to see somebody's email address, but not in Japan.

  24. [24] 
    BashiBazouk wrote:

    Or, maybe the whole cockamamie idea will fade away, and common sense will prevail.

    So far, autonomous vehicles are considerably safer than normal drivers. Almost every accident with an autonomous vehicle has been the other driver's fault, usually because the autonomous vehicle was perfectly following driving law, which is unusual and therefore throws normal human drivers off. How are considerably fewer accidents not common sense?

    In your question of should the car hit a baby carriage or go over a cliff, it will hit the baby carriage every time. A car’s sensors will detect an object, but it won’t recognize the object as being a baby carriage; it is just a small object. Nor would the car know whether there is a live baby in the carriage or just a doll; again, it’s just an object in the road.

    This is actually not true. Autonomous vehicles use deep learning to identify what an object is. Currently the systems can differentiate between non-moving objects, other vehicles, pedestrians, dogs, etc. If the autonomous systems can't recognize a baby buggy currently, they will be able to shortly.

    I think autonomous vehicle insurance will go to a no-fault system, whether the owner pays for it directly or indirectly through a higher initial vehicle cost. I think normal vehicle insurance will increase in cost considerably over time, to the point that it will push all but the enthusiast to move to an autonomous vehicle.

  25. [25] 
    neilm wrote:

    In your question of should the car hit a baby carriage or go over a cliff, it will hit the baby carriage every time.

    There are two things about this statement:

    1. What is the right answer - dispassionately, how would this group decide? For the purposes of this, we are going to assume that somebody has to die - either the baby or the occupant(s) of the car.

    Here is my take at a decision tree:

    1. If there are two or more people in the car, hit the baby.
    2. If there is one person in the car, and that person is under 80 years old, hit the baby
    3. If there is one person in the car, and that person is over 80, crash the car

    Anybody else want a try? Are there other scenarios I've not thought of, or are we just going to argue over the age of the person in the car (e.g. if the only occupant was also a baby, either way it is going to be bad).

    Another decision tree could be:

    1. Always hit the baby (why should the occupants be killed because somebody else put a baby in danger?)

    Or another:

    1. Always crash the car (the occupants, or their guardians if they were kids, assumed the risk of a crash by getting in the car)
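
    Just to show how blunt these trees are once you actually write them down, here is the first one as a few lines of Python (the names, return values, and the age cutoff are obviously placeholders for the sake of argument):

        # The first tree, spelled out -- placeholder logic only.
        def first_tree(occupant_count, occupant_ages):
            if occupant_count >= 2:
                return "hit the baby"
            if occupant_count == 1 and occupant_ages[0] < 80:
                return "hit the baby"
            return "crash the car"

        # The two alternative trees are even shorter:
        def always_hit_the_baby(occupant_count, occupant_ages):
            return "hit the baby"

        def always_crash_the_car(occupant_count, occupant_ages):
            return "crash the car"

        print(first_tree(1, [85]))   # a lone 85-year-old occupant -> "crash the car"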

  26. [26] 
    neilm wrote:

    The second thing about the statement is the certainty that the baby carriage contains a baby. Can humans figure that out?

  27. [27] 
    neilm wrote:

    I think normal vehicle insurance will increase in cost considerably over time, to the point that it will push all but the enthusiast to move to an autonomous vehicle.

    I agree - and it isn't inconceivable to me that a country could decide that foreign tourists are not allowed to rent cars or drive on their roads and have to use self driving cars instead.

    If it gets to the point where cars are communicating and deciding what they are going to do simultaneously (e.g. at an intersection, or a convoy on a freeway deciding to brake or accelerate at a certain rate), a human will not be able to participate.

  28. [28] 
    Chris Weigant wrote:
  29. [29] 
    C. R. Stucki wrote:

    Don

    Nothing for Spellcheck to catch. You didn't misspell anything, you used the wrong word. AI just ain't that good.

  30. [30] 
    ListenWhenYouHear wrote:

    neilm,

    The second thing about the statement is the certainty that the baby carriage contains a baby. Can humans figure that out?

    Unless the car's sensors include a FLIR heat sensor, I don’t know how the car can tell whether something in front of it is alive or not. Body temperature is one of the only ways an AI would be able to tell the difference between a group of people walking and a group of mannequins on skates.

  31. [31] 
    ListenWhenYouHear wrote:

    Michale,

    As to the latter.. DUI laws specifically state that you cannot operate a vehicle while impaired.. Since one is not operating the vehicle, it would be logical to conclude that all DUI laws are rendered moot..

    In some states, any person sitting behind the wheel of a car is considered to be the “operator” of the car, regardless of whether the vehicle is moving or not. That means you can get a DUI if you are passed out behind the wheel in a parked car.

    Devon and I discussed this last night and he said that it all depends on how the state defines the “operator” of the vehicle as to who gets charged in a wreck.

  32. [32] 
    BashiBazouk wrote:

    Unless the car's sensors include a FLIR heat sensor, I don’t know how the car can tell whether something in front of it is alive or not.

    Do you have a heat sensor? How do you tell if something in front of you is alive or not?

  33. [33] 
    Chris Weigant wrote:

    John M [2] -

    This really is a classic ethics problem (like the "is it immoral to steal bread to feed a starving child?" question). Most recently, I noticed it on the sitcom "The Good Place," but I first heard it decades ago.

    I can look it up if you want, it's probably got some catchy name that I'm unaware of, in the Ethics 101 texts...

    As for your second point, while I agree about urban dwellers, people in the burbs are going to still want their own vehicles, I think. Also, with Uber, that just inserts another layer of complexity to the question "Who pays?"

    [3] -

    Now that's an interesting idea. Hadn't really thought that far...

    Don Harris [4] -

    What if the posted speed is 50+ mph? I was specifically thinking of CA Route 1, down around Big Sur. There's an upward cliff to one side of you, the road, and then a downward cliff to the other side, meaning very limited opportunities for maneuvering.

    [5] -

    The prospect of millions not knowing how to drive scares me, too. There are already millions out there who have never seen a paper road map and wouldn't know how to read one if their lives depended on it. Also, millions who have never even looked under their own car's hood -- Americans used to know how to fix their own cars...

    but maybe I'm just being curmudgeonly, who knows?

    :-)

    [6] -

    And Libyan terrorists, to boot! Someone once told me: "We all have identical images of the future -- "future" architecture, "future" cars and technology -- because we all watched the same science-fiction movies as kids." There's a lot of truth in that.

    We may slowly be approaching the age of Rosie the Robot ("The Jetsons"), though. That seems like the logical next step, to me.

    Don Harris [8] -

    That's interesting. I didn't mention the term "chaos theory" in this article, but I really wanted to. The more complex a system, the more prone it is to someone determined to insert a monkey wrench in the machinery...

    John M [9] -

    Points for bringing up Asimov!

    Programmers know, however, that the Three Laws would really be impossible to program accurately. Still, when there's a choice to be made, the choices you laid out might be enough of a filter.

    As for the next step: should passengers in the car rank higher or lower than people outside the car (do you drive over the cliff or hit the baby)?

    John M [10] -

    To do so, you'd have to REALLY trust the programming. So far, I don't. But that may change in the future, who knows?

    Good rule of thumb for all Silicon Valley products: DO NOT buy the 1.0 version. Wait AT LEAST until 2.0, so they can work some of the biggest bugs out.

    Don Harris [13] -

    Here's one of the most frightening articles I've read in the past few years:

    https://www.huffingtonpost.com/scott-santens/self-driving-trucks-are-going-to-hit-us_b_7308874.html

    It's not so much self-driving cars that will change the face of America, as self-driving trucks.

    Don Harris [15] -

    OT;I

    John M [16] -

    See above link.

    Kick [18] -

    That's interesting about Volvo. You know, until I sat down to write this, I never really considered it, but it would indeed make the most sense for all the car manufacturers to become, essentially, self-insured for all their vehicles. Which, as I pointed out, would end the car insurance industry, at least as we know it today.

    Michale [19] -

    Good point about the Pinto. We had a saying in Silicon Valley: "The Pinto came out 6 months ahead of schedule. Nobody remembers that now -- people only remember one thing about the Pinto." This saying was popular among people (like the SQA engineers) whose responsibility it was to push back on the people begging us to release it early.

    OK, your Atlantis quotes were funny, I have to admit. But it just goes to prove this is a classic thought-experiment.

    As for AI, well then, we can talk about Skynet, right? AI doesn't mean Artificial Morality, just Artificial Intelligence.

    Michale [20] -

    Aha! Thanks for the link... I knew I'd seen it on The Good Place...

    ListenWhenYouHear [23] -

    Ooooh... now there are some good points!

    You're right -- the car wouldn't necessarily know a baby is likely to be in there. Excellent point. But it also might not know that a quick veer of the steering wheel to the side would result in falling off a cliff. It'd either have to have REALLY good sensors or REALLY good GPS awareness.

    And also excellent questions about age and sobriety. If the car is 100% in control, it shouldn't matter in either case, at least legally. Will "driver's licenses" cease to exist in the future? That's something to think about...

    TheStig [25] -

    Thanks for the kind words! I always wonder, when I stray from political commentary, if anyone will be upset by the detour, but I have been happily surprised at the level of discourse I've seen here tonight.

    OK, gotta run for now, but I promise I'll be back to answer the rest of these -- this has indeed been fun! I love "what if" debates, personally...

    :-)

    -CW

  34. [34] 
    Chris Weigant wrote:

    OK, before I go, I looked it up. It's "the Trolley Problem."

    https://en.wikipedia.org/wiki/Trolley_problem

    -CW

  35. [35] 
    ListenWhenYouHear wrote:

    Bashi,

    Do you have a heat sensor? How do you tell if something in front of you is alive or not?

    Yes, I do. We call it “skin”. We are talking about AI for cars, not human drivers. How would AI determine if an object is living or not? One of the easiest ways to determine if something is alive is by its body temperature. Police use FLIR devices to locate where suspects are hiding in dark areas all the time. Unfortunately, FLIR sensors are not cheap. Because of the complexity of training an AI to recognize living creatures, combined with the limited need for such a complex determination, I doubt autonomous vehicles will be able to differentiate between a living baby and a toy doll of the same size.

  36. [36] 
    Michale wrote:

    In some states, any person sitting behind the wheel of a car is considered to be the “operator” of the car, regardless of whether the vehicle is moving or not. That means you can get a DUI if you are passed out behind the wheel in a parked car.

    Only if the keys are in the ignition, as that establishes 'intent'....

  37. [37] 
    TheStig wrote:

    The USA currently accrues about 35,000 automotive fatalities per year. That's roughly the entire Korean War KIA total, every year, though down from peak mayhem in the 1970s. Add another 200,000 or so automobile-related hospitalizations per year. I can't find any detailed stats, but baby carriage deaths and injuries seem very rare. Why the obsession with that particular scenario? Focus on drunks, road rage, sleep deprivation, and work your way down.

    Beware THE PAIGE COMPOSITOR fallacy: obsessive perfectionism leading to poor reliability and late release into the market. As my engineer father used to put it: simplicate and add lightness.

    I hold it to be self evident that Americans fundamentally enjoy driving most, or at least much of the time. So, what is the primary market for autonomous automobiles? I personally think it's delivery trucks, short and long haul. After that, urban area commuters? Social drinkers? A very important market could be the elderly. What drives (no pun intended) people into assisted living? In my experience, it's loss of ability to drive safely.

  38. [38] 
    BashiBazouk wrote:

    Yes, I do. We call it “skin”.

    Well, if we are going to be pedantic, cars also have multiple heat sensors, but like human skin, they are not very good at range. Autonomous cars use lidar, which is basically laser radar. They create an image map of both object shape and movement patterns, then, through machine learning, learn to differentiate by both shape and movement. Very similar to how you learned to do it. The main difference being, instead of having to send every car to school for a few years, they can just copy the decision matrix to the next car.

    I doubt autonomous vehicles will be able to differentiate between a living baby and a toy doll of the same size.

    You would be surprised what a computer, when properly programmed and trained, can differentiate between. Yes, a car can be tricked by a non-moving baby and a similar-looking doll, but at a distance so could you. Also keep in mind that with computers and technology, what seems impossible today is commonplace tomorrow. Autonomous cars already have fewer accidents per mile driven than their human equivalents. That's only going to get better.

Comments for this article are closed.