The problem with so-so driving

Edwin Olson
May Mobility
Sep 4, 2018

A common sentiment at self-driving car companies is that the goal is to build vehicles that “do not cause” accidents. Clearly, we can’t fault a self-driving car if some other moron crashes into it.

Not so fast. Of course there are cases where one car truly is a victim of the dangerous driving of another. But other times, a mishap is the end result of a “minor hazard” created by another driver. Consider a car that brakes suddenly, creating a minor hazard for the car behind it (which now has a greater chance of rear-ending the first car). The first driver has not broken a law, nor would they likely be “at fault” for any resulting collision, but they have created a real risk.

We all create minor hazards, and most of the time, nothing bad results: those hazards are mitigated by other drivers — perhaps with some honking or cursing. Think of the roadway as a marketplace of hazard-making and hazard-mitigation; when there’s too much hazard-making (and not enough mitigation), mishaps occur more frequently.

This brings us to the idea of a so-so driver: someone who does more than their share of hazard-making. They aren’t conspicuously dangerous, but they create far more hazards than they mitigate.

A so-so driver might:

  • Brake aggressively when a traffic light turns yellow, even when another car is following close behind.
  • Wait excessively long at an unprotected left turn, causing traffic to back up, blocking a crosswalk, and encouraging other cars to pass them on the right.
  • Fail to make room for a merging car that is running out of space.
  • Drive slowly in the left-hand lane, forcing other cars to pass on the right.

If a mishap occurs, the so-so driver would probably not be “at fault”. But at the same time, that so-so driver contributed to the circumstances (either through action or inaction) that led to it. So-so drivers make driving more dangerous for everyone.

In contrast, a good driver tends to decrease the risk of minor hazards around them. As they drive around, they dole out a little extra room or create a little extra time that helps to snuff out a hazard. This does not mean that they drive cautiously and skittishly — in some situations, it means driving more aggressively. Take the case of a traffic light turning yellow — whether you should brake (“cautious”) or accelerate (“aggressive”) depends more on whether there’s a car close behind you than anything else.

The key characteristic of good drivers is that they adjust their driving based on what other cars and pedestrians are doing. They anticipate how a scenario is likely to play out, and they subtly modulate their behavior now so that a minor hazard is less likely to occur later.

Now for a not-very-bold statement: I think self-driving cars should be good drivers, not so-so drivers. But are they?

Self-driving cars tend to be so-so drivers. If you program a car to proceed/brake at a yellow light based on “how yellow” it is, you’ve programmed in a so-so behavior. If you program a car to look for a gap in traffic of some specific size (say 60m) before making an unprotected turn, you’ve programmed in a so-so behavior. Any behavior that doesn’t consider interactions with other vehicles is likely to be so-so.
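To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. The function names, inputs, and thresholds are all invented for this post (they are not anyone’s production logic): the first rule looks only at the light, while the second also weighs the rear-end hazard that hard braking would create for the car behind.

```python
# Purely illustrative: a "so-so" yellow-light rule vs. one that also
# considers the car behind. All names and thresholds here are invented.

def so_so_yellow_light_decision(time_into_yellow_s: float) -> str:
    """Decide based only on 'how yellow' the light is."""
    return "brake" if time_into_yellow_s < 1.0 else "proceed"

def interaction_aware_yellow_light_decision(
    time_into_yellow_s: float,
    can_stop_comfortably: bool,
    follower_gap_m: float,
    follower_closing_speed_mps: float,
) -> str:
    """Also weigh the rear-end risk that braking would create."""
    # Rough time for the follower to close the gap if we stop now.
    if follower_closing_speed_mps > 0:
        time_to_collision_s = follower_gap_m / follower_closing_speed_mps
    else:
        time_to_collision_s = float("inf")
    braking_creates_hazard = time_to_collision_s < 2.0  # hypothetical threshold

    if can_stop_comfortably and not braking_creates_hazard:
        return "brake"
    return "proceed"
```

The point isn’t the particular numbers; it’s that the second rule couples the decision to what the follower is doing, which is exactly what a fixed “how yellow” or “60m gap” rule leaves out.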

Amir Efrati’s recent article for The Information highlighted the challenges that Waymo is having in their deployment in the suburbs of Phoenix, AZ. Presumably, Waymo chose this deployment for its relatively low driving complexity: wide roads, few pedestrians, sedate speeds, and lack of precipitation. Still, according to the article, community members are annoyed by their slow, occasionally erratic, and frequently frustrating vehicles. From the article’s description, it seems Waymo’s cars are so-so drivers.

When we look at self-driving car accidents, there’s a widespread tendency to look at who — a human or an AV — was at “fault”. Kia Kokalitcheva recently reported that vehicles under autonomous control had a smaller fraction of “at fault” accidents than the same vehicles under the control of a safety driver. What we don’t know is whether those self-driving vehicles contributed to accidents through so-so driving. I think many of us in the industry suspect that self-driving cars get rear-ended more often than other vehicles — and that the car’s own behavior could be a factor.

A fundamental challenge to creating good self-driving cars — ones that create fewer minor hazards and mitigate those created by others — is that most self-driving systems simply don’t understand how a situation is likely to evolve over the next (e.g.) fifteen seconds. If a car can’t anticipate a possible future hazard, it won’t be able to prevent it from occurring.

At May Mobility, we believe that there is an elegant technical approach to making predictions that will help us create good drivers. The essential tool is an algorithm called “Multi-Policy Decision Making”. MPDM allows a car to make predictions about how a situation will play out, taking into account the likely behaviors of all the other cars and pedestrians around it. In turn, this allows a self-driving vehicle to choose behaviors that prevent the creation of minor hazards. Our system is live in downtown Detroit today, where it constantly makes decisions that help prevent minor hazards from occurring — sometimes by driving more cautiously, sometimes by driving more assertively.
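I’ll go deeper on MPDM in a future post, but the core loop can be sketched in a few lines. The sketch below is a loose, simplified paraphrase of the multi-policy idea, not our production system; the helper names, cost terms, and numbers are hypothetical. The idea: evaluate each candidate ego policy by forward-simulating it against sampled behaviors of the other road users, score the simulated outcomes, and pick the policy with the lowest expected hazard.

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

# Loose, simplified sketch of the multi-policy idea. The helpers below
# (simulate, hazard_cost, the cost weights) are hypothetical stand-ins.

@dataclass
class Outcome:
    collision: bool
    min_gap_m: float    # closest approach to any other road user
    progress_m: float   # distance travelled toward the goal

def hazard_cost(o: Outcome) -> float:
    """Penalize collisions and near-misses; mildly reward progress."""
    cost = 1000.0 if o.collision else 0.0
    cost += max(0.0, 5.0 - o.min_gap_m) * 10.0  # near-miss penalty
    cost -= 0.1 * o.progress_m
    return cost

def choose_policy(
    ego_policies: List[str],
    other_agent_policy_dists: Dict[str, Dict[str, float]],
    simulate: Callable[[str, Dict[str, str]], Outcome],
    num_samples: int = 20,
) -> str:
    """Pick the ego policy with the lowest expected hazard cost."""
    best_policy, best_cost = ego_policies[0], float("inf")
    for ego_policy in ego_policies:
        total = 0.0
        for _ in range(num_samples):
            # Sample one plausible policy for each other car or pedestrian.
            sampled = {
                agent: random.choices(list(d.keys()), weights=list(d.values()))[0]
                for agent, d in other_agent_policy_dists.items()
            }
            # Forward-simulate this joint choice and score the outcome.
            total += hazard_cost(simulate(ego_policy, sampled))
        expected = total / num_samples
        if expected < best_cost:
            best_policy, best_cost = ego_policy, expected
    return best_policy
```

In the published MPDM research, the candidate policies are closed-loop behaviors (for example, follow-lane, yield, or stop), and the simulation rolls them forward over a horizon of several seconds. Choosing among whole behaviors rather than tuning a single threshold is what lets the vehicle be cautious in one moment and assertive in the next.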

Self-driving technology developers need to think about how their vehicles drive. Minimizing at-fault collisions isn’t a big enough goal. The vehicles should be good drivers — they should make the roads safer for everyone on them.

One part of the problem is that self-driving companies are under incredible pressure to publish low rates of interventions by their safety drivers. That creates an incentive for companies to operate their vehicles even in places where they know the vehicle is so-so.

Self-driving vehicles, like human student drivers, need to experience situations in order to learn. But so-so driving has real consequences, and technology companies should keep that in mind, even if it means more interventions by their safety drivers.

Interested in learning more about MPDM? It’s an exciting area of robotics research. I’ll write more about it in a future post, but in the meantime, you can find several in-depth descriptions of it by googling for “Multi-Policy Decision Making.”

Edwin Olson
CEO of May Mobility, Professor of Computer Science at the University of Michigan