Two years ago I made a bet (a beer) with a friend who works in autonomous driving for a major automotive technology provider.
My bet was that in 5 years, we wouldn’t have self-driving cars circulating widely in our urban centers.
His bet was that it could happen by making them drive very slowly.
I recently spoke with him and suggested that we double the bet. He refused.
Eight months ago I talked to a friend who works at an important autonomous driving company.
I raised my central point with him as well, which is that there is an underlying unpredictability about road conditions and potential mishaps that is not easily resolved, and perhaps not at all, except in very controlled environments.
Privacy, then, is a big hurdle.
Aside from test/demo environments, in a “real-life situation” you’d need close surveillance of everything going on (road conditions, road works, shenanigans, hacks, etc.), and that might be acceptable in cultures like Singapore’s, but certainly not in Europe.
This includes other vehicles moving around the area; we can think of city centers where only self-driving vehicles are allowed, in order to reduce unpredictability. Will this happen widely? I think it’s unlikely.
I can imagine building an airplane with an on-board pool that commercially offers trips from Milan to NYC. Does this imply that planes in the future will commonly have pools? Obviously not. It’s impractical, for many reasons.
Then comes the regulatory issue, and in particular the liability aspects. Not just civil liability, which can be offset with insurance, but criminal liability, which must be tied to a person, and that person cannot be the driver of a “full” self-driving vehicle. Does the CEO of the manufacturing company assume liability? This would push full autonomous driving even further into the future.
When criminal liability is placed on the driver (or on a passenger with some degree of control over the system), we are effectively imposing a constraint that contradicts one of the technology’s key features: on the one hand, the freedom, by not driving, to focus attention on something else; on the other, the need to stay attentive in order to intervene and avoid accidents. This is an oxymoronic specification.
It does not seem likely to me that the obligation to pay attention (and the need to bear some degree of responsibility) will widely vanish in the near future.
If that’s the case, why not use the technology for other purposes?
I mean, autonomous driving technology could serve as assisted driving on steroids, allowing even people who are untrained and don’t have a driver’s license to drive: they would steer, accelerate and brake according to their abilities, guided by the technology. Those abilities would improve over time, while the technology prevents them from making serious mistakes and causing harm (in some cases, a mechanism could be provided to override the limitations imposed by the assisted-driving car).
With such a system, even teenagers would be able to drive cars and not be restricted to low-speed motorcycles. The elderly could drive, provided they have quick reflexes, tested at the beginning of the trip.
The core skill required would be precisely the one that is hardest for a computer: detecting emergency situations, even in unpredictable, never-before-seen contexts, and quickly operating the stopping device in order to avoid accidents.
Perhaps the killer app of autonomous driving technology is not getting rid of drivers, but getting rid of bureaucracy, driver’s licenses, and the like.
It would be an adoption path that can coexist with the current regulatory environment and would expand the market, by making driving a car easier than riding a bike.