

Impediments to Creating an Autonomous Vehicle

Michael DeKort


Owner and CTO at Dactle
The creation of autonomous technology will benefit humankind, even if those benefits turn out to be different from what we expect today. Chief among them is lowering accident rates and the resulting injuries and loss of life. It is imperative that we create this technology not only as soon as possible but also as safely as possible. Unfortunately, several processes and practices currently used by the industry are so problematic that they will make it impossible to get to a full Level 4 autonomous vehicle. These issues will involve so much labor, cost and reputational damage that most companies will not be able to bear them. The safety issues are so significant that they will soon severely impact the entire industry. Fortunately, all of these issues are technically solvable. Some of this is clearly evidenced by Waymo's recent paradigm shift to much more simulation and to skipping L3.
Public Shadow Driving for AI and Testing is Untenable and Needlessly Dangerous – This practice will make it impossible to create a fully autonomous vehicle. It is not possible, in either time or money, to drive and redrive, stumble and restumble on all of the scenarios necessary to complete the effort. The other problem is that the process will cause thousands of accidents, injuries and casualties when efforts to train and test the AI move from benign scenarios to complex and dangerous ones. Thousands of accident scenarios will have to be run thousands of times each. When the public, governments, the press and attorneys figure this out, they will lose their trust in the industry, question its competence and may impose far more regulation and delay than if the industry had policed itself. (The Tesla and Uber tragedies demonstrate this.) The solution here is to use proper simulation for at least 99.99% of the effort.
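To make the scale concrete, here is a back-of-the-envelope sketch. Every figure in it (scenario count, repetitions per scenario, miles between natural encounters of a given rare scenario) is an assumption chosen purely for illustration, not a measured number.

```python
# Back-of-the-envelope sketch of why stumbling on scenarios on public roads
# does not scale. Every number below is an assumption chosen for illustration,
# not a measured figure.
scenarios = 10_000          # assumed count of complex/dangerous scenarios to learn and test
runs_per_scenario = 1_000   # assumed repetitions needed per scenario (training plus regression)
miles_to_encounter = 5_000  # assumed public miles driven before a given rare scenario occurs naturally

total_runs = scenarios * runs_per_scenario
total_miles = total_runs * miles_to_encounter

print(f"Scenario executions needed:          {total_runs:,}")    # 10,000,000
print(f"Public miles to stumble on them all: {total_miles:,}")   # 50,000,000,000
```

Even with assumptions far more generous than these, the mileage runs into the billions, which is the heart of the argument for doing this work in simulation instead.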
Simulation is Inadequate – There are several issues with the capabilities, configuration and use of simulation and simulators in the industry. The first is that AV sensor system simulation is normally not integrated with Driver-in-the-Loop (DIL) simulators, and when they are integrated it is often not in proper real-time. When testing needs to be conducted using both parts, it is usually done in a real vehicle on the test track. (It is for this reason that I believe cloud-based sensor simulation is problematic.) To make matters worse, the DIL simulator often does not have a full motion system, and it appears that tire and road models are not precise enough. These issues will lead to false positives and significant levels of false confidence. The lack of motion-cue, tire and road fidelity will largely be hidden, and most of the problems will not be discovered until real-world tragedies occur far down the line. The reason is that the human driver training and testing the car will perform differently when they do not have or expect motion cues. The vehicle will appear to drive properly in simulation, but in the real world there will be timing, speed and angular differences that manifest themselves in how the driver, vehicle, tires and road interact. These differences will cause enough change to make accidents worse or even cause them; an example is a loss of traction. The solution here is to follow the lead of aerospace, the DoD and the FAA: integrate the AV simulation with a full-motion DIL simulator in proper real-time, and ensure that all of the models are accurate. (Most people think of air travel when I mention aerospace/DoD and say it is not nearly as complex. That is true. What is as complex is urban war-gaming in simulation, where hundreds of entities interact in urban areas in actual real-time. Not a single simulation product in the AV industry can do that, as far as I know. And we did it 20 years ago. How is that possible if computers were feeble compared to what we have today? Shared memory and an executive that controlled when and how often tasks run.)
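For readers unfamiliar with that architecture, here is a minimal, hypothetical sketch of a frame-based executive of the kind described: a single scheduler owns the clock, decides when and how often each task runs, and the tasks exchange data through shared memory rather than messages. The rates, task names and trivial model bodies are illustrative assumptions, not any particular product's design.

```python
# Minimal, hypothetical sketch of a frame-based real-time executive: one
# scheduler owns the clock, decides when and how often each task runs, and
# tasks exchange data through shared memory instead of messages. Rates, task
# names and the trivial model bodies are illustrative assumptions.
import time

shared = {"vehicle_state": None, "sensor_frame": None, "motion_cues": None}

def vehicle_dynamics():
    shared["vehicle_state"] = "updated"   # placeholder for a real dynamics model

def sensor_simulation():
    shared["sensor_frame"] = "updated"    # placeholder for sensor rendering

def motion_cueing():
    shared["motion_cues"] = "updated"     # placeholder for motion-platform commands

FRAME_HZ = 120
FRAME_DT = 1.0 / FRAME_HZ

# (task, divisor): the task runs every Nth frame of the 120 Hz major frame.
TASKS = [(vehicle_dynamics, 1),   # 120 Hz
         (motion_cueing, 2),      #  60 Hz
         (sensor_simulation, 4)]  #  30 Hz

def run(frames=120):
    next_deadline = time.perf_counter()
    for frame in range(frames):
        for task, divisor in TASKS:
            if frame % divisor == 0:
                task()
        next_deadline += FRAME_DT
        delay = next_deadline - time.perf_counter()
        if delay > 0:
            time.sleep(delay)  # hold the fixed frame rate
        # a production executive would flag an overrun here rather than slip

if __name__ == "__main__":
    run()
```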
Accident Scenarios are being Classified as Edge or Corner Cases – Here is the Wikipedia definition of a corner case: “In engineering, a corner case (or pathological case) involves a problem or situation that occurs only outside of normal operating parameters—specifically one that manifests itself when multiple environmental variables or conditions are simultaneously at extreme levels, even though each parameter is within the specified range for that parameter.” What folks are calling edge or corner cases are the core complex or dangerous scenarios that must be learned in the primary path. Call them exception-handling cases or negative testing, but they are NOT edge or corner cases. Edge or corner cases would lie outside the bounds of those normal operating cases. I say normal because these are scenarios that have to be learned because they will or can happen. Whether the scenarios are benign, complex or dangerous, they all have to be learned and tested. The concern here is that the proper depth and breadth of engineering and testing will not be accomplished because these scenarios are seen as outside the bounds of proper due diligence. This is where corners will be cut to save time and money.
No Minimal Acceptance Criteria – Recently the GAO admonished the DoT for not creating test cases to ensure the minimal set of criteria is known and verified to prove autonomous vehicles perform as well as or better than a human. As reported on the GAO findings: “The Transportation Department, for its part, said it concurs that a comprehensive plan will eventually be needed. But in a prepared statement published alongside the GAO report, a department official said such a plan is “premature,” because of “the nature of these technologies and the stage of development of the regulatory structure.” It is a myth that most of the scenarios cannot be created because of the associated technology; there is almost no correlation between the technology involved and creating test scenarios to ensure that technology is doing what it should. The minimal acceptable scenarios should already have been created and utilized for the vehicles already in the public domain. The second myth is that the majority of the test scenarios will come from public shadow driving. As I have already stated, it is impossible to drive the miles required to do so. The solution is a top-down effort to create a proper scenario matrix, create the minimal testable criteria needed to ensure these systems are safe, and use those criteria in the same geofenced progression in which the systems are being fielded for engineering, test or public use. Regarding the scenario matrix, there is the issue of using miles and disengagements to measure AI and testing progress. Miles and disengagements mean very little and can be misleading without the scenario and root-cause data. The primary metrics that should be used are the critical scenarios that need to be learned and those that have been learned.
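As a concrete illustration of that metric, here is a hypothetical sketch. The scenario names and this deliberately tiny matrix are assumptions for illustration; a real matrix would contain thousands of entries.

```python
# Hypothetical sketch of the proposed metric: progress measured as critical
# scenarios learned and verified, not miles or disengagements. The scenario
# names and this tiny matrix are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    learned: bool = False    # the AI has been trained on it
    verified: bool = False   # it has passed its minimal acceptance test

matrix = [
    Scenario("unprotected left turn with occluded oncoming traffic", learned=True, verified=True),
    Scenario("pedestrian steps out between parked cars at night", learned=True),
    Scenario("loss of traction on a wet, off-camber curve"),
]

learned = sum(s.learned for s in matrix)
verified = sum(s.verified for s in matrix)
print(f"Critical scenarios learned:  {learned}/{len(matrix)}")
print(f"Critical scenarios verified: {verified}/{len(matrix)}")
# Miles driven and disengagement counts appear nowhere in this report; on their
# own they cannot say which critical scenarios remain unlearned.
```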
Handover (L2+/L3) Is Not Safe – While there are limited situations where system control, specifically steering, must be handed back over to the human driver, the practice in general cannot be made reliably and consistently safe, no matter what monitoring and control system is used. The reason is that it takes 5 to 45 seconds to regain situational awareness once it is lost. In many scenarios, especially those that are complex, dangerous and involve high speed or quick decisions and actions, the time to acquire the proper situational awareness to do the right thing the right way cannot be provided. The solution here is to skip handover or L2+/L3 activities where they are not imperative.
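A quick worked example of what that recovery window means at highway speed. The 5 to 45 second range comes from the paragraph above; the 70 mph speed is an assumed value.

```python
# Worked example of the handover window: distance covered while the driver
# regains situational awareness. The 5-45 second range comes from the
# paragraph above; the 70 mph speed is an assumed highway value.
MPH_TO_MPS = 0.44704
speed_mps = 70 * MPH_TO_MPS

for seconds in (5, 45):
    meters = speed_mps * seconds
    print(f"{seconds:>2} s at 70 mph: {meters:,.0f} m ({meters * 3.28084:,.0f} ft) travelled")
# Roughly 156 m (513 ft) even in the best case, and about 1,408 m (4,620 ft)
# in the worst, all before full situational awareness is regained.
```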
Remote Control of Autonomous Vehicles – I have seen at least one company introduce a system to remotely control an AV. They are doing so using cellular communication systems without a full-motion Driver-in-the-Loop (DIL) simulator. While there may be scenarios where this is the best option to assist the driver or passengers, and where it will perform satisfactorily using the current approaches, the system latency and lack of motion cues could cause significant, even catastrophic, problems, especially when the scenarios are complex and involve speed, quick movements and loss of traction. The remote operator may also miss cues that the vehicle was hit or hit something.
V2X Update Rate is Too Slow – The update rate currently being discussed most is 10 Hz (ten updates per second). In many critical scenarios that is not often enough. For example, two vehicles coming at each other in opposing lanes, with no median, at 75 mph each would require 60 Hz to deal with last-moment issues. If the first message's reliability is not 99% and a second message is needed, the rate moves to 120 Hz. There are other scenarios which would raise it more. The industry needs to look at the most complex of threads in the worst of conditions and set the base update rate to accommodate that.
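Here is a short sketch of that arithmetic, using the closing speed from the example above (75 mph each, head-on) and an illustrative set of update rates.

```python
# Sketch of the update-rate arithmetic: how far the gap between the two
# vehicles closes between successive V2X messages. The speeds come from the
# example above (75 mph each, head-on); the list of rates is illustrative.
MPH_TO_MPS = 0.44704
closing_speed = 2 * 75 * MPH_TO_MPS   # roughly 67 m/s of closure

for rate_hz in (10, 60, 120):
    gap_closed = closing_speed / rate_hz
    print(f"{rate_hz:>3} Hz: gap closes {gap_closed:4.1f} m ({gap_closed * 3.28084:4.1f} ft) per update")
# At 10 Hz the vehicles are ~6.7 m (22 ft) closer between updates; at 60 Hz
# ~1.1 m; at 120 Hz ~0.6 m.
```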
Vehicles are Targets for Hacking and Weaponization – I am sure we have all seen cases where vehicles have been hacked. It is not a leap to suggest these systems are prime targets for weaponization, particularly those systems that remotely control these vehicles or where source code is actually being provided to users. While many in the industry are aware that cybersecurity needs to be addressed, what is being missed is the fact that most companies and organizations literally avoid several key cybersecurity best practices. A clear example is Privileged Account Management; neglecting it has contributed to almost every hack that has ever occurred. Unless this is addressed, we will never significantly reduce them.
Hardware Reliability – Building a self-driving car that meets reliability requirements equal to our current system is one of the most challenging technological developments ever attempted. It requires building a very sophisticated, reliable electromechanical control system with artificial intelligence software that needs to achieve an unprecedented level of reliability at a cost the consumer can afford. Boeing claims a 99.7% reliability figure for its 737, which is equivalent to about 3,000 failures per million opportunities. A modern German Bosch engine control module achieves a reliability of about 10 failures per million units, which for a single component is about 6 times worse than our current system of flawed drivers. This level of quality may be extremely hard to produce in volume while remaining cost-competitive.
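For reference, here is the simple arithmetic connecting a percentage reliability figure to failures per million opportunities, which is how the 737 number above converts.

```python
# Simple arithmetic connecting a percentage reliability figure to failures per
# million opportunities, matching the figures quoted above.
def failures_per_million(reliability_pct: float) -> int:
    return round((1 - reliability_pct / 100) * 1_000_000)

print(failures_per_million(99.7))    # -> 3000, the 737 figure quoted above
print(failures_per_million(99.999))  # -> 10, the rate quoted for the Bosch ECU
```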
Common Mapping Versions – Map versions have to be common for every user in any given area. We cannot have different services providing different maps with crucial differences in their data. For example, changes due to construction will cause system confusion and errors. A solution would be to create a central configuration management process or entity that ensures commonality and that the latest versions are being used.
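A hypothetical sketch of what that central configuration-management check could look like follows; the registry layout, area names and version strings are all assumptions for illustration only.

```python
# Hypothetical sketch of a central map-version check: before relying on a map
# tile, the vehicle confirms its cached version matches what the shared
# registry says is current for that area. The registry layout, area names and
# version strings are assumptions for illustration only.
CENTRAL_REGISTRY = {
    "downtown_grid_17": "2018.03.2",   # area -> latest approved map version
}

def map_is_current(area: str, local_version: str) -> bool:
    """Return True only if the locally cached map matches the registry."""
    return CENTRAL_REGISTRY.get(area) == local_version

if not map_is_current("downtown_grid_17", "2018.02.9"):
    print("Stale map (e.g., missing construction changes) -- do not rely on it")
```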
Exaggerated Capabilities – Far too many of those involved in this industry, from those who are creating the technology to the press, oversight organizations and those in downstream industries, are exaggerating the current capabilities of these systems. As there are no minimal testable scenarios, even for progressive geofenced engineering or public use, this is all too easy to do. While those exaggerations may lead to funding and improve morale, they create a false level of confidence. Given all of the other issues we have discussed, and the fact that sensor systems still cannot handle bad weather, this can only contribute to backlash when tragedies are caused by the issues I have already mentioned. It is neither an exaggeration nor hyperbole to state that if these issues are not remedied, avoidable tragedies will occur. The Joshua Brown accident was bad enough. When a child or family is harmed and the public realizes it was avoidable, that backlash will be significant if not debilitating.
For more detail, please see these articles. The first is the most extensive and contains links to the references I cite to make my case.
Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI
www.linkedin.com/pulse/autonomous-levels-4-5-never-reached-without-michael-dekort
Autonomous Vehicle Testing – Where is the Due Diligence?
www.linkedin.com/pulse/autonomous-vehicle-testing-where-due-diligence-michael-dekort/
Corner or Edge Cases are not Most Complex or Accident Scenarios
www.linkedin.com/pulse/corner-edge-cases-most-complex-accident-scenarios-michael-dekort/
The Dangers of Inferior Simulation for Autonomous Vehicles
www.linkedin.com/pulse/dangers-inferior-simulation-autonomous-vehicles-michael-dekort/

If you like this post, please consider sharing it.
