I dedicate this piece to my father, who introduced me to the world of waves when I was four by teaching me how to rebuild Ducati engines, and to my mother, who revealed the world of bits when I was six by demonstrating how to alter QBasic computer games.
This article is geared towards people with a STEM background. For something shorter try this article in The Weekly Standard.
With cyber security, the first thing to understand is that the internet is ungovernable: locality is irrelevant and identity can be shrouded. This is by design—it’s the internet—we’re all supposed to be able to talk to everyone, and it wasn’t designed at the protocol level to require payment or identification.
Cyber criminals make mistakes and are occasionally arrested, but attacks often originate within states that do not cooperate with international institutions or foreign governments. As a result, even the most competitive startups and technology companies capitulate to ransom demands when compromised rather than rely on the protection or action of law enforcement. Outside of the developed world there is less of a distinction between private individuals and state actors. A software security expert can work to enrich herself or a criminal enterprise one day, and work for her government’s intelligence service the next. This makes international cooperation with adversaries or unaligned states less tenable.
The second thing to recognize about cyber security is that attack is a thousand times easier than defense. Attackers can probe from multiple points, like previously hacked computers or servers rented with stolen credit card information, patiently trying different strategies until they succeed. Especially talented attackers may first invent a new method of attack, then write software to scan servers or internet traffic to create a prioritized list of potentially vulnerable organizations before systematically breaching them—like burglars scoping a neighborhood before trying their new approach on many domiciles.
A defender, on the other hand, is a sitting target. Her application is public facing, with URLs, domains, and data centers that anyone can investigate, and a consistent, detectable set of operating systems, software languages, and libraries that are as well understood by potential adversaries as they are by the team responsible for the software that uses them.
It takes only a single incorrectly secured entry point to allow inadvertent public access. Defending all entry points, and keeping them defended despite changing organizational requirements, personnel, and a never-ending stream of vulnerability updates to software libraries, is nigh on impossible. Even organizations that have the technical competence and resources to defend against persistent attack, like the NSA, are one insider away from critical breach or exfiltration. Think Edward Snowden or Chelsea Manning. The stakes are high, the attack surface huge, and the threat persistent.
Once a system is breached, a sufficiently prepared attacker can use pre-written software to accomplish their prioritized objectives. For example, malware may exfiltrate password lists and cryptographic keys first, and later collect information like purchasing history. Once an attacker begins to suspect that their activities have been noticed, they can resort to encrypting the compromised server’s hard drives before ransoming them back to their owners. Welcome to the cyberpunk future: it’s stranger and less dingy than promised, but hackers have panicked organizations scrambling to pay cryptocurrency ransoms. It’s strange to admit publicly, but I’m one of the people that gets called because of my early involvement in cryptocurrencies and long history of responsible vulnerability disclosure. Attackers: Please understand that due to the public nature of this website, I no longer personally hold cryptocurrencies.
System administrators and software developers, the primary people responsible for defending our software-enabled devices, are invisible until a breach occurs. Since organizations do not observe an attacker’s failure, the market generally does not reward extreme competence in cyber defense. The best paid and most well known people in cyber security are generally known for their public disclosures or exploits, and rarely for the bulk of their quiet work: making sure standard issues are continuously monitored and addressed.
Blackhats, in stark contrast, are sexy. They seem to come out of nowhere. They disrupt events and exert influence that would otherwise be disproportionate to their resources. Look no further than the Clinton email breach to see how much a single hacker can change the world. The power of a talented software expert is growing every day. In terms of electoral votes per person-hour worked, door-to-door canvassing pales in comparison to cyber attack.
Even the most dedicated defenders have an Achilles’ heel: previously undisclosed software exploits, known to experts as 0days because defenders have had zero days of notice. Attackers utilizing these weaknesses can breach systems regardless of the care or knowledge of the software developers involved. A competent defender’s confidence in code or agents is a spectrum ranging from untrustworthy to highly trusted. They never trust completely, which conflicts with rapid software development and deployment.
Heartbleed, a security hole in a widely used library called OpenSSL, caused a number of organizations to panic as system administrators rushed to patch in advance of a breach. The most prepared either immediately disconnected their servers or used network monitoring tools to watch for signs of an ongoing attack, only removing their computers from the internet once the vulnerability transitioned into an active attack.
Michael Hayden, former US Air Force General and Director of both the NSA and CIA, has these comments:
[The cyber domain of war] is characterized by great speed and great maneuverability, so it favors the offense. It is inherently global. It is inherently strategic.
The survival of the agent and its persistence as a command and control platform for payload delivery depend on its ability to hide, morph and masquerade.
The third thing to understand about cyber security is that certain classes of cyber attack, including most 0days, can break all instances of the same system at the same time. For example, while it would take two separate missiles to destroy two separate predator drones, a single software vulnerability found in both of them can be exploited for a cyber attack to breach and disable both simultaneously.
This is how WannaCry was able to infect hundreds of thousands of computers, including life-saving machines used by hospitals in the UK. Bombs from the world of bits don’t have yields measured by the attributes of their identity, like mass or explosive power; they have yields measured by the attributes of the world around them: the number of vulnerable targets, their network connectivity, the privileges they operate under, and the additional access to other devices they provide. Their usefulness is a function of the target’s exposure, much like the incendiary devices used during the Second World War.
There are countermeasures at a defender’s disposal, but they aren’t perfect, and in the arms race between attack and defense they’re harder to build, harder to deploy, and harder for non-technical organizations to evaluate. Even armed with the best tools, dedicated operators still don’t win every single time.
The one advantage that defenders have is that most software experts do not want to work for criminal gangs or to get involved with corrupt, illiberal states. But the cyberarms market is growing and nation states like Saudi Arabia pay millions for certain classes of vulnerabilities. Money talks. Burner computers and cryptocurrencies enable the otherwise timid to sell their discoveries without worrying about potential blackmailers getting their face or fingerprints; though only the truly careful evade NSA detection.
- The internet is anarchy. It is difficult to attribute attacks and even when it is possible, public disclosure reveals sources and methods.
- Cyber defense is extremely difficult and underappreciated, especially over time as organizations change.
- Some classes of cyber attack allow control of all instances of a device and, with the right pre-planning, can deny owners access to their devices once breached.
Reliance on conventional deterrence, like bombing a cyber criminal’s house, is politically difficult. Intelligence isn’t perfect. Malware can destroy or alter itself and attacks can be staged to implicate uninvolved third parties. Attacks can even be inadvertently launched due to errors made during exploit development. A malware developer can accidentally program a bug into their virus, they can unintentionally connect a test computer to the internet, they can even have their virus launched due to the presence of third party malware.
For deterrence to be effective it must not only be credible; it must threaten rational actors who have full control of their resources. Software experts are no different from the public in their capacity for derangement or extreme ideological persuasion. Even if western intelligence agencies were omniscient and had perfect, provable attribution that didn’t leak sources and methods, deterrence against all terrorist cells and cyber activists would still be impossible.
Reacting during an attack is difficult because the communication systems that run the internet move data at roughly the speed of light. A server in Austin, Texas takes about 140 milliseconds to send data to a server in Tokyo, Japan. An ICBM traveling the same distance takes over ten thousand times longer in travel time alone. Attacks originating from compromised computer components can be measured in microseconds.
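The arithmetic behind that comparison is back-of-envelope. Assuming the 140 ms Austin–Tokyo figure above and a roughly 30-minute intercontinental missile flight (an assumption, not a measured value):

```python
# Back-of-envelope: network latency vs. missile flight time over the same
# distance. 140 ms is the Austin-Tokyo figure from the text; the ~30-minute
# ICBM flight time is an assumed typical intercontinental duration.

packet_s = 0.140        # data delivery, seconds
icbm_s = 30 * 60        # missile flight, seconds

ratio = icbm_s / packet_s
print(f"the missile is ~{ratio:,.0f}x slower than the packet")
```

Even with generous rounding, the gap is four to five orders of magnitude, which is why "react while the attack is in flight" is a human-speed strategy applied to a light-speed problem.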
Through the use of methods like automated network monitoring, unsophisticated attacks can be automatically mitigated. Servers can be powered off or disconnected from the internet, but reliance on human judgment during an attack isn’t feasible and disconnecting or shutting down devices may not always be possible since compromised devices can disengage themselves from remote control.
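As a sketch of what automated mitigation looks like in practice, the toy monitor below isolates a server whose outbound traffic spikes far above its recent baseline. The class name, threshold, and `isolated` flag are all illustrative assumptions, not a real monitoring API:

```python
# A minimal sketch of automated mitigation: flag a server for isolation when
# outbound traffic spikes far above its rolling baseline. Thresholds and the
# isolation hook are illustrative assumptions, not a real product's API.
from collections import deque

class EgressMonitor:
    def __init__(self, window=60, spike_factor=10.0):
        self.samples = deque(maxlen=window)  # recent bytes/sec readings
        self.spike_factor = spike_factor
        self.isolated = False

    def observe(self, bytes_per_sec):
        if self.samples:
            baseline = sum(self.samples) / len(self.samples)
            if baseline > 0 and bytes_per_sec > self.spike_factor * baseline:
                self.isolated = True  # e.g., down the network interface here
                return
        self.samples.append(bytes_per_sec)

mon = EgressMonitor()
for reading in [100, 120, 110, 95, 105]:  # normal traffic
    mon.observe(reading)
mon.observe(50_000)  # sudden mass exfiltration
print(mon.isolated)  # True
```

Note that this only catches unsophisticated attacks, which is the text's point: a competent attacker exfiltrates slowly, inside the baseline, or disables the monitor itself.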
A hacked phone, for example, may have its network code updated to pass all information through a server controlled by the assailant. Without countermeasures enacted ahead of time by the phone manufacturer, instructions to update the phone’s vulnerable software can be automatically blocked. Failure to plan for the worst or other missteps by a defender may result in a permanently compromised device.
In 2010 Sergey Ulasen of VirusBlokAda discovered Stuxnet, a virus believed to have been written jointly by Israel and the United States to disrupt the Iranian nuclear weapons program. Though the Iranians took steps to ensure that their nuclear material processing devices were not connected to the internet, the virus was able to breach them by hopping onto USB keys. Once the malware had taken hold, it subtly altered instrumentation to destroy centrifuges without revealing its presence. It was a watershed moment that I recall disbelieving upon first learning of its existence:
It sounds way too Hollywood to be true.
It wasn’t until I’d read through the code myself that I finally believed that it had actually happened.
Stuxnet taught the cyber security community two things: First, viruses aren’t just intelligence tools—they’re weapons of war. Second, whether or not a computer is internet connected is just another area of trust for a competent defender: more of a continuum than a binary fact. The Iranian weapons program can be pretty dark but still let in USB keys, and a Twitter profile can be white hot but still go offline occasionally; and data can be exfiltrated through a multitude of methods.
Every single electrical and radiological device can be utilized to transmit data. What’s the baud of opening and closing HVAC vents and reading them from space satellites?
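The vent question has a rough quantitative answer. Assuming (purely for illustration) that a vent's open/closed state can be toggled and observed by satellite once every ten minutes, each observation carries at most one bit:

```python
# Toy covert-channel bandwidth estimate. Every number here is an assumption:
# one observable vent toggle per 10 minutes, 1 bit per observation.

seconds_per_symbol = 10 * 60
bits_per_day = 24 * 60 * 60 / seconds_per_symbol

print(bits_per_day)         # 144.0 bits per day
print(256 / bits_per_day)   # days to leak a 256-bit key: under 2
```

A hundred-odd bits a day sounds useless until you remember that cryptographic keys are only a few hundred bits long, which is exactly why the next paragraph's local filtering matters.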
Due to advances in AI, data can be analyzed and filtered locally so low bandwidth or high latency methods of communication aren’t the barriers that they once were. For example, an adversary can employ voice-to-text on an organization’s internal videos and only exfiltrate those that mention keywords critical to research.
Kaspersky Lab, the research group that uncovered Stuxnet to the public, has recently been classified by the United States Department of Homeland Security as likely having ties to the Russian FSB, the post-Soviet replacement of the KGB.
Over the past twenty years a multitude of everyday devices have turned into computers. Fridges are now computers. Watches are now computers. Even single-use sensors for ensuring correct concrete hardening are computers. Cars are computers.
After a cyber security breach at a very reputable company I was working with years ago, I asked myself: if even we can’t keep personal data secure, how are these automotive companies doing it?
Then Jeep got hacked, and over a million USB sticks had to be shipped to patch the automotive software.
Vaguely vindicated that there wasn’t some special secret to securing computers that they knew that we didn’t, I also felt uneasy. If hacking a Jeep is as straightforward as hacking a server, and servers are routinely breached, then where are all the hacked cars? It’s a bit like the Fermi Paradox.
The explanation is likely a mix of things. It may not occur to most security researchers that they could be just as effective hacking cars as they are hacking servers or workstations. There may be a lack of motivation among assailants capable of breaching these devices, since they may be accustomed to corporate espionage contracts where buyers pay for information, not control. They may also have trouble adapting security techniques that work on servers or personal computers to cars, because the attack surface is smaller.
Though unlikely, the answer could also be more sinister. Just as Uber quietly paid off their attacker, automotive companies could be quietly doing the same. Public companies may need to publish audited financial statements, but they don’t have to report on the scope or nature of their cyber breaches. If VW acted in bad faith to circumvent pollution regulations, why should we trust them to responsibly warn us about the growing nature of this threat?
Sometimes I think we’ve overloaded the word security. It’s used to indicate both a feeling as well as a reality. One person can feel safe, but actually be in grave danger, while another can feel terrified even though they’re under no risk of calamity.
Structural engineers limit how much deflection a beam or causeway undergoes under expected load, not because the deflections themselves are necessarily unsafe, but because they expect people to reliably report when things feel wrong; if large but otherwise safe deflections are routine, then large but unsafe deflections will go unreported. Structural engineers also use more conservative designs when systems cannot exhibit visible weakness before failure.
Computers hit with sophisticated malware show no sign of infection. Even if an attack requires multiple stages or intermediary computers—as the Stuxnet virus did—carefully written malware is invisible. All software, including computer worms and viruses, is just code, and code is just data, and data doesn’t change how we perceive the computer it resides in unless the software on the computer intends for us to perceive the change—which carefully written malware thwarts.
We think of Teslas as cars just like we think of an iPhone as a phone, but a more accurate account of reality is that they’re both just computers. One drives you around, but that’s basically the sum of the difference. No matter how strange it may sound to a layperson, to a software developer it’s so obvious that it isn’t worth discussing.
Recall that the third thing to understand about cyber security was that some classes of cyber attack compromise all instances of a device. WannaCry is just as possible for Teslas as it is for hospitals in the UK.
Autonomous vehicles have a second problem: they can be used as powerful, directed bombs. A fully charged Tesla traveling over 200 kilometers (125 miles) per hour that crashes into a chemical plant, electrical substation, oil line, or gas station will have an impact worse than the bombs terrorists set off in the Middle East.
Now combine those two thoughts. What do you call hundreds of thousands or millions of cars capable of taking out every gas station or power grid in the developed world?
Weapons of Mass Destruction.
Look, I know how it sounds, but please hear me out. I’m deeply concerned about the future and I see no other way to secure it other than to characterize it this way and to bring my research and suggestions forward.
The first atomic bomb dropped on Hiroshima killed eighty thousand people. Today, a comparable death toll could be achieved by compromising just one model of Tesla and killing only the drivers. These things aren’t in league with modern day nuclear weapons, but they are weapons of mass destruction just the same. The hazard from autonomous devices grows every day as we put increasing numbers of them on the road and in the air.
Attacks of this nature are not mere perturbations of general purpose computing systems or traditional tools of espionage.
I would also contend that they aren’t even the first class of cyber WMDs. Our formerly vulnerable industrial facilities and power plants probably fell into the same category, but that was prior to Stuxnet, and most could be both physically isolated and disconnected from the internet, unlike self-driving cars. It is also harder to hack multitudes of them, because they run software customized to their needs and it takes work to understand how to use compromised industrial equipment for evil. Lower hazard; easier risk mitigation.
True, the problem isn’t solved, but it’s being carefully watched and addressed by countries in the West. Autonomous devices are a much harder problem to tackle because there are going to be billions of them with the same set of technologies and we can’t stop attackers from physically inspecting them. I’m convinced this is why some voices in the United States talk of nuclear deterrence in response to cyber attack. They know the damage these weapons can cause.
One of the problems I’ve had over the past year and a half is how to communicate this idea without:
- Sounding like a crank.
- Giving ideas to terrorists or hostile foreign governments.
After this first occurred to me a year and a half ago, I reported it to Public Safety Canada, and a year later I met with my Member of Parliament to communicate my concerns and to get answers about what efforts we were putting forward to mitigate this threat. What I learned was that not only were there no regulations, there were no plans for them either. While I think my MP took my concerns seriously and did what she could, I came to understand that political will lags public outcry and that the best way of protecting the public from this threat is to make this concern public so we can change the way we think and talk about cyber security.
I’ve contacted most major autonomous car companies, including Tesla, BMW, GM, Toyota, Waymo, and others. With the exception of GM, my technical questions about the nature of their cyber security practices were completely ignored. The person I emailed at GM answered what she could, but there’s a limit to what a corporation can share, because obscurity is a valid defensive measure. It’s how passwords and authorization tokens work. But if decision makers in large automotive companies had a magic solution that startup software developers somehow missed, it should be front page news.
I also used this time to quietly talk with researchers from military think tanks, autonomous vehicle committees, university professors, cyber security experts, respected personal contacts, and members of the intelligence community. I’ve even talked to a member of the cyber warfare division of a major military. It was hard to convince some of them to accept my characterization of this threat as cyber WMDs, but only two of them disagreed with me overall. They rebutted that individual cars are much easier to hack, and that after cars are first used in a terror attack we will muster the political will to fix the problem. One security expert countered that unethical cyber experts lack incentive, but when I brought up cyber warfare arms sales and far out-of-the-money put options he reversed his position completely.
Though I do not agree with these counterarguments, I include them to present as full a picture as possible.
Our automotive corporations know they need autonomous driving technology to compete, and they also know they’re exposed. Their primary defense is the obscurity of their platforms. That individual vehicles need not service arbitrary, inbound internet traffic also helps. Most software developers haven’t thought about self-driving cars, cyber security, and national security at the same time, nor do they know the interfaces that the communication systems on vehicles expose. When I bring up the nature of this threat to strangers with data science stickers on their laptops, while pretending to be a layperson, they assure me that things are fine because there must be regulation. Even smart people, used to thinking from first principles, have trouble thinking critically about new ideas from people they don’t know.
The people at the NSA, CSE, and other agencies are working very hard to protect us, but neither they nor the automotive companies have anything special secretly up their sleeves that they’ve managed to hide from the millions of people involved in automotive manufacture. If it were easy for countries to coordinate with industry, then our phones would imperceptibly communicate GPS location data in high-pitched sound during calls to emergency services (911 in North America). Instead, we have hundreds of thousands of preventable deaths around the world each year. Rather than save lives, this simple technological method is instead used to track television ad viewership through hacked phones or otherwise monitored audio devices.
Cyber attack of this type is difficult, but it isn’t impossible. What this means is that we have time. These exploits are possible for the NSA, but they aren’t something that ISIL or the DPRK is currently capable of pulling off. We’re exposed, yes, but the sky isn’t falling. We have time to create the right regulations and international agreements if we can foster the political will and if we act with haste. However, if a cyber attack of this nature does happen before we fix things please do not take my words as proof that the event is a conspiracy by the US government. Liberal democracies don’t purposely kill their own civilians. Belief to the contrary is rooted in deranged ideology and black propaganda.
I have a number of ideas on how to approach a solution to this problem, but the most important one is this: Engineers and software professionals need to recognize that our politicians aren’t able to intelligently regulate autonomous devices and our corporations lack the incentives to completely protect us. A well-funded, open source effort with clear recommendations will be the most effective way of securing the future.
Software professionals need to encourage electrical and mechanical engineers to submit open source proposals that will help autonomous vehicle companies and governments protect the public. The OSINT and arms control communities need to help with drafts of international agreements to make cyber targeting of civilian systems that results in mass casualties during wartime illegal under international law.
Legal minds and translators, especially those with STEM backgrounds, can help craft sample regulations that less technical countries can use as a baseline. Even if you lack these skills, you can make a difference just by writing a pen and paper letter to your governmental representative or writing an article for your local newspaper about how important it is to work together to solve this problem.
Our trade agreements need to reflect the changing nature of our interdependence. China recently announced that foreign self-driving automotive companies like Google’s Waymo cannot photograph every square inch of their roads due to concerns over national security. But autonomous vehicles require both cameras and an internet connection to operate. What difference does it make? If followed to its logical conclusion, the regulation will have the effect of keeping foreign-made autonomous vehicles off of Chinese roads.
The Chinese either understand the threat that autonomous devices hold and want to limit the potential damage that foreign autonomous devices present or they are attempting to use national security as a fig leaf to mask their larger ambitions: Supremacy in the next automotive revolution.
China’s population is 40% greater than NATO’s; they’re modernizing, and we won’t have digital supremacy over them forever. Liberalized trade is great, but national security takes precedence. International cooperation on autonomous device regulation and symmetric trade agreements with harsh violation provisions are required. Otherwise, we cannot allow non-friendly states access to our autonomous device markets. Nor can we allow components from these countries in our autonomous devices. Today, China is both an opportunity and a geopolitical threat. Continued economic integration is important because friendship, peace, and, ultimately, alliance between China and the West is possible; but we need to proceed carefully. Without a cooperative China this challenge becomes an order of magnitude harder to solve.
Though my primary recommendation is a well organized open source initiative, I have my own specific suggestions after thinking about this over the past year and a half.
First allow me to address what I think won’t work:
Reliance on off-the-shelf antivirus software. Antivirus software is the perfect platform for foreign intelligence services because it requires heavy disk scans with decryption. It should be used only where absolutely required, and trusted only if it comes from a highly vetted corporation in an allied nation. Antivirus software trades one risk for another and should be treated as a very last resort. In a perfect world we wouldn’t use or need third-party antivirus, but in the short to medium term it may be useful.
Trying to air gap autonomous devices. Between aerial Bluetooth worms traveling across smart light bulbs, debugging devices at your local automotive repair shop, and plain old mistakes like vulnerable software-defined radios, air gaps can’t be ensured. Air gaps never stopped the CIA, so we shouldn’t count on them stopping the DPRK either. Air gapping was my first instinct a year and a half ago, and it took me a long time to lose; it works for power plants, but it won’t work for cars. It will just slow things down.
Code review by governmental agencies. The British may review the code of Chinese network gear in Banbury, Oxfordshire, but even they admit that if it ever came to total war with the Chinese they would likely have to replace it all with American or EU stuff. The chief deterrent the UK uses to discourage the widespread intentional abuse of Chinese networking gear is loss of trade and reputation. The British haven’t convinced themselves that Huawei switches or antennas are completely safe because they’ve inspected every byte of the source. Code is as slippery as an eel. What you analyze today is irrelevant tomorrow. Code is just data, and data changes. Plus, plenty of Huawei components passed code review only to have critical breaches in the field.
Trusting automotive companies to self-regulate. Equifax and Ashley Madison were secure until they weren’t. National security with WMDs isn’t something to entrust to corporations, and certainly not corporations from countries with a poor history on cyber security. Capitalism rewards invention and risk, not long tail risk mitigation.
Certifying individual components or vehicles. Detailed, prescriptive regulation and individual certification is too slow to keep pace with the fast changing nature of modern software development. Our most secure software companies update their code multiple times per day. This isn’t just a correlative artifact of well run tech companies—it’s causal. The first actor to find a vulnerability is often the organization responsible for the service or device, and getting the fix out as fast as possible is important. The only counterexample I can think of is NASA, but their interfaces and latency issues are out of this world.
Instead, regulations should be functional. Introduce maxims such as “data should never be readable by an intermediary network device” and “no action taken by the media computer should change state in the control computer.” This way fines and security bounties aren’t arbitrary, and automotive corporations can still compete on the speed of their technological advancement while concerns about gambling on security are addressed.
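Functional maxims have the virtue of being testable. As a sketch (all class and request names here are illustrative assumptions, not any real vehicle platform's API), the second maxim can be expressed as a gateway that only forwards whitelisted, read-only queries from the media computer to the control computer:

```python
# Sketch of a functional regulation as executable policy: "no action taken
# by the media computer should change state in the control computer."
# ControlComputer, the request names, and the gateway are all hypothetical.

READ_ONLY_REQUESTS = {"get_speed", "get_battery_level"}

class ControlComputer:
    def __init__(self):
        self.state = {"speed": 42, "battery_level": 0.8}

    def handle(self, request):
        # Serve a read of the named field, e.g. "get_speed" -> state["speed"].
        return self.state[request.removeprefix("get_")]

def media_gateway(control, request):
    """The only path from media to control: whitelisted, read-only queries."""
    if request not in READ_ONLY_REQUESTS:
        raise PermissionError(f"media computer may not issue {request!r}")
    return control.handle(request)

control = ControlComputer()
print(media_gateway(control, "get_speed"))  # 42
# media_gateway(control, "set_speed") raises PermissionError
```

A regulator can audit whether such an invariant holds without ever dictating which operating system, bus, or language the manufacturer uses, which is the whole point of functional rather than prescriptive rules.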
The insurance market should not be responsible for pricing wide-scale cyber attack. Insurers have neither the balance sheet nor the expertise to effectively estimate the risk. The Poisson distribution is wonderful, but it can’t be used to price or predict mass cyber attacks: it requires statistical independence, something vulnerabilities of this nature do not have. It is impossible to get precise measures of the probability or scale of black swan cyber events. Civil engineers design for one-in-a-hundred-year storms. What is a one-in-a-hundred-year cyber attack?
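The independence point is worth making concrete. With toy numbers (all assumed for illustration): insure 100,000 fleets, each breached with probability 0.001 per year. An independence-based model says "half the fleets breached in one year" is essentially impossible, while a single shared 0day makes "all fleets breached at once" a once-a-millennium event with the very same per-fleet probability:

```python
# Toy comparison of independent vs. correlated cyber loss models.
# All numbers are assumptions chosen for illustration.
import math

n, p = 100_000, 0.001
lam = n * p          # expected breaches per year under independence = 100
k = n // 2           # the catastrophic event: half the fleets breached

# Chernoff upper bound on P(Poisson(lam) >= k), kept in log10 space so it
# doesn't underflow: P <= exp(-lam) * (e * lam / k) ** k.
log10_tail = (-lam + k * (1 + math.log(lam / k))) / math.log(10)
print(f"independent model: P(half breached in a year) < 10^{log10_tail:.0f}")

# Correlated model: one shared 0day breaches ALL n fleets simultaneously,
# and that 0day is found and used with the same probability p each year.
print(f"correlated model:  P(all breached in a year)  = {p}")
```

The independent model prices the catastrophe at effectively zero; the correlated model prices it at a tail an insurer's balance sheet cannot absorb. Same marginal probabilities, wildly different worst cases, which is exactly why actuarial machinery built on independence fails here.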
Though blockchains could be useful to ensure code is consistent across a myriad of devices, they also delay safety critical updates. Also, some security techniques are harder to employ if code must be consistent across devices, so I lean against recommending them.
Waiting for autonomous vehicles to be used in small-scale attacks before crafting legislation. Though it is easier to hack a single computer in person, the resulting regulations will not be geared towards widespread attack. The concern should first be focused on our national security. Securing soft targets should be a distant secondary priority.
If terrorists can hack one car in person why wouldn’t they hack more before initiating an attack?
Only the wasteful would stop at one: hacking a single device locally to make sure the approach works is the natural pre-step to hacking many or all devices of the same type. A great insight I would have missed were it not for @mattlovesmath.
With what won’t work out of the way, allow me to share what I think might help:
Any multi-use, non-military device that can fly, drive, walk, rocket, or swim autonomously must allow for a standardized set of components designed for different contexts (underwater, high heat, etc.) that can be collectively reasoned about as a singular concept I term the safety module.
The power or fuel of the propulsion mechanism, and the computers and sensors that command the autonomous device must both connect through the safety module. If this is not possible due to the nature of the propulsion system (e.g., devices with solid fuel chemical rockets), then an emergency disabling or guidance system should be present instead.
The device must not be powerable or operable if the safety module is not present and the device should not be able to alter or remove the safety module of any device, including its own. Though military devices shouldn’t be subject to these regulations, they should follow as many of them as possible, and when not possible, strive to follow their intent in other ways.
Cyber-secure drones and killbots, ironically, are something that our militaries have a vested interest in acquiring, so they’re presently less of a concern. Plus, they can be safely turned off when they aren’t needed so fewer of them threaten us during peacetime. If they do employ safety modules they should employ layers of them so our defenses cannot be brought offline by breaching a single safety broadcasting system.
The UN should take steps to protect the right of every country to specify what safety modules are acceptable within its domain, and it should be against international law to allow the manufacture or sale of civilian autonomous devices that do not meet these standards. It should also be illegal to create products that only accept safety modules from a predefined set of nations. States will probably contribute to open source initiatives to establish a common working platform, but will also probably employ final tweaks based on their own understanding of the threats they face.
Close allies, like the United States and Canada, will either trust each other’s modules or work together to develop a jointly managed strategy. Every type of standardized safety module should have a preset expiry date, and the bays that the module fits within should be designed with this in mind so that upgrading the design of the safety module isn’t costly. After months of warning, safety modules reaching their expiry date should safely shut down the device they enable.
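The expiry behaviour could be as simple as a three-state check inside the module. A sketch; the 90-day warning window is an illustrative assumption, not a proposal:

```python
from datetime import date, timedelta

WARNING_PERIOD = timedelta(days=90)  # "months of warning"; the exact window is a policy choice

def module_status(expiry: date, today: date) -> str:
    """Return 'ok', 'warn' during the warning window, or 'shutdown' once expiry passes."""
    if today >= expiry:
        return "shutdown"
    if today >= expiry - WARNING_PERIOD:
        return "warn"
    return "ok"
```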
The manufacture or provision of counterfeit safety modules by states should be considered an Act of War and be illegal under international law. Individuals knowingly involved should be treated as felons and arrested, by Interpol if necessary.
Devices could be designed to accept multiple modules, any of which can initiate safety procedures. That way autonomous movement does not need to cease when changing jurisdictions, though care would need to be taken to ensure that the modules were truly independent. Any one safety module should be able to shut down or disconnect its host, even if other modules have been designed to act maliciously. This isn’t just hard-nosed realism: safety module redundancy is helpful since villains should never be able to negate protective measures through a safety module. Security here should be additive or, at worst, redundant.
The safety module must be able to communicate through multiple channels, including satellite, radio, and LTE. Pulsating light should act as a backup if the radio spectrum is jammed or unavailable, and submerged devices should have pre-planned fallback communication methods. By utilizing cryptographic keys and certificates, states can order autonomous devices through the safety module to heed emergency commands like “shut down in 5 seconds”, “cease software updates until further instruction”, or “revert to the fallback computer and safely cease operation”, and the safety module should parrot these commands through a secure one-way channel.
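A minimal sketch of that command relay, in Python. A real deployment would use certificates and asymmetric signatures; the shared HMAC key and the command names here are placeholder assumptions:

```python
import hashlib
import hmac
import json

STATE_KEY = b"key-issued-by-the-state"  # placeholder for a real certificate chain

def sign_command(command: dict, key: bytes = STATE_KEY) -> bytes:
    """Serialize and authenticate an emergency command."""
    payload = json.dumps(command, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).digest() + payload

def verify_and_relay(message: bytes, key: bytes = STATE_KEY) -> dict:
    """Return the command only if the tag checks out; refuse to relay otherwise."""
    tag, payload = message[:32], message[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("unauthenticated command; not relaying")
    return json.loads(payload)
```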
It should be assumed that the module may be relaying commands to a compromised control computer, so the module itself must be able to disengage both the power and the controlling computer during an emergency. The automotive navigation system should routinely communicate its intentions to national bodies to establish a geofence that, if breached, would initiate automated emergency procedures. Countries should also analyse the automated plans for statistical anomalies, like the number of devices simultaneously proximate to soft targets, to detect when an adversary may be subtly organizing the positions of autonomous devices to reduce attack detection and response time.
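The geofence breach and proximity anomaly checks might start out as simply as this sketch (the flat-earth distance math and the coordinates in the test are illustrative assumptions):

```python
import math

METRES_PER_DEGREE = 111_320  # rough length of one degree of latitude

def breaches_geofence(position, fence_center, radius_m):
    """True if a (lat, lon) position falls inside a circular no-go zone.
    Flat-earth approximation; fine at city scale, wrong near the poles."""
    dlat = (position[0] - fence_center[0]) * METRES_PER_DEGREE
    dlon = ((position[1] - fence_center[1]) * METRES_PER_DEGREE
            * math.cos(math.radians(fence_center[0])))
    return math.hypot(dlat, dlon) < radius_m

def simultaneously_proximate(positions, target, radius_m):
    """Count devices near a soft target at one instant; a spike across many
    targets is the kind of statistical anomaly worth a human look."""
    return sum(breaches_geofence(p, target, radius_m) for p in positions)
```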
Cryptographic tricks could be used to handle some of the negative implications for our privacy. But the reality is that regardless of the measures taken, autonomous devices are going to be easy for most states to track, just as modern phones are. The tools to fight abuse of the safety module by illiberal states should be economic sanctions and armed conflict.
The most straightforward way of securing the safety module would be to employ both a secured computer within the safety module and a fallback to an unchangeable ASIC, each with their own set of cryptographic keys. Either should be able to power down the device in the event that the other is compromised. The secured computer of the safety module should communicate with the main computer over a shielded, private, encrypted, one-way (i.e., connectionless) bus, and electrical diodes should enforce this directive. Though I’m not fully certain here: radio broadcast seems jammable and forgeable (sans frequency hopping), but a bus connected to the control computer seems potentially dangerous.
If any computer must communicate with the safety module, this should be accomplished by relaying the message to the manufacturer through the internet or with a predetermined physical state change to the safety module, like the flip of a physical switch. For example, if the media computer detects that the control computer has been compromised while the automobile is in an area lacking an internet connection, it should physically flip an emergency shutdown switch to safely cease vehicle operation.
Safety modules should have no ports and no network connection to debugging devices or upgrade servers. The code that commands them should not be alterable at the hardware layer. Their job is simple: relay commands and initiate emergency shutdowns. They should be designed to be regularly recyclable, and should be physically replaced in secure, government-run facilities when requiring an upgrade or, when necessary, at border crossings. Exceptions can be made in extreme situations, like remote communities or, temporarily, during natural disasters. Very large bounties and ceremonial honors should be awarded to security researchers that bring forward critical vulnerabilities for safety modules.
Standardized, redundant systems are another safeguard. If power is removed from the primary computer of an autonomous device then the vehicle should still be able to surface, land, or park safely. Hard shutdowns should always be available to stop weaponization, but it shouldn’t be the first resort during a cyberwar. For planes my instincts say that we should either plan to keep pilots around or have flight attendants trained as backup pilots, but this may limit options for smaller autonomous aircraft. Perhaps they should have redundant fallback computers that do nothing but glide?
In order for a device to go faster than a preset limit, it should request permission to do so via the internet, and the capability should be enabled by the module for a predefined period. If the device is traveling to an area without an internet connection, it should request this capability beforehand. That way our autonomous devices can get to the hospital quickly during an emergency, but governments can limit how many simultaneous high-speed vehicles are allowed. A car moving at half the speed has one quarter of the kinetic energy and is less likely to have a battery explosion on contact. GPS signals are jammable and forgeable, so use multiple methods to determine location.
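The quadratic relationship between speed and kinetic energy is easy to verify:

```python
def kinetic_energy_j(mass_kg: float, speed_ms: float) -> float:
    """KE = 1/2 * m * v^2, so halving speed quarters the energy."""
    return 0.5 * mass_kg * speed_ms ** 2

# A roughly 1500 kg car at 30 m/s (108 km/h) versus the same car at half that speed.
full = kinetic_energy_j(1500, 30)
half_speed = kinetic_energy_j(1500, 15)
```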
Isolating the control computer of our autonomous devices is also important. For example, the automotive media computer should not be connected to the control computer. Today, many automobiles communicate through a Controller Area Network bus, or CAN bus. It is an unauthenticated, unencrypted, multi-master bus. Around the world, software developers reading the previous sentence just spat out their coffee.
Do not allow control computers to read data directly from the CAN bus. If we must get state from the CAN bus then it should be through an intermediary component that converts signals to one of a finite series of states, enums, or bounded analog voltages. Create an international agreement to sunset the CAN bus and replace it with something more secure. The replacement components and On-Board Diagnostics (OBD) systems should be viewed as methods for a patient attacker to quietly gain control of thousands of vehicles and should be rethought, replaced, or removed completely.
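A sketch of what such an intermediary might look like: raw CAN payload bytes are mapped onto a closed set of states, and anything outside the whitelist is dropped rather than forwarded (the gear states are an illustrative assumption):

```python
from enum import Enum

class GearState(Enum):
    # Hypothetical closed set of states the control computer may receive.
    PARK = 0
    REVERSE = 1
    NEUTRAL = 2
    DRIVE = 3

def sanitize_gear_frame(raw_byte: int) -> GearState:
    """Map a raw CAN payload byte onto the closed set of states;
    anything outside the whitelist is rejected rather than passed through."""
    try:
        return GearState(raw_byte)
    except ValueError:
        raise ValueError(f"unexpected CAN payload {raw_byte:#x}; dropping frame") from None
```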
Some of these proposals, like the previous one, will make repairs by small shops hard or impossible, but the unfortunate reality is that it makes little difference decades from now: vehicle maintenance is also going to be automated, and automated navigation means automotive collisions will be far less likely. It’s the primary reason automating vehicles is a good thing. Electric vehicles also inherently require less routine maintenance because they have less dramatic engines and drivetrains. We’re lucky in one sense: this transition is happening at the same time as the transition to fully electric vehicles. Let’s approach it as an opportunity to wind down small shops where we can instead of getting them to retool just before going bankrupt.
We should solve social problems from automation with paid training programs for new jobs and we should institute a basic income to make sure nobody gets completely left behind. We have tools to finance this, like temporarily higher income taxes after paid training programs and targeted taxes on corporate income from patents on AI and robotics. Corporations around the world benefit from our collective investments in universities and infrastructure, and everyone should benefit from automation. International treaties should set minimum effective tax rates for corporations and high net worth individuals. This 2011 piece by Jeffrey Sachs summarizes my views near perfectly.
Our designs should take a note from Apple’s Secure Enclave and mandate that critical tasks, like updating the control computer’s software, be accomplished in a similar way. The secure enclave should also have fallback communication methods to safely disable the car during a critical cyber vulnerability. Ideally, the enclave should be designed with multiple sets of chips from different manufacturers to mitigate damage from espionage or hardware vulnerabilities, like Intel’s Meltdown.
The network shouldn’t be trusted, and DNS and certificate chains shouldn’t be either. Employ IP pinning and certificate pinning with fallback strategies. It shouldn’t be possible to send or receive UDP packets over IP through any networked device; UDP over untrusted gear isn’t secure and we should stop pretending that it is. Support both IPv4 and IPv6 for update servers so that if a vulnerability is discovered in either, the safety module or enclave can instruct the update process to utilize one over the other.
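Certificate pinning can be as blunt as comparing a SHA-256 fingerprint baked into the firmware against whatever the server presents, independent of DNS or the CA chain. A sketch, with a hypothetical host name and placeholder “certificate” bytes:

```python
import hashlib

# Fingerprints baked into the firmware at manufacture. The host name is
# hypothetical and the pin below is just SHA-256 of the placeholder bytes b"test".
PINNED_SHA256 = {
    "updates.example-vendor.com":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def certificate_is_pinned(host: str, der_cert: bytes) -> bool:
    """Compare the presented certificate's SHA-256 fingerprint against the
    baked-in pin, independent of what DNS or the CA chain claim."""
    return PINNED_SHA256.get(host) == hashlib.sha256(der_cert).hexdigest()
```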
If the media computer supports Bluetooth connections or connections to untrusted devices like phones, then the media computer and its components should be completely isolated from any components involved in navigation or control. Create a separate navigation computer with its own user interface so attacks cannot be pre-staged.
DSRC (Dedicated Short-Range Communications) should be similarly redesigned and isolated. I haven’t researched this one in great detail, but with that proviso: from the surface it looks difficult because the control computer needs to assume the component responsible for DSRC has been compromised, yet the component needs to communicate with the control computer, so it’s an ideal surface for attack.
A partial mitigation I can think of is to split its function across two components: the first connects to the control computer and communicates a finite series of states related to multi-vehicle cruise control and collision avoidance; the second connects to the media computer for things like automated electronic tolls. Compromise of this component should come with a large bounty to encourage less hackable solutions, like stateless ASICs. Even so, research and ideas are needed here.
Use HTTP/2 with TLS 1.2 or greater and ensure ciphers and configurations employ forward secrecy. Nginx and other web servers get the defaults wrong here, so employ care and regularly scan endpoints. Do not rely solely on HTTPS, no matter how secure your setup seems. Ciphers aren’t perfect, and misconfigurations, evil certificates, and protocol downgrade attacks are too easy. Use client-side encryption in addition to HTTPS and use really big keys.
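In Python, for instance, a client can refuse weak protocol versions and non-forward-secret ciphers up front. A sketch only; a production configuration needs auditing against current guidance:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """A TLS client context that refuses anything below TLS 1.2 and restricts
    ciphers to ECDHE suites, which provide forward secrecy."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
    return ctx
```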
Ship every car with its own massive one-time pad (OTP). Create the OTP with multiple secure random sources on computers never connected to the internet in the secure, guarded facility used for code signing. It should not be physically possible to read the same bit twice from the OTP. The Snowden leaks taught us that layered encryption is what the CIA does to communicate, so do the same. Final code review should be on very simple, secure computers that have never been connected to the internet and are as stateless as possible. These computers should also scan codebases for irregular characters and should communicate through a permanent, physical, auditable channel, like write-once disks or giant printed QR codes. Otherwise it’s just another easy hop for the talented.
This layered approach will ensure that compromise of one layer does not enable MITM attacks.
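The read-once property of the pad can be mirrored in software by destroying key material as it is consumed. A toy sketch; a real pad would come from hardware randomness and tamper-resistant storage:

```python
class OneTimePad:
    """Read-once pad: key material is destroyed as it is consumed, so the
    same bit can never be read twice (mirroring the hardware requirement)."""

    def __init__(self, pad: bytes):
        self._pad = bytearray(pad)
        self._offset = 0

    def encrypt(self, plaintext: bytes) -> bytes:
        if self._offset + len(plaintext) > len(self._pad):
            raise RuntimeError("pad exhausted; pad material must never be reused")
        out = bytes(p ^ k for p, k in zip(plaintext, self._pad[self._offset:]))
        for i in range(self._offset, self._offset + len(plaintext)):
            self._pad[i] = 0  # destroy consumed key material
        self._offset += len(plaintext)
        return out

    decrypt = encrypt  # XOR is its own inverse, using a second copy of the pad
```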
Never SSH into or otherwise allow access via programmer terminal to any autonomous vehicle carrying passengers or any outside of secure testing facilities. This includes cars under development. Since working on this article I’ve heard a couple of very reckless stories, but unfortunately nobody is willing to publicly or anonymously breach their non-disclosure agreement.
If you want to come forward, leave your electronics on their chargers at home and hand me a handwritten note. I’ll vet you, then connect you to an attorney that takes cyber security seriously. There are bad managers everywhere, so don’t just irresponsibly Snowden your employer. Whistleblowing should be safe, measured, ethical, and, if at all possible, legal.
Employ cryptographic signing and encryption on everything possible, given performance and safety considerations. This should include both code and data in volatile memory. All updates to the control computer should be encrypted, signed, and double checksummed. The checksum should be shared with governments and broadcast to the safety module. If the control computer cannot verify the software update signature or checksums with the safety module, the control computer should shut down the vehicle safely. The safety module should disconnect the power from the computer if it does not do so.
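The verification step might look like this sketch; the HMAC stands in for a real asymmetric signature scheme, and the key and digest choices are illustrative assumptions:

```python
import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-key"  # placeholder; production would use asymmetric signatures

def checksums(update: bytes) -> tuple:
    """Two independent digests, per the double-checksum suggestion above."""
    return hashlib.sha256(update).hexdigest(), hashlib.sha3_256(update).hexdigest()

def verify_update(update: bytes, signature: bytes, expected: tuple) -> bool:
    """Accept the update only if the signature and both checksums match;
    a False return should trigger a safe shutdown."""
    good_sig = hmac.compare_digest(
        signature, hmac.new(VENDOR_KEY, update, hashlib.sha256).digest())
    return good_sig and checksums(update) == expected
```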
All code on any device connected to the control or navigation computer should have a high degree of test coverage, and these tests and other techniques, like fuzzing, should be run before any software update to any component.
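Even a naive fuzzer catches the class of bug we care about here: inputs that crash a parser instead of being cleanly rejected. A sketch:

```python
import random

def fuzz(parser, rounds=1000, seed=0):
    """Feed random byte strings to a parser; any exception other than a
    clean ValueError rejection is recorded as a finding."""
    rng = random.Random(seed)
    findings = []
    for _ in range(rounds):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parser(blob)
        except ValueError:
            pass  # the parser rejected malformed input, as it should
        except Exception as exc:
            findings.append((blob, exc))
    return findings
```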
Bug bounties, like those employed by GM, Tesla, and others, are a good start, but they do not pay well enough to fund non-casual security research. Government mandated and internationally agreed upon rewards should be set, and payments should scale with the number of autonomous devices in operation. Network attacks and social engineering should be allowed so our national security isn’t compromised because IT trusts compromised laptops that look like they’re from Dell. Other corporate espionage techniques, like moles, should remain illegal and outside the scope of the bounty. Bounties for real remote control should range from $100 to $10,000 USD per device, depending on factors like max attainable speed and geofence breachability.
Harshly fine and imprison those that manufacture, sell, or provision counterfeit automotive electronics. Extend import and manufacturing regulatory bodies to routinely inspect automobiles and components. Use international treaties to regulate ocean-bound autonomous devices.
Take a note from what the USA did during the past twenty years and dramatically increase funding for COMSEC and cyber operations. Expand or create cyber reserve programs to cross-pollinate skills between the armed forces and the private sector. Try to find a way to engage cannabis-using cyber experts through organized challenges or well funded think tanks, even if it isn’t politically possible to have them join the intelligence services or armed forces. Update NATO military spending guidelines and those of other alliances to include a minimum on cyber operations, because defense is a collective benefit. Stop stigmatizing addiction. Create targeted programs to help drug users transition to sobriety if they want to help but need to quit.
Fund research into chips that are specialized for security so vulnerabilities like Meltdown and Spectre are less likely. Create regulations that encourage safer programming languages, like Haskell and Rust, over those that have unsafe or undefined behaviour. Sunset unsafe languages, like C and C++, for security critical devices via regulation over the next decade. In the interim, devices should come with a cyber safety score similar to metrics for other devices, like miles per gallon or kilowatt-hours per year. Trust non-profits, like Mozilla, and open source initiatives over proposals from automotive corporations to help craft these. Include detection of violations of these regulations in bounties.
Build systems to detect reverse engineering of security critical code and collaborate with intelligence agencies to filter friendly security researchers from potential hostiles or cyberarms manufacturers. Investigate canaries for this, a suggestion by security researcher @collinrm.
There is a silver lining to all the work we have to do: the nature of this threat may finally bring about the political will, economic incentives, and ideas we need to finally secure our computer systems. We may wake up decades from now and talk with amazement about the cyber events of the early 2000s like we do of the chemical fires on the rivers of the mid-1900s.
Long articles are hard to naturally spread, so please help me by sharing this. For updates please follow me on Twitter. If you have a technical background and would like to translate this article into your native language, please contact me.