Last month, a massive ransomware attack hit computers around the globe, and the government is partly to blame.
The malicious software, known as "WannaCry," encrypted files on users' machines, effectively locking them out of their information, and demanded a payment to unlock them. This attack spread rapidly through a vulnerability in a widely deployed component of Microsoft's Windows operating system, and placed hospitals, local governments, banks, small businesses, and more in harm's way.
This happened in no small part because of U.S. government decisions that prioritized offensive capabilities (the ability to execute cyberattacks for intelligence purposes) over the security of the world's computer systems. The decision to make offensive capabilities the priority is a mistake. At a minimum, it is a decision that should be reached openly and democratically. A bill has been proposed to try to improve oversight of these offensive capabilities, but oversight alone may not address the risks and perverse incentives created by the way they work. It's worth unpacking the details of how these dangerous weapons come to be.
Why did it happen?
All complex software has flaws (mistakes in design or implementation), and some of these flaws rise to the level of a vulnerability, where the software can be tricked or forced to do something that it promised its users it would not do.
For example, consider one computer, running a program designed to receive files from other computers over a network. That program has effectively promised that it will do no more than receive files. If it turns out that a bug allows another computer to force that same program to delete unrelated files, or to run arbitrary code, then that flaw is a security vulnerability. The flaw exploited by WannaCry is exactly such a vulnerability in part of Microsoft's Windows operating system, and it had existed, unknown to most people, for many years, possibly as far back as the year 2000.
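To make that concrete, here is a minimal sketch in C of the kind of flaw described above. This is not the actual Windows code (the real bug was in Microsoft's SMB file-sharing component), and the routine and its names are hypothetical; but the pattern it shows, trusting a length supplied by the remote peer, is a classic way a file-receiving program can be made to corrupt memory and ultimately run an attacker's code.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define CHUNK_MAX 512

/* Hypothetical file-receiving routine: 'data' and 'claimed_len' both
 * arrive straight off the network, fully controlled by the sender. */
static void handle_chunk(const uint8_t *data, uint32_t claimed_len) {
    uint8_t buf[CHUNK_MAX];

    /* BUG: trusts the sender's claimed length. If claimed_len exceeds
     * CHUNK_MAX, memcpy writes past the end of 'buf', corrupting
     * adjacent memory: the raw material for remote code execution. */
    memcpy(buf, data, claimed_len);

    /* A correct version would reject oversized chunks first:
     *   if (claimed_len > sizeof buf) return;                   */
    printf("stored %u bytes\n", (unsigned)claimed_len);
}

int main(void) {
    uint8_t benign[4] = {1, 2, 3, 4};
    handle_chunk(benign, sizeof benign);  /* a well-behaved sender */
    /* A malicious peer would instead claim a length far larger than
     * CHUNK_MAX, breaking the program's promise to only receive files. */
    return 0;
}
```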
When researchers discover a previously unknown bug in a piece of software (often called a "zero day"), they have several options:
- They can report the problem to the supplier of the software (Microsoft, in this case).
- They can write a simple program to demonstrate the bug (a "proof of concept") to try to get the software supplier to take the bug report seriously.
- If the flawed program is free or open source software, they can develop a fix for the problem and supply it alongside the bug report.
- They can announce the problem publicly to bring attention to it, with the goal of increasing pressure to get a fix deployed (or getting people to stop using the vulnerable software at all).
- They can try to sell exclusive access to information about the vulnerability on the global market, where governments and other organizations buy this information for offensive use.
- They can write a program to aggressively take advantage of the bug (an "exploit") in the hopes of using it later to attack an adversary who is still using the vulnerable code.
Note that these last two actions (selling information or building exploits) are at odds with the first four. If the flaw gets fixed, exploits aren't as useful and knowledge about the vulnerability isn't as valuable.
Where does the U.S. government fit in?
The NSA didn't develop the WannaCry ransomware, but they knew about the flaw it used to compromise hundreds of thousands of machines. We don't know how they learned of the vulnerability: whether they purchased knowledge of it from one of the specialized companies that sell information about software flaws to governments around the world, or from an individual researcher, or whether they discovered it themselves. It is clear, however, that they knew about its existence for many years. At any point after they learned about it, they could have disclosed it to Microsoft, and Microsoft could have released a fix for it. Microsoft releases such fixes, called "patches," on a roughly monthly basis. But the NSA didn't tell Microsoft about it until early this year.
Instead, at some point after learning of the vulnerability, the NSA developed or purchased an exploit that could take advantage of it. This exploit, a weapon made of code specific to this particular flaw and codenamed "ETERNALBLUE," allowed the NSA to turn their knowledge of the vulnerability into access to others' systems. During the years that they had this weapon, the NSA most likely used it against people, organizations, systems, or networks that they considered legitimate targets, such as foreign governments or their agents, or systems those targets might have accessed.
The NSA knew about a disastrous flaw in a widely used piece of software, and had code to exploit it, for years without trying to get it fixed. In the meantime, others may have discovered the same vulnerability and built their own exploits.
Any time the NSA used their exploit against someone, they ran the risk that their target would notice the activity by capturing network traffic, potentially giving the target knowledge of an incredibly dangerous exploit and the unpatched vulnerability it relied on. Once someone had a copy of the exploit, they could change it to do whatever they wanted by swapping its "payload," the part of the overall malicious software that performs actions on a targeted computer. And this is exactly what we saw happen with the WannaCry ransomware. The NSA payload (a software "Swiss Army knife" codenamed DOUBLEPULSAR that allowed NSA analysts to perform a variety of actions on a target system) was replaced with malware with a very specific purpose: encrypting all of a user's data and demanding ransom.
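This exploit-versus-payload split can be pictured with a toy sketch in C. Everything here is invented for illustration (the function names and hostnames are hypothetical, and it bears no relation to the actual ETERNALBLUE or DOUBLEPULSAR code); the point is only that the delivery mechanism, once it gains execution on a target, simply hands control to whatever payload it was given, which is why replacing a reconnaissance backdoor with ransomware is straightforward.

```c
#include <stdio.h>

/* A "payload" is just the code that runs after access is gained. */
typedef void (*payload_fn)(const char *target);

static void backdoor_payload(const char *target) {
    printf("[%s] install a remote-access toolkit for later use\n", target);
}

static void ransomware_payload(const char *target) {
    printf("[%s] encrypt the user's files and demand payment\n", target);
}

/* The "exploit" handles breaking in; it neither knows nor cares
 * what the payload does once it is running. */
static void deliver(const char *target, payload_fn payload) {
    /* ... trigger the vulnerability, gain code execution ... */
    payload(target);
}

int main(void) {
    deliver("host-a.example", backdoor_payload);    /* one use of the exploit */
    deliver("host-b.example", ransomware_payload);  /* same exploit, new payload */
    return 0;
}
```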
At some point, before WannaCry hit the general public, the NSA learned that the weapon they had developed and held internally had leaked. Sometime after that, someone alerted Microsoft to the problem, kicking off Microsoft's security response processes. Microsoft normally credits security researchers by name or "handle" in their security updates, but in this case, they are not saying who told them. We don't know whether the weapon leaked earlier, of course, or whether anyone else had independently discovered the vulnerability and used it (with this particular exploit or another one) to attack other computers. And neither does the NSA. What we do know is that everyone in the world running a Windows operating system was vulnerable for years to anyone who knew about the vulnerability; that the NSA had an opportunity to fix that problem for years; and that they didn't take steps to fix the problem until they realized that their own data systems had been compromised.
A failure of information security
The NSA is ostensibly responsible for protecting the information security of America, while also being responsible for offensive capabilities. "Information Assurance" (securing critical American IT infrastructure) sits next to "Signals Intelligence" (surveillance) and "Computer Network Operations" (hacking/infiltration of others' networks) right in the agency's mission statement. We can see from this fiasco where the agency's priorities lie.
And the NSA isn't the only agency charged with keeping the public safe while putting us all at risk. The FBI also hoards knowledge of vulnerabilities and maintains a stockpile of exploits that take advantage of them. Its mission statement says that it works "to protect the U.S. from terrorism, espionage, cyberattacks…." Why are these agencies gambling with the safety of public infrastructure?
The societal risks of these electronic exploits and defenses can be seen clearly by drawing a parallel to the balance of risk with biological weapons and public health programs.
If a disease-causing micro-organism is discovered, it takes time to develop a vaccine that prevents it. And once the vaccine is developed, it takes time and logistical work to get the population vaccinated. The same is true for a software vulnerability: it takes time to develop a patch, and time and logistical work to deploy the patch once developed. A vaccination program may not ever be universal, just as a given patch may not ever be deployed across every vulnerable networked computer on the planet.
It's also possible to take a disease-causing micro-organism and "weaponize" it, for example by expanding the range of temperatures at which it remains viable, or just by producing delivery "bomblets" capable of spreading it rapidly over an area. These weaponized germs are the equivalent of exploits like ETERNALBLUE. And a vaccinated (or "patched") population isn't vulnerable to the bioweapon anymore.
Our government agencies are supposed to protect us. They know these vulnerabilities are dangerous. Do we want them to delay the creation of vaccine programs, just so they can have a stockpile of effective weapons to use in the future?
What if the CDC were, in addition to its current mandate of protecting "America from health, safety and security threats, both foreign and in the U.S.," responsible for designing and stockpiling biological weapons for use against foreign adversaries? Is it better or worse for the same agency to be responsible both for defending our society and for keeping it vulnerable? What should happen if some part of the government or an independent researcher discovers a particularly nasty germ: should the CDC be informed? Should a government agency that discovers such a germ be allowed to consider keeping it secret so it can use it against people it thinks are "bad guys," even though the rest of the population is vulnerable as well? What incentive does a safety-minded independent researcher have to share such a scary discovery with the CDC if he or she knows the agency might decide to use the dangerous information offensively instead of to protect the public health?
What if a part of the government were actively weaponizing biological agents, figuring out how to make them disperse more widely, or crafting effective delivery vehicles?
These kinds of weapons cannot be deployed without some risk that they will spread, which is why bioweapons have been prohibited by international convention. Someone exposed to a germ can culture it and produce more of it. Someone exposed to malware can make a copy, inspect it, modify it, and re-deploy it. Should we accept this kind of activity from agencies charged with public safety? Unfortunately, this question has not been publicly and fully debated by Congress, despite the fact that several government agencies stockpile exploits and use them against computers on the public network.
Value judgments that should not be made in secret
Defenders of the FBI and the NSA may claim that offensive measures like ETERNALBLUE are necessary when our government is engaged in espionage and warfare against adversaries who might also possess caches of weaponized exploits for undisclosed vulnerabilities. Even the most strident supporters of these tactics, however, must recognize that in the case of ETERNALBLUE and the underlying vulnerability it exploits, the NSA failed as a steward of America's, and the world's, cybersecurity by failing to disclose the vulnerability to Microsoft, so that it could be fixed, until after their fully weaponized exploit had fallen into unknown hands. Moreover, even if failing to disclose a vulnerability is appropriate in a small subset of cases, policy around how these decisions are made should not be developed purely by the executive branch behind closed doors, insulated from public scrutiny and oversight.
A bipartisan group of U.S. senators has introduced a bill, the Protecting Our Ability to Counter Hacking (PATCH) Act, which would create a Vulnerabilities Equities Review Board with representatives from DHS, NSA, and other agencies to assess whether any known vulnerability should be disclosed (so that it can be fixed) or kept secret (thereby leaving our communications systems vulnerable). If the government plans to retain a cache of cyberweapons that may put the public at risk, ensuring that there is a permanent and more transparent deliberative process is certainly a step in the right direction. However, it is only one piece of the cybersecurity puzzle. The government must also take steps to ensure that any such process fully considers the duty to secure our globally shared communications infrastructure, has a strong presumption in favor of timely disclosure, and incentivizes developers to patch known vulnerabilities.
This will not be the last time one of these digital weapons leaks or is stolen, and one way to limit the damage any one of them causes is by shortening the lifetime of the vulnerabilities they rely on.