On the Digital Frontlines

Kim Zetter | April 2018

Perspective

The Future of Weapons Requires Cyber Vigilance

The Global Operations Center is the nerve center for United States Strategic Command, which oversees US Cyber Command. “There are a lot of questions and challenges still to be worked out around the use of offensive cyberweapons. What constitutes an act of digital warfare is one of the most basic,” says journalist Kim Zetter.

US Strategic Command

With no concrete definition of a cyberattack or what might warrant retaliation in response to one, experts say we are only beginning to see the potential for cyberattacks and other exploits to disrupt critical systems and operations. Senior reporter Kim Zetter responds to questions on potential weapons systems vulnerabilities that could present new risks.

The Stanley Center is exploring the potential consequences of cyber vulnerabilities and intrusions in nuclear weapons systems. After publishing Cybersecurity of Nuclear Weapons Systems: Threats, Vulnerabilities, and Consequences with Chatham House in January 2018, Stanley Center Nuclear Policy Program Associate Danielle Jablanski asked Zetter to weigh in on cybersecurity and state and nonstate capabilities.

Zetter spent 13 years reporting for Wired. She has broken numerous stories over the years and has been a frequent guest on TV and radio, including CNN, ABC News, NPR, PBS’s Frontline and NewsHour, and Public Radio International’s Marketplace. She is the author of Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon, which details the use of a computer worm designed to sabotage Iran’s uranium enrichment program.

Jablanski: You have distinguished yourself as one of the nation’s top security reporters and have been covering cybersecurity, specifically, for quite some time. What is catching your interest now, and are there new challenges or opportunities as a journalist investigating these kinds of stories?

Zetter: What’s interesting now is how predictions made after the discovery of Stuxnet are finally proving true. In 2010, when Stuxnet was uncovered and it became known as the first digital attack aimed at causing physical destruction, a lot of people in the industrial control system community feared it would open the gates to a slew of copycat attacks targeting critical infrastructure. And it surprised everyone when this didn’t occur. But we’re seeing the first stages of such attacks now—with the attacks that targeted Ukraine’s power grid in 2015 and 2016 and the more recent attack in Saudi Arabia that targeted a safety system. These attacks are warm-ups that don’t fully exploit what such attacks are capable of accomplishing, but they forecast what we’ll see in the future. We can expect that these kinds of assaults will grow in number and sophistication. I also expect that in the near future we’ll begin to see evidence of data integrity attacks—where data is altered in such a way that critical systems and information are no longer trustworthy. This could be the surreptitious alteration of software code before it’s distributed (think changes to weapons systems or accounting software that alter calculations, leading to death or financial loss) or the alteration of financial or voting data. This may already have occurred, and we just don’t know it.

Jablanski: States are beginning to recognize the emergence of offensive cyberwarfare capabilities. What potential challenges and vulnerabilities to weapons systems are most relevant in your opinion?

Zetter: There are a lot of questions and challenges still to be worked out around the use of offensive cyberweapons. What constitutes an act of digital warfare is one of the most basic. Every time there is a cyberattack involving a nation-state, we have kneejerk reactions from lawmakers calling it war. We need to be clear about our use of language and not hype attacks for the sake of political gain. Aside from that, there are still questions around the government’s use of zero-day exploits and the need for independent oversight around what gets retained for offensive use and what gets disclosed. There has been recent talk that the government plans to make the process a little more transparent and accountable, but we’ve seen no evidence of this yet. And of course the WannaCry attack last year has shone a light on the real dangers that can occur when governments fail to secure their cache of digital weapons.

With regard to weapons systems specifically, I mentioned earlier the concern about data integrity, which could cause software-controlled guns, for example, to shoot off target. In the case of purely digital weapons, these are even more difficult to control; unless you’re skilled at creating a virus or worm that is precise and targeted and won’t cause collateral damage (and you do sufficient testing to demonstrate that), you risk having destructive worms rampage through networks, causing unintended consequences. That can happen unintentionally with bugs you don’t catch. But imagine someone infiltrating your development environment for creating covert digital weapons and altering code so that your attacks have unintended consequences that lead your victim to retaliate with war. These are extreme circumstances, but history has shown that when you don’t plan for extreme circumstances, you get surprised by them.

Jablanski: Over the next 30 years, the United States plans to spend more than $1 trillion upgrading nearly every piece of its nuclear weapons systems—everything from communications and satellites to delivery systems. With increased digitization, what concerns does this raise for you from a cybersecurity standpoint?

Zetter: Anytime you digitize systems you make them more complex and you create new possibilities for vulnerabilities and new avenues for attack that didn’t exist before. When industrial control systems were analog systems, you needed to physically destroy the wiring or equipment in person or with an aerial bomb. With digital systems, you now have to worry about remote attacks. These can occur over the Internet if the systems are connected online in any way, or connected to other systems that are online, or via removable media such as USB sticks if the systems are air-gapped from [not connected to] the Internet.

Obviously the security of nuclear weapons is even more critical than that of the electric grid, so the stakes are much higher when you’re talking about introducing potential vulnerabilities into these systems by digitizing them. And it’s even more important to have a controlled supply chain. I haven’t seen any plans for this conversion, so it’s not clear to me exactly what it’s going to involve.

Supply-chain attacks could include things like logic bombs—malicious code designed to trigger at a future date—that get implanted in chips and hardware during the manufacturing stage or en route during shipment. These aren’t theoretical attacks. Documents released by Edward Snowden show the NSA [National Security Agency] and CIA [Central Intelligence Agency] engaging in “interdictions”—intercepting routers, laptops, and other hardware on their way to end users and secretly installing spy code in them or some other malicious code.

The US has a history of digitizing systems without thinking through the potential consequences. Smart meters are one example: the government subsidized the cost of rolling out smart meters to homes and businesses because it would save utilities time and money if they could turn electricity on and off remotely and read electricity and gas meters without having to send workers out to neighborhoods. But they did this without conducting a security-impact assessment, installing systems that remote hackers could use to create blackouts across entire neighborhoods. It’s the government’s responsibility anytime it modernizes something—nuclear weapons systems in particular—to produce sound impact assessments that lay out the potential security risks and explain how those will be addressed.

Jablanski: While states’ cybercapabilities continue to be the most sophisticated, what have you seen in terms of the role of nonstate cyberthreats to classified networks and systems vital for national security?

Zetter: Governments don’t like to admit when their classified networks are infiltrated, so I don’t think we have a clear view of what has occurred in the past or is currently happening in that realm. But in general, state and nonstate actors don’t have to be sophisticated to be effective. We saw this with the agent.btz infection that targeted military systems, including classified ones. The infection reportedly began with a USB stick that a soldier picked up at an Internet cafe and put into his work computer. Security vigilance is hard, and people let their guard down or violate security rules. There will always be ways to get into systems—even if you’ve developed means to keep out the malicious outsider, the insider threat is always going to be a problem.

As for the sophistication of nonstate actors, governments have to understand that nonstate actors learn from state actors. In the early days of hacking it was the other way around: the government learned from nonstate actors. People working for the NSA, CIA, and other agencies attended hacker conferences to learn about vulnerabilities and techniques they could use. Hacker knowledge trickled up from the lower levels. Now the flow has reversed. Hackers are learning from nation-state attacks and co-opting their methods and tools. Government attack methods trickle down, and targeted attacks and methods have the potential to become widespread.

Jablanski: Your book, Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon, detailed how a computer worm effectively sabotaged Iran’s uranium enrichment program. I imagine you come up against a lot of barriers when looking into any cyberthreats. How do you get around classification and other roadblocks to getting the information you need for reporting?

Zetter: Covering national security has always been a challenge for reporters. But the methods for obtaining information haven’t changed; just the tools have. You still get information through whistleblowers and other sources—sometimes through authorized government leaks or mistakes the government makes in redacting information. You can also obtain information when it gets exposed inadvertently, in the way Stuxnet was. If it weren’t for the fact that Stuxnet’s authors made errors that exposed the covert operation, we might not have learned about the attack against Iran’s nuclear program unless an insider leaked the information. Had the operation been more successful, we might still not know about it.

But secrets are hard to keep forever—even when they’re closely held in the way Stuxnet was. It’s one of the things reporters count on—that information wants to be free. There are all kinds of reasons people will leak information: they want to shine a light on an important issue or policy that is not being debated; they want to expose waste, fraud, or a crime; they’ve exhausted other avenues of recourse for righting a wrong; or they feel the secrecy around something is unwarranted and see benefit in it being revealed. Sometimes they have an ax to grind and just want to see actions exposed. It can take a lot of work for reporters to get the information. But if you’re patient and have a reputation for handling information and sources with integrity, sometimes the information will come to you without you having to go find it.

Jablanski: What are some of the most common misperceptions in discussing cybersecurity and weapons programs that you believe journalists can help debunk?

Zetter: That every attack is serious or merits coverage.

That every attack conducted by a nation-state with political motives is cyberwarfare.

That attribution is a solved problem—it may very well be possible to discern the attacker in some cases (particularly if it involves signals intelligence where a spy agency is sitting on the computer of the attacker and watching them plan or perform the attack). But attackers are going to become more sophisticated at using false flags to hide their identity or point the finger at others, and the public and reporters have to be skeptical whenever a government or private security firm attributes an attack to a particular nation or actor. Stories should always carry caveats to this effect.

Jablanski: How do decision makers demonstrate to the public that they are taking adequate steps to maintain resilient systems, especially for nuclear weapons, given the sensitivity of those actions?

Zetter: I mentioned above that the government should be required to do a security-impact assessment before digitizing nuclear systems. This should be done by trusted third parties who don’t have a stake in the program. Unfortunately, the kind of entity capable of doing this no longer exists because Congress defunded it.

The Office of Technology Assessment [OTA] for years provided expert assessments conducted by scientists, technical experts, and others who produced valuable reports advising lawmakers on the efficacy and drawbacks of planned programs and legislation. But lawmakers didn’t like some of the conclusions the OTA reports reached, since they clashed with what lobbyists or other interested parties wanted, and Congress eliminated the office’s funding in 1995. If decision makers truly want the public to trust that they are taking adequate steps to maintain resilient systems, independent assessments are essential—both before and after a program is implemented.


Stuxnet was a computer worm that damaged centrifuges in Iran’s uranium enrichment program. First identified in 2010, it is thought to have been developed by American and Israeli intelligence. Stuxnet is considered the first known cyberweapon released in the wild and the first piece of malware designed to cause physical destruction.

The 2017 WannaCry attack took advantage of an exploit in Microsoft Windows to spread ransomware to computers all over the world. North Korea has been blamed for the incident.

In 2008, a worm dubbed agent.btz infected US military computers. It spread after a USB flash drive was inserted into a laptop connected to US Central Command. It is suspected that Russian hackers were behind the attack.


The Stanley Center values independent and accurate journalism and the role it plays in building better-informed, just, and accountable societies. We enable rigorous and impactful reporting on topics related to our three issue areas—mitigating climate change, avoiding the use of nuclear weapons, and preventing mass violence and atrocities. Find out more.

This article was written for the Stanley Center and we encourage others to share its important message, with attribution.