Panic is in the air — at least, that is, the air surrounding the debate over cybersecurity. It has become virtually impossible to read an article about cybersecurity policy, or sit through any congressional hearing on the issue, without hearing prophecies of doom about an impending “Digital Pearl Harbor,” a “cyber Katrina,” or even a “cyber 9/11.”
Let’s be clear: Cybersecurity and cyberwar are serious matters. Real dangers exist to individuals, companies, and our country. And there are steps that both Congress and the Obama administration can and should take to make sure America better secures digital networks and critical information systems from cyberattacks.
Still, that does not excuse the apocalyptic rhetoric so frequently heard in these debates. What’s going on here is what political scientists call “threat inflation”: the artificial escalation of dangers or harms to society or the economy. Threat inflation is a key ingredient of many technopanics.
The concept of threat inflation has received the most attention in the field of foreign policy, where numerous examples of it have been documented. Jane K. Cramer and A. Trevor Thrall, editors of the book American Foreign Policy and the Politics of Fear, define threat inflation as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.”
Jerry Brito and Tate Watkins of the Mercatus Center at George Mason University have warned of the dangers of threat inflation in cybersecurity policy and the corresponding rise of the “cybersecurity industrial complex,” much like the military-industrial complex of the Cold War era.
They appear to be on to something. Gen. Michael Hayden, who led the National Security Agency and the Central Intelligence Agency under President George W. Bush, recently argued that a “digital Blackwater” may be needed to combat the threat of cyberterrorism. Susan Crawford, a former White House senior advisor on technology policy matters, has noted that “cyberwar hysteria aids consultants” and “would certainly create work” for many organizations surrounding the Beltway.
A skeptic might ask: Where’s the harm in using a little inflammatory rhetoric to stir the passions of the public or policymakers? Isn’t a little panic useful if it prompts beneficial action?
In reality, technopanics and threat inflation often backfire or produce serious unintended consequences.
Panics and threat inflation can create distrust in many institutions, especially the press, and result in a “boy who cried wolf” problem. When panic becomes the norm, it becomes more difficult for the public to take seriously those who propagate such tall tales. “When a threat is inflated,” argue Brito and Watkins, “the marketplace of ideas on which a democracy relies to make sound judgments—in particular, the media and popular debate—can become overwhelmed by fallacious information.”
Apocalyptic rhetoric and prophecies of doom are also inappropriate—even offensive—when comparisons are made to horrific events that are not analogous to cybersecurity attacks. Thousands lost their lives or were injured in the attacks on Pearl Harbor in 1941 and the World Trade Center during 9/11, and Hurricane Katrina also resulted in thousands of deaths and injuries in 2005. To compare cybersecurity attacks to those incidents is to insult the memories of those who lost their lives.
The technopanic mentality is also troubling because it can lead to calls for comprehensive regulation of the Internet or forms of information control.
For example, in his recent book, Cyber War: The Next Threat to National Security and What to Do About It, Richard A. Clarke, a former cybersecurity advisor in the Clinton and Bush administrations, calls for government to impose a fairly sweeping set of new rules on Internet service providers to better secure their networks against potential attacks. Clarke wants ISPs to engage in a great deal more network monitoring for digital dangers (using deep-packet inspection techniques) under threat of legal sanction if things go wrong. He admits there are corresponding costs and privacy concerns, but largely dismisses them in the name of a safer and more secure cyberspace.
Most ISPs already take steps to guard against malware and other types of cyberattacks, however, and they also offer customers free (or cheap) security software. It is certainly true that “more could be done” to better secure networks and critical systems, but it is important to acknowledge that much is already being done to harden systems and educate the public about risks.
That points to the better approach to cybersecurity going forward: education and resiliency.
Recent work by Sean Lawson, an assistant professor in the Department of Communications at the University of Utah, has underscored the importance of resiliency as it pertains to cybersecurity. “Research by historians of technology, military historians, and disaster sociologists has shown consistently that modern technological and social systems are more resilient than military and disaster planners often assume,” he finds. “Just as more resilient technological systems can better respond in the event of failure, so too are strong social systems better able to respond in the event of disaster of any type.”
Education is a crucial part of building resiliency. People and institutions can prepare for potential security problems in a rational fashion if given more information and tools to better secure their digital systems and understand how to cope when problems arise.
Panic, by contrast, is never the right answer.