In his January 2016 Crypto-Gram newsletter, Bruce Schneier reprinted an essay on “normalization of deviance”: the process of divergence from defined policies and procedures into increasingly risky practices. Explored in detail by Dr. Diane Vaughan, as well as by other researchers and practitioners seeking to explain catastrophic failure events, the concept bears great relevance to cyber security: “The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal — despite the fact that everyone involved knows better…. As long as the organizational culture turns a blind eye to these practices, the predictable result is insecurity.”
Normalization of deviance is part of a larger dynamic called the “drift into failure” that has been explored most fully in a recent book of that name by Sidney Dekker.
As RSA president Amit Yoran called out in his RSA Conference keynote a few weeks ago, the cyber security industry has drifted into failure, not only because of the innovation dilemma that I wrote about earlier this year, but also because of the normalization of deviance in security that Schneier highlights. As preventive measures became less effective, lower expectations regarding their effectiveness were accepted. “We stop every attack” became “we stop 80%”, then 50%, then 30%. As a result, organizations continue to pursue the same failed technologies and strategies.
Acknowledging that a drift into failure has occurred enables us to re-think our approaches and priorities, empowering organizations to shift to a new security mindset, to new security processes and technologies, and to new areas of knowledge and innovation. In making this shift, we must remain cognizant that the drift into failure can occur again, and we must guard against it.
How do we reverse that drift? An approach was suggested by Dr. Nancy Leveson in her early article on the STAMP methodology, which I’ve written about a number of times. This methodology views the drift into failure, including the normalization of deviance, as a systems issue. Reversing the drift into failure is not like fixing a broken component (in this case, us fallible and unpredictable humans!). It is not a question of more training or more oversight, even though those are the kinds of solutions we all tend to think of. As I’ve suggested in earlier blogs, a more effective approach is to address the issue in terms of the system dynamics that underlie and reinforce the unproductive behavior.
At the core of a systems strategy for security is a shift of focus: away from analyzing, predicting, and preventing individual component failures, and toward the discernible impacts that can occur and the methods that reduce the likelihood and severity of those impacts. In Smart Grid security, for example, the explosion of a substation, such as the one that occurred in Miami in 1992, can certainly result from broken sensors and faults in arc-suppression control systems. But it can also happen because a cyber attacker has disabled those substation sensors remotely, as happened in the attack on the Ukraine electric grid in December 2015. Focusing narrowly on preventing the failure of the sensor and controller can blind you to other systemic activity that can result in the same damage to the substation, as well as to the cascading effects such damage can have.
(Miami substation explosion in 1992)
A systems approach to reversing the drift into failure entails more than fixing organizational policies and raising employee awareness. It requires thinking about organizational culture, management processes, independent oversight, and regulatory policies and procedures. Leveson described this in terms of the drift into failure for safety: “Instead of defining safety management in terms of preventing component failure events, it is defined as a continuous control task to impose the constraints necessary to limit system behavior to safe changes and adaptations.”
The same point applies to the drift into failure for security. Like safety, cyber security is complex and dynamic. Security cannot be achieved just by making and enforcing rules. If we want to reverse the drift into failure in security, we have to start by being clear about our expectations. Even though we cannot prevent every cyber attack, and even though targeted attacks will get through even the best preventive defenses, we must succeed in effectively detecting and responding to those attacks. We cannot expect less than this. In achieving security, failure is not an option. This mindset will enable us to adopt new, more effective approaches suited to today’s changing security paradigms.