I spoke recently at a meeting of the Dublin, Ireland chapter of ISACA about the continued (and increasing) use of social engineering in cyberattacks, as documented in several recent reports, including the joint report by ISACA and RSA on a survey of cybersecurity professionals conducted in the first quarter of 2015. Those results show that phishing and other kinds of social engineering attacks were the most common attacks within enterprises in 2014: nearly 70% of respondents cited phishing as having resulted in exploits in the enterprise, and 50% cited other social engineering attacks, including water-holing attacks, SMS phishing (SmiShing), voice phishing (vishing) and so on.
(graphic from RSA-ISACA report)
Similarly, the RSA Cybercrime 2015 report, published in April, calls out the increasing use of water-holing attacks as the way attackers begin their campaigns against an enterprise. And the Verizon Data Breach Investigations Report 2015 reported that more than half of all APT attack campaigns start with spear-phishing and other social engineering attacks.
(graphic from Verizon DBIR 2015)
These issues apply to attacks on electric utilities, as well as every other industry vertical, government and academia. As long ago as 2001, the SANS Institute defined attack scenarios that used various social engineering techniques to gain access to control systems. The SANS Institute’s 2014 report “The Hacker Always Gets Through” re-confirmed the ongoing risk of social engineering attacks as the launch vehicles for cyberattacks against all industries, citing the 2013 attack against South Korean industry. What can anyone do against this ongoing threat?
The focus in many organizations is on education as the way to help users, including control system administrators, recognize and avoid social engineering attacks. But as I suggested in an earlier blog, important as education is, it isn’t enough. An organization has to expect that some social engineering attacks will get through that protective net and that some users will fall victim to them. To respond to that situation, organizations have to employ the analytics-based approach often called “intelligence-driven security”. Built on comprehensive visibility to detect possible attacks, this approach employs a broad range of analytics to understand and prioritize those attacks, together with technology and processes to respond to them quickly and effectively.
This same approach has to be applied within the real-time processes for authenticating users and evaluating requests for access. Technologies that remove social engineering attacks before they reach the user, such as email filtering, blacklisting and whitelisting, have to be enhanced by information-sharing processes that draw on a broad range of intelligence sources. In parallel, technologies like adaptive authentication are needed, enabling the organization to detect attempts by an attacker to use stolen administrative credentials to gain access to control systems.
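To make the filtering side of this concrete, here is a minimal sketch (in Python) of how an email filter might consume a shared intelligence feed of known phishing sender domains. The feed URL, field names and verdicts are invented for illustration and do not reflect any specific product’s API.

```python
# Minimal sketch: augmenting email filtering with a shared threat-intelligence feed.
# The feed format, field names and verdicts below are illustrative assumptions.
import json
from urllib.request import urlopen

def load_blocklisted_domains(feed_url):
    """Pull sender domains flagged as phishing senders from a shared intelligence feed."""
    with urlopen(feed_url) as resp:
        indicators = json.load(resp)
    return {i["domain"].lower() for i in indicators if i.get("type") == "phishing-sender"}

def classify_message(sender_address, blocklist):
    """Return a coarse verdict for an inbound message based on its sender domain."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    return "quarantine" if domain in blocklist else "deliver"

if __name__ == "__main__":
    # In practice the blocklist would be refreshed from the shared feed, e.g.
    # load_blocklisted_domains("https://intel.example.org/indicators.json") (hypothetical URL).
    blocklist = {"payments-update.example"}
    print(classify_message("billing@payments-update.example", blocklist))  # quarantine
```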
Adaptive authentication includes three critical capabilities, illustrated in the diagram below.
(graphic copyright © 2015 by EMC. Used by permission)
First, it is risk-based. That is, it takes advantage of rich context about the user, the requested resource and the environment in order to decide whether a user is who they say they are and whether they should be allowed access to a requested resource. For example, a login request by an administrator is normally evaluated just in terms of credentials such as username and password. But the context could also include additional information about the user, such as detailed characteristics of the device being used, typing speed and other keystroke characteristics, and so on. It can include characteristics of the system they are attempting to access, particularly the potential for disruption in the case of an attack. And it can include environmental characteristics: not only day of the week and time of day, but also intelligence about recent attacks on the same or other facilities, anomalies in this request compared to the user’s other requests, and so on.
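As a rough illustration of what such context might look like, the following sketch collects a few of those signals into a single structure and combines them into a risk estimate. The fields, weights and scale are invented for this example; in a real deployment they would be driven by analytics rather than hand-tuned constants.

```python
# Illustrative sketch of risk-based context evaluation; all fields and weights are assumptions.
from dataclasses import dataclass

@dataclass
class AccessContext:
    known_device: bool          # device fingerprint matches the user's history
    keystroke_match: bool       # typing cadence consistent with past sessions
    resource_criticality: int   # 1 (public portal) .. 5 (field controller)
    off_hours: bool             # outside the user's normal working pattern
    recent_attack_intel: bool   # intelligence reports attacks on this or similar facilities
    anomalous_request: bool     # request deviates from the user's normal behavior

def risk_score(ctx: AccessContext) -> float:
    """Combine contextual signals into a single 0..1 risk estimate."""
    score = 0.0
    score += 0.25 if not ctx.known_device else 0.0
    score += 0.15 if not ctx.keystroke_match else 0.0
    score += 0.05 * ctx.resource_criticality
    score += 0.10 if ctx.off_hours else 0.0
    score += 0.15 if ctx.recent_attack_intel else 0.0
    score += 0.15 if ctx.anomalous_request else 0.0
    return min(score, 1.0)
```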
The second critical capability is to evaluate the request and its rich context in terms of level of risk. Most often, authentication and authorization requests are evaluated in absolute terms: is this user who they say they are? Should they be granted access to the specific resource requested? But if the evaluation is done in terms of risk, the response can be to ask the user for additional validation that reduces that risk through “step-up authentication”. Or the response can be to grant access to a more limited or less sensitive resource that may be adequate for the user’s purposes, while the request is handed off to a cyber specialist for further evaluation.
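Continuing the hypothetical sketch above, risk-based evaluation replaces the binary allow/deny decision with a graded response. The thresholds and action names below are illustrative only.

```python
# Sketch of translating a risk score into a graded response rather than a binary allow/deny.
def decide(score: float) -> str:
    if score < 0.3:
        return "allow"                   # low risk: grant the requested access
    if score < 0.6:
        return "step-up"                 # ask for additional validation before granting
    if score < 0.8:
        return "restrict-and-escalate"   # offer a less sensitive resource, alert a specialist
    return "deny"                        # too risky to grant in any form
```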
The third capability is to perform this evaluation continuously, each time there is a request for access, rather than just once in an initial authentication step. Because the evaluation looks at the risk relative to a resource, not just to the user’s identity, the level of risk for different requests can be vastly different. A late-night request by an administrator connecting to the public portal’s list of current outages is unlikely to carry much risk. A work-hours request by the same administrator to update the driver for a key controller in the distribution system is much higher risk, even though it may be a completely legitimate user performing a completely legitimate operation; attacks such as the Dragonfly attack that compromised controller software demonstrate exactly this kind of risk.
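Building on the same hypothetical sketch (the AccessContext, risk_score and decide examples above), those two requests would be scored and decided independently, even though they come from the same administrator.

```python
# Continuing the sketch above: the same administrator, evaluated per request.
portal_view = AccessContext(known_device=True, keystroke_match=True,
                            resource_criticality=1, off_hours=True,
                            recent_attack_intel=False, anomalous_request=False)

driver_update = AccessContext(known_device=True, keystroke_match=True,
                              resource_criticality=5, off_hours=False,
                              recent_attack_intel=True, anomalous_request=True)

print(decide(risk_score(portal_view)))    # "allow": low-impact public resource
print(decide(risk_score(driver_update)))  # "step-up": high-impact change needs extra validation
```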
Most organizations still have a lot of work to do to put in place an effective response to cyberattacks. Nowhere is this more true than for social engineering attacks. We need to bring together education, preventive mechanisms and the intelligence-driven security approach of visibility/analytics/action to reduce both the risk and the impact of this continuing, and increasing, threat.