In Part 1 of this series on delivering meaningful metrics to boards, I talked about the need to discuss security risks in ways that relate to board concerns. Many CISOs are reporting the wrong metrics to boards – for example, that a malware platform flagged 333 million alerts, or that users entered 234,333 wrong passwords. Without context for the organization and its particular risk posture, these raw numbers are meaningless.
Here in Part 2, I’ll explain how to go beyond raw numbers and prioritize risks, in a way that boards can understand.
Here’s the standard risk equation: likelihood × impact = risk
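To make the equation concrete, here is a minimal sketch in Python. The 1-to-5 scales and the example values are my own illustrative assumptions, not part of any formal risk framework:

```python
# A minimal sketch of the risk equation: risk = likelihood x impact.
# The 1-5 scales below are an illustrative assumption, not a standard.

def risk_score(likelihood: int, impact: int) -> int:
    """Multiply likelihood (1-5) by impact (1-5) into a 1-25 risk score."""
    return likelihood * impact

# Example: a fairly likely threat (4) against a high-value asset (5).
print(risk_score(4, 5))  # prints 20
```

The hard part, of course, is not the multiplication but deciding what numbers to plug in.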
That’s pretty simple. So why are we so worried about presenting risks to the C-suite, if that math is so easy? The problem is that cyber complicates the equation in every direction.
For example, when we’re talking about “likelihood,” we also need to know about the likelihood of what – and from whom. We need this extra context so we can complete the equation.
The same is true of impact, which is commonly thought to be something that we CISOs control. But impact is a business decision, and therefore should be determined by business stakeholders who can define the importance of information within an IT system. The job of CISOs is to provide business teams with the framework and methodology for classifying the value of information, without confusing teams with esoteric cyber-babble.
Adding to the challenge of contextualizing risk is that it can be hard to know who’s attacking us and why. Sensational media coverage of high-profile ransomware and DDoS attacks tends to blur the true picture of risk – in other words, which attacks an organization should worry about. As security departments, we need to contextualize the threats applicable to our environments.
Consider the WannaCry ransomware attack, which topped the headlines recently – and which elevated businesses’ fears of becoming the next victim. If most of the machines in an organization are firewalled off from each other, and are accessed by only a small set of users, the risk of falling victim to such an attack is lower than in more connected networks. Perhaps we have legacy systems that can no longer be patched, so we need to understand the data these systems house, and the other systems they are connected to. At the moment, most of us don’t do a good job at this kind of threat assessment.
There are some great risk-management frameworks out there that take the traditional risk equation and provide some cyber relevance. However, these frameworks can be onerous to work with, and can require a thorough background in risk management. But there’s an easier way: I’ve created a four-step process to qualify threats and prioritize risks. The process helps us understand who is attacking us, what exactly they’re attacking, and how vulnerable our assets are. Armed with this information, we can create metrics, assign budgets, and prioritize efforts.
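One way to picture the output of such a process is a single record per risk that combines the answers from each step. This is a sketch under my own assumptions: the field names and 1-to-5 scales are invented for illustration, and the step labels are inferred from the article:

```python
from dataclasses import dataclass

# Illustrative record combining the four steps: classify assets, profile
# threats, identify vulnerabilities, then weigh controls. Field names and
# the 1-5 scales are assumptions, not a formal framework.

@dataclass
class RiskItem:
    asset: str            # step 1: what we are protecting
    threat: str           # step 2: who or what is likely to attack it
    vulnerability: str    # step 3: the weakness the threat could exploit
    likelihood: int       # 1 (rare) to 5 (almost certain)
    impact: int           # 1 (negligible) to 5 (severe), set by the business

    @property
    def risk(self) -> int:
        # The same equation as before: likelihood x impact.
        return self.likelihood * self.impact

item = RiskItem("customer database", "financially motivated cybercriminals",
                "unpatched web front end", likelihood=4, impact=5)
print(item.risk)  # prints 20
```

A list of records like this is what lets you create metrics, assign budgets, and rank efforts, rather than a pile of raw alert counts.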
The first step is to classify your assets. This sounds obvious, but it is often overlooked. It’s also not easy: the proliferation of device types in most enterprises means that there are exponentially more assets than there used to be, along with many more users of these devices and more types of data traveling through them.
For example: IoT devices are already becoming a target of threat actors, and will continue to be so as more and more devices become truly connected. And they won’t be targeted just with ransomware, but with all forms of cyber attack, including worms, viruses, and distributed denial of service (DDoS) attacks.
Even though the task is becoming harder, it cannot be overlooked. Any good security program begins with understanding what you need to protect. SANS Institute defines the maintenance of both software and hardware inventories as critical to any organization looking to implement a holistic cyber security program.
If you want a frightening wake-up call about the challenge of identifying assets, here’s what the Center for Internet Security says about the craftiness of attackers who have their eyes fixed on your networks:
“Attackers, who can be located anywhere in the world, are continuously scanning the address space of target organizations, waiting for new and unprotected systems to be attached to the network. Attackers also look for devices (especially laptops) which come and go off of the enterprise’s network, and so get out of synch with patches or security updates. Attacks can take advantage of new hardware that is installed on the network one evening but not configured and patched with appropriate security updates until the following day. Even devices that are not visible from the Internet can be used by attackers who have already gained internal access and are hunting for internal jump points or victims.”
The network is becoming increasingly irrelevant to a security strategy. Users are mobile, their devices come in multiple form factors, and their “office” is any place with a Wi-Fi connection. In this world of cloud and mobility, companies need to look at securing the applications their people use, and securing the users who use them. Companies also need to adopt a “zero trust” posture – that is, we should treat every connection as if it were a Starbucks hotspot. We assume our users are on networks we don’t control, because today, that is usually the case.
The second step is to profile the threats. Once we understand what we’re looking to protect, we need to understand who is looking to gain access to our assets, and what capabilities they possess.
Here again, context is important. Many CISOs I know say that they cannot afford to protect themselves from “nation-states” – but the fact is that many cybercriminals use tools formerly thought of as the exclusive domain of nation-state actors, such as encrypted communications and polymorphic malware. If many bad actors are using these tools, then organizations can’t ignore them.
From a threat perspective, contemporary information risk management breaks threat actors and events into different categories. This is incredibly important when we start to look at actionable metrics. Threat events associated with adversarial actors are very different from threats that are accidental in nature (a coding mistake or a mistyped command, as opposed to a planned attack). However, both types of threat can significantly impact your department’s ability to preserve the confidentiality, integrity, and availability of information.
Threat intelligence can tell us which tools, techniques, and procedures a particular actor is likely to use. Armed with this knowledge, you can focus your organization’s defenses, and ask which threat actors are likely to target you. Are they financially motivated cybercriminals who want your customer data? If so, they commonly use drive-by download attacks. Are nation-state threat actors an issue? Their attacks are often costly to mount, so not every company is likely to be a target.
This analysis comes in handy when you’re asking your board for significant investment in security controls. Board members will ask, “Why do we need these controls?” You respond by explaining the threats you are most worried about – hence the need to apply behavioral analysis techniques to sophisticated malware. Board members might then ask, “How likely is this threat?” The answer to this question will depend on your industry and your most pressing concerns. If you’re in retail, you’re most concerned with keeping ecommerce platforms up and running. If you’re an electric car manufacturer, you want to keep people from having deadly accidents in your cars. This is where context matters, since it helps you prioritize resources.
The third step is to identify vulnerabilities: weaknesses across people, processes, and technology. Why do we identify vulnerabilities only after we profile threats and classify assets? Because we live in a world where absolute security simply isn’t possible. Automated tools can only do so much to unearth the weak points, such as finding technical vulnerabilities in a software stack. They can’t tell you whether your users need training to keep threats from getting past them.
Pragmatism and prioritization are two key tenets of good vulnerability management. We need to look at which systems house the data we are concerned about, and in what volume, and then ask a few key questions about each of those systems.
We cannot patch everything; we cannot make everything secure. Your vulnerability scanners might tell you that you need to apply 272,000 patches – and you know that simply isn’t possible, given the limitations of time and staff. But you can prioritize patching for the most sensitive assets and the most likely threats.
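As a rough sketch of that prioritization, findings can be ranked by the risk equation instead of being patched in scanner order. The assets and scores below are invented for illustration, on an assumed 1-to-5 scale:

```python
# Prioritize patching by risk (likelihood x impact) rather than raw scanner
# order. These findings are invented examples on an assumed 1-5 scale.

findings = [
    {"asset": "legacy HR server",   "likelihood": 2, "impact": 5},
    {"asset": "ecommerce platform", "likelihood": 5, "impact": 5},
    {"asset": "test VM",            "likelihood": 4, "impact": 1},
]

# Highest risk first: the ecommerce platform gets patched before the test VM.
findings.sort(key=lambda f: f["likelihood"] * f["impact"], reverse=True)
for f in findings:
    print(f["asset"], f["likelihood"] * f["impact"])
```

Even this crude ranking answers the board’s real question: not “how many patches are outstanding?” but “are the assets we care about most being fixed first?”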
The fourth step is to apply controls and safeguards. Vulnerabilities will always crop up, but controls and safeguards can lessen the impact or likelihood of a risk occurring (remember our equation?). Controls do not have to be absolute: it’s unusual for a control to remove a risk entirely – we’re looking to lessen the risk to a palatable level. Who sets this bar? Again, it’s the business!
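One common way to express “lessening the risk to a palatable level” is a residual-risk calculation. The 0-to-1 effectiveness factor here is a simplifying assumption of mine, not a formal standard:

```python
# Residual risk after a control is applied: the control scales the inherent
# risk down, it rarely removes it. The 0-1 effectiveness factor is an
# illustrative assumption.

def residual_risk(likelihood: float, impact: float,
                  control_effectiveness: float) -> float:
    """Inherent risk scaled by control effectiveness (0 = none, 1 = perfect)."""
    return likelihood * impact * (1 - control_effectiveness)

# A control that is 70% effective leaves 30% of the inherent risk of 20.
print(round(residual_risk(4, 5, 0.7), 2))  # prints 6.0
```

Whether 6.0 is a “palatable” residual level is exactly the call the business, not the security team, should make.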
Once again, context comes into play when deciding which controls to establish, and where: the decisions are based on the value of the data. In my experience, problems arise when roles and responsibilities are unclear between information security and a data owner. It’s the responsibility of business representatives to categorize systems and classify data; it’s the job of the information security team to apply security controls commensurate with that classification. So when a business leader asks, “What’s the classification of this database/file server/website?”, the answer shouldn’t come from us. We make sure controls exist to protect information; we shouldn’t be the ones categorizing it.
In our next article, we’ll dig down deeper into the metrics themselves, and explain how to assess the effectiveness of a control or safeguard. We’ll spend time looking at what a metric is, and how to present meaningful reporting to executive stakeholders outside of the information security function.