I’ve always been a believer in the saying, “If you can measure it, you can manage it”! Metrics seem to be the first thing security professionals think of, but they’re usually the last thing to be implemented, understandably so, because you need to have a process in place before you can start measuring it.
The first thing I’d propose is to change how you perceive metrics: most people measure the positive, not the negative. For example, it’s easy for executives to declare patching a success when you report that server patching is at 80%; but the inverse of that metric is that 20% of servers are NOT patched. Reporting the same measurement in the negative elicits a much different response. In other words, an executive who sees a positive metric rarely expects to see 100%; when a negative metric is presented, however, there’s pressure to drive it to 0%.
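To make the framing point concrete, here’s a minimal sketch (the function and wording are my own, not from any metrics standard): the negative framing is just the complement of the positive one, yet the two strings land very differently in a report.

```python
def frame_metric(compliant: int, total: int) -> dict:
    """Report the same measurement in both framings.

    A positive framing ("80% patched") invites satisfaction with progress;
    the complementary negative framing ("20% NOT patched") invites
    pressure to drive the number to zero.
    """
    positive = 100.0 * compliant / total
    return {
        "positive": f"{positive:.0f}% of servers patched",
        "negative": f"{100.0 - positive:.0f}% of servers NOT patched",
    }

print(frame_metric(80, 100))
# Same data, two very different conversations with an executive.
```

Which framing you lead with depends on the response you want; the point is that both numbers come from the same measurement.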
The second thing I’d propose is that metrics be tied to business objectives. Metrics should articulate strategic alignment with a business driver. As a security department, you’ll have much better success at budget negotiation time when you can show directly that security initiatives support the business strategy.
The third thing I’d propose is to structure the context of the metric in a purposeful way. That means you need to know the business objectives; the high-value services or assets; the critical security controls; and the critical business risks or disruptive events that could impact the company’s brand value or its hard-earned revenue stream. Once you’ve considered these areas, think about how the metric will be composed: is it projects, tasks, performance goals, fiscal investment? Be prepared to explain how the metric is composed and why; if there’s any doubt about its accuracy or completeness, the metric will lose credibility.
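One way to make that composition explicit is to record, alongside every metric, the business driver it supports and what it’s built from. This is only a sketch, and every field name here is my own invention for illustration:

```python
from dataclasses import dataclass

@dataclass
class SecurityMetric:
    name: str                 # stated in the framing you intend to report
    business_objective: str   # the business driver this metric supports
    composition: list         # what it's built from: projects, tasks, spend...
    target: float             # e.g. 0.0 for a negatively framed metric
    value: float = 0.0

# A negatively framed patching metric, tied to a (hypothetical) objective.
metric = SecurityMetric(
    name="Percentage of servers NOT patched",
    business_objective="Protect revenue-generating services",
    composition=["monthly patch cycle", "emergency patch SLA"],
    target=0.0,
    value=20.0,
)
```

Writing the composition down up front is what lets you answer the “how is this composed, and why?” question before anyone casts doubt on the number.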
For consistency in the metric examples, the scenario we’ll use is outsourcing Security Operations. Remember, these are strategic metrics, and if done correctly they will lead to some good conversation.
Top 10 Metrics (from the folks at the Carnegie Mellon University CERT)
1. Percentage of security activities that DO NOT support the business objectives. (This could be a project count, SLAs on the part of the outsourcer, re-training of staff, or the number of security processes that remain in house.)
2. Number of security activities/projects tied to business objectives. (You want this to be equal to or greater than 1. CEO or other executive stakeholder goals are a good place to gather business objectives. Feel this out a bit and report in the positive or the negative based on the response you are looking to receive.)
3. Percentage of high-value services that DO NOT satisfy the security requirements. (Using the outsourcer scenario above, this metric could come from a performance review of your outsourcer to confirm compliance. If issues are reported, be prepared to speak to the corrective actions taken.)
4. Percentage of high-value assets that DO NOT satisfy their security requirements. (One example might be network access control lists without business justification; another is incidents spawning from incomplete system configurations.)
5. Percentage of high-value services with controls that are ineffective or inadequate. (This could extend metric #3 to count the ineffective controls identified in the service, or apply when the outsourcing SLA specifies controls to comply with that subsequently are not met.)
6. Percentage of high-value assets with controls that are ineffective or inadequate. (This is the same as #5, only the metric applies to assets instead of the service.)
7. Confidence factor that all risks that need to be identified have been identified. (This one is my favorite, and is best described in a screen shot from the Carnegie Mellon University CERT.)
8. Percentage of risks with impact above threshold. (Examples include risks without mitigation plans [should be 0] and risks that are effectively mitigated by their mitigation plans [should be 100%].)
9. Probability of delivered service throughout a disruptive event. (You can get creative with this metric, but some angles to consider: the probability of delivering high-value controls in crisis mode, or results against the outsourcer’s normal SLAs given disrupted operations.)
10. For disrupted, high-value services with a service continuity plan, percentage of services that did not deliver service as intended throughout the disruptive event. (Services with service continuity plans that do not maintain required service levels.)
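To show how the percentage-style metrics above might actually be computed, here’s a sketch of metrics #4 and #6 in the negative framing. The asset records and their fields are invented for illustration; in practice this data would come from your asset inventory or CMDB.

```python
# Hypothetical asset inventory (fields invented for this example).
assets = [
    {"name": "db01",  "high_value": True,  "meets_requirements": True,  "controls_effective": True},
    {"name": "web01", "high_value": True,  "meets_requirements": False, "controls_effective": True},
    {"name": "app01", "high_value": True,  "meets_requirements": True,  "controls_effective": False},
    {"name": "dev01", "high_value": False, "meets_requirements": False, "controls_effective": False},
]

def pct_failing(assets, check):
    """Percentage of high-value assets failing a check (negative framing)."""
    high_value = [a for a in assets if a["high_value"]]
    failing = [a for a in high_value if not check(a)]
    return 100.0 * len(failing) / len(high_value)

# Metric #4: percentage of high-value assets NOT satisfying security requirements.
print(f"{pct_failing(assets, lambda a: a['meets_requirements']):.0f}%")  # 1 of 3 high-value assets
# Metric #6: percentage of high-value assets with ineffective controls.
print(f"{pct_failing(assets, lambda a: a['controls_effective']):.0f}%")
```

Note that low-value assets are excluded from the denominator; scoping the metric to high-value assets is exactly the purposeful context discussed earlier.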