A friend of mine has two young boys who are pretty intense, to put it mildly. They are good kids—they love to play hard and be outdoors. They recently learned to ride bikes and it has quickly become their favorite activity. Like their other play, their bike riding is also intense. My friend was worried that the kids may not appreciate how dangerous it can be to share the road with automobiles. So, he brainstormed some ideas to help them understand how serious it is to get hit by a car.
He decided to task them with straddling their bikes while in the safety of their grassy yard, and then he pushed them over. These rough-playing boys gained at least a modicum of appreciation for how much it would hurt to be struck by a vehicle. I found this story interesting because it has an element of parental humor that I hope you can enjoy and, more important, a risk management nexus that requires some consideration.
He was not wrong to worry about his kids being hit by a car. The US National Highway Traffic Safety Administration (NHTSA) estimates that approximately 250,000 bicycle incidents take place every year, with approximately 100 resulting in death. Children younger than 15 (like my friend’s) have an accident rate 2.5 times higher than riders older than 15, which suggests that experience plays an important role in reducing the number of incidents that take place.
Such practices in cybersecurity look a little different, but they are categorically the same. First, attack and defense exercises are performed with the help of tools such as vulnerability scans, penetration tests and red teaming. The results of these assessments and tests often find their way to interested parties. But other, real cybersecurity attacks are happening that are rarely communicated widely throughout an organization, precisely because nothing quite so bad actually happened. These so-called near misses are events, and the stories around them, about circumstances that could have gone badly but, through some miracle of inaction, heroic action, or sheer dumb luck, did not.
At face value, it may seem doubtful that a story about something that did not happen could provide much value. However, when building a quantitative program for measuring and managing cyberrisk, you need data, and those data can come from only two places: internal sources or external ones. Either you gather what you need from within your own realm of operations, or you obtain it externally; mature programs do both. Regardless, there is a strong preference for data related to actual incidents (i.e., losses) within your own enterprise. The good news (or bad news, for modelers) is that your organization is not likely to have many loss incidents. This can make it difficult to communicate how detrimental cybersecurity incidents can be, especially to those who discount loss data pertaining to peers.
This is why near misses can be useful to an organization. Near misses allow an enterprise to analyze something that almost happened. This can be more effective than a risk assessment because it enables an organization to immediately recall an actual event, which has a greater effect than asking the organization to imagine something that could hypothetically happen. Such imagination is a necessary part of the communication process of risk assessments and one of the reasons that organizations react more forcefully to actual incidents than to risk assessment results.
A categorization program that aligns near misses with risk assessment results can also be effective. This adds further weight to established risk assessment results and any linked issues or findings. Such a categorization program can be a powerful addition when it comes time to allocate budget. Instead of (again) asking the organization to imagine a negative event happening and then to spend money preventing it, you can articulate a negative event that may come to pass, show evidence of control weaknesses (i.e., issues/findings), and demonstrate that something negative almost happened as a result. Combine that with some loss projections using cyberrisk quantification (CRQ) and you have a winning formula for improving organizational cyberrisk posture.
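To make the loss-projection step concrete, here is a minimal Monte Carlo sketch of the kind of calculation a CRQ analysis performs. This is not any particular CRQ methodology or tool; the function names, distributions (Poisson event frequency, uniform per-event loss), and parameter values are illustrative assumptions only:

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's algorithm: draw an event count from a Poisson(lam) process."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def project_annual_loss(event_rate, loss_low, loss_high, trials=10_000, seed=7):
    """Simulate total annual loss: event count ~ Poisson(event_rate),
    per-event loss ~ Uniform(loss_low, loss_high).
    Returns (mean annual loss, 95th-percentile annual loss)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        n_events = poisson_draw(rng, event_rate)
        totals.append(sum(rng.uniform(loss_low, loss_high)
                          for _ in range(n_events)))
    totals.sort()
    mean = sum(totals) / trials
    p95 = totals[int(0.95 * trials)]
    return mean, p95

# Illustrative run: ~2 incidents/year, per-event losses of US$50K-US$500K
mean, p95 = project_annual_loss(2.0, 50_000, 500_000)
```

In practice, a CRQ program would calibrate the frequency and magnitude distributions from internal loss data, near-miss records and external sources, rather than the placeholder values used here; the skewed gap between the mean and the 95th-percentile loss is what makes tail scenarios worth communicating alongside averages.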
Jack Freund, Ph.D., CISA, CISM, CRISC, CGEIT, CDPSE, NACD.DC
Is a cyberrisk quantification expert, coauthor of Measuring and Managing Information Risk, 2016 inductee into the Cybersecurity Canon, ISSA Distinguished Fellow, FAIR Institute Fellow, IAPP Fellow of Information Privacy, (ISC)2 2020 Global Achievement Awardee and the recipient of the ISACA® 2018 John W. Lainhart IV Common Body of Knowledge Award.