Why we need to move away from qualitative risk management and embrace quantitative risk management.
Advocating for FAIR as a pragmatic quantitative risk analysis model.
Presented at MESCON 2018, Dubai
2. FAIR INSTITUTE MISSION
The FAIR Institute is a non-profit organization made up of forward-thinking
risk officers, cybersecurity leaders and business executives that operates with
a central mission:
Establish and promote information risk management best practices that
empower risk professionals to collaborate with their business partners
on achieving the right balance between protecting the organization and
running the business.
Factor Analysis of Information Risk (FAIR) is the discipline, the framework, and
the driver behind our mission.
3. What is FAIR?
Factor Analysis of Information Risk (FAIR) is the only international standard quantitative analysis model for information security and operational risk:
• A standard taxonomy for information and operational risk
• A methodology for quantifying and managing risk in financial terms in any organization
• A complementary analytics model to existing risk frameworks, such as ISO 31000, COSO, NIST CSF
• A standard of The Open Group (since 2013)
4. (5×5 risk matrix / heat map: Likelihood, from Very Unlikely to Very Likely, vs. Impact, from Negligible to Severe; cells rated Low, Low Med, Medium, Med Hi, and High.)
5. "The single biggest problem in communication is the illusion that it has taken place."
– Bernard Shaw
7. CISO – VP Ops Risk: the questions coming at them
CIO: Are we spending our cybersecurity budget on the right things?
AUDIT: Did you fix those high priority issues?
BOARD/CEO: What are our top risks and what is our exposure? When will we be done?
CFO: How much risk do we have? Are we spending too little or too much on mitigation? What is the ROI?
9. Failure of Qualitative RM / Risk Matrices
• FAIR – Jack Jones
• "What's Wrong with Risk Matrices?" – Louis Anthony (Tony) Cox, Jr., PhD
• "The Risk of Using Risk Matrices" – Philip Thomas (University of Stavanger), Reidar B. Bratvold (University of Stavanger), J. Eric Bickel (University of Texas at Austin)
• "The Failure of Risk Management: Why It's Broken and How to Fix It" – Douglas W. Hubbard
• "How to Measure Anything in Cybersecurity Risk" – Douglas W. Hubbard, Richard Seiersen
11. Dr. Tony Cox Jr.:
"Risk matrices can mistakenly assign higher qualitative ratings to quantitatively smaller risks. For risks with negatively correlated frequencies and severities, they can be 'worse than useless,' leading to worse-than-random decisions."
"They can assign identical ratings to quantitatively very different risks."
Reference: Louis Anthony (Tony) Cox, What's Wrong with Risk Matrices?
12. “The motivation for writing this paper was to point out
the gross inconsistencies and arbitrariness embedded
in RM. Given these problems, it seems clear to us that
RMs should not be used for decisions of any
consequence.”
“In this paper, we have illustrated and discussed
inherent flaws in RMs and their potential impact
on risk prioritization and mitigating.” … “These
flaws cannot be corrected and are inherent to the
design and use of RMs.”
Reference: Thomas, Philip & Bratvold, Reidar & Bickel, J. (2013). The Risk of Using Risk Matrices.
13. Douglas Hubbard, Richard Seiersen, "How to Measure Anything in Cybersecurity Risk"
"…there is not a single study indicating that the use of such methods actually helps reduce risks."
Reference: Hubbard, Douglas W. & Seiersen, Richard. How to Measure Anything in Cybersecurity Risk.
18. Importance of Models
• Break down complex problems
• Support critical thinking
• Support communication
• Overcome bias
• Clarify assumptions
• Clarify scope
• Normalize analysis results, reduce variance, deliver consistent quality
19. The FAIR Ontology
Risk
• Loss Event Frequency (#)
  • Threat Event Frequency (#)
    • Contact Frequency (#): Random, Regular, Intentional
    • Probability of Action (%): Value, Level of Effort, Risk
  • Vulnerability (%)
    • Threat Capability (%): Skills (Knowledge, Experience); Resources (Time, Materials)
    • Resistance Strength (%)
• Loss Magnitude ($)
  • Primary Loss ($)
  • Secondary Risk ($)
    • Secondary Loss Event Frequency (%)
    • Secondary Loss Magnitude ($)
20. What is a measurement anyhow?
"A quantitatively expressed reduction of uncertainty based on one or more observations." – Douglas Hubbard
• Use ranges instead of single point estimates, e.g. Min 5, Most Likely 10, Max 25.
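A range estimate like this can feed a Monte Carlo simulation directly. A minimal sketch in Python, using a triangular distribution over the slide's (min, most likely, max) values; the function name and trial count are illustrative:

```python
import random

def sample_triangular(minimum, most_likely, maximum, trials=10_000, seed=42):
    """Draw Monte Carlo samples from a triangular distribution
    defined by a (min, most likely, max) range estimate."""
    rng = random.Random(seed)
    # random.triangular takes (low, high, mode)
    return [rng.triangular(minimum, maximum, most_likely) for _ in range(trials)]

# The slide's example range: min 5, most likely 10, max 25
samples = sample_triangular(5, 10, 25)
mean = sum(samples) / len(samples)  # close to (5 + 10 + 25) / 3
```

Every sample stays inside the estimated range, and the simulated mean reflects the skew toward the max rather than a naive midpoint.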
21. Dealing with Uncertainty
When we are uncertain, we express it through the choice of wider ranges:
• More certain: Min 8, Most Likely 10, Max 14
• Less certain: Min 5, Most Likely 13, Max 20
When we are uncertain about the most likely value, the tools allow us to express that through "confidence" parameters (making the peak wider).
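The "confidence" knob the tools expose can be sketched with a modified-PERT (beta) distribution, where a shape parameter controls how peaked the curve is around the most likely value. The function and parameter values below are illustrative, not any specific tool's API:

```python
import random

def sample_pert(minimum, mode, maximum, lam=4.0, trials=10_000, seed=1):
    """Modified-PERT sampling: lam plays the role of a 'confidence'
    parameter -- a lower lam flattens the peak, expressing more
    uncertainty about the most likely value."""
    rng = random.Random(seed)
    span = maximum - minimum
    alpha = 1 + lam * (mode - minimum) / span
    beta = 1 + lam * (maximum - mode) / span
    return [minimum + rng.betavariate(alpha, beta) * span for _ in range(trials)]

high_confidence = sample_pert(8, 10, 14, lam=8.0)  # narrow peak at 10
low_confidence = sample_pert(8, 10, 14, lam=1.0)   # much wider peak
```

Both runs respect the same 8–14 range; only the spread around the most likely value changes.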
22. Estimating is not the same as guessing!
Estimating: an informed assessment; examining assumptions, considering available data, developing a rationale, using ranges to account for uncertainty…
Guessing: an intuitive, casual, spontaneous conclusion with no thought behind it…
23. Calibrate your estimates
• Everyone is systematically “overconfident”.
• You can get trained or “calibrated” so that when you say you are 90%
confident, you will be right 90% of the time.
(Chart: assessed chance of being correct vs. actual percent correct, 50%–100%. The range of results from studies of un-calibrated people falls well below the ideal line; the range for studies of calibrated persons tracks it closely. Source: Hubbard Decision Research)
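Calibration can be scored mechanically: give an estimator a batch of 90% confidence intervals, check each against the true answer, and compare the hit rate to 90%. A small sketch with made-up interval and answer data:

```python
def calibration_score(intervals, actuals):
    """Fraction of stated 90% confidence intervals that contain the
    true value; a calibrated estimator lands near 0.90, an
    overconfident one noticeably lower."""
    hits = sum(lo <= actual <= hi
               for (lo, hi), actual in zip(intervals, actuals))
    return hits / len(actuals)

# Ten hypothetical 90% interval estimates and the true answers
intervals = [(10, 30), (1, 5), (100, 400), (2, 8), (50, 90),
             (0, 2), (20, 60), (5, 15), (300, 900), (1, 10)]
actuals = [25, 4, 250, 9, 70, 1, 45, 12, 500, 6]
score = calibration_score(intervals, actuals)  # 9 of 10 hit -> 0.9
```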
24. The FAIR Ontology, with definitions:
Risk: the probable frequency and probable magnitude of future loss.
Loss Event Frequency: the probable frequency, within a given timeframe, that a threat agent will inflict harm upon an asset.
Threat Event Frequency: the probable frequency, within a given timeframe, that a threat agent will act in a manner that could result in loss.
Vulnerability: the probability that a threat event will become a loss event.
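Read bottom-up, these factors multiply out to an annualized loss exposure. A deliberately simplified sketch that stops at the top two factors, Loss Event Frequency and Loss Magnitude, each given as a calibrated (min, most likely, max) estimate; the scenario numbers are invented for illustration:

```python
import random

def simulate_annual_loss(lef_range, lm_range, trials=10_000, seed=7):
    """Minimal FAIR-style Monte Carlo: annualized loss is
    Loss Event Frequency (events/year) times Loss Magnitude ($/event),
    each sampled from a (min, most likely, max) triangular estimate."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        lef = rng.triangular(lef_range[0], lef_range[2], lef_range[1])
        lm = rng.triangular(lm_range[0], lm_range[2], lm_range[1])
        losses.append(lef * lm)
    return sorted(losses)

# Invented scenario: 0.1-2 loss events/year, $50k-$500k per event
losses = simulate_annual_loss((0.1, 0.5, 2.0), (50_000, 120_000, 500_000))
median = losses[len(losses) // 2]
p90 = losses[int(len(losses) * 0.9)]  # 90th percentile annual loss
```

The sorted samples give a loss exceedance view (median, 90th percentile, and so on) instead of a single heat-map cell.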
27. The Clarification Chain:
• If it matters at all, it is detectable/observable.
• If it is detectable, it can be detected as an amount (or range of possible amounts).
• If it can be detected as a range of possible amounts, it can be measured.
Reference: How to Measure Anything by Douglas W. Hubbard
28. • Your problem is not as unique as you think.
• You have more data than you think.
• You need less data than you think.
• There is a useful measurement that is much simpler than you think.
• You probably need completely different data than you think.
Reference: How to Measure Anything by Douglas W. Hubbard
30. Asset at Risk Threat Community Threat Type Effect
Customer Data Cyber Criminal Malicious Confidentiality
Customer Data Cyber Criminal Malicious Availability
Customer Data Privileged Insider Malicious Confidentiality
… … … …
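A scoping table like this is easy to keep honest in code: each row is one loss-event scenario, and enumerating the combinations shows exactly what will and will not be analyzed. The class and field names below are illustrative:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Scenario:
    """One row of the FAIR scoping table."""
    asset: str
    threat_community: str
    threat_type: str
    effect: str

# Candidate scenarios built from the slide's columns
assets = ["Customer Data"]
communities = ["Cyber Criminal", "Privileged Insider"]
effects = ["Confidentiality", "Availability"]
scenarios = [Scenario(a, c, "Malicious", e)
             for a, c, e in product(assets, communities, effects)]
```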
31. (FAIR ontology diagram, as shown earlier.)
38. "How much risk do we have?" "What are our top risks?"
"How is our risk trending vs. appetite?"
"Have we reduced risk?" "What is the cost/benefit of this project?"
"What type of loss can we expect?"
(Dashboard example: current risk $80M; after a $2M IT security investment, reduced risk $9M, i.e. a $71M risk reduction vs. a $2M investment. Screens from RSA Archer® Cyber Risk Quantification.)
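The cost/benefit figures on such a dashboard reduce to one-line arithmetic, which is exactly why quantified risk speaks the CFO's language; a sketch using the slide's numbers:

```python
def risk_reduction_roi(current_risk, residual_risk, investment):
    """Risk reduction achieved by a mitigation, and the reduction
    obtained per dollar invested."""
    reduction = current_risk - residual_risk
    return reduction, reduction / investment

# Slide figures: $80M current risk, $9M residual risk, $2M investment
reduction, ratio = risk_reduction_roi(80e6, 9e6, 2e6)
# reduction = $71M; ratio = 35.5 dollars of risk reduced per dollar spent
```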
Editor's Notes
How does that help decision makers make better decisions to deal with probable future loss events?
How does it help us to do this cost effectively?
And sometimes it gets rolled up into a single dot on the heat map of ERM that is presented to the board.
Is that the best we can do?
“Risk theatre”, everyone is playing along as scripted in the approved Risk Management Methodology Document.
There is a communication gap. Heat Maps, scoring methods etc. are not expressing operational risk in terms the business can understand.
The business talks “$”.
How many mediums make a high?
What’s the difference between the lowest high and highest medium? Are they the same?
We probably don't realize the problem we are in; after all, we are just following best practices.
We followed a "best practice"; everyone else is doing it, so it must be OK, don't question it. That mentality is preventing us from improving and making progress, in any domain. We are ignoring our ignorance, behaving as if we have it all figured out and don't need to improve.
I have a love/hate relationship with "best practices". Best practices have value, but times are changing, and even if they are valid now they might not remain valid forever.
We would never be doing Agile if we weren’t willing to recognize the limitations of waterfall.
We would not be looking into DevOps if we didn't recognize that there are other methods beyond ITIL.
NIST is recognizing that its password complexity recommendations aren't useful (for the last 20 years we have been doing it wrong).
Douglas Hubbard: Inventor of Applied Information Economics
Technical: Fake math on ordinal scales, range compression etc
Design: Scale response psychology
Behavioral Economics: Cognitive Biases, homo economicus, predictably irrational, systematically flawed, bounded rationality, rational in limit
We frequently say that Risk Management is "more art than science"; that is by choice, it doesn't have to be.
Some people did question what we are doing, and this is what they concluded:
We need to stop throwing words around and talk about loss events, we need to tell a story.
If we use terminology inconsistently we can’t measure real risks, prioritize or communicate effectively with the business.
Risk is a function of likelihood and impact, that means an event. A risk description should reflect that.
How much risk is associated with this particular loss event?
Technology, threats, attack vectors, "lack of …": these are not risks; neither are conformance exceptions, variations, etc.
If you follow a particular standard, how consistently are you using the terminology?
Is the terminology defined clearly for everyone to understand?
How much time have you spent on ensuring everyone speaks the same language, or did you assume…
FAIR pays attention to taxonomy and ontology and uses it consistently.
FAIR offers simple and precise definitions.
Using terminology wrongly has an actual impact on the analysis in FAIR (in contrast with other standards, where it is unlikely to affect the analysis result).
Mental model: the quality of analysis depends on the understanding of the practitioner. It inhibits communication, development, rationalization…
How do you communicate your rationale using the "mental model"? How inconsistent must that feel to your audience?
Are we surprised that we get accusations like “You are just making this up as you go…”?
How do we improve the analysis if only results are documented but not the analysis itself and the underlying assumptions?
Douglas Hubbard: “…relatively naïve statistical models seem to outperform human experts in a surprising variety of estimation and forecasting problems.”
FAIR supports getting the model out of our heads and putting it on paper.
What makes quantitative RM quantitative?
It's the inclusion of measurements: each contributing factor in FAIR can be measured as a number, a percentage, or a $ value.
Welcome to “economically driven cyber risk management”
But what does it mean “to measure”?
Measurements need to reduce uncertainty. They basically need to be useful: help us make decisions, know more afterwards than we did before… They are not about precision; imprecise measurements can still be useful.
We aim for accuracy within a level of precision that is useful for our task.
Every measurement taken is an estimate with some potential for variance and error. The question isn't whether a "measurement" is an estimate (they all are), but whether the measurements:
Are accurate (accurate, i.e. correct, not precise)
Reduce uncertainty
Can be arrived at within your time and resource constraints
"we lack data to assign probabilities…" --> We use probability because we lack perfect information, not in spite of it.
That’s all just guesswork!
Richard Thaler mentions Homer Simpson as an example of "System 1" and Mr. Spock as an example of "System 2",
the automatic and reflective systems described by Daniel Kahneman in "Thinking, Fast and Slow".
You are still a little skeptical?
Understandable, after all we have been told over and over again that quantitative measurements are difficult, not enough data and then there are all those intangibles, and “soft” measures…
It's a little difficult to absorb, after a lifetime of being told about "intangibles" that can't be measured.
Sounds all good, but not everything can be measured, there are intangibles and soft measures!!!
To overcome this mental block, we need to open our mind. The Clarification Chain….
Important assumption: You need less data than you think!
This is the FAIR ontology. We will get back to it later.
It looks intimidating but the beauty is that you can stay as high level as the scenario/context/available data need you to be. You don’t have to drill down and complicate it if not needed.