
1

This course material by Osama Salah is released under the following license:
Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)

Conditions for use are published here:
[Link]

2
Acknowledgement
• I would like to thank and acknowledge the following people for their
valuable contributions to this course material:
• Have your name and effort acknowledged here!!!

3
FAIR Open Course

Module 1 - FAIR Introduction


Ver. 0.3 / Last Update: 29/Nov/2019

4
Welcome
• Welcome to the FAIR Training Course.
• This material is being made available under the
Creative Commons Attribution-ShareAlike 4.0 International 
(CC BY-SA 4.0) License
• Corrections and Suggestions can be sent to [Link]@[Link]
• All sources used in this material have been referenced; if any omissions were made, please point them out and I will correct them.
• If any copyrighted material has been included, please note that this was done in error and will be corrected.

5
Module 1 – Introduction to FAIR

6
What is FAIR?

7
What is FAIR?
• FAIR stands for: Factor Analysis of Information Risk
• Originally published in 2005 by Jack Jones
• Adopted by the Open Group in 2013
• The Open FAIR Body of Knowledge.
• Risk Taxonomy Standard (O-RT)
• Risk Analysis Standard (O-RA)

8
Standard Taxonomy/Ontology
This means:
• it clearly defines the factors that make up risk
• it defines how they relate to each other.

9
A Methodology for Quantifying and Managing
Risk in Financial Terms in Any Organization
This means it enables us to talk to the business in the language it speaks best, which is dollars, euros, or any other currency you prefer.

10
A Complementary Analytics Model
• Complements Risk Frameworks, such as ISO 31000, COSO, NIST CSF,
OCTAVE, …
• It typically addresses activities such as Risk Analysis, Risk Evaluation, and Risk Treatment selection and prioritization, for which most standards do not provide pragmatic guidance.
• It is not a Risk Management Framework, but a well-reasoned and
logical risk evaluation framework.
• It provides a scenario modeling construct to build and analyze risk
scenarios.

11
Examples of where FAIR fits

[Diagrams: where FAIR activities fit within the ISO 31000 and OCTAVE processes.]

12
A Standard of The Open Group since 2013
• This means it has been extensively peer reviewed before being
endorsed and adopted by the Open Group.

13
The Risk Management Stack
The Risk Management Stack explains the value of quantitative risk analysis models.

[Stack diagram, top to bottom: Cost-Effective Risk Management, Well-informed Decisions, Effective Comparisons, Meaningful Measurements, Accurate Models.]

• Our objective is to achieve cost-effective risk management.
• Risk management is a decision-making discipline, which means we try to make well-informed decisions.
• Decisions are typically trade-offs between multiple options. We can only decide which is best for our particular context by making effective comparisons.
• Effective comparisons are objectively enabled through meaningful measurements.
• For measurements to be meaningful, logical, consistent, and defensible, they need to be based on accurate models.

14
A word on Models
Without a formal model we are depending on “mental models”. Extensive research has shown that we do much better using formal models. Even simple linear models are better than mental models.

An example of such research is the work of Professor Philip Tetlock.

15
“It is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, less still sophisticated statistical ones.”
Philip Tetlock, “Expert Political Judgment”
Author, Professor

[Study statistics: 284 experts, 82,361 forecasts, over roughly 19 years.]
16
Fundamental Terminology
• Before we get started with the FAIR model itself and its terminology, let’s first focus on some other important terms.

17
Terminology - Asset
Assets: Anything that may be affected in a manner whereby its value
is diminished or the act introduces liability to the owner.
Assets are things that we value. They usually have intrinsic value, are replaceable in some way, or create potential liability.
The business cares about the “real asset”. For example, a server might be an asset, but most often it isn’t the primary asset of interest in the analysis. It may be the point of attack through which an attacker gains access to the data.

? Can you think of any other examples of primary and secondary assets?

18
Terminology - Asset
Assets: Anything that may be affected in a manner whereby its value
is diminished or the act introduces liability to the owner.

? What are your thoughts on “Reputation” as an asset?

19
Terminology - Asset
• Reputation is an important organizational asset but it is not the
subject of a risk analysis.
• Reputation is harmed by acting against an asset.
• For example, if our service is unreliable and suffers frequent outages, our reputation will be harmed. A risk analysis would investigate what could cause our service to become unreliable, how often that could happen, and what the impact on our reputation would likely be.
• Thus, reputation damage is an outcome of a risk event.

20
Terminology - Threat
A Threat is anything that is capable of acting in a manner resulting in
harm to an asset and/or organization.
• Every action has to have an actor to carry it out. Typically called
“Threat agent” or “Threat community” but generally referred to just
as “Threat”.
• Threats need to have the ability to actively do harm to the asset we are performing the risk analysis on; a threat must have the potential to inflict loss.
• Natural events like earthquakes, floods etc. are also considered
threats.

21
Terminology – Threat Communities
Threat Communities: A subset of the overall threat agent population that
shares key characteristics.
Nation States
Cyber Criminals
Organized Crime
Insiders:
Privileged Insiders
Non-privileged Insiders
Hacktivists/Activists
Natural

22
Terminology – Threat Profiling
Threat profiling is the technique of building a list of common characteristics shared by a given threat community.
Possible attributes for a threat community profile:
• Motive
• Primary Intent
• Sponsorship
• Preferred general target characteristics
• Preferred targets
• Capability
• Personal risk tolerance
• Concern for Collateral Damage

23
Nation State Threat Community Profile
Factor | Value
Motive | Nationalism
Primary Intent | Data gathering or disruption of critical infrastructure in furtherance of military, economic, or political goals.
Sponsorship | State sponsored, yet often clandestine.
Preferred general target characteristics | Organizations/individuals that represent assets/targets valued by the state sponsor.
Preferred targets | Entities with significant financial, commercial, intellectual property, and/or critical infrastructure assets.
Capability | Highly funded, trained, and skilled. Can bring a nearly unlimited arsenal of resources to bear in pursuit of their goals.
Personal risk tolerance | Very high, up to and including death.
Concern for Collateral Damage | Some, if it interferes with the clandestine nature of the attack.

24
Source: Jones, Jack, and Jack Freund. Measuring and Managing Information Risk: a FAIR Approach.
Cyber Criminal Threat Community Profile
Factor | Value
Motive | Financial
Primary Intent | Engage in activities, legal or illegal, to maximize their profit.
Sponsorship | Non-state-sponsored or recognized organizations (illegal organizations or gangs). The organization, however, does sponsor the illicit activities.
Preferred general target characteristics | Easy financial gains via remote means; prefer electronic cash transmission over physical crimes involving cards. Need money mules or other intermediaries to shield them from reversed transactions.
Preferred targets | Financial services and retail organizations.
Capability | Professional hackers. Well-funded, trained, and skilled. May employ relatively desperate actors with or without native skillsets.
Personal risk tolerance | Relatively high (criminal activities); however, willing to abandon efforts that might expose them.
Concern for Collateral Damage | Not interested in activities that expose themselves or others from their organization. Prefer to keep their identities hidden.

? What are your thoughts on the threat community profile characteristics of a cyber criminal?

25
Source: Jones, Jack, and Jack Freund. Measuring and Managing Information Risk: a FAIR Approach.
Threat Community Resources
• Threat community profiles are typically very high level and don’t
change much over time.
• MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) can give more detailed information on tactics and techniques used by threats.
• Data Thieves: The Motivations of Cyber Threat Actors and Their Use and Monetization of Stolen Data
• All major Threat Intelligence service providers help their customers
understand their specific threat communities and provide detailed
information on specific threat actors (APT33, Shadow Brokers,
Equation Group etc.)

26
Probability vs. Possibility
• Possibility is “binary": something is possible or it is not.
• Probability is a continuum addressing the area between certainty and
impossibility.
Risk management deals with probability as it deals with future events
that always have some amount of uncertainty.
Given enough time almost everything is possible. By focusing on the
probable we can prioritize on what really matters.

! Probability is not prediction. The odds of rolling “1” with a single die are 1 in 6, but we can’t predict which face the die will land on.
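
As a quick illustration of long-run frequency versus single-outcome prediction, here is a minimal Python sketch (a classroom aid, not part of the FAIR material):

```python
import random

# Roll a fair die many times: the long-run frequency of "1" approaches
# the probability 1/6, yet no individual roll can be predicted.
random.seed(42)
rolls = [random.randint(1, 6) for _ in range(100_000)]
freq_of_one = rolls.count(1) / len(rolls)

print(f"Theoretical probability: {1/6:.4f}")
print(f"Observed frequency:      {freq_of_one:.4f}")
```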
27
Probability vs. Possibility
Possibility | Probability
It’s possible it could rain today. | There is a 50% chance of rain today.
It’s possible we could win the lottery. | The chances of winning the lottery are one in 20 million.
It’s possible we could die in a car accident. | The chance of being killed by a car is one in 50,000.
It’s possible that we face a ransomware attack this year. | There is a 60% probability that we will suffer a ransomware attack this year.

28
Probability vs. Possibility
• It is essential to use the right terminology, as it shifts the discussion.
• For example, auditors tend to focus on compliance. Their world is mostly binary: either you are compliant or you are not, and a situation either is a risk or it isn’t.
• For example, they might report that ten endpoints not having the latest AV signatures (out of 3000) is a risk. They might be looking at it from a “possibility” perspective, i.e. “…it’s possible that some malware will not be detected on one of these ten machines and will cause major losses.”
• If we shift the discussion to focus on probability, we might reason that infection at worst will be limited to these 10 machines and the impact will be low as there is no outbreak.

29
Fun Fact: Probability theory was invented to solve a gambling problem!!

• Chevalier de Méré was a 17th-century gambler whose questions ignited the mathematical foundations for the theory of probability. When he grew tired of losing at a game of dice, he turned to Blaise Pascal for help. Pascal in turn worked with Pierre de Fermat, and together they laid out the mathematical foundation for the theory of probability.
• Blaise Pascal and Pierre de Fermat are considered the fathers of
probability theory.

30
Accuracy and Precision
Explained in plain English:
Accuracy: How close a value is to its true/expected value.
Precision: How repeatable a result is.
You can think of accuracy as related to being correct and precision
related to being repeatable.

? Which is precise and which is accurate?

31
Accuracy and Precision

? When we have to estimate a quantity, what is typically more important: accuracy or precision?

If you asked for the height of Burj Khalifa and I told you 812 meters while someone else said between 820 and 840 meters, you would probably believe it’s 812 m. It’s a precise number and implies certainty. However, it happens to be wrong, and the estimate of 820 to 840 m is correct, i.e. accurate, because the real value is somewhere in that range (828 m).
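
The distinction can also be shown numerically. A minimal Python sketch; the two lists of sample measurements are invented for illustration:

```python
import statistics

true_height = 828.0  # Burj Khalifa, meters

# Precise but inaccurate: tightly clustered, far from the true value.
precise_wrong = [812.0, 812.1, 811.9, 812.0]
# Accurate but less precise: spread out, yet centered on the true value.
accurate_rough = [821.0, 839.0, 826.0, 835.0]

for name, values in (("precise/wrong", precise_wrong),
                     ("accurate/rough", accurate_rough)):
    bias = statistics.mean(values) - true_height  # accuracy: closeness to truth
    spread = statistics.stdev(values)             # precision: repeatability
    print(f"{name:>14}: bias = {bias:+6.1f} m, spread = {spread:4.1f} m")
```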

32
“It’s better to be
roughly right than
precisely wrong!”
- John Maynard Keynes
British economist

33
Measurement
• We have presented FAIR as a quantitative risk analysis model; this means that its building blocks can be quantified or measured.
• The building blocks are usually measured in numbers, percentages, or a currency.

34
But, what exactly is
a measurement?

35
“Although this may seem a paradox,
all exact science is based on the
idea of approximation. If a man tells
you he knows a thing exactly, then
you can be safe in inferring that you
are speaking to an inexact man.”
- Bertrand Russell
English mathematician and philosopher

36
Measurement
• Our general understanding of “measurement” is usually linked to
stating some exact and very precise number.
• However, as we have seen in the previous quote “…exact science is
based on the idea of approximation…”.
• This basically implies that a “measurement” can always be made more exact, for example by using a different tool or method. However, the fact that one measurement is more exact than another (the approximation) does not mean that the approximation on its own wasn’t useful or sufficient for the particular decision to be made.

37
Reference: How to measure anything by Douglas W. Hubbard
Measurement
• It is reasonable to consider approximations as valid
measurements.
• Douglas Hubbard in his books “How to measure
anything – Finding the Value of “Intangibles” in
Business” (now in its third edition) and “How to
measure anything in Cybersecurity Risk” discusses this
in more detail.
• A good presentation from the author is available on
YouTube:
• [Link]

38
“A quantitatively expressed
reduction of uncertainty based
on one or more observations.”
- Douglas Hubbard

Hubbard’s definition is based on insights from “information theory”, which was developed in the 1940s by Claude Shannon.
“Shannon developed information entropy as a measure of the information content in a message, which is a measure of uncertainty reduced by the message, while essentially inventing the field of information theory.” (Wikipedia)

Hubbard proposes to see measurements as a reduction of uncertainty (not necessarily an elimination of uncertainty). A quantitatively expressed reduction of uncertainty is sufficient to qualify as a measurement.
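
To make Shannon’s idea concrete, here is a small, purely illustrative Python sketch (the bin counts are invented): an observation that narrows a loss down from eight equally likely bins to two reduces uncertainty by two bits.

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

prior = [1 / 8] * 8      # before: eight equally likely outcomes
posterior = [1 / 2] * 2  # after an observation: only two remain plausible

print(f"Uncertainty before: {entropy_bits(prior):.1f} bits")
print(f"Uncertainty after:  {entropy_bits(posterior):.1f} bits")
print(f"Reduction:          {entropy_bits(prior) - entropy_bits(posterior):.1f} bits")
```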
39
Measurement
• As we have seen from the Risk Management Stack, RM is a decision-making discipline. Decisions are made under uncertainty; if we can reduce this uncertainty through an observation and express it quantitatively, then this qualifies as a measurement.
• Every measurement taken is an estimate with some potential for variance and error. The question isn’t whether a “measurement” is an estimate, because they all are. What matters is:
• Are they accurate (i.e. correct)?
• Do they reduce uncertainty (do they support decision making)?
• Can they be arrived at within our time and resource constraints?

40
Measurements and “Intangibles”
• This all sounds good, but what about “intangibles” or “soft
measures”? Surely there are things out there we just can’t measure!
• And what do we practically do with this definition?

? Can you think of any immeasurable “intangibles”?

41
Measurement and “Intangibles”
First of all, Douglas Hubbard offers the Clarification Chain as a simple tool to overcome our misconceptions about measurement and to avoid paralysis, i.e. not even attempting to think about how to measure because we believe it’s impossible from the start.

42
The Clarification Chain
• If it matters at all, it is detectable/observable.
If you can’t observe the effect of whatever it is you value, then it has
no effect and thus why bother with it? On the other hand if it
matters it must somehow be detectable or observable.
• If it is detectable, it can be detected as an amount (or range
of possible amounts).
If you can observe its impact you should be able to observe more or
less of it.
• If it can be detected as a range of possible amounts, it can
be measured.
43
Reference: How to measure anything by Douglas W. Hubbard
Measurement and “Intangibles”

? Do you see how “intangibles” could be measured?

The clarification chain is the first step, mentioned previously, but to practically apply its logic Douglas Hubbard proposes the following steps to solve your measurement problem:
Problem: I want to measure X.
1. What do you mean by X? How do you observe X? What do you observe when you see more or less of it?
2. Why do you care? If you know how to measure X, what will you do with this measurement? What decisions will you make?
3. What do you know now about X?

44
Reference: How to measure anything by Douglas W. Hubbard
Measurement Challenge
On his blog you can follow Douglas Hubbard helping readers with their
measurement challenges and learn more about this technique:
[Link]

45
Measurement Assumptions
Douglas Hubbard, based on his extensive experience,
also offers the following assumptions:
Four Useful Measurement Assumptions
• You problem is not as unique as you think.
• You have more data than you think.
• You need less data than you think.
• There is a useful measurement that is much simpler than
you think.

46
Reference: How to measure anything by Douglas W. Hubbard
Measurement in Practice
So we have now become “measurement believers” and deny the existence of immeasurable “intangibles”, but what do we do with this belief? How do we apply it to risk management?

47
Replace Single Point Estimates with
Ranges
When we place a dot on a risk matrix (heat map) we are envisioning a particular scenario unfolding. However, we are dealing with uncertainty, and single point estimates do not allow us to express this uncertainty.
In our problem space, single point estimates are almost always wrong.
What does the dot represent anyhow? Best case? Worst case? Something in between?

48
Replace Single Point Estimates with
Ranges
Imagine you analyze a particular risk, let’s say a ransomware attack on a company that takes down all workstations and servers. Your predefined risk heat map already has ranges or “bins” defined for losses:

Low: less than $100K
Medium: $100K – $500K
High: $500K – $2M
Very High: > $2M

? What if you estimate losses to be somewhere between $400K and $1.5M? Which of the predefined ranges do you pick? What if it was around $500K?

49
Replace Single Point Estimates with
Ranges
Instead of being limited by these predefined ranges, you now get to pick any range you feel fits your particular scenario.
You get to do that for likelihood and consequences for every single risk scenario you analyze. Every time, you can use the range you feel fits best.
And as a bonus, we will do actual math directly with these ranges instead of fake math based on labels like “1”, “2”, etc.
We will see later how to do the math with these ranges.

50
Single Point Estimates become Ranges

2 5 10
min most likely max

For example, if we are trying to estimate how often a risk event could occur
during a year, we simply define a minimum, most likely and maximum
value.

51
Ranges and Uncertainty

2 5 10
min most likely max

1 5 12
min most likely max
The more uncertain we are, the wider we make our ranges. This is how we express our uncertainty; we don’t ignore it.
52
Ranges become Distributions
[Figure: a distribution over Loss Magnitude ($1000s) spanning 10 to 300 with its peak at 70; y-axis: Probability of Occurrence.]

We can then transform these ranges into distributions, which will later allow us to do the math.

53
Ranges Become Distributions
• Besides using wider ranges, another tool we have to deal with uncertainty is to define a confidence in the most likely value.
• Basically, the more confident we are about the most likely value, the more of a peak our distribution will have.

[Figure: three distributions over the same range: high confidence (sharp peak), medium confidence, low confidence (nearly flat).]
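
One common way to model this is a modified PERT distribution, i.e. a Beta distribution scaled to [min, max], where a shape parameter lambda controls how peaked the curve is. The exact distribution is an implementation choice, not mandated by FAIR; here is a minimal Python sketch with illustrative parameter values:

```python
import numpy as np

def sample_pert(low, mode, high, lam=4.0, size=10_000, seed=0):
    """Sample a modified PERT distribution (a Beta scaled to [low, high]).
    Higher `lam` expresses more confidence in the mode: a sharper peak."""
    rng = np.random.default_rng(seed)
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + rng.beta(alpha, beta, size) * (high - low)

low_conf = sample_pert(10, 70, 300, lam=2)    # flatter: low confidence
high_conf = sample_pert(10, 70, 300, lam=20)  # peakier: high confidence
print(f"Spread at low confidence:  {low_conf.std():.1f}")
print(f"Spread at high confidence: {high_conf.std():.1f}")
```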

54
Estimating is not the same as guessing!!!

Guessing
Intuitive, casual, a spontaneous conclusion with no thought behind it…

Estimating
An intentional thought process: an informed assessment, examining assumptions, considering available data, developing a rationale, using ranges to account for uncertainty…
55
Calibrate your estimates
• Remember, we are estimating, not guessing, which means we can improve and get better at it.
• You can get trained or “calibrated” so that when you say you are 90% confident, you will be right 90% of the time.
• Everyone is systematically “overconfident”.

[Calibration chart: Assessed Chance Of Being Correct (x-axis, 50%–100%) vs. Actual Percent Correct (y-axis, 40%–100%). The range of results for studies of calibrated persons tracks the ideal calibration line, while the range of results from studies of un-calibrated people falls below it, in the overconfidence region.]

Source: Hubbard, D. W., & Seiersen, R. (2016). How to Measure Anything in Cybersecurity Risk.

56
Calibration - Tips
• If you feel like you don’t know where to start then start with absurdly
wide ranges.
• Don’t think of a number and then add/subtract an error. Treat each
bound separately (e.g. are you 90% certain the value is less than the
upper bound?)
• Be comfortable with wide ranges (they express your uncertainty)
• Assume your estimate is wrong. Now state why it was wrong.
• Apply an “equivalent bet”.

57
Calibration – The Equivalent Bet
Douglas Hubbard & Richard Seiersen in their book “How to Measure
Anything in Cybersecurity Risk” refer to the ”Equivalent bet” as a tool
to help make better estimates. The idea is simple:
• If your estimate is correct you can win $1000, else you win nothing.
• But, you are also given the option to spin a wheel instead and let fate
decide. The wheel has a 90% chance to win $1000 and 10% chance to
win nothing.
• Do you stick with your estimate or instead choose to spin the wheel?

58
Calibration – The Equivalent Bet
• The idea is that if you prefer to spin the wheel, then you believe the chance of winning through spinning (90%) is higher than the chance of your own estimate being correct (i.e. lower than 90%).
• In that case you go back and revisit your estimate. For example, you could do further research or widen your estimated range.
• The sweet spot is where you are more or less ambivalent between the two choices.

[Wheel diagram: 90% of the wheel is labeled “Win $1000”, 10% is labeled “Win $0”.]

59
More on Expert Estimation
• There is no denying that there is subjectivity to expert estimation; however, with exercises such as calibration it can be reduced.
• Other improvement options are to interview multiple SMEs and combine the results.
• Open brainstorming sessions should be avoided as they tend to be affected by “Groupthink” bias.

Groupthink is a psychological phenomenon that occurs within a group of people in which


the desire for harmony or conformity in the group results in an irrational or dysfunctional 
decision-making outcome. Group members try to minimize conflict and reach a consensus
decision without critical evaluation of alternative viewpoints by actively suppressing
dissenting viewpoints, and by isolating themselves from outside influences.
Source: Wikipedia

60
More on Expert Estimation
• Instead of an open brainstorming session, you can have a session where the scope is clearly laid out and all assumptions are documented. The purpose is to avoid misunderstandings in the interpretation of the objective; estimates, however, are not discussed.
• The estimates are then collected from the individuals separately.
• The results can be combined by giving higher weight to more senior or expert participants, as in the sketch below.
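
A minimal sketch of one simple combination scheme, a weighted average of three-point estimates (the experts, weights, and values are all hypothetical; more rigorous pooling methods are covered in the resources below):

```python
import numpy as np

# Hypothetical (min, most likely, max) estimates from three SMEs.
estimates = np.array([
    [2.0, 5.0, 12.0],   # senior analyst
    [1.0, 4.0, 10.0],   # engineer
    [3.0, 6.0, 15.0],   # consultant
])
weights = np.array([0.5, 0.3, 0.2])  # assessed expertise; must sum to 1

combined = weights @ estimates  # weighted average of each column
print(f"Combined estimate: min={combined[0]:.1f}, "
      f"most likely={combined[1]:.1f}, max={combined[2]:.1f}")
```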

Resources:
• Modeling expert opinion
• Incorporating differences in expert opinions
• Sources of error in subjective estimation
• Aggregating Expert Opinion in Risk Analysis: An Overview of Methods
• The Team as a Measurement Instrument (registration required)

61
Calibration Exercise
• Please open the provided Excel sheet to do an estimation calibration exercise.
• This will be a Google Sheet that can be used online by multiple students and
will then track progress and compare results etc.
• Need a lot more questions for that!!!
• Additional Calibration Questions are provided by Douglas Hubbard as an
additional resource to his books:
• [Link]

• The Credence Calibration Game, by CFAR


• [Link]
• [Link]
62
Risk Quantification Illustration

We might be getting ahead of ourselves here, but for some people it is easier to follow once they see where all of this leads.

Risk

Loss Event Frequency Loss Magnitude

For now, just understand that Risk in FAIR is a function of Loss Event Frequency (how often losses will be incurred) and Loss Magnitude (how large each loss will be).
63
Risk

Loss Event Frequency Loss Magnitude

1 5 12 10 70 300

Imagine you defined two ranges, one for Loss Event Frequency and one for Loss Magnitude.
These get transformed into distributions, visualized here as histograms (instead of a curve).
64
Risk

Loss Event Frequency Loss Magnitude

1 5 12 10 70 300

Imagine we pick a random value from each range, for example 5.1 and $162.5K. If we multiply these two values we get the loss exposure of one particular scenario that might unfold.
65
Risk

Loss Event Frequency Loss Magnitude

1 5 12 10 70 300

Now we just keep picking values randomly; the higher the bar, the more often the algorithm picks values from that range. After each pick we multiply and record the result.
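
That loop is easy to express in code. A minimal Python sketch using the slide’s numbers; the PERT shape is an assumption, and note that full FAIR implementations typically sample a separate loss magnitude for each loss event rather than multiplying one magnitude draw by the frequency:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pert(low, mode, high, size, lam=4.0):
    """Modified PERT distribution: a Beta scaled to [low, high]."""
    a = 1 + lam * (mode - low) / (high - low)
    b = 1 + lam * (high - mode) / (high - low)
    return low + rng.beta(a, b, size) * (high - low)

N = 10_000                        # number of simulated scenarios
lef = sample_pert(1, 5, 12, N)    # loss events per year
lm = sample_pert(10, 70, 300, N)  # loss magnitude per event, in $1000s

annual_loss = lef * lm            # one loss exposure per scenario, $1000s
print(f"Mean simulated annual loss exposure: ${annual_loss.mean():,.0f}K")
```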
66
[Diagram: a random value is picked uniformly between 0 and 1 and mapped through the cumulative distribution to select a sample.]
67
Monte Carlo Simulation

68
Monte Carlo Simulation
The modern version of the Monte Carlo
method was invented in the late 1940s by 
Did you Stanislaw Ulam, while he was working on
Know? nuclear weapons projects at the 
Los Alamos National Laboratory.

Being secret, the work of von Neumann and Ulam required a


code name. A colleague of von Neumann and Ulam, 
Nicholas Metropolis, suggested using the name Monte Carlo,
which refers to the Monte Carlo Casino in Monaco where Ulam's
uncle would borrow money from relatives to gamble.
69
[Link]
Monte Carlo Simulation
..that Monte Carlo is used in path/ray tracing in
Did you the rendering of realistic looking computer
Know? generated images?

[Link]
70
Monte Carlo Simulation
• That’s essentially what is known as a Monte Carlo Simulation. Of
course with computers we can calculate thousands of scenarios in
seconds.
• And to make sense out of all these calculated loss exposures we
visualize them again using a histogram.

Further Reading:
Monte Carlo Simulation, a Simple Guide
Mathematical Foundations of Monte Carlo Methods

71
Probability Distribution

[Histogram of simulated loss exposure: 80% of losses are equal to or below $800K.]

72
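
Reading a figure like this off simulated data is a one-liner. A sketch with stand-in lognormal data (the parameters are invented, so the result will not match the slide’s $800K exactly):

```python
import numpy as np

rng = np.random.default_rng(1)
annual_loss = rng.lognormal(mean=6.0, sigma=0.8, size=10_000)  # stand-in, $1000s

p80 = np.percentile(annual_loss, 80)
print(f"80% of simulated losses are at or below ${p80:,.0f}K")
```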
[Figure: 80% of losses fall between $198K and $534K.]

73
[Figure comparing two distributions, Risk vs. Risk after Treatment: for Risk, 80% of losses are equal to or below $800K; for Risk after Treatment, 80% are equal to or below $514K.]

74
Loss Exceedance Curve

[Loss exceedance curve: for example, an 80% chance of losing more than $220,000.]
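
A loss exceedance curve plots, for each loss level, the probability that simulated losses exceed it. A minimal sketch with stand-in data (the distribution parameters are invented, so the numbers will not match the slide):

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=6.0, sigma=0.8, size=10_000)  # stand-in, $1000s

# Build the curve: sort losses ascending; the exceedance probability of the
# i-th smallest loss is the fraction of scenarios strictly above it.
sorted_losses = np.sort(losses)
exceedance = 1.0 - np.arange(1, sorted_losses.size + 1) / sorted_losses.size

# Read one point off the curve, e.g. the chance of losing more than $220K:
p = (losses > 220.0).mean()
print(f"P(annual loss > $220K) = {p:.0%}")
```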

75
Loss Exceedance Curve Comparison
                   | Min    | Avg     | 20%   | Max
Risk               | $23.3K | $538.7K | $800K | $2.6M
Treatment Option 1 | $15.1K | $329.7K | $500K | $1.8M
Treatment Option 2 | $11.2K | $77.5K  | $110K | $279.7K

76
End of Module 1
Now we should have a basic understanding of how risk quantification
works. In the next modules we will start focusing more on the FAIR
model itself and see how it fits into risk quantification.

77
