Winter 2019-2020 CSSE442
CSSE442 – Computer Security
Sid Stamm
[email protected] Rose-Hulman Institute of Technology
Computer Science and Software Engineering Department
Confidentiality and Integrity Policies Notes
1 Confidentiality Policies
A confidentiality policy is intended to protect secrets; specifically, it is intended to prevent unau-
thorized disclosure of information. One model (general purpose template) of a confidentiality policy
is the Bell–LaPadula (BLP) security model. Your textbook contains information about BLP in Chapter 1
(and this handout assumes you’ve read that), but we need to examine some more specifics of
BLP by introducing more formal definitions to probe its limits.
Before diving in, we must make some assumptions about trust. First, we will reason about subjects
in the system and not users. BLP assumes that users do not share data outside of the computer
system and cannot use the computer system without first authenticating as a subject. While users
may indeed leak secrets outside of the computing system (by reciting from memory, for example),
we assume that authorized users of the system are committed to protecting the secrets contained
within.
The Bell–LaPadula model leverages mandatory access control: the model does not give users the power
to alter access controls in the system. This restriction is manifested in a principle called
tranquility, which we’ll discuss a bit later.
1.1 Multi-Level Security
In its simplest form, BLP is a multi-level security model that protects Confidentiality. The goal
of the system is to ensure that secrets with a given confidentiality classification are disclosed
only to subjects with the appropriate clearance to read them. In addition, subjects should not be
able to accidentally (or intentionally) leak secrets to other subjects with lower clearances.
In BLP, each subject S and object O is assigned a confidentiality level (LS and LO, respectively).
The classification levels are:
Top Secret (TS)
Secret (S)
Confidential (C)
Unclassified (UC)
The most important, or most sensitive data are kept at the Top Secret (TS) level. Data that can
be read by anyone is kept at the Unclassified level. The levels in between provide granularity so
that some data can be shared in limited ways.
5 December 2019 Page 1
The Simple Security Condition requires that a subject S can read an object O only
if LO ≤ LS and any DAC permits it (“read down”).
The *-Property requires that a subject S can write to an object O only if LS ≤ LO
and any DAC permits it (“write up”).
This means that in BLP, all subjects can tell their secrets to anyone with an equal or higher
clearance, and they can listen to secrets classified at their level or at a less confidential level.
These rules are mandatory, and controlled by the system.
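These two rules can be captured as a pair of checks. Below is a minimal sketch, assuming an integer encoding of the four levels (UC < C < S < TS); the encoding and all names are illustrative, not from the handout:

```python
# Multi-level BLP checks. Levels are ordered UC < C < S < TS;
# the integer encoding is an assumption made for this sketch.
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # "Read down": a subject may read only objects at or below its level.
    return LEVELS[object_level] <= LEVELS[subject_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # "Write up": a subject may write only to objects at or above its level.
    return LEVELS[subject_level] <= LEVELS[object_level]

# A Secret-cleared subject may read Confidential data but not Top Secret:
print(can_read("S", "C"))    # True
print(can_read("S", "TS"))   # False
# ...and may write into Top Secret objects but not Confidential ones:
print(can_write("S", "TS"))  # True
print(can_write("S", "C"))   # False
```

Note that a real system would also consult the DAC matrix; the sketch covers only the mandatory part of the check.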
Formally: given a system with a set of states Σ, a set of transformations T that map one state
σi ∈ Σ to another σj ∈ Σ, and an initial state σ0 : if the Simple Security Condition and the *-Property
are preserved by all t ∈ T and σ0 is secure, then the system is considered secure.
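The inductive flavor of this claim can be sketched for a toy system in which a “state” is simply the set of currently granted accesses. The modeling choice and all names here are illustrative assumptions:

```python
# A toy state-based view of BLP: a state is a set of granted
# (subject, object, action) accesses. Names and levels are illustrative.
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}
subjects = {"alice": "S"}
objects = {"memo": "C", "battle_plan": "TS"}

def state_secure(accesses) -> bool:
    # A state is secure iff every granted access satisfies both properties.
    for subj, obj, action in accesses:
        ls, lo = LEVELS[subjects[subj]], LEVELS[objects[obj]]
        if action == "read" and not lo <= ls:    # Simple Security Condition
            return False
        if action == "write" and not ls <= lo:   # write-up rule
            return False
    return True

sigma0 = set()                                        # initial state: secure
sigma1 = sigma0 | {("alice", "memo", "read")}         # read down: still secure
sigma2 = sigma1 | {("alice", "battle_plan", "read")}  # read up: insecure
print(state_secure(sigma1))  # True
print(state_secure(sigma2))  # False
```

A transformation that could produce a state like sigma2 would break the inductive argument, which is exactly why every t ∈ T must preserve both properties.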
1.2 Some Questions
1. Why does BLP allow subjects to write into objects at a higher level than the subject’s
own?
2. Would raising the classification of an object violate either property of BLP? Why or why not?
3. If the users of a system are not malicious, why is it so important for BLP to have the *-
Property? Give an example situation that would violate the *-Property even if the users of
the system aren’t trying to leak sensitive data to lower levels.
1.3 Multi–Lateral Security
Sometimes data levels are not enough. To reduce the risk of unauthorized data flow, you may want
to limit subjects’ access not only by their clearance level but also by what they’re working on. It’s
not only about how much you trust the subject, but also about what they need to know.
The Bell–LaPadula model implements Multi–Lateral security by adding compartments. Now a
subject S has a clearance level LS and also a set of compartments corresponding to their domain.
EXAMPLE: Consider an intelligence agency that operates in Europe, North America,
and Asia. They may compartmentalize operations into three groups: {EUR, NA, ASIA}.
A subject working for the agency may have a Top Secret clearance, but work on only
European projects, and so they would have LS = TS, CS = {EUR}. The goal here
is to prevent the subject from leaking data to less-trusted sources and also to limit
their exposure to confidential data to that regarding Europe.
Now we need to define a new measure of trust or sensitivity.
Dominates: (L, C) dom (L′, C′) if and only if L′ ≤ L and C′ ⊆ C.
And this means we need to revisit the principles of BLP:
The Simple Security Condition requires that a subject S can read an object O only
if S dom O and any DAC permits it (“read down”).
The *-Property requires that a subject S can write to an object O only if O dom S
and any DAC permits it (“write up”).
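The dominates relation and the revised checks can be sketched directly. The compartment names reuse the agency example above; the level encoding is an assumption of this sketch:

```python
# The dominates relation over (level, compartments) pairs.
# Level encoding and subject/object names are illustrative.
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def dom(a, b):
    # (L, C) dom (L', C') iff L' <= L and C' is a subset of C.
    (la, ca), (lb, cb) = a, b
    return LEVELS[lb] <= LEVELS[la] and cb <= ca

analyst = ("TS", {"EUR"})          # Top Secret, European projects only
report  = ("S",  {"EUR"})          # a Secret report about Europe
asia_op = ("S",  {"EUR", "ASIA"})  # touches a compartment the analyst lacks

print(dom(analyst, report))   # True: the analyst may read the report
print(dom(analyst, asia_op))  # False: ASIA is outside the analyst's compartments
```

Note how the multi-level rules fall out as the special case where every compartment set is empty: dom then reduces to a plain comparison of levels.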
1.4 Changing Levels and Compartments
How can a subject “write down” when it’s necessary to produce less-sensitive documents or interact
with other subjects with different levels or compartments?
There’s a notion of a maximum security level and a current security level. A subject is allowed
to temporarily reduce her level or set of compartments from (L, C) to (L′, C′) so long as the
maximum security level dominates the current one: (L, C) dom (L′, C′).
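A sketch of this constraint, reusing a dominates check (all names and the level encoding are illustrative assumptions):

```python
# Dropping to a lower current level is allowed only when the subject's
# maximum level dominates the requested one. Names are illustrative.
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def dom(a, b):
    # (L, C) dom (L', C') iff L' <= L and C' is a subset of C.
    (la, ca), (lb, cb) = a, b
    return LEVELS[lb] <= LEVELS[la] and cb <= ca

def lower_current_level(max_level, requested):
    # Refuse any "drop" that is not actually dominated by the maximum.
    if not dom(max_level, requested):
        raise ValueError("requested level is not dominated by the maximum")
    return requested

# A TS/{EUR, NA} subject may work temporarily at S/{EUR}:
current = lower_current_level(("TS", {"EUR", "NA"}), ("S", {"EUR"}))
```

Attempting to "lower" to a level with a new compartment, or to a higher level, would raise here rather than silently widening access.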
But this has to be done carefully, and in a way that won’t violate the *-Property. So we introduce
a third principle (or requirement) of the system that dictates how and when subjects can change
levels.
Tranquility means that when a system is in use, subjects’ and objects’ levels and com-
partments do not change. This is also known as Strong Tranquility and is system-
enforced.
Tranquility exists so that someone who wants to prove confidentiality in the system is not reasoning
about a moving target. The textbook takes a weaker definition of tranquility with the sole purpose
of disallowing accidental declassification: in such a system, the rights of objects can’t change while
the object is being modified. To paraphrase, the weaker definition:
A system has Weak Tranquility when subjects’ or objects’ levels and compartments
do not change in a way that violates a given security policy.
Weak tranquility is introduced because, oftentimes, some amount of write-down (or
declassification) is required for a system’s normal operation.
1.5 Some Questions
1. Are there systems that are only Multi–Level or Multi–Lateral but not both? Give examples
of any if there are some.
2. What is an example of a system that would benefit from having Weak Tranquility over Strong
Tranquility?
3. Does it make any sense to have compartments in the Unclassified level? Why or why not?
2 Integrity Policies
An integrity policy is intended to protect the trust placed in data, not to keep the data secret. Its
primary purpose is to keep the quality of data as high as possible. At first glance, it seems
that Confidentiality and Integrity policies are very similar but approach Security from different
perspectives. In a Confidentiality model, the system is designed to protect how much everyone can
trust those who access the data. In an Integrity model, the system is designed to protect how much
everyone can trust the origin or quality of the data. One feels very much like read–access control
and the other very much like write–access control.
2.1 Low Water Mark
One way to think about integrity is to consider how the trustworthiness of your knowledge can
change over time. Say you start out with only the highest quality facts in your mind. As you
read more and more information, the “trustworthiness” of all of your knowledge is only as good
as the least trustworthy bit of information in your head. If you find yourself reading completely
untrustworthy information, it’s possible you may repeat it in the future — so the amount of trust
you have drops.
Specifically, in a Low Water Mark model with subjects s ∈ S, objects o ∈ O and integrity levels I,
there are three rules:
1. A subject s can only write to an object o if Io ≤ Is .
2. If a subject s reads an object o, then the subject’s integrity level drops to the minimum of Is and Io.
3. A subject s1 can only execute (or invoke) another subject s2 if Is2 ≤ Is1 .
This has an interesting effect. Over time, the integrity levels of all the subjects in the system drop
to a minimal level as they access various objects. This is primarily because, on a long timeline,
all subjects will access objects with less integrity than anything they have seen in the past, and thus
lower their own integrity levels. This is an unfortunate race to the bottom.
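The drift described above can be sketched with integer integrity levels (higher = more trustworthy); the encoding and names are illustrative assumptions:

```python
# Low Water Mark rules with integer integrity levels (higher = more
# trustworthy). The numeric encoding is an assumption of this sketch.
def can_write(i_subject: int, i_object: int) -> bool:
    # Rule 1: write only to objects at or below your integrity level.
    return i_object <= i_subject

def read(i_subject: int, i_object: int) -> int:
    # Rule 2: reading drops the subject's level to min(Is, Io).
    return min(i_subject, i_object)

def can_invoke(i_caller: int, i_callee: int) -> bool:
    # Rule 3: invoke only subjects at or below your integrity level.
    return i_callee <= i_caller

# The race to the bottom: a subject that starts at level 3 and reads a
# stream of objects ends up at the integrity of the worst thing it read.
level = 3
for obj_level in (3, 2, 1, 2):
    level = read(level, obj_level)
print(level)  # 1
```

Note that the drop in rule 2 is permanent: reading a high-integrity object afterward (the final `2` above) does not raise the subject back up.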
2.2 Biba’s Model
So how do we maintain integrity in a system and avoid this race to the bottom? Kenneth Biba
likely saw how Integrity seems like the inverse of Confidentiality, so he turned the Bell–LaPadula
model on its head, protecting trust in the data rather than in who accesses it.
In Biba’s model, each subject S and object O is assigned an integrity level in I. Subjects can only
create content at or below their integrity level and can view only at or above. His model effectively
has the inverse principles to the Simple Security Condition and the *-Property.
1. A subject s can only read an object o if Is ≤ Io (“read–up”; inverse of Simple Security).
2. A subject s can only write to an object o if Io ≤ Is (“write–down”; inverse of the *-Property).
Colloquially: if you are trustworthy, you can tell untrustworthy subjects what you know. If you
are untrustworthy, you can only listen to subjects at least as trustworthy as yourself (so you don’t
learn lies).
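Biba’s two rules are BLP’s with the comparisons flipped, which a short sketch makes plain (integer levels, higher = more trustworthy; the encoding is an assumption):

```python
# Biba's strict-integrity rules with integer integrity levels
# (higher = more trustworthy; the encoding is illustrative).
def can_read(i_subject: int, i_object: int) -> bool:
    # "Read up": only read objects at least as trustworthy as yourself.
    return i_subject <= i_object

def can_write(i_subject: int, i_object: int) -> bool:
    # "Write down": only write objects no more trustworthy than yourself.
    return i_object <= i_subject

# A high-integrity subject (3) may tell a low-integrity object (1) what it
# knows, but must not read from it (lest it learn lies):
print(can_write(3, 1))  # True
print(can_read(3, 1))   # False
# An untrustworthy subject may still listen upward:
print(can_read(1, 3))   # True
```

Unlike Low Water Mark, nothing here ever changes a subject’s level, so Biba’s strict model avoids the race to the bottom entirely.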
2.3 Principles of Function
All integrity models have a few principles of function that make sure the system maintains its
integrity.
Separation of Function: If two functions do not require the same resources, they do not share
resources. For example, do not develop your web site on a live system.
Separation of Duty: If it takes at least two steps to perform a critical function, at least two
different people should perform the steps. For example, the developers of a system are not
in charge of pushing it to a production system; the two steps here are “make sure it works”
and “make it live.”
Auditing: Often external to any Integrity model, good auditing practices can help with
recovery mechanisms when integrity fails.
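Separation of Duty, for instance, reduces to a very small check: the identity performing one step of a critical function must differ from the identity performing the other. The deployment framing and all names below are hypothetical:

```python
# A sketch of Separation of Duty for a two-step critical function:
# the person who built the release must not be the one who ships it.
# Names and the deployment scenario are illustrative.
def can_deploy(developer: str, releaser: str) -> bool:
    return developer != releaser

print(can_deploy("alice", "bob"))    # True: two different people
print(can_deploy("alice", "alice"))  # False: one person did both steps
```

Real systems usually enforce this at the point where the second step is requested, with an audit log of who performed each step.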
2.4 Some Questions
1. The principles of function seem very much like Prevention, Detection and Recovery. How do
they relate?
2. Why is it so important for Biba’s model to prevent a subject from executing/invoking a
subject with a higher integrity level?
3. How might Separation of Duty, Separation of Function, and Auditing be implemented in
Biba’s model?
4. Can Biba’s model make use of compartments like Bell–LaPadula’s model can? If so, what
extra flexibility can compartments provide Biba’s model?