1. Security Systems:
Goals, Definitions, Requirements, and Principles
Chulantha Kulasekere
Department of Electronic and Computer Engineering
Sri Lanka Institute of Information Technology
chulantha.k@sliit.lk
September 21, 2013
ECK/2013 (SLIIT)
FCCS
September 21, 2013
1 / 68
2. Security Systems: Requirements
Putting in place an effective security system requires planning,
resources, and effort from all levels of an organization.
Support from the management of an organization is key to a good
security system, because management is the only entity that can
effectively provide:
the list of assets and information that need to be protected to ensure
the continuity of the organization, based on:
  risk analysis
  risk mitigation
the resources to set up and maintain the system, based on:
  funding for the equipment required
  funding for the training and education of the staff
the means to enable enforcement of policy compliance and revision:
  auditing
  policy updating
The security program needs to:
be driven by the management to have a better chance of being
effective
be developed in terms of the organization as a whole and then
refined to fit the specific areas within the organization
3. Security Systems: Requirements ....
A key aspect of the development and implementation is
communication (between the security development team and the rest
of the organization).
The security system requires:
information ownership to be clearly specified
clear definition of staff responsibilities
policies to handle asset/information access
clear hierarchy and reporting procedure
Restricting access to information/assets is done via:
administrative controls
technical controls
physical controls
Example: critical security controls from SANS
http://www.sans.org/critical-securitycontrols/guidelines.php
4. Key Concepts
Threat:
Any circumstance or event with the potential to adversely impact
organizational operations (including mission, functions, image, or
reputation), organizational assets, or individuals through an information
system via unauthorized access, destruction, disclosure, modification of
information, and/or denial of service
Vulnerability:
Weakness in an information system, system security procedures,
internal controls, or implementation that could be exploited or
triggered by a threat source
Risk:
The likelihood that a given threat source will exploit a particular
vulnerability, and the resulting impact on the organization
Incident: The result of a successful attack
Countermeasure:
any organizational action or tool able to mitigate a risk deriving from
one or more attack classes intended to exploit one or more classes of
vulnerabilities.
5. Security System Principles
Effective security programs are shaped by the organization’s short and
long term objectives.
All effective programs are based on the AIC (or CIA) principles triad:
Availability
Integrity
Confidentiality
The security measures put in place attempt to:
provide support for the AIC principles
address the threats that may compromise one or more of the AIC principles
6. Availability Principle
What does the principle of Availability entail?
All systems should perform in a predictable manner and with the
condition that the performance is of an acceptable level.
Three causes of availability problems:
software
denied access to information resulting from software non-availability
denied access due to strong encryption
hardware
Denial of service due to DDoS attacks
Denial of service due to hardware non-availability
unexpected circumstances
Unexpected circumstances as a result of a natural event
Unexpected circumstances as a result of a human-caused disaster
7. Addressing Availability
To address the problem of service denial, one can deploy security
solutions specifically aimed at DDoS attacks.
To address the problem of service provision failure, one can employ
fault-tolerant computer systems.
8. Integrity Principle
What does the principle of Integrity entail? No unauthorized
modification is permitted, and the information provided is both
accurate and reliable.
Requires a combination of hardware, software and communication
methods to ensure that the data is not compromised.
Carefully developed controls are key to preventing data integrity
problems.
The integrity of the data can be compromised either by mistake or
with specific intent.
From a commercial software development point of view, the integrity
principle can be further refined in terms of:
a) whether the information is valid
b) whether the data has been compromised
c) whether the data source can be determined and verified
9. Addressing Integrity
To ensure the integrity of information, an auditing procedure must be
in place in conjunction with a separation of both functions and duties.
Separation of duties and functions - why is it important and how does
it help with the system security?
To compromise the security setup, a concerted effort from multiple
staff is required - the larger the number of staff required the smaller
the likelihood that the security will be breached
Auditing - why is it important and how does it help with the system
security?
Having a simple and clear procedure to gain access to
information/resources reduces the chances that access can be obtained
without proper authorization and without detection.
10. Confidentiality Principle
What does the principle of Confidentiality entail?
The secrecy of the data is maintained at all times.
The privacy of the data can be protected through a combination of
data access control and encryption.
The secrecy can be compromised in several ways:
malware
intruders
insecure networks
poorly administered systems.
packet capture
social engineering
password attacks
12. Universal security principles: Commonly used methods
Least Privilege
Motto: Do not give any more privileges than absolutely necessary to
do the required job
It applies not only to privileges of users and applications on a
computer system, but also to other, non-information-system
privileges of an organization's staff
The principle of least privilege is a preventive control, because it
reduces the number of privileges that may be potentially abused and
therefore limits the potential damage
Some examples of application of this principle include the following:
Giving users only read access to shared files if that’s what they need,
and making sure write access is disabled
Not allowing help desk staff to create or delete user accounts if all that
they may have to do is to reset a password
Not allowing software developers to move software from development
servers to production servers
13. Universal security principles: Commonly used methods
Defense in Depth
The principle of defense in depth is about having more than one layer
or different types of defense cascaded.
Even if one layer is breached, the next layer will hold out. Hence
difficulty of breaching increases.
An example application: use a firewall between the Internet and your
LAN, plus the IP Security Architecture (IPSEC) to encrypt all sensitive
traffic on the LAN. An attacker then has to compromise both the
firewall and the data encryption, which is hard.
A suggested method:
first, use preventive controls;
second, use detective controls to check for breaches of the preventive
controls;
third, use corrective controls to help you respond effectively to security
incidents and contain damage.
This does not mean indiscriminate application of controls. A balance
has to be found between the security provided and the financial, human,
and organizational resources you are willing to expend (some IT staff
apply controls liberally simply to make life easier for themselves).
14. Universal security principles: Commonly used methods
Minimization
The minimization principle is the cousin of the least privilege principle
and mostly applies to system configuration
do not run any software, applications, or services that are not strictly
required to do the entrusted job
a computer whose only function is to serve as an e-mail server should
have only e-mail server software installed and enabled. All other
services and protocols should either be disabled or not installed at all
to eliminate any possibility of compromise or misuse
The minimization principle not only increases security but usually also
improves performance, saves storage space, and is good system
administration practice in general.
Examples: see UNIX security best practices.
15. Universal security principles: Commonly used methods
Keep Things Simple
Complexity is the worst enemy of security. Complex systems are
inherently less secure because they are difficult to design,
implement, test, and secure.
Since the complexity of information systems and processes is bound to
increase with our growing expectations of functionality, we should be
very careful to draw a line between avoidable and unavoidable
complexity, and not sacrifice security for bells and whistles.
Regular security audits help keep a system simple by matching the
protection effort to the actual threat.
16. Universal security principles: Commonly used methods
Compartmentalization
Use of compartments (also known as zones, jails, sandboxes, and
virtual areas), is a principle that limits the damage and protects other
compartments when software in one compartment is malfunctioning
or compromised
Compartmentalization in the information security context means that
applications run in different compartments are isolated from each
other
Examples of this are:
Zones in Solaris 10 implement the compartmentalization principle and
are powerful security mechanisms.
If you have root privileges, you can do essentially anything you want;
if you don't have root access, there are restrictions. For example, you
can't bind to ports below 1024 without root access.
Similarly, you can't directly access many operating system resources;
for example, you have to go through a device driver to write to a disk
rather than dealing with it directly.
17. Universal security principles: Commonly used methods
Use Choke Points
Choke points are logical narrow channels that can be easily monitored
and controlled
An example of a choke point is a firewall
Virtual private network (VPN) and dial-in access points
The Windows domain controller is an application choke point
If every employee is in direct contact with everyone else in the world,
there is great potential for a social engineering attack. This is one
reason employees may not be allowed to directly contact the outside
world during business hours.
18. Universal security principles: Commonly used methods
Fail Securely
Failing securely means that if a security measure or control fails
for whatever reason, the system is not left in an insecure state.
For example, when a firewall fails, it should default to a deny all rule,
not a permit all rule.
However, fail securely does not mean “close everything” in all cases;
if we are talking about a computer-controlled building access control
system, for example, in case of a fire the system should default to
“open doors” if humans are trapped in the building
In this case, human life takes priority over the risk of unauthorized
access, which may be dealt with using some other form of control that
does not endanger the lives of people during emergency situations.
19. Universal security principles: Commonly used methods
Leverage Unpredictability
You should not publicize the details of your security measures and
defenses.
This principle should not be seen as contradicting deterrent security
controls—controls that basically notify everyone that security
mechanisms are in place and that violations will be resisted, detected,
and acted upon.
In practical terms, this means you can, for example, announce that
you are using a firewall that, in particular, logs all traffic to and from
your network, and these logs are reviewed by the organization—there
is no need to disclose the type, vendor, or version number of the
firewall; where it is located; how often logs are reviewed; and whether
any backup firewalls or network intrusion detection systems are in
place.
20. Universal security principles: Commonly used methods
Segregation of Duties
The purpose of the segregation (or separation) of duties is to avoid
the possibility of a single person being responsible for different
functions within an organization, which when combined may result in
a security violation that may go undetected.
No single person should be able to violate security and get away with
it
Rotation of duties is a similar control that is intended to detect abuse
of privileges or fraud and is a practice to help your organization avoid
becoming overly dependent on a single member of the staff.
By rotating staff, the organization has more chances of discovering
violations or fraud.
Doors of the Shrine.
21. What is access control?
It is granting or denying approval to use specific resources; it is
controlling access
The mechanism used in an information system to allow or restrict
access to data or devices
Illustrated via an example of a FedEx delivery man picking up a
parcel from a home
22. Terminology of access control
Object. An object is a specific resource, such as a file or a hardware
device.
Subject. A subject is a user or a process functioning on behalf of the
user that attempts to access an object.
Operation. The action that is taken by the subject over the object is
called an operation. For example, a user (subject) may attempt to
delete (operation) a file (object).
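As a sketch, these three terms can be modeled in a few lines of Python; the subject names, object name, and permitted operations below are purely illustrative, not drawn from any real system:

```python
# Hypothetical access list: (subject, object) -> permitted operations.
acl = {
    ("alice", "payroll.xlsx"): {"read", "write", "delete"},
    ("bob",   "payroll.xlsx"): {"read"},
}

def is_permitted(subject, operation, obj):
    """True if the subject may perform the operation on the object."""
    return operation in acl.get((subject, obj), set())
```

Here each key pairs a subject with an object, and the associated set lists the operations that subject may perform on that object.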
23. Roles in access control
24. Roles in access control ...
25. Types of Controls to ensure AIC principles are not
compromised
Central to information security is the concept of controls, which may
be categorized by their functionality (preventive, detective, corrective,
deterrent, recovery, and compensating) and plane of application
(physical, administrative, or technical).
Physical controls include doors, secure facilities, fire extinguishers,
flood protection, and air conditioning.
Administrative controls are the organization's policies, procedures, and
guidelines intended to facilitate information security.
Technical controls are the various technical measures, such as
firewalls, authentication systems, intrusion detection systems, and file
encryption, among others.
26. Types of Controls
Preventive Control: Preventive controls try to prevent security
violations and enforce access control. Like other controls, preventive
controls may be physical, administrative, or technical: doors, security
procedures, and authentication requirements are examples of physical,
administrative, and technical preventive controls, respectively.
Detective Controls: Detective controls are in place to detect security
violations and alert the defenders. They come into play when
preventive controls have failed or have been circumvented, and are no
less crucial than preventive controls. Detective controls include
cryptographic checksums, file integrity checkers, audit trails and logs,
and similar mechanisms.
Compensating Controls: Compensating controls are intended as
alternative arrangements for other controls when the original controls
have failed or cannot be used. When a second set of controls
addresses the same threats as another set of controls, the second set
are compensating controls.
27. Types of Controls ....
Corrective Controls: Corrective controls try to correct the situation
after a security violation has occurred. Although a violation occurred,
not all is lost, so it makes sense to try and fix the situation.
Corrective controls vary widely, depending on the area being targeted,
and they may be technical or administrative in nature.
Deterrent Controls: Deterrent controls are intended to discourage
potential attackers and send the message that it is better not to
attack, but even if you decide to attack we are able to defend
ourselves. Examples of deterrent controls include notices of
monitoring and logging as well as the visible practice of sound
information security management.
Recovery Controls: Recovery controls are somewhat like corrective
controls, but they are applied in more serious situations to recover
from security violations and restore information and information
processing resources. Recovery controls may include disaster recovery
and business continuity mechanisms, backup systems and data,
emergency key management arrangements, and similar controls.
28. Access Control Models
Mandatory Access Control (MAC)
Takes a stricter approach to access control: mandatory access
controls specified in a system-wide security policy are enforced by the
operating system and applied to all operations on that system. The
user has no discretion over access decisions.
It has two key elements
Labels. In a system using MAC, every entity is an object (laptops,
files, projects, and so on) and is assigned a classification label. These
labels represent the relative importance of the object, such as
confidential, secret, and top secret. Subjects (users, processes, and so
on) are assigned a privilege label (sometimes called a clearance).
Levels. A hierarchy based on the labels is also used, both for objects
and subjects. Top secret has a higher level than secret, which has a
higher level than confidential.
This is compartmentalization. As MAC is harder to implement,
MAC-based systems are typically used in government, military, and
financial environments, where higher-than-usual security is required
and where the added complexity and costs are tolerated
29. Access Control Models
Mandatory Access Control (MAC) ....
The implementation strategy of MAC is as follows
It grants permissions by matching object labels with subject labels
based on their respective levels.
To determine if a file can be opened by a user, the object and subject
labels are compared.
The subject must have an equal or greater level than the object in order
to be granted access. For example, if the object label is top secret, yet
the subject only has a lower secret clearance, then access is denied.
Subjects cannot change the labels of objects or other subjects in order
to modify the security settings.
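The level-matching step above can be sketched as follows; the numeric encoding of the levels is an illustrative assumption, not part of any particular MAC implementation:

```python
# Hierarchical classification levels: a higher number means more sensitive.
LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}

def may_open(subject_clearance, object_label):
    """The subject must have an equal or greater level than the object."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]
```

For example, a subject with only a secret clearance is denied access to a top-secret object, matching the slide's example.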
Major implementations of MAC are
Lattice model: Security levels for objects and subjects are ordered as a
lattice.
Bell-LaPadula confidentiality model: Advanced version of the lattice
model (actually this uses a mix of MAC and DAC)
30. Access Control Models
Mandatory Access Control (MAC) ....
A limited functional example of MAC is seen in Apple Mac OS X,
UNIX, and Microsoft Windows 7/Vista
Microsoft Windows implementation has four security levels—low,
medium, high, and system
Nonadministrative user processes run by default at the medium level
Specific actions (such as installing application software) by a subject
with a lower classification (such as a standard user) may require a
higher level (such as high) of approval
This approval invokes the Windows User Account Control (UAC)
function
A standard user needs to enter the admin password to elevate its
privileges to a higher level before installing
UAC attempts to match the subject’s privilege level with that of the
object
31. Access Control Models
Discretionary Access Control (DAC)
While MAC is the most restrictive model, the DAC model is the least
restrictive; it is, however, the most widely used.
With the DAC model, every object has an owner, who has total
control over that object.
The owner (creator) of information has the discretion to decide about
and set access control restrictions on the object in question, which
may, for example, be a file or a directory (UNIX: chmod). The owner
can create objects and give others permission to use them as well.
The flexibility of letting users decide on access is an advantage;
however, it is also a disadvantage, as users may make wrong decisions.
Example: with DAC, User X could access the files
EMPLOYEES.XLSX and SALARIES.XLSX as well as paste the
contents of EMPLOYEES.XLSX into a newly created document
MYDATA.XLSX. User X could also give User Y access to all of these
files, but only allow User Z to read EMPLOYEES.XLSX.
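The User X/Y/Z scenario can be sketched as an owner-controlled ACL; the class and method names below are hypothetical, not a real operating-system API:

```python
class DacFile:
    """A DAC-style object: the owner has total control over its ACL."""

    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}

    def grant(self, requester, user, perms):
        # Only the owner may change access, at their own discretion.
        if requester != self.owner:
            raise PermissionError("only the owner may grant access")
        self.acl.setdefault(user, set()).update(perms)

    def allowed(self, user, perm):
        return perm in self.acl.get(user, set())

employees = DacFile(owner="X")
employees.grant("X", "Y", {"read", "write"})  # Y gets full access
employees.grant("X", "Z", {"read"})           # Z may only read
```

The drawback noted on the next slide is visible here: the security of the object rests entirely on the owner's judgment in calling `grant`.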
32. Access Control Models
Discretionary Access Control (DAC) ...
There are two significant drawbacks to this model:
DAC relies on decisions by the end user to set the proper level of
security. As a result, incorrect permissions might be granted to a
subject or permissions might be given to an unauthorized subject
The subject’s permissions will be inherited by any programs that the
subject executes.
Attackers can take advantage of this inheritance as end users in the
DAC model often have a high level of privileges.
Examples: malware that is downloaded onto a user's computer would
then run in the same context as the user's high privileges; Trojans,
for example, are a particular problem with DAC.
One method of controlling DAC inheritance is to automatically reduce
the user's permissions. For example, Microsoft Windows 7 uses
Internet Explorer Protected Mode, which prevents malware from
executing code with elevated privileges without the user's explicit
approval.
33. Access Control Models
Role-Based Access Control (RBAC)
Rights and permissions are assigned to roles instead of individual
users. Sometimes called Non-Discretionary Access Control
This added layer of abstraction permits easier and more flexible
administration and enforcement of access controls
E.g., access to marketing files may be restricted to the marketing
manager role only, and users Kamal and Upul may be assigned the
role of marketing manager
Later, when Kamal moves from the marketing department elsewhere,
it is enough to revoke his role of marketing manager; no other
changes are necessary
This is the permissions model used in Microsoft Exchange Server
2013.
Additionally, there is another variant called Rule-Based Access Control
(also abbreviated RBAC)
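The Kamal/Upul example can be sketched as follows; the role and permission names are illustrative:

```python
# Permissions attach to roles; users acquire permissions via their roles.
role_permissions = {"marketing_manager": {"access_marketing_files"}}
user_roles = {"Kamal": {"marketing_manager"},
              "Upul":  {"marketing_manager"}}

def has_permission(user, permission):
    """A user holds a permission if any of their roles grants it."""
    return any(permission in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

# When Kamal leaves marketing, only his role assignment changes:
user_roles["Kamal"].discard("marketing_manager")
```

The extra layer of abstraction is what makes the revocation a one-line change: no per-file permissions need to be touched.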
34. Implementing Access Control
In-class independent work
Other models: Attribute based access control, Policy based access
control, Risk adaptive access control etc.
Use the UNIX setfacl and getfacl commands to test access control lists.
Identify the use of RADIUS (Remote Authentication Dial In User
Service) as a method to provide authentication based access control
(method, use and weaknesses)
Identify the use of Kerberos as a method of access control via
identification and verification of network users (method, use and
weaknesses)
Identify Lightweight Directory Access Protocol (LDAP) (X.500 Lite)
as an access control mechanism (method, use and weaknesses)
X.800-191.3 standard services
Explain how the access control lists are managed via a matrix
35. Let's look at how label-based access control works
MLS or Multilevel security is an implementation of the label based
access control
The following example is based on confidentiality (unauthorized eyes
cannot see) and disregards integrity and availability
The first step is to categorize data
Objects are an ordered list with labels: Unclassified, Confidential,
Secret, Top-Secret
Subjects (these express a membership in an interest group) are an
unordered set with labels: Crypto, Nuclear, Janitorial, Personnel
Example of two documents with associated labels
(Secret: {Nuclear, Crypto}) contains somewhat sensitive information
related to the categories Nuclear and Crypto
(Top Secret: {Crypto}) contains very sensitive information in category
Crypto.
A question we suggested for confidentiality policies is: How do I
characterize who is authorized to see what?
36. Let's look at how label-based access control works ...
Each individual wishing to access this information will have:
a hierarchical security level indicating the degree of trustworthiness to
which he or she has been vetted;
a set of "need-to-know categories" indicating domains of interest in
which he or she is authorized to operate.
The labels on documents indicate the sensitivity of the contained
information; “labels” on humans indicate classes of information that
person is authorized to access.
The need-to-know categories are an implementation of the Principle of
Least Privilege
Now let's look at how access is controlled:

Clarence (Subject)          Sensitivity (Object)       Access
(Secret:{Crypto})           (Confidential:{Crypto})    Yes
(Secret:{Crypto,Nuclear})   (Top-Secret:{Crypto})      No
(Secret:{Nuclear})          (Unclassified:{})          Yes
37. Let's look at how label-based access control works ...
To control access by Subjects to Objects, we need “labels” for both.
For Objects the labels indicate the sensitivity of the information
contained.
For Subjects, the labels indicate the authorization (clearance) to view
certain classes of information.
A Subject should be given the minimal authorization to perform the
job assigned. (Least Privilege)
Whether a Subject should be able to view a specific Object depends
on a relationship between the label of the Object and the clearance of
the Subject.
38. Let's look at how label-based access control works ...
Mathematically, it uses the dominance relation: given a set of security
labels (L, S), comprising hierarchical levels and categories, we can
define an ordering relation among labels.
(L1, S1) dominates (L2, S2) iff
L1 ≥ L2 in the ordering on levels, and
S2 ⊆ S1
We usually write (L1, S1) ≥ (L2, S2).
Note that this is a partial order, not a total order. I.e., there are
security labels A and B such that neither A ≥ B nor B ≥ A.
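The dominance relation can be written down directly; the numeric encoding of the levels is an assumption for illustration:

```python
# Hierarchical levels, ordered from least to most sensitive.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top-Secret": 3}

def dominates(label1, label2):
    """(L1, S1) dominates (L2, S2) iff L1 >= L2 and S2 is a subset of S1."""
    (l1, s1), (l2, s2) = label1, label2
    return LEVELS[l1] >= LEVELS[l2] and s2 <= s1  # "<=" on sets means subset
```

The partial (not total) order shows up immediately: neither (Secret:{Crypto}) nor (Secret:{Nuclear}) dominates the other, since the category sets are incomparable.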
39. Implementations of Access Control Models
Bell-LaPadula model
Biba model
Clark-Wilson model
Chinese wall model
40. Bell-LaPadula Model (BLP)
The BLP is a state machine model used for enforcing access control
in government and military applications
It was developed by David Elliott Bell and Leonard J. LaPadula
The Bell-LaPadula model focuses on data confidentiality and
controlled access to classified information
It defines two MAC rules and one DAC rule:
The Simple Security Property (ss-property): a subject at a given
security level may not read an object at a higher security level (no
read-up).
The ∗-property: a subject at a given security level must not write to
any object at a lower security level (no write-down). The ∗-property is
also known as the Confinement property.
The Discretionary Security Property (ds-property): an individual (or
role) may grant to another individual (or role) access to a document
based on the owner's discretion, constrained by the MAC rules, using
an ACL.
41. Bell-LaPadula Model (BLP) ....
Breach of confidentiality: Read from high and write to low
42. Bell-LaPadula Model (BLP) ....
Scenario: Consider a restrictive classification which has label ordering
Regular < Secret < Top-secret < Double-Z Top Secret.
The first rule says that you can't read documents if you don't have a
high enough classification level. Hence if you have a "Secret"
clearance, then you can read "Secret" and "Regular" documents, but
you can't read "Top Secret" or "Double-Z Top Secret" documents.
The simple description of this rule is "no read-up."
The second rule says that you can't write documents lower than your
classification level. You can, however, write documents higher than
your classification level, even though you still can't read them. So
again, if you have a "Secret" classification level, you can write to
"Secret," "Top Secret," and "Double-Z Top Secret," but you can't
write to "Regular." The simple description of this rule is "no
write-down."
The third rule allows for much more fine-grained access: within an
access level, the system uses an access control matrix to limit
operations. For example, a user with a Top-Secret label may not be
allowed to access all documents associated with the Top-Secret label.
According to the matrix: can the user delete the file? Write to the
file? Read from the file? Is the user the owner of the file?
43. Issues with Bell-LaPadula Model
The transfer of information from a high-sensitivity document to a
lower-sensitivity document may happen in the Bell-LaPadula model
via the concept of trusted subjects. Trusted Subjects are not
restricted by the *-property. Untrusted subjects are.
Suppose someone with access to a Top Secret document copies the
information onto a piece of paper and sticks it into an Unclassified
folder
Only addresses confidentiality, control of writing (one form of
integrity), ∗-property and discretionary access control
Covert channels, such as Trojan horses and requesting system
resources to learn about other users, are mentioned but not
addressed comprehensively
The tranquility principle (The tranquility principle of the
Bell–LaPadula model states that the classification of a subject or
object does not change while it is being referenced) limits its
applicability to systems where security levels do not change
dynamically.
44. Issues with Bell-LaPadula Model ...
Assume you could somehow change an object’s label from (Top
Secret: { Crypto }) to (Unclassified: {}) independent of the object’s
contents. This would clearly violate confidentiality.
The Tranquility principle has two flavors
The Strong Tranquility Property: Subjects and objects do not change
labels during the lifetime of the system.
The Weak Tranquility Property: Subjects and objects do not change
labels in a way that violates the “spirit” of the security policy.
Suppose your system includes a command to lower the level of an
object in an unconstrained way. Does that violate the goals of simple
security or the ∗-property?
Suppose your system includes a command to raise the level of an
object in an unconstrained way. Does that violate the goals of simple
security or the ∗-property?
45. Bell-LaPadula Model in Mathematics
The Simple Security Property: Subject S with clearance (LS, CS) may
be granted read access to object O with classification (LO, CO) only if
(LS, CS) ≥ (LO, CO).
The ∗-Property: Subject S with clearance (LS, CS) may be granted
write access to object O with classification (LO, CO) only if
(LS, CS) ≤ (LO, CO).
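Both properties reduce to the same dominance check, with the roles of subject and object swapped; a sketch, with an assumed numeric encoding of the levels:

```python
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top-Secret": 3}

def dominates(l1, s1, l2, s2):
    """(l1, s1) >= (l2, s2): higher-or-equal level and superset of categories."""
    return LEVELS[l1] >= LEVELS[l2] and s2 <= s1

def may_read(subject, obj):
    """Simple security: the subject's clearance must dominate the object."""
    return dominates(*subject, *obj)

def may_write(subject, obj):
    """*-property: the object's classification must dominate the subject."""
    return dominates(*obj, *subject)
```

Note the asymmetry: a Secret-cleared subject may write up into a Top-Secret object, but may not read it back.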
46. Bell-LaPadula Model Example
Consider a BLP system with three subjects and three objects together
with the ordering H > L:

Subject  Level          Object  Level
Sub1     (H:{A,B,C})    Obj1    (L:{A,B,C})
Sub2     (L:{})         Obj2    (L:{})
Sub3     (L:{A,B})      Obj3    (L:{B,C})

The corresponding access control matrix is:

       Obj1  Obj2  Obj3
Sub1   R     R     R
Sub2   W     R,W   W
Sub3   W     R     -
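A short script can reproduce this matrix from the labels; the encoding (H = 1 > L = 0, categories as frozensets) is an assumption for illustration:

```python
# Reproduce the access control matrix for the example.
def dominates(a, b):
    return a[0] >= b[0] and a[1] >= b[1]   # set >= is the superset test

subjects = {"Sub1": (1, frozenset("ABC")),
            "Sub2": (0, frozenset()),
            "Sub3": (0, frozenset("AB"))}
objects = {"Obj1": (0, frozenset("ABC")),
           "Obj2": (0, frozenset()),
           "Obj3": (0, frozenset("BC"))}

for s, sl in subjects.items():
    # R if the subject dominates the object, W if the object
    # dominates the subject, "-" if neither holds.
    row = [(("R" if dominates(sl, ol) else "") +
            ("W" if dominates(ol, sl) else "")) or "-"
           for ol in objects.values()]
    print(s, row)   # Sub1: R,R,R  Sub2: W,RW,W  Sub3: W,R,-
```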
47. Lattice Based Models
When security labels are organized in such a structure, the structure is
called a lattice.
A lattice is a partial ordering, satisfying the transitive property (if
a ≤ b and b ≤ c, then a ≤ c) and the antisymmetric property (if a ≤ b
and b ≤ a, then a = b), in which every pair of elements has a least
upper bound (join) and a greatest lower bound (meet).
A simple lattice example: the factors of 60, ordered by divisibility.
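The divisibility lattice on the factors of 60 can be sketched in Python, with divisibility as the partial order, gcd as meet, and lcm as join (the helper names are illustrative):

```python
from math import gcd

# Factors of 60, ordered by divisibility: a <= b iff a divides b.
divisors = [d for d in range(1, 61) if 60 % d == 0]

def leq(a, b):      # the partial order
    return b % a == 0

def meet(a, b):     # greatest lower bound = gcd
    return gcd(a, b)

def join(a, b):     # least upper bound = lcm
    return a * b // gcd(a, b)

# 4 and 6 are incomparable, yet they still have a meet and a join:
print(leq(4, 6), leq(6, 4))    # False False
print(meet(4, 6), join(4, 6))  # 2 12
```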
49. The Biba Model
Similar to BLP but focus is on integrity, not confidentiality
By analogy with BLP, bad (low integrity) information can flow into a
good (high integrity) object if:
a low integrity subject writes bad information into a high integrity
object; or
a high integrity subject reads bad information from a low integrity
object.
Note the distinction between military and commercial policy:
Military policy focuses on confidentiality
Commercial policy focuses on integrity
For modeling purposes, as we did with confidentiality, we assign
integrity labels
An object’s label characterizes the degree of “trustworthiness” of the
information contained in that object: e.g., gossip overheard on the
roadside should have lower credibility than a report from a panel of
experts.
A subject’s label measures the confidence one places in its ability to
produce or handle information: e.g., a certified application may have
more integrity than freeware downloaded from the Internet.
50. The Biba Model...
Intuitively, integrity relates to how much you trust an entity to
produce, protect, or modify data.
It uses the following principles: Separation of duty and functions,
Auditing.
Integrity labels look like BLP confidentiality labels.
A hierarchical component gives the level of trustworthiness.
A set of categories provides a list of domains of relevant competence.
Example: A lecturer might have integrity label: (Expert:
Management) meaning that he has a very high degree of credibility in
Management expertise. But there’s no particular reason to trust his
opinion on a matter of Politics or Drama.
This suggests, by analogy with the BLP rules, a subject shouldn’t be
allowed to “write up” in integrity or to “read down” in integrity.
51. The Biba Model...
Ken Biba (1977) proposed three different integrity access control
policies.
The Low Water Mark Integrity Policy (for objects and subjects and
integrity audit)
The Ring Policy
Strict Integrity (the one usually called the Biba Integrity Model)
Mathematically, the Biba model can be given as:
Simple Integrity Property: Subject S can read object O only if
I(S) ≤ I(O), i.e., a subject can only read objects at its own integrity
level or above.
Integrity ∗-Property: Subject S can write to object O only if
I(O) ≤ I(S), i.e., a subject can only write objects at its own integrity
level or below.
This means that a subject's integrity cannot be tainted by reading
bad (lower integrity) information, and a subject cannot taint more
reliable (higher integrity) information by writing into it.
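A minimal sketch of these two checks in Python, using plain integers for hierarchical integrity levels (the encoding and function names are illustrative):

```python
def biba_can_read(i_subject, i_object):
    # Simple Integrity Property ("no read down"): I(S) <= I(O)
    return i_subject <= i_object

def biba_can_write(i_subject, i_object):
    # Integrity *-Property ("no write up"): I(O) <= I(S)
    return i_object <= i_subject

H, L = 1, 0
print(biba_can_read(L, H))   # True: reading up cannot taint the subject
print(biba_can_write(L, H))  # False: a low-integrity subject cannot taint H
```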
52. Biba Model Example
Since this is an access control policy, it can be represented as an
access control matrix. Suppose H > L are hierarchical integrity levels:

Subject  Level          Object  Level
Sub1     (H:{A,B,C})    Obj1    (L:{A,B,C})
Sub2     (L:{})         Obj2    (L:{})
Sub3     (L:{A,B})      Obj3    (L:{B,C})

The corresponding access control matrix is:

       Obj1  Obj2  Obj3
Sub1   W     W     W
Sub2   R     R,W   R
Sub3   R     W     -

To protect both confidentiality and integrity, one could use both BLP
and Biba's Strict Integrity policy: that is, confidentiality labels and
integrity labels are needed for all subjects and objects, and access is
allowed only if permitted by both the BLP rules and the Biba rules.
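The combined check suggested in the last point can be sketched as follows; the (level, category-set) label encoding is an assumption for illustration:

```python
def dominates(a, b):
    """Label a dominates label b: level at least as high and a
    superset of categories (set >= is the superset test)."""
    return a[0] >= b[0] and a[1] >= b[1]

def allowed(op, conf_s, integ_s, conf_o, integ_o):
    """Grant access only if BOTH the BLP rules (on the confidentiality
    labels) and Biba's strict integrity rules (on the integrity
    labels) permit the operation."""
    if op == "read":    # BLP: no read up;  Biba: no read down
        return dominates(conf_s, conf_o) and dominates(integ_o, integ_s)
    if op == "write":   # BLP: no write down;  Biba: no write up
        return dominates(conf_o, conf_s) and dominates(integ_s, integ_o)
    raise ValueError("unknown operation: " + op)

lbl = (0, frozenset())
print(allowed("read", lbl, lbl, lbl, lbl))   # True: identical labels pass both
```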
53. BLP versus Biba Model
The Bell-LaPadula model is used to provide confidentiality; the Biba
model is used to provide integrity. Both are information flow models,
because they are chiefly concerned with data flowing from one level to
another. Bell-LaPadula uses security levels and Biba uses integrity
levels.
A tip to remember: if a rule contains the word “simple”, it is talking
about reading; if it contains a “star” (∗), it is talking about writing.
54. Independent Work on Security Models
Biba’s Low Water Mark Integrity Policy
Biba’s Ring Policy
Take-Grant Model (this is a DAC model)
Lipner’s Integrity Matrix Models (which combines the Bell model and
the Biba model)
Graham-Denning Model
Note that most of the above policies are commercially oriented, so they
are more DAC than MAC.
55. Biba Model Mandatory Policies
Low-Watermark Policy for Subjects
It is a relaxed “no read down”.
It does not restrict a subject from reading objects; instead, it
dynamically lowers the integrity level of the subject based on which
objects it observes.
One shortcoming of the policy: if a subject observes a less trusted
object, the subject's integrity level drops to that of the object.
Thereafter the subject cannot observe a higher integrity object, even
when it legitimately needs to.
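The dynamic lowering can be sketched as follows; the greatest-lower-bound helper and the (level, category-set) label encoding are illustrative assumptions:

```python
def glb(a, b):
    """Greatest lower bound of two (level, category-set) integrity labels."""
    return (min(a[0], b[0]), a[1] & b[1])

class Subject:
    def __init__(self, label):
        self.label = label

    def observe(self, obj_label):
        # Reading is never denied; instead, the subject's integrity
        # label drops to the glb of its label and the object's.
        self.label = glb(self.label, obj_label)

s = Subject((1, frozenset({"A", "B"})))
s.observe((0, frozenset({"A"})))        # reads a less trusted object
print(s.label)                          # level drops to 0, categories to {'A'}
s.observe((1, frozenset({"A", "B"})))   # the label never rises again
print(s.label)
```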
56. Biba Model Mandatory Policies
Low-Watermark Policy for Objects
It is a relaxed “no write up”.
Any subject may modify any object, regardless of integrity levels;
the modification is not prevented. In reality the policy is not very
practical.
If a subject modifies an object at a higher integrity level (a more
trusted object), the transaction is recorded in an audit log and the
object's integrity level is lowered.
57. Biba Model Mandatory Policies
Ring policy
The Ring Policy allows any subject to observe any object. This policy
is only concerned with direct modification.
The drawback of this policy is that it allows improper modifications
to take place indirectly.
A subject can read a less trusted object and then modify the data it
observed at its own integrity level.
For example, a user could read a less trusted object, remember the
data, and at a later time write that data to an object at their own
integrity level.
58. Clark and Wilson Model
David Clark and David Wilson (1987) argued that commercial
security has its own unique concerns and merits a model crafted for
that domain.
Clark and Wilson claimed that the following are four fundamental
concerns of any reasonable commercial integrity model:
Authentication: identity of all users must be properly authenticated.
Audit: modifications should be logged to record every program
executed and by whom, in a way that cannot be subverted.
Well-formed transactions: users manipulate data only in constrained
ways. Only legitimate accesses are allowed.
Separation of duty: the system associates with each user a valid set of
programs they can run and prevents unauthorized modifications, thus
preserving integrity and consistency with the real world.
59. Clark and Wilson Model.....
The policy is constructed in terms of the following categories:
Constrained Data Items: CDIs are the objects whose integrity is
protected
Unconstrained Data Items: UDIs are objects not covered by the
integrity policy
Transformation Procedures: TPs are the only procedures allowed to
modify CDIs, or take arbitrary user input and create new CDIs.
Designed to take the system from one valid state to another.
Integrity Verification Procedures: IVPs are procedures meant to verify
maintenance of integrity of CDIs.
It uses two kinds of rules, viz. certification rules and enforcement
rules, to control access.
60. Clark and Wilson Model Policy Rules
C1: IVP Certification – The system will have an IVP for validating the
integrity of any CDI. i.e. All IVPs must ensure that CDIs are in a valid
state when the IVP is run.
C2: Validity – The application of a TP to any CDI must maintain the
integrity of that CDI. CDIs must be certified to ensure that they result
in a valid CDI. i.e. All TPs must be certified as integrity-preserving.
C3: Modification – A CDI can only be changed by a TP. TPs must be
certified to ensure they implement the principles of separation of duties
and least privilege. i.e. Assignment of TPs to users must satisfy
separation of duty.
C4: Journal Certification – TPs must be certified to ensure that their
actions are logged. i.e. The operation of TPs must be logged.
C5: TPs which act on UDIs must be certified to ensure that they
result in a valid CDI
E1: Enforcement of Validity – Only certified TPs can operate on CDIs
E2: Enforcement of Separation of Duty – Users must only access CDIs
through TPs for which they are authorized
E3: User Identity – The system must authenticate the identity of each
user attempting to execute a TP
E4: Initiation – Only administrator can specify TP authorizations
61. Clark and Wilson Model Implementation
Permissions are encoded as a set of triples of the form: (user, TP,
{CDI set}), where user is authorized to perform a transaction
procedure TP, on the given set of constrained data items (CDIs).
Each triple in the policy must comply with all applicable certification
and enforcement rules.
Handling of untrusted inputs: Any TP that takes as input a UDI may
perform only valid transformations, or no transformations, for all
possible values of the UDI. The transformation either rejects the UDI
or transforms it into a CDI
For example, in a bank ATM, numbers entered at the keyboard are UDIs,
so they cannot be input to TPs as such. TPs must validate the numbers
(turning them into CDIs) before using them; if validation fails, the
TP rejects the UDI.
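A minimal sketch of access triples and a UDI-validating TP in Python; the users, TP names, and CDI names below are hypothetical:

```python
# Access triples (user, TP, {CDI set}); hypothetical data for illustration.
triples = {("alice", "post_payment", frozenset({"ledger", "balance"})),
           ("bob",   "audit_ledger", frozenset({"ledger"}))}

def may_execute(user, tp, cdis):
    """Rule E2: a user may run a TP on CDIs only via an authorized triple."""
    return any(u == user and t == tp and cdis <= s
               for u, t, s in triples)

def validate_amount(udi):
    """A TP guarding a UDI either rejects it or turns it into a CDI."""
    value = int(udi)                     # raises ValueError on bad input
    if value <= 0:
        raise ValueError("rejected UDI")
    return value                         # now a validated, constrained item

print(may_execute("alice", "post_payment", frozenset({"ledger"})))  # True
print(may_execute("bob",   "post_payment", frozenset({"ledger"})))  # False
```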
62. Clark and Wilson Model Implementation ...
Separation of duties: Only the certifier of a TP may change the list of
entities associated with that TP. No certifier of a TP, or of an entity
associated with that TP, may ever have execute permission with
respect to that entity
Enforces separation of duty with respect to certified and allowed
relations.
Ensuring integrity:
Provides an assurance that CDIs can be modified only in constrained
ways (Ensured by rules C1, C2, C5, and E1 and E4)
Provides an ability to control access to resources (Ensured by rules C3
and E2 and E3)
Provides an ability to ascertain after the fact that changes to CDIs are
valid and the system is in a valid state (Provided by rules C1 and C4)
Provides an ability to uniquely associate a user with her/his actions
(Enforced by rule E3)
63. Chinese Wall Model
Brewer and Nash (1989) proposed a policy that addresses the conflict
of interest problem. Strictly speaking, this is not an integrity
policy, but an access control confidentiality policy.
The Chinese Wall Model governs the ability to read or write
information. The main idea is that you are able to access any
information you want from any company, but once you access that
information, you are no longer allowed to access information from
another company within that class of companies.
The security policy builds on three levels of abstraction.
Objects such as files. Objects contain information about only one
company.
Company groups collect all objects concerning a particular company.
Conflict classes cluster the groups of objects for competing companies.
For example, consider the following conflict classes:
{ Dialog, Mobitel, Airtel }
{ Central Bank, HNB, HSBC }
{ Microsoft }
64. Chinese Wall Model....
We have a simple access control policy: A subject may access
information from any company as long as that subject has never
accessed information from a different company in the same conflict
class.
For example, if you access a file from Dialog, you subsequently will be
blocked from accessing any files from Mobitel or Airtel. You are free
to access files from companies in any other conflict class.
Notice that permissions change dynamically. The access rights that
any subject enjoys depends on the history of past accesses.
65. Policy Rules for Chinese Wall Model
(Chinese Wall) Simple Security Rule: A subject s can be granted
access to an object o only if the object:
is in the same company datasets as the objects already accessed by s,
that is, “within the Wall,” or
belongs to an entirely different conflict of interest class.
(Chinese Wall) ∗-property: Write access is only permitted if:
access is permitted by the simple security rule, and
no object can be read which is:
in a different company dataset than the one for which write access is
requested, and
contains unsanitized information.
The Chinese Wall is an access control policy in which accesses are
sensitive to the history of past accesses.
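The history-sensitive check can be sketched in Python using the conflict classes from the earlier example (function and variable names are illustrative):

```python
# Conflict-of-interest classes from the slides' example.
conflict_classes = [{"Dialog", "Mobitel", "Airtel"},
                    {"Central Bank", "HNB", "HSBC"},
                    {"Microsoft"}]

def may_access(history, company):
    """Simple security rule: allow access only if the subject has never
    accessed a *different* company in the same conflict class."""
    for cls in conflict_classes:
        if company in cls:
            return not any(h in cls and h != company for h in history)
    return False  # unknown company: deny by default

history = set()
print(may_access(history, "Dialog"))   # True: nothing accessed yet
history.add("Dialog")
print(may_access(history, "Mobitel"))  # False: same conflict class
print(may_access(history, "HNB"))      # True: a different class is fine
```

Note that the rights depend on the mutable `history` set, which is exactly the dynamic, history-sensitive behaviour the slide describes.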
66. The concept of a Trusted System
The models described above are all aimed at enhancing the trust that
users and administrators have in the security of a computer system.
Some definitions
Trust: The extent to which someone who relies on a system can have
confidence that the system meets its specifications (i.e., that the
system does what it claims to do and does not perform unwanted
functions).
Trusted system: A system believed to enforce a given set of attributes
to a stated degree of assurance.
Trusted computing base (TCB): A portion of a system that enforces
a particular policy. The TCB must be resistant to tampering and
circumvention. The TCB should be small enough to be analyzed
systematically.
The trust domain security rules specify the conditions:
for generating information
for maintaining information privacy
for maintaining information integrity
67. The reference monitor
Initial implementations used something called a Reference Monitor to
implement a TCS.
The reference monitor enforces the security rules (no read up, no
write down) and has the following properties:
Complete mediation: The security rules are enforced on every access,
not just, for example, when a file is opened.
Isolation: The reference monitor and database are protected from
unauthorized modification.
Verifiability: The reference monitor’s correctness must be provable.
That is, it must be possible to demonstrate mathematically that the
reference monitor enforces the security rules and provides complete
mediation and isolation.
Note that these requirements are very restrictive, and the solution
may be partly hardware based.