Security Policies and Implementation Issues
Chapter 11, Data Classification and Handling Policies and Risk Management Policies
Chapter 12, Incident Response Team (IRT) Policies
Objectives:
This lesson covers chapters 11 and 12. It discusses policies that relate
to data classification, general risks, and risk assessment. It also discusses
policies that relate to incident response teams and procedures. Objectives
important to this lesson:
Data classification policies
Data handling policies
Risks related to information systems
Risk assessment policies
Quality assurance and quality control
Incident definition
Incident response policy definition
Incident classifications
Incident response teams
Procedures for IRTs
Concepts:
Chapter 11
Data Classification
This chapter begins with some ideas about classifying data. Most
people and most organizations classify data by its purpose, its use, or
some other sorting method. There is more to it, however, when you include
the idea that some data must be kept secure or secret.
Security classification usually involves putting data into several
categories, each more sensitive than the last. The text tells us
that the time and expense of classifying an unending stream of data often
leads organizations to choose a scheme that limits the number of items that
must be examined for classification. It presents three methods that provide
this kind of filter:
Classify only the most
important data, the highest risk data for the organization. Put
all other data in a default (unclassified) status.
Classify data by its storage location
or location of origin. This
is a bit imprecise. The text offers the example of classifying all data
stored in a particular database as confidential. This would probably
result in classifying some data that does not actually need to be classified
at that level, and missing some that was not put in the correct database.
Classify every document or data cluster at the time it is created or used. This would seem to require
more classification work, performed by people who are not necessarily trained to
do that job.
The text moves on to a more basic topic: what do we mean by classification
of data? (Should have hit this one first.) The text explains three
classification concepts that may not mean what you think:
Classification based on the
need to protect data - This is probably what you were thinking
of when the chapter started. The text says this kind is called security
classification, which makes sense. It includes concepts about who
is allowed to see, use, and know about such
data, as well as rules for protecting it. Users wanting to access data
classified in this manner will require different kinds of authorizations to gain that access.
Classification based on the
need to retain data - Most records do not need to be
kept forever. Organizations may have regulations about how long
to keep various kinds of data, which means that the data must be
classified properly to meet this requirement. The organization may also
require that records older than the retention standard must be deleted,
erased, or destroyed. This seems counter to
logic. Why would it be necessary to delete older records? The reason may
be security, but it may be storage cost that drives the
decision.
Classification based on the need to recover
data - Information may also be classified according to the need
for it in a disaster recovery. The text recommends a classification
scheme that does not have so many categories that staff are confused
about what is needed first or next.
The illustration on page 298 uses three classifications, each with a
different deadline for restoring that kind of data. They are classified
as needing restoration in 30 minutes, in 2 days, or in 30 days. These
are, obviously, values for Maximum Tolerable Outage (MTO) for
that data. The text observes, sensibly, that these time frames will
not fit all businesses. You must establish the time frames that make
the most sense depending on the business you are in.
The text returns to classifications that address secrecy
on page 299. It is probable that a security classification like the
ones described would be used by a company that is regulated by law,
such as any company that takes payment information from customers. The
example given on page 299 may be typical of organizations regulated by
FERPA:
Prohibited information - laws or regulations
require the protection of this information; this is the most
restricted category
Restricted information - information that would be
prohibited, but wider access is needed for use inside the
organization
Confidential information - information not made
public, but not sensitive enough to be in the first two
categories
Unrestricted information - information that may be
released to the general public
The discussion continues with the general US military classification
scheme. It is also called the National
Security Classification. Note that although it has five levels
(in this text), the adjectives used in the three most sensitive levels
are not defined, so it would be impossible to classify information under
this system without more guidance. There is more guidance in Executive
Order 13526.
Unclassified -
information that is available
for general release; this is the least restricted category
Sensitive but unclassified - information that is
sensitive enough that it is not subject to FOIA, but
also not sensitive enough to fit another category; may also be called For
Official Use Only (FOUO)
Confidential -
information whose disclosure would cause
damage to national security
Secret -
information whose disclosure would cause
serious damage to national security
Top Secret -
information whose disclosure would cause
exceptionally grave damage to national security
Material should be examined
before it is classified,
and it should be reexamined
periodically to consider changing
its classification category. Your text lists three ways a document
classified by the US government may be declassified. The ITS 421 text
lists four ways and describes them better:
National Security
Classification (US government) declassification methods:
Automatic declassification
- classified documents that are 25 years old may be automatically
declassified and placed in the national archives; there are exceptions
to this rule, established by the Department of Justice
Systematic declassification
- documents less than 25 years old may be reviewed for historical
importance, and may be declassified
Mandatory declassification
review - if an authorized
holder requests that a document be declassified, the owning agency must review the request and respond that the request is approved, the request is denied, or that the agency cannot confirm or deny the existence
of the document; denials may be appealed
Freedom of Information Act
(FOIA) request - anyone in the general
public may request that a document be declassified by filing a FOIA
request; there are limits to the kinds of requests that can be made.
Documents classified by business organizations
typically follow a similar classification scheme, but it is worded
specifically for the organization's needs.
Common Corporate Security
Classification
Public -
information that may be given to the public
Internal -
information not given to the public, but disclosure would not damage the
company; information is restricted to employees
Sensitive -
information whose disclosure would cause
serious damage to the company; network infrastructure
information, customer lists, and vendor lists fit this category
Highly Sensitive -
information whose disclosure would cause
extreme damage to the company; customer PII is an example of
information that fits this category
Sometimes a commonly used classification scheme doesn't fit.
The text tells us on page 303 that there are guidelines for creating your
own classification scheme in the COBIT and PCI DSS
models. The text sensibly recommends that we pay more attention to the definition
of each level, and less to the label we use for it. The definition
tells our employees how to classify information and how to treat
information that has been classified. The steps on page 303 are a good
start:
How many classifications do you need? How many kinds
of secrets or sensitivity do you work with?
Define each classification. You may need to revise
your answer to step 1.
The text recommends assigning a score to potential breaches based on how
much they would violate Confidentiality, Integrity, and Availability.
The variety of scores, or the way they cluster, may help
you decide how to define the classifications and how many definitions you need. On a
scale of 0 to 10, is a 10 on Confidentiality the same as a 10 on
Integrity or Availability? (A short sketch of this scoring idea appears after this list.)
Name each classification. A name that describes and
differentiates each classification is a good idea.
Assign a protocol to each classification that tells
your staff how to handle information in that classification.
Set audit and reporting requirements as
required by law, or as needed in your organization.
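As an illustration of the scoring idea in step 2, here is a minimal Python sketch. It assumes a hypothetical 0-to-10 score for each of Confidentiality, Integrity, and Availability, and it reuses the corporate labels (Public, Internal, Sensitive, Highly Sensitive) from the scheme above; the thresholds and the choice to key on the single worst score are illustrative assumptions, not rules from the text.

    # Hypothetical mapping from CIA breach scores (0-10) to a classification label.
    # The thresholds below are illustrative, not taken from the text.
    def classify_by_cia(confidentiality: int, integrity: int, availability: int) -> str:
        """Map the worst-case breach score to a classification label."""
        worst = max(confidentiality, integrity, availability)
        if worst >= 9:
            return "Highly Sensitive"
        if worst >= 6:
            return "Sensitive"
        if worst >= 3:
            return "Internal"
        return "Public"

    # Example: exposure would badly damage confidentiality, so the data is Highly Sensitive.
    print(classify_by_cia(confidentiality=9, integrity=4, availability=2))

Whether a 10 on Confidentiality should weigh the same as a 10 on Availability is exactly the question the text raises; a real scheme might weight the three attributes differently.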
Data Handling
Let's move on to the next section on page 306, about data
handling policies. The text returns to the idea of encryption,
referencing laws that require private data to be encrypted,
and that require breaches of unencrypted data to be reported.
The text tells us that security policies must be clear about when to
use encryption. The question may be better stated as, when do we not
have to use it?
In
the two scenarios presented, starting on page 307, two different exploits
are illustrated. In both, data at rest on the network is encrypted.
In the first scenario, the hacker breaks into an application,
then makes a request from that application to an encrypted
database. The application has permission to retrieve the
decryption key. It does so, and is allowed to retrieve the data. The
example tells us that the key might not even be needed if the
application does the decryption itself. Either way, the breach of the
application led to the exposure of data that encryption was supposed to protect.
In the second scenario, the hacker breaks into the operating
system of a workstation or a server. The hacker then steals
an encrypted data file. Since the file is still encrypted, and
the hacker has not stolen the key, the data file is not yet
exposed. The hacker still has to decrypt the file to get any data from
it.
To make this situation more secure, the text recommends three
rules of protection:
Encryption keys should be stored separately
from encrypted data.
Encryption keys must be retrieved by a secure
process, separate from data requests.
Administrator rights to the operating system will not, in
and of themselves, give unencrypted access to an encrypted database.
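A minimal Python sketch of those three rules, using the third-party cryptography package. The fetch_key function and its allow-list stand in for a real key management service; they are assumptions for illustration, not part of the text.

    # Sketch: keys live in a separate service, retrieved by a separate, authorized call.
    from cryptography.fernet import Fernet

    def fetch_key(requester: str) -> bytes:
        """Retrieve the encryption key through a process separate from data requests.
        In practice this would call a key management service; the allow-list is hypothetical."""
        if requester not in {"payroll_app"}:
            raise PermissionError("requester is not authorized for this key")
        return Fernet.generate_key()  # placeholder for a key stored apart from the data

    # The data store holds only ciphertext; administrator rights to the file system
    # alone do not yield the key.
    key = fetch_key("payroll_app")
    ciphertext = Fernet(key).encrypt(b"employee salary data")
    plaintext = Fernet(key).decrypt(ciphertext)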
The text continues with an observation that some organizations
encrypt laptop hard drives, but fail to encrypt data leaving through
other avenues, such as email, memory sticks, and optical
discs. It is the duty of IT professionals to stay informed about
laws and regulations regarding the handling of data, and to pass that
information to their employers to gather support for appropriate
policies.
On page 309, the text describes a life cycle that we could apply
to most data. As we have already discussed, data must be used,
stored, and eventually disposed of. The cycle described
in the text applies to most data, but the time it should stay in any one
status will vary with its type, use, and requirements that apply to it.
Policies should be developed about data in each of the states listed:
Creation
Access
Use
Transmission
Storage
Physical transport
Destruction
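One way to make such policies concrete is to pair each state with at least one handling control. The Python sketch below does that; the controls listed are illustrative examples, not requirements from the text.

    # Hypothetical pairing of each life-cycle state with an example policy control.
    DATA_LIFECYCLE_POLICIES = {
        "Creation":           "classify and label the record when it is created",
        "Access":             "grant access based on classification and role",
        "Use":                "log access to sensitive classifications",
        "Transmission":       "encrypt data in transit",
        "Storage":            "encrypt data at rest; store keys separately",
        "Physical transport": "encrypt removable media; track chain of custody",
        "Destruction":        "wipe or shred according to the retention schedule",
    }

    for state, control in DATA_LIFECYCLE_POLICIES.items():
        print(f"{state}: {control}")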
Business Risks
It is hard to tell what the text is worried about in this
section. The author seems to be making a list of types of risks while
still talking about data classification. In the graphic on page 312,
the text is not addressing risk types at all. It tells us that the
organization has identified three classes of data and chosen backup
strategies that lead to a recovery time appropriate to the timeframes
associated with those classes.
Mission critical data
- needed within 30 minutes of an attack; should be backed up by
mirrored disks for live recovery
Normal operations -
needed within 48 hours of an attack; should use a combination of
on-site and off-site storage; note the slope
of the curve for this category, which indicates off-site storage
for items at the high end
Optimized operations
- who knows what the category name means, but the data will be needed
within thirty days of the attack; this is data we do not need for daily
operations, but will need within a month, so off-site storage is fine
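A small Python sketch of the mapping the graphic describes. The class names and timeframes come from the list above; the idea of attaching a backup strategy to each recovery deadline is the text's, but the exact structure here is an assumption for illustration.

    # Hypothetical lookup of recovery deadline and backup strategy by data class.
    from datetime import timedelta

    RECOVERY_TARGETS = {
        "mission_critical":     (timedelta(minutes=30), "mirrored disks (live recovery)"),
        "normal_operations":    (timedelta(hours=48),   "on-site plus off-site backups"),
        "optimized_operations": (timedelta(days=30),    "off-site backups"),
    }

    def backup_plan(data_class: str) -> str:
        """Return a one-line plan whose recovery time fits the class's tolerable outage."""
        deadline, strategy = RECOVERY_TARGETS[data_class]
        return f"restore within {deadline} using {strategy}"

    print(backup_plan("mission_critical"))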
The point of this transition from the classification section
seems to be that classification is needed to tell us which data
elements are most important to operations, which tells us that the
risks they are exposed to are most important to us.
The discussion turns to risks and policies that are concerned
with risks. Really, they should all be concerned with risk, but some
organizations lose sight of that goal. The text informs us on page 313
that some regulators want more than legal compliance. They are
looking for effective efforts to reduce risk. The graphic on
page 314 tells us that we should be reassessing risks on a regular
basis (hence, the circle) and that we should follow the steps over and
over to be sure we are not missing anything. The book's circle starts
in a different place. You should start wherever the company happens to
be. The list below starts from scratch:
Identification
Assessment
Prioritization
Response and policy development (This includes development
of new policies and responses.)
Monitoring the effectiveness of policies and responses, and
improving them.
Return to 1. Start again.
Risk Assessment Policies
The text discusses several steps that are commonly used when
assessing the risks an organization faces. It begins several steps into
the traditional process. Let's remember that you start by identifying your assets, you continue by determining
the vulnerabilities of those assets, and then you determine the exploits that the assets are subject
to.
Assuming you have followed the first three steps, there are
still some things to do before
we can obtain the value on page 316.
Each asset needs to
be given a value, based on its
replacement cost, its
current value to the
organization, or the value of the income
it generates. Pick one. This is the Asset
Value. Let's choose $100 as an example for Asset Value.
Next, we need to determine, for each exploit, what the probable loss would be if that
exploit occurs successfully. Would we lose the entire asset? Half of it? Some other percentage? Which percentage
we pick tells us the Exposure Factor
of a single occurrence of that exploit for this asset. Let's choose 50% as an example for Exposure Factor.
We are still not where we want to be. Asset Value
times Exposure Factor equals the Single Loss Expectancy. This matches what the text calls Impact if the event occurs. In this example, it is $50.
Now, to do the problem in the book, we still need the Likelihood the event will occur.
The classic way to do this is to consult your staff about the frequency
of successful attacks of this type, or to consult figures from vendors
like Symantec, McAfee, or Sophos
about expected attack rates for your industry or environment. Let's
assume we have done that, and we are confident that we expect 10
successful attacks per year in our example. This is the Annualized Rate of Occurrence.
Taking the numbers we have so far, we should multiply the Annualized Rate of Occurrence times the Single Loss Expectancy, which will give us the Annualized Loss Expectancy for this asset from this kind of attack. This corresponds to the Risk Exposure shown on page 316. In the example we are considering, that amounts to $500.
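Here is the same arithmetic as a short Python sketch, using the example numbers above. The formulas (SLE = AV x EF, ALE = ARO x SLE) are the standard quantitative risk calculations the text is walking through.

    # Quantitative risk example from the discussion above.
    asset_value = 100.00       # AV: replacement cost, current value, or income value
    exposure_factor = 0.50     # EF: fraction of the asset lost in one successful exploit
    annual_rate = 10           # ARO: expected successful attacks per year

    single_loss_expectancy = asset_value * exposure_factor            # SLE = $50
    annualized_loss_expectancy = annual_rate * single_loss_expectancy # ALE = $500

    print(f"SLE: ${single_loss_expectancy:.2f}  ALE: ${annualized_loss_expectancy:.2f}")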
All that work led us to just one loss expectancy for one
asset from one kind of attack. That gives you an idea of the work
involved in calculating the numbers for each asset, each asset vulnerability,
and each kind of attack on those vulnerabilities.
On page 317, the text presents some classic strategies for
managing risk. These are not the only ones ever used, but they
represent four well known strategies.
Risk avoidance -
make every effort to avoid your vulnerabilities being exploited; make
the attack less possible, make the threat less likely to occur; avoid
risk by avoiding the activity associated with the risk, and by
providing an active defense against it; the text calls this a business decision
Risk transference - in general, letting someone else worry about it. In the ITIL model, this is included in the definition of a service: "A
service is a means of delivering value to customers by facilitating
outcomes customers want to achieve without the ownership of specific
costs and risks." A
reader might misunderstand this statement, thinking that the customer
does not pay anything. That is not the case. An IT service provider
would assume the costs and risks of an operation in return for the customer's payment for the service. This can be done in-house or by outsourcing.
Risk mitigation - this method seeks to reduce the effects of
an attack, to minimize and contain the damage that an attack can do;
Incident Response plans, Business Continuity plans, and Disaster
Recovery plans are all part of a mitigation plan; a list of mitigation
methods appears on page 318
Risk acceptance -
this counterintuitive idea makes sense if the cost of an incident is
minimal, and the cost of each of the other methods is too high to
accept; the basic idea here is that it costs less just to let it happen
in some cases, and to clean up afterward; this can also be the case
when the risk cannot be managed other than to be aware of it; the text
says this is either a business or a technology decision
The text turns to a discussion of vulnerability assessment, which
you would need to do in order to perform the calculation a few pages
ago. As the text explains, there are several ways to assess
vulnerability, and you should pursue as many of them as may apply to
your situation. Some recommendations are offered:
Penetration testing, not just on firewalls, but on systems as well
Scanning source code of an application for known vulnerabilities
Scanning your network and all devices for open ports, which can be a guide to hardening your systems
The list at the bottom of page 318 is not very helpful. The
next page describes automated processes that are often used to test
systems for open ports and code problems. Of course, you could hire a
penetration testing company or a hacker to test your systems as well.
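For the port-scanning recommendation, here is a minimal Python sketch using only the standard library. A real assessment would use a dedicated scanner such as Nmap and, of course, written authorization; the host and port range here are placeholders.

    # Naive TCP connect scan; suitable only for systems you are authorized to test.
    import socket

    def open_ports(host: str, ports: range, timeout: float = 0.5) -> list:
        """Return the ports on host that accept a TCP connection."""
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    found.append(port)
        return found

    print(open_ports("127.0.0.1", range(1, 1025)))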
Chapter 12
Incidents
Okay, let's turn this around. Defining an IRT before you define an incident
is nonsense. Sorry, Mr. Johnson.
Let's start at the bottom of page 330. The short version is that an incident
(actually, a security incident, because there are other
types) is an event that significantly violates a security policy.
It can be any kind of disruption of service or violation of the CIA standard,
as long as it puts the organization at risk. In other environments, the
simple word incident is more generic, so be aware that this chapter
is about security incidents.
Incident Response Policy
That being understood, the chapter actually opens with the idea that
there should be a multidisciplinary team in place to handle significant
security issues. This is confusing because it is not uncommon for there
to be a general incident response team on the organization's help
desk, whose job is to handle any incident that does not have significant
security implications. It is important that when, for instance, there
is an ongoing attack on some aspect of the organization, the right
people are available to defend our assets. The text also discusses the
idea that there will be smaller violations of security policies that will
not require the attention of this team of specialists. Those problems,
such as sharing a password, will be handled by local management, without
needing to involve the security incident response team (SIRT).
This is why there is a need for a security incident response policy,
a set of criteria that make it easier to determine when there is
a security incident, and when the current problem is only a security
infraction, which will be handled and properly reported. The essence
of the distinction is this: is this an emergency? If so, the SIRT should
be consulted.
Incident Classification
Your security incident policy should include a classification
method; otherwise the SIRT will receive the wrong trouble calls, and will
fail to receive the right ones. The text informs us that there is no definitive
triage
list for this purpose. This is probably because the duties of the SIRT
staff vary from one organization to another.
The
text tells us that the Visa company requires a report of any breach
of customer information, and that it tracks these reports by exploit type.
Tracking this information provides a rough estimate of how many attacks
of each type we might expect to see in a given time period. Compare the
list of types tracked by Visa on page 331 to the list of types tracked
by NIST standards on page 332. There are unique items on each list. Do
the Visa merchants never encounter a DoS attack? Do the agencies using
NIST standards never have misconfigured networks? Perhaps those
issues are infrequent in their respective environments, or they are unlikely
to cause security problems for those organizations.
Regardless of how incident types are tracked, it is a common practice
to handle minor incidents at the help desk or within the work area. However,
there must be a definition of what is a minor incident, and what is a
major incident. As the text says, it is easy to see the difference at
the top of the scale. It is more difficult in the middle, so there must
be measurements we can use. When there is a potential for loss
of life, it is a major incident. When it affects all
or a significant number of users,
it is a major incident. In practice, the number of users affected
or threatened is often a measure
of the severity of the incident. If incidents can be measured on that
scale, that is a good method. However, there may be other factors. In
small, outlying locations, it may be more meaningful to measure the percentage
of staff affected, rather than the raw number. If there are only 10 staff
at one location, and 100 at another, the effect of 5 people being unable
to work is more significant in the first location than in the second.
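That percentage-based measure can be stated as a simple rule. The Python sketch below uses a 50% threshold and a two-level major/minor split; both are illustrative assumptions, not standards from the text.

    # Hypothetical severity rule: life-safety risk is always major;
    # otherwise compare the share of users affected to a threshold.
    def incident_severity(affected_users: int, total_users: int,
                          life_safety_risk: bool = False) -> str:
        if life_safety_risk:
            return "major"
        if total_users == 0:
            return "minor"
        return "major" if affected_users / total_users >= 0.5 else "minor"

    # 5 of 10 users down at a small site is major; 5 of 100 at a larger site is minor.
    print(incident_severity(5, 10))    # major
    print(incident_severity(5, 100))   # minor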
Incident Response Teams
On page 333, the text turns to the ways an incident response team might
be organized and empowered. It may begin with a charter,
which is a commonly used business document that establishes the purpose
of a group or project, and the extent
of the authority granted to the staff involved in it. The reason
for a charter is to make it clear to all staff that the security incident
response team has been given authority to take charge of an incident,
to act to resolve it, and to expect cooperation from all concerned staff.
The text lists three scopes that
a security incident response team might be empowered to act under.
On-site response - The security
incident response team is empowered to take a hands-on
approach to incidents, taking charge of them and performing
the necessary tasks to resolve them. The text explains that political
realities may require the SIRT staff to advise
a local expert about what to do. This is still within the scope of this
scenario.
Support role - This scope
is more likely when the organization is complex,
having many systems whose maintenance is done by experts
who are the appropriate staff to handle their problems. It is also appropriate
when other staff have expertise in handling incidents. The SIRT members
will provide advice and management of the situation.
Coordination role - When the
organization is larger than one geographic location, it may be best
to have the SIRT staff act as a central authority whose role is to manage
the activities of local staff at every location. This is a problem if
there are many locations, few staff to distribute among them, and no
remote management software.
The text moves on to consider the specialties that might be useful for
the members of a SIRT. If you are not reading carefully, you might miss
the fact that security staff form the core of the team, so I will add
a bullet point for them:
SIRT core members who are security experts
Experts in systems that are affected
Human resources staff when needed, such as when there is an internal
attack
Legal staff who may interface with police agencies and/or advise the
organization about legal and regulatory responsibilities
The other staff listed on page 336 are more useful to the business
side of the organization than to the technical
solution side. Even the data owner may be a business person who
has official control of the data,
but who does nothing on the technical side. The text beats this concept
to death for a few more pages, but we don't have to watch the beating.
On page 340, the text presents a case for having Business
Impact Analysis done. It relates to a concept we have seen before,
establishing what resources we
need to restore, with what speed,
and in what order in various incident
scenarios. Once this information is prepared, we can write incident policies
and procedures that describe what
must be done in particular circumstances.
As
usual, we find the actual steps to perform during or after an incident
in the procedures associated with it. The graphic on page 342 shows a
circular set of processes, each of which would have procedures to follow
to achieve the desired result. The information in this text in the remaining
pages in the chapter was summarized better in our last text. Documentation
should take place at all stages:
Business
Impact Analysis - The green highlight
on this bullet is to show that this step should be done when times are
good and we can examine our systems performing normally.
Before you can plan for what to do, you have to figure out what is normal
for your business, what can go wrong, and what can be done to minimize
the impact of incidents and problems/disasters (see the bullets below).
What are the business's critical functions?
Can we construct a prioritized list
of them?
What are the resources (IT
and other types as well) that support those functions?
What would be the effect of a successful attack
on each resource?
What controls should
be put in place to minimize the effects of an incident or disaster?
(Controls are
proactive measures to prevent or minimize threat exposure.)
Incident
Response Planning - The red highlight
on this bullet is to acknowledge that the plans made in this step are
used when there is an emergency for one or more users. (Shields up,
red alert? Why were the shields down?)
The text is consistent with the ITIL guidelines
that call a single occurrence of a negative event an incident.
An incident response plan is a procedure that
would be followed when a single instance is called in, found, or detected.
For example, a user calls a help desk to report a failure of a monitor
that is under warranty. (Note that this is an example of an IT incident, not an
IT security incident. What further details might make this part of a
security incident?) There should be a common plan to follow to repair
or replace the monitor. Incident Response Plans (Procedures) may be
used on a daily basis.
Business
Continuity Planning - The orange highlight
is meant to indicate that these plans are not concerned with fighting
the fire, but with conducting business
while the fire is being put out.
Business continuity means keeping
the business running,
typically while the
effects of a disaster are still being felt. If we have no power, we
run generators. If we cannot run generators (or our generators fail),
we go where there is power and we set up an alternate business site.
Or, if the scope of the event is small (one or two users out of many)
maybe we pursue incident management for those users and business continuity
is not a problem.
Disaster
Recovery Planning - The yellow highlight
here is to indicate that the crisis should be over and we are cleaning
up the crime scene with these plans.
A disaster involves widespread
effects that must be overcome. A disaster might be most easily understood
if you think of a hurricane, consequent loss of power, flooding that
follows, and the rotting of the workplace along with the ruined computers
and associated equipment.
A disaster plan is
what we do to restore the
business to operational status after the
disaster is over. There may be specific plans to follow for disasters
under the two bullets above, but the disaster recovery plan is used
after the crisis, unless this
term is applied differently in your working environment.
By the way, in ITIL terms, a series
of incidents may
lead us to discover what ITIL calls a problem,
something that is inherently wrong in a system that might affect all its
users. When a problem knocks out a critical service, we have a disaster.
The organization you work for may use all three terms, or any two of
them to mean different scopes of
trouble. You need to know the vocabulary to use in the setting where
you work, and you need to call events by the names they use.
The text also mentions analysis
of the incident and our response.
Analysis of the incident should
begin during the incident, to
lead us to a good solution. Analysis after
the incident can examine what actually happened,
whether the steps we took were
effective, and what we should
recommend or require
to avoid such an event in the future.