ITS 305 - Security Policies and Auditing

Chapter 11, Data Classification and Handling Policies and Risk Management Policies

Objectives:

This lesson covers chapter 11. It discusses policies that relate to data classification, general risks, and risk assessment. Objectives important to this lesson:

  1. Data classification policies
  2. Data handling policies
  3. Risks related to information systems
  4. Risk assessment policies
  5. Quality assurance and quality control

Concepts:

Chapter 11

Data Classification

This chapter begins with some ideas about classifying data. Classification usually involves putting data into several categories, each more sensitive than the last. The text tells us that the time and expense of classifying an unending stream of data often leads organizations to choose a scheme that limits the number of items that must be examined for classification. It presents three methods that provide this kind of filter:

  • Classify only the most important data, the highest risk data for the organization. Put all other data in a default (unclassified) status.
  • Classify data by its storage location or location of origin. This is a bit imprecise. The text offers the example of classifying all data stored in a particular database as confidential. This would probably result in classifying some data that does not actually need to be classified at that level.
  • Classify every document or data cluster at the time it is created or used. This would seem to require the most classification work, done by people who are not necessarily trained for that job.
The text moves on to a more basic topic: what do we mean by classification of data? (Should have hit this one first.) The text explains three classification concepts that may not mean what you think:
  • Classification based on the need to protect data - This is probably what you were thinking of when the chapter started. The text says this kind is called security classification, which makes sense. It includes concepts about who is allowed to see, use, and know about such data, as well as rules for protecting it. Users wanting to access data classified in this manner will require different kinds of authorizations to gain that access.
  • Classification based on the need to retain data - Most records do not need to be kept forever. Organizations may have regulations about how long to keep various kinds of data, which means that the data must be classified properly to meet this requirement. The organization may also require that records older than the retention standard be deleted, erased, or destroyed. This seems counter to logic. Why would it be necessary to delete older records? The reason may be security, but it may be storage cost that drives the decision.
  • Classification based on the need to recover data - Information may also be classified according to the need for it in a disaster recovery. The text recommends a classification scheme that does not have so many categories that staff are confused about what is needed first or next.

    The illustration on page 298 uses three classifications, each with a different deadline for restoring that kind of data. They are classified as needing restoration in 30 minutes, in 2 days, or in 30 days. The text observes, sensibly, that these time frames will not fit all businesses. You must establish the time frames that make the most sense depending on the business you are in.

The text returns to classifications that address secrecy on page 299. It is probable that a security classification like the ones described would be used by a company that is regulated by law, such as any company that takes payment information from customers. The example given on page 299 may be typical of organizations regulated by FERPA:

  • Prohibited information - laws or regulations require the protection of this information; this is the most restricted category
  • Restricted information - information that would otherwise be prohibited, but wider access is needed for use inside the organization
  • Confidential information - information not made public, but not sensitive enough to be in the first two categories
  • Unrestricted information - information that may be released to the general public

The discussion continues with the general US military classification scheme. It is also called the National Security Classification. Note that although it has five levels (in this text), the adjectives used in the three most sensitive levels are not defined, so it would be impossible to classify information under this system without further guidance, which is provided in Executive Order 13526.

  • Unclassified - information that is available for general release; this is the least restricted category
  • Sensitive but unclassified - information that is sensitive enough that it is not subject to FOIA, but also not sensitive enough to fit another category; may also be called For Official Use Only (FOUO)
  • Confidential - information whose disclosure would cause damage to national security
  • Secret - information whose disclosure would cause serious damage to national security
  • Top Secret - information whose disclosure would cause exceptionally grave damage to national security

Material should be examined before it is classified, and it should be reexamined periodically to consider changing its classification category. Your text lists three ways a document classified by the US government may be declassified. The ITS 421 text lists four ways and describes them better:

National Security Classification (US government) declassification methods:

  • Automatic declassification - classified documents that are 25 years old may be automatically declassified and placed in the National Archives; there are exceptions to this rule, established by the Department of Justice
  • Systematic declassification - documents less than 25 years old may be reviewed for historical importance, and may be declassified
  • Mandatory declassification review - if an authorized holder requests that a document be declassified, the owning agency must review the request and respond that the request is approved, the request is denied, or that the agency cannot confirm or deny the existence of the document; denials may be appealed
  • Freedom of Information Act (FOIA) request - anyone in the general public may request that a document be declassified by filing a FOIA request; there are limits to the kinds of requests that can be made.

Documents classified by business organizations typically follow a similar classification scheme, but it is worded specifically for the organization's needs.

Common Corporate Security Classification

  • Public - information that may be given to the public
  • Internal - information not given to the public, but disclosure would not damage the company; information is restricted to employees
  • Sensitive - information whose disclosure would cause serious damage to the company; network infrastructure information, customer lists, and vendor lists fit this category
  • Highly Sensitive - information whose disclosure would cause extreme damage to the company; customer PII is an example of information that fits this category

Sometimes a commonly used classification scheme doesn't fit. The text tells us on page 303 that there are guidelines for creating your own classification scheme in the COBIT and PCI DSS models. The text sensibly recommends that we pay more attention to the definition of each level, and less to the label we use for it. The definition tells our employees how to classify information and how to treat information that has been classified. The steps on page 303 are a good start:

  1. How many classifications do you need? How many kinds of secrets or sensitivity do you work with?
  2. Define each classification. You may need to revise your answer to step 1.
    The text recommends assigning a score to breaches based on how much they violate Confidentiality, Integrity, and Availability. The variety of scores, or the way they cluster, may help you decide how to define the classifications and how many definitions you need. On a scale of 0 to 10, is a 10 on Confidentiality the same as a 10 on Integrity or Availability? (A small sketch of this scoring idea follows this list.)
  3. Name each classification. A name that describes and differentiates each classification is a good idea.
  4. Assign a protocol to each classification that tells your staff how to handle information in that classification.
  5. Set audit and reporting requirements as required by law, or as needed in your organization.
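
As a rough illustration of the scoring idea in step 2, the sketch below assigns hypothetical CIA impact scores (0 to 10) to a few information types and maps the worst score to one of the corporate labels described earlier. The information types, scores, and thresholds are invented for the example; a real scheme would come out of the definition work in the steps above.

    # Hypothetical CIA-impact scoring used to suggest a classification level.
    # The asset names, scores, and thresholds are examples only, not a standard.
    ASSETS = {
        "customer_pii":     {"confidentiality": 10, "integrity": 7, "availability": 5},
        "network_diagrams": {"confidentiality": 8, "integrity": 6, "availability": 4},
        "press_releases":   {"confidentiality": 0, "integrity": 3, "availability": 2},
    }

    def classify(scores):
        worst = max(scores.values())    # judge by the worst-case impact
        if worst >= 9:
            return "Highly Sensitive"
        if worst >= 6:
            return "Sensitive"
        if worst >= 3:
            return "Internal"
        return "Public"

    for name, scores in ASSETS.items():
        print(name, "->", classify(scores))
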
Data Handling

Let's move on to the next section on page 306, about data handling policies. The text returns to the idea of encryption, referencing laws that require private data to be encrypted, and that require breaches of unencrypted data to be reported. The text tells us that security policies must be clear about when to use encryption. The question may be better stated as, when do we not have to use it?

In the two scenarios presented, starting on page 307, two different exploits are illustrated. In both, data at rest on the network is encrypted.

  • In the first scenario, the hacker breaks into an application, then makes a request from that application to an encrypted database. The application has permission to retrieve the decryption key. It does so, and is allowed to retrieve the data. The example tells us that the key might not be needed if the application does the decryption itself. Either way, the breach of the application led to the exposure of data that was encrypted at rest.
  • In the second scenario, the hacker breaks into the operating system of a workstation or a server. The hacker then steals an encrypted data file. Since the file is still encrypted, and the hacker has not stolen the key, the data file is not yet exposed. The hacker still has to decrypt the file to get any data from it.

To make this situation more secure, the text recommends three rules of protection, sketched in code after the list:

  • Encryption keys should be stored separately from encrypted data.
  • Encryption keys must be retrieved by a secure process, separate from data requests.
  • Administrator rights to the operating system should not, in and of themselves, give unencrypted access to an encrypted database.
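
Here is a minimal sketch of those three rules, assuming the third-party Python cryptography package; fetch_key_from_key_service() is a hypothetical stand-in for whatever separate, access-controlled key store the organization actually uses (a vault service, an HSM, and so on).

    # Sketch: ciphertext lives in the data store, the key lives somewhere else,
    # and the key is retrieved through its own secured request.
    from cryptography.fernet import Fernet

    # Key material is generated and held outside the data store.
    KEY_STORE = {"customers-2024": Fernet.generate_key()}

    def fetch_key_from_key_service(key_id):
        # In a real system this would authenticate to a key service that
        # enforces its own access rules, separate from database permissions.
        return KEY_STORE[key_id]

    # The data store holds only ciphertext plus a reference to the key.
    record = {
        "key_id": "customers-2024",
        "ciphertext": Fernet(KEY_STORE["customers-2024"]).encrypt(b"Jane Doe, 555-0100"),
    }

    # Reading the record requires a second, separate request for the key;
    # stealing the ciphertext alone (the second scenario) exposes nothing readable.
    key = fetch_key_from_key_service(record["key_id"])
    print(Fernet(key).decrypt(record["ciphertext"]).decode())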

The text continues with an observation that some organizations encrypt laptop hard drives, but fail to encrypt data leaving by other avenues, such as email, memory sticks, and optical discs. It is the duty of IT professionals to stay informed about laws and regulations regarding the handling of data, and to pass that information to their employers to gather support for appropriate policies.

On page 309, the text describes a life cycle that we could apply to most data. As we have already discussed, data must be used, stored, and eventually disposed of. The cycle described in the text applies to most data, but the time it should stay in any one status will vary with its type, use, and requirements that apply to it. Policies should be developed about data in each of the states listed:

  • Creation
  • Access
  • Use
  • Transmission
  • Storage
  • Physical transport
  • Destruction

Business Risks

It is hard to tell what the text is worried about in this section. The author seems to be making a list of types of risks while still talking about data classification. In the graphic on page 312, the text is not addressing risk types at all. It tells us that the organization has identified three classes of data and chosen backup strategies that lead to a recovery time appropriate to the timeframes associated with those classes.

  • Mission critical data - needed within 30 minutes of an attack; should be backed up by mirrored disks for live recovery
  • Normal operations - needed within 48 hours of an attack; should use a combination of on-site and off-site storage; note the slope of the curve for this category, which indicates off-site storage for items toward the higher end of the time range
  • Optimized operations - who knows what the category name means, but the data will be needed within thirty days of the attack; this is data we do not need for daily operations, but will need within a month, so off-site storage is fine
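
To see how these classes might be written down for policy or tooling, here is a small sketch; the class names and targets restate the text's example, while the field names are invented for illustration.

    # Recovery targets from the page 312 example, restated as a simple lookup.
    # Field names ("rto", "backup_strategy") are illustrative, not a standard.
    RECOVERY_CLASSES = {
        "mission_critical":     {"rto": "30 minutes", "backup_strategy": "mirrored disks (live recovery)"},
        "normal_operations":    {"rto": "48 hours", "backup_strategy": "on-site plus off-site storage"},
        "optimized_operations": {"rto": "30 days", "backup_strategy": "off-site storage"},
    }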

The point of this transition from the classification section seems to be that classification is needed to tell us which data elements are most important to operations, which tells us that the risks they are exposed to are most important to us.

The discussion turns to risks and policies that are concerned with risks. Really, they should all be concerned with risk, but some organizations lose sight of that goal. The text informs us on page 313 that some regulators want more than legal compliance. They are looking for effective efforts to reduce risk. The graphic on page 314 tells us that we should be reassessing risks on a regular basis (hence, the circle) and that we should follow the steps over and over to be sure we are not missing anything. The book's circle starts in a different place. You should start wherever the company happens to be. The list below starts from scratch:

  1. Identification
  2. Assessment
  3. Prioritization
  4. Response and policy development (This includes development of new policies and responses.)
  5. Monitoring the effectiveness of policies and responses, and improving them.
  6. Return to 1. Start again.

Risk Assessment Policies

The text discusses several steps that are commonly used when assessing the risks an organization faces. It begins several steps into the traditional process. Let's remember that you start by identifying your assets, you continue by determining the vulnerabilities of those assets, and then you determine the exploits that the assets are subject to.

Assuming you have followed the first three steps, there are still some things to do before we can obtain the value shown on page 316.

  • Each asset needs to be given a value, based on its replacement cost, its current value to the organization, or the value of the income it generates. Pick one. This is the Asset Value. Let's choose $100 as an example for Asset Value.
  • Next, we need to determine, for each exploit, what the probable loss would be if that exploit occurs successfully. Would we lose the entire asset? Half of it? Some other percentage? Which percentage we pick tells us the Exposure Factor of a single occurrence of that exploit for this asset. Let's choose 50% as an example for Exposure Factor.
  • We are still not where we want to be. Asset Value times Exposure Factor equals the Single Loss Expectancy. This matches what the text calls Impact if the event occurs. In this example, it is $50.
  • Now, to do the problem in the book, we still need the Likelihood the event will occur. The classic way to do this is to consult your staff about the frequency of successful attacks of this type, or to consult figures from vendors like Symantec, McAfee, or Sophos about expected attack rates for your industry or environment. Let's assume we have done that, and we are confident that we expect 10 successful attacks per year in our example. This is the Annualized Rate of Occurrence.
  • Taking the numbers we have so far, we should multiply the Annualized Rate of Occurrence times the Single Loss Expectancy, which will give us the Annualized Loss Expectancy for this asset from this kind of attack. This corresponds to the Risk Exposure shown on page 316. In the example we are considering, that amounts to $500.

All that work led us to just one loss expectancy for one asset from one kind of attack. That gives you an idea of the work involved in calculating the numbers for each asset, each asset vulnerability, and each kind of attack on those vulnerabilities.
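
The arithmetic above fits in a few lines. The values are the example figures chosen in the list ($100 asset, 50% exposure, 10 successful attacks per year); the variable names are just for this sketch.

    # Worked example of the risk calculation described above.
    asset_value = 100.00           # Asset Value: replacement cost, current value, or income
    exposure_factor = 0.50         # portion of the asset lost in one successful exploit
    aro = 10                       # Annualized Rate of Occurrence: expected successful attacks per year

    sle = asset_value * exposure_factor   # Single Loss Expectancy ("Impact") = $50
    ale = sle * aro                        # Annualized Loss Expectancy ("Risk Exposure") = $500

    print(f"SLE = ${sle:,.2f}, ALE = ${ale:,.2f}")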

On page 317, the text presents some classic strategies for managing risk. These are not the only ones ever used, but they represent four well known strategies.

  • Risk avoidance - make every effort to avoid your vulnerabilities being exploited; make the attack less possible, make the threat less likely to occur; avoid risk by avoiding the activity associated with the risk, and by providing an active defense against it; the text calls this a business decision
  • Risk transference - in general, letting someone else worry about it
    In the ITIL model, this is included in the definition of a service:
    "A service is a means of delivering value to customers by facilitating outcomes customers want to achieve without the ownership of specific costs and risks." 
    A reader might misunderstand this statement, thinking that the customer does not pay anything. That is not the case. An IT service provider would assume the costs and risks of an operation in return for the customer's payment for the service. This can be done in-house or by outsourcing.
  • Risk mitigation - this method seeks to reduce the effects of an attack, to minimize and contain the damage that an attack can do; Incident Response plans, Business Continuity plans, and Disaster Recovery plans are all part of a mitigation plan; a list of mitigation methods appears on page 318
  • Risk acceptance - this counterintuitive idea makes sense if the cost of an incident is minimal, and the cost of each of the other methods is too high to accept; the basic idea here is that it costs less just to let it happen in some cases, and to clean up afterward; this can also be the case when the risk cannot be managed other than to be aware of it; the text says this is either a business or a technology decision

The text turns to a discussion of vulnerability assessment, which you would need to perform in order to do the calculation from a few pages ago. As the text explains, there are several ways to assess vulnerability, and you should pursue as many of them as apply to your situation. Some recommendations are offered:

  • Penetration testing, not just on firewalls, but on systems as well
  • Scanning source code of an application for known vulnerabilities
  • Scanning your network and all devices for open ports, which can be a guide to hardening your systems

The list at the bottom of page 318 is not very helpful. The next page describes automated processes that are often used to test systems for open ports and code problems. Of course, you could hire a penetration testing company or a hacker to test your systems as well. A minimal sketch of an open-port check appears below.
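
As a minimal illustration of the open-port idea from the list above, here is a simple TCP connect check using only the standard library. The address and port list are placeholders, a real assessment would use a purpose-built scanner, and you should only scan systems you are authorized to test.

    # Minimal TCP connect scan sketch (illustration only).
    # The target address is a documentation-range placeholder (RFC 5737).
    import socket

    TARGET = "192.0.2.10"
    PORTS = [22, 80, 443, 3389]

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            result = s.connect_ex((TARGET, port))   # 0 means the port accepted the connection
            state = "open" if result == 0 else "closed/filtered"
            print(f"{TARGET}:{port} {state}")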