ITS 305 - Security Policies and Auditing

Chapter 7, Security Management Practices


This lesson presents an overview of several information security models. Objectives important to this lesson:

  1. Benchmarking
  2. Due care and due diligence
  3. FASP and CERT references
  4. Baselining
  5. Diagnostic questions

The chapter opens with a discussion of benchmarking, which it defines as establishing a security blueprint by modeling it on what another organization, similar to yours, has already done. Benchmarking can also be done by adopting a reputable set of standard practices.

The text discusses a company having to adopt a security standard, perhaps to meet a contract or legal obligation. When this is done, the company can argue in court that it adopted a security standard of due care. As with several other business practices in the text, this is presented as a defensible legal position, one that a company following reasonable precautions would take.

A related concept is presented next, that the maintenance of such security standards must be pursued, or the company can still be found at fault. Maintaining such a standard can be called performing due diligence. That phrase is often used in common business discussions to mean that a company is conducting an investigation of some sort. The meaning is different here, although it should be expected that one would have to investigate and inspect a system in order to maintain it properly.

You may wonder why the text is telling us about pursuing a goal that does not sound like very good protection. It suggests that organizations often must make decisions that are based on less than optimal funding. As such, you should still take care to make a choice that provides the best protection you can reasonably obtain, and that still proves that you showed due care and due diligence.

Moving on from a set of standards that are merely adequate, the text discusses recommended practices and best practices. We should remember while reading this discussion that "best" is a relative term that applies only until something better comes along. With that understood, the URL on page 249 is valid, and it will lead you to the Federal Agency Security Practices page on the NIST web site. A quick look at some of the documents on that site will show you that most are several years old, which may indicate that new best practices take a while to develop.

The FASP site is not just for posting standards that have been accepted at the federal level. It allows organizations to post their own standards for consideration. There is a section on that web site for submissions from public, private, and academic institutions, which is not very densely populated. It seems likely that the people running security services for any organization might have second thoughts about posting their security practices for the whole world to see, unless they were confident that such a posting would not increase the risk of an attack. We should, therefore, expect that standards posted on this site will be somewhat general, to avoid providing a script book to attackers.

On page 252, the text presents six diagnostic questions to ask when you are considering adopting a recommended security practice. They really ask the same question with regard to six different parameters: are we similar enough to the recommending organization on these scales to think their practices will work for us? The six scales are:

  • organized in a similar way
  • part of the same business/industry/service
  • in a similar state of security program development
  • of a similar size
  • working with a similar security budget
  • facing a similar list of threats

The discussion brings up an interesting point about the last bullet. We may face similar threats to those faced by the organization that posted a recommended practice. If, however, our environment and technology, or the skills or methods of the threat agents have changed significantly since the recommendation was made, that practice may no longer be of any use to us.
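The six similarity scales above can be sketched as a quick screening checklist. This is only an illustration, not something from the text: the dimension names and the 0.5 adoption threshold are assumptions chosen for the example.

```python
# Illustrative screening checklist for the six diagnostic questions.
# The dimension names and threshold below are assumptions for this
# sketch, not terms from the text or from the Gartner material.

DIMENSIONS = [
    "similar_structure",          # organized in a similar way
    "same_industry",              # same business/industry/service
    "similar_program_maturity",   # similar state of program development
    "similar_size",               # organizations of similar size
    "similar_budget",             # similar security budgets
    "similar_threats",            # similar list of threats
]

def similarity_score(answers: dict) -> float:
    """Return the fraction of the six questions answered 'yes' (True)."""
    return sum(bool(answers.get(d)) for d in DIMENSIONS) / len(DIMENSIONS)

def worth_adopting(answers: dict, threshold: float = 0.5) -> bool:
    """Screen a recommended practice: consider adopting it only if we
    match the recommending organization on at least `threshold` of
    the six scales."""
    return similarity_score(answers) >= threshold
```

A practice recommended by an organization that matches us on three or more of the six scales would pass this (arbitrary) screen; anything less would prompt the follow-up questions the text describes.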

The text recommends another best practice resource, one available from CERT at Carnegie Mellon University. The URL in the text has changed; the current one will take you to the download page for the PDF the text wants you to see. It is a high-level document that presents some material that is also in your text. The benefit to be gained from it may be to see this material as a set of slides in a PDF, and to make sure you have seen these concepts.

On page 253, the text describes baselining, which is the practice of determining what a system is like under normal conditions. If you were put in charge of a network you knew nothing about, you would not be able to tell which events were abnormal if you had no idea what the system looked like when it was operating normally. Don't believe me?

Situation: there is a coworker about five cubicles away who has a service dog. She remarks that the dog feels hot to her. Quickly, how hot is hot? By some method, she determines the dog's temperature is 101.5 degrees F. Are we alarmed? What is the baseline we are comparing the current data to? You may know that a normal temperature for a human runs about 98.6 degrees F, but you may not know that a normal temperature for a dog can be 101 or 102 degrees F. ("Happiness is a warm puppy", Charles Schulz) If you do not know what is normal for the system you are protecting, you can't know whether current measurements are abnormal.

Of course, some things are obvious: it is not normal for the dog or the web server to be on fire. It is also true that the "normal" state of a system may be unacceptable. Baselining may tell us where the system is not under control, instead of telling us that everything is fine. When we find such information, baselining should lead into an evaluation of our current controls and a recommendation for change.
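The baselining idea can be made concrete with a small sketch: record measurements from a period you believe is normal, summarize them, and flag later readings that fall far outside that range. The three-standard-deviation cutoff and the use of temperature-like readings here are illustrative assumptions, not the text's method.

```python
import statistics

# A minimal baselining sketch. We assume we have numeric measurements
# (daily failed-login counts, temperatures, etc.) gathered while the
# system was believed to be behaving normally. The 3-sigma rule is an
# illustrative choice, not a recommendation from the text.

def build_baseline(normal_samples):
    """Summarize 'normal' as a (mean, standard deviation) pair."""
    return statistics.mean(normal_samples), statistics.stdev(normal_samples)

def is_abnormal(value, baseline, sigmas=3.0):
    """Flag a reading more than `sigmas` standard deviations from the
    baseline mean -- like knowing a dog's normal temperature before
    worrying about a reading of 101.5 F."""
    mean, stdev = baseline
    return abs(value - mean) > sigmas * stdev
```

Note that this only detects departures from "normal"; as the paragraph above points out, the baseline itself may describe an unacceptable state, so the numbers should feed into an evaluation of current controls rather than stand as a verdict on their own.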

Twelve questions attributed to the Gartner Group appear on pages 254 and 255. (The same questions appear on these pages from a slightly different edition of the text posted by Google.) They are basic: only four questions in each of three categories. Are these enough questions to define and improve a security program? Probably not, but they should lead to more questions, to dialogs with responsible parties, to plans and actions. When I saw this set of questions, I was reminded of an admonition from Theodore Sturgeon, to "ask the next question". Often, the answer to a technical question should cause us to ask another question. This is also true of security, and true of any topic whose cutting edge is constantly moving.