CSS 111 - Introduction to Information System Security
Chapter 3, Legal, Ethical, and Professional Issues; Chapter 4, Risk Management
This lesson introduces the student to laws about information security,
ethics, and concepts associated with risk management. Objectives important
to this lesson:
- Identify laws, regulations, and professional organizations that relate
to the practice of information security.
- Understand the role of culture, laws, and ethics in relation to information
security.
- Identify and assess risk in relation to information security.
- Define risk assessment; assess risk based on the likelihood of
occurrence and the impact of a realized risk.
- Understand contingency planning and incident response.
- Understand the Disaster Recovery Plan (DRP) and Business Continuity Plan
(BCP).
The text discusses some differences between morals (personal
beliefs and values), laws (rules of a governing body),
and ethics (rules about socially acceptable behavior).
Laws may be broken, and lawbreakers may be punished. Morals
may be ignored, typically without formal penalty, and in many cultures
without even social judgment.
The text moves on to discuss a middle ground, ethics
that have been accepted by or imposed on businesses.
Let's consider a few definitions that may be helpful:
- morals - personal beliefs about
what is right and what is wrong; these may vary greatly from one person
to another
- morality, mores - conventions about right and wrong
that are the consensus of some social group; in theory,
there will be agreement about these rules within the group that follows
them
- virtues - habits that are, or lead to, acceptable behavior
- vices - habits that are, or lead to, unacceptable behavior
- ethics - beliefs about right and
wrong, shared by a group (social, professional, political,
etc.), that describe expected behaviors
- laws - formal statements about prohibited behaviors
and the penalties for engaging in them; typically issued by a law-making
body of a government
- integrity - behaving consistently with one's personal
morals, beliefs, principles
Ethics can be part of a standard that is promoted by a profession,
or by several of them. Professional organizations often make public statements
about the ethics that their members follow.
The text explains that organizations make rules about procedures and
the conduct of their members/employees that become policies.
This use of the word "policy" may be unfamiliar to some who have only
thought of a policy as a general guideline. In fact, policies within a
company become work rules, and policies within a professional organization
become standards of behavior. A policy does no good, however, if the people
it applies to do not know about it. The sequence of events presented on
page 91 should be followed for a policy to work:
- full distribution - make the policy known to those who are required
to follow it or know about it; may be done by mail, email, memo, or
other means customary to your organization
- make it available for review - an effective policy must be posted,
or made available to all employees in the proper language; it must be
made available to staff who need to reference it (web page, audio, video,
and/or other modes as required by rules like the Americans with Disabilities
Act)
- reach understanding - the organization must be able to show that employees
understand a policy before they can legally state that the employee
is responsible for following it; this may take the form of stating that
the policy was written and published in words that met a common clarity
and grade level standard
- obtain agreement - the employee must agree to comply with the policy,
which is often done like a software EULA,
requiring the user/employee to click a button on a screen that signifies
agreement with the policy shown on the screen
- maintain application and enforcement - the organization must apply
the policy to all employees and enforce any penalties uniformly, or
the policy loses its validity
Why should a company care about all of this? To understand, it will help
to consider the legal term respondeat
superior, the principle that an employer can be held responsible
for the actions of an employee. The principle goes back to the seventeenth
century. It establishes a reason for an employer to be aware of what employees
do. A company with an established ethics program can make a case that
they tried to get all employees to act ethically, and the company should
not be held responsible for the unethical actions of a particular employee.
Policies are internal to the organization that makes them. Laws
apply to much larger populations, typically to all persons living or working
in the jurisdiction of the law making body. That brings us to a strange
place. Most people know that laws are passed by legislative bodies (statutory
law), but may not know that laws can be created by court rulings, by agencies
creating regulations, and by other
mechanisms outside our discussion.
On page 92, the text presents four types of laws that may be relevant
to computer security. This is not an exhaustive list, but it is useful.
Note that there is some overlap from one type to another.
- civil law - focuses on federal, state, and local laws
- criminal law - focuses on behaviors that are defined as causing harm
- private law - can focus on commercial and labor laws, and on conflicts
between individuals and organizations
- public law - focuses on the actions of government agencies
The text discusses a few laws that are relevant to information system
security. Be aware that there are many others. The first few are general
laws about computer related crime:
- Computer Fraud and Abuse Act (CFAA,
1986) - provides penalties for unauthorized access to or interference
with a computer used by the government, a financial institution, or
for interstate/international commerce
- National Information Infrastructure
Protection Act (NIIPA, 1996) - modified the CFAA, expanding its
coverage and increasing its penalties
- Foreign Intelligence Surveillance Act (FISA, 1978)
- established a separate court system to approve requests for electronic
surveillance on foreign powers and their agents for up to a year; amended
by the Patriot Act to include persons involved in terrorism not backed
by a foreign government
- FISA amendment (2008) - added legal protection for
communication vendors who are required to provide information under
FISA to the NSA and CIA
- USA PATRIOT Act (2001, 2006) - This act extended
the power of government to access electronic information. A particular
point is the extension of the usage of National Security
Letters by the FBI to obtain data without a court being involved. It
is done by stating that the data is needed for an ongoing investigation.
- Computer Security Act (1987)
- established that the National Bureau of Standards and the NSA would
develop security standards for federal agencies
The next items are more about privacy.
Note the quote on page 93 that privacy, in this context, is defined as
being free from "unsanctioned intrusion",
meaning that it is not a violation if you gave them permission to do it.
- Communications Act of 1934 - established the Federal
Communications Commission (FCC), giving them jurisdiction over interstate
telecommunications (among other things) by broadcast, wire, satellite,
or cable, and over communications that begin or end in the US
- Gramm-Leach-Bliley Act (1999) - also called the Financial
Services Modernization Act; deregulated banks and financial services,
allowing each institution to offer banking, investment, and insurance
services. It included three rules that affect privacy. The Financial Privacy
Rule allows people to opt out of having their data shared with
partner companies, but it is usually implemented so that it is easier
to allow the sharing. The Safeguards Rule requires
that companies have data security plans. The Pretexting Rule
tells institutions to implement procedures to keep from releasing information
to people who are trying to gain information under false pretenses (pretexting).
(They had to be told to do that?)
- Title III of the Omnibus Crime Control and Safe Streets Act
(1968) - related to the court case Katz v. United States
(which extended fourth amendment protection to wired communication),
established regulation of domestic wiretaps, requiring that they be
authorized by a warrant limiting their duration and scope; also called
the Wiretap Act
- Electronic Communications Privacy Act (1986) - amended
the act above; provides the protections of the Wiretap Act to faxes,
email, and other messages sent over the Internet;
provides protections to stored communications such as social message
sites, instant messages, and email mailboxes if they are not publicly available;
allows the FBI to issue a National Security
Letter to an ISP to obtain data about a subscriber if the person
is believed to be a spy; provides for court approval to use a pen
register (recorder of outgoing call numbers) and trap
and trace (recorder of incoming call numbers) and tracking
information for email messages
- Health Insurance Portability and Accountability Act
(HIPAA, 1996) - establishes a large,
complicated rule set for storing health information in a common format,
making it sharable, and making it a crime to share it with people who
should not have it.
The text continues with a section on identity
theft. A definition from the Federal Trade Commission says that identity
theft occurs when "someone uses your personally identifying information,
like your name, Social Security number, or credit card number, without your
permission, to commit fraud or other crimes". Identity theft happens often
enough that the text presents a list of things
a person should do when identity theft happens:
- Report to the three dominant consumer reporting companies (Experian,
Equifax, and Trans Union) that your identity is threatened. Note that
if you file a fraud
alert with one, they are required by law to contact the other two.
- If an account has been compromised, call your vendor (e.g. bank, credit
union), inform them, and close it.
- Dispute any account you did not open.
- File a report
at the FTC's identity theft web site. Note that the FTC says to file
an affidavit with them first, then contact the police to file a report.
The two elements constitute your identity theft report.
- File a police report, as noted above.
The text briefly mentions several other laws about different areas of
concern:
- Economic Espionage Act (EEA, 1996) - covers the protection of trade
secrets
- Security and Freedom Through Encryption Act (SAFE, 1999) - covers
common usage of encryption, prohibits the government from requiring
it, assigns penalties for using encryption in a crime
- copyright law - extensions were passed to include works stored in
electronic form and to allow fair use
- Sarbanes-Oxley Act (Sarbox, 2002) - a reaction to corporate fraud
and corruption; provides penalties up to $5,000,000 and 20 years in
prison for officers who file false corporate reports
- Freedom of Information Act (FOIA, 1966) - allows anyone to request
information from a federal agency that is not restricted by national
security or other exemptions;
does not apply to state or local governments, but they may have their
own version of the law; the text does not mention that a requester may
be charged for the time it takes to fulfill the request and for duplication
costs
A short list of international laws and law bodies is discussed:
- Convention on Cybercrime (2001) - created by the Council
of Europe to reach a consensus on cybercrime laws
- Agreement on
Trade-Related Aspects of Intellectual Property Rights (TRIPS, 1994) -
created by the World Trade Organization to establish common rights to
intellectual property
- Digital Millennium Copyright Act (DMCA, 1998) - an amendment to US
copyright law that includes elements of two treaties from the World
Intellectual Property Organization
- It says that an Internet Service Provider is not liable for any
crimes that a subscriber might commit on the Internet, but that
they must respond to reports of copyright infringement
- It also makes it a crime to bypass encryption or other means of
copy protection
five professional organizations whose members work in computer related
jobs. It notes that these organizations have published codes of ethics,
but the organizations do not have the means to enforce those codes. Students
should review this section of the text to meet the portion of objective
1 (listed above) relating to professional organizations.
The chapter ends with a section on several US
federal agencies that have some jurisdiction over the investigation
of threats and attacks on information resources.
- Department of Homeland Security - this
link will take you to their Cybersecurity page
- National InfraGard Program, now just called InfraGard - an outreach
program of the FBI to connect to "businesses, academic institutions,
state and local law enforcement agencies"
- National Security Agency - the NSA
is primarily interested in codes and data transmissions
- Secret Service - the Secret
Service's web site states that one of its missions is to safeguard the
nation's financial infrastructure and payment systems to preserve the
integrity of the economy
The fourth chapter is about risk management. The text quotes Sun
Tzu (the third quote on the linked page), and uses his observation
that you must know yourself and your enemy as a theme for managing risk.
The quote is from the end of chapter 3 of The
Art of War. It is not accidental that the authors refer to a classic
source that is used as a management text and as a guide for warfare. Their
point is that we must know our assets, know their weaknesses and strengths,
and know the attacks that are likely to occur if we hope to defend against
those attacks. The authors might have quoted the beginning
of chapter 3 to give us more hope. Sun Tzu wrote that "the worst policy
of all is to besiege walled cities". We can make it our goal in mounting
a defense to present such a wall that the enemy will not waste its effort
in an attack. We will return to this thought.
The text presents risk management as having three areas of activity,
each of which has separate parts. The three areas are:
- risk identification - identify assets; prioritize
assets; identify threats
- risk assessment
- identify vulnerabilities; risk determination
- risk control - pick a strategy/mix;
choose controls; apply and monitor
On page 122, table 4-1 shows three ways
of looking at components of an information system. The five traditional
components are viewed as seven components by the Security SDLC method,
and as sixteen categories from the text's Risk Management point of view.
The point is not to have a definitive set of pigeonholes. It is to have
a system of classifying your assets that prevents leaving out any that
matter.
The first process, identifying assets, creates a catalog
of our IT assets. Note that any list represents only a snapshot
in time. The procedures used to create such lists must be available to
appropriate staff any time a new asset is added, or an old one is changed
or removed. A related process is the one on page 124 that involves assigning
meaningful names to assets and recording attributes that are relevant
to their use and service. Consistent naming standards need to evolve over
time, but they add a lot of value. Being able to recognize some of an object's
characteristics from its name can be very helpful.
Moving ahead to page 129, the text discusses several scales on which
assets might be rated to assign a "value to the organization". It may
be that one of the questions on pages 130 and 131 will
be more important than the others to your organization, but it is more
likely that a composite score makes the most sense if
several of the questions apply. In the example on page 133, five different
assets are rated on three factors, each of which has been assigned a relative
importance for this comparison. This leads to a score for each of those
assets that shows its importance relative to the other four. Note that
it might not be fair to compare numbers from this chart
to numbers from another chart that used different criteria,
unless those criteria were of equal importance to the organization.
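The weighted-factor comparison described above can be sketched in a few lines of code. The asset names, criteria, weights, and ratings below are invented for illustration; only the weighted-sum technique itself comes from the chapter.

```python
# Weighted-factor analysis: each asset is rated on several criteria, each
# criterion carries a relative weight, and the weighted sum is the asset's
# score relative to the others in the SAME chart.

def weighted_score(ratings, weights):
    """Composite score: sum of rating * weight across the criteria."""
    return sum(r * w for r, w in zip(ratings, weights))

# Hypothetical weights; here they sum to 1.0 (e.g. impact on revenue,
# impact on profitability, impact on public image).
weights = [0.5, 0.3, 0.2]

# Hypothetical assets with a 0-1 rating on each criterion.
assets = {
    "customer order server": [0.8, 0.9, 0.5],
    "public web site":       [0.4, 0.3, 0.9],
    "internal file server":  [0.3, 0.4, 0.1],
}

scores = {name: weighted_score(r, weights) for name, r in assets.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

As the surrounding text warns, these scores only rank assets against each other within one chart; comparing them to scores built from different criteria or weights is not meaningful.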
The text moves on to identifying threats. On page 134, the chart of fourteen
threat categories appears again. The text makes a point, over several
pages, that some assets are threatened only by specific
threats, and that some threats are much more likely to occur than others.
Which ones?
The text asks us to consider which threats are hazardous to our company
and which of those are the most dangerous. It is hard for us to say about
a hypothetical, so let's harvest some opinions.
See the chart on page 136, in
which the threat categories are sorted by their significance as potential
problems, as perceived by surveyed IT professionals. Is this chart meaningful?
It is not the opinion of one author, it is a composite of perhaps a thousand
opinions. Perhaps? The text says over a thousand executives were polled.
It does not say how many responded. The ACM website won't let me read
the article, which is from an eleven-year-old issue. In fact, the article
was written by one of the authors of our text. It is available here,
and it presents several of the points of this chapter.
The text discusses vulnerability identification next. It is unclear whether
the authors think it is part of risk identification or risk assessment,
but it is sensible to do it next. You need to look at each identified
threat, and determine which vulnerabilities of which assets they actually
threaten. Look at table 4-7 on page 141. That entire table is an analysis
of the identified threats against the vulnerabilities of one
asset.
The text turns to a method that will lead us to a numeric value for risks.
Let's consider some vocabulary that will help:
- likelihood (L) -
the probability that a threat will be realized
(actually happen); the text says it will be a number from 0.1 to 1.0.
Well, that's roughly how we measure probability, isn't it? 0 means it won't
happen, 1 means it will, and
anything in between is how probable the event is. (Starting the scale at
0.1 rather than 0 means no threat is ever rated as impossible.)
- value (V)
- the monetary value of the
asset; this may be expressed as the income we lose if it is compromised
and/or the cost to replace the asset; alternatively, this may be a relative
value as calculated in the Prioritizing Assets section of the chapter
- mitigation (M)
- the percentage of the risk that we have protected against
- uncertainty (U)
- a fudge factor to express our confidence (or lack of it) in the other
factors
The text observes that some risks have well known values. If we have
to calculate one, we might
do it like this:
Risk = (V * L) - (V * M) + U * ( (V * L) - (V * M) )
Assume the Value of an asset
is 200. If the Likelihood of a threat
being realized is 60%, the
first term in this equation would be 200
* .6 = 120.
Let's assume the amount of protection (Mitigation)
for this asset is 40%, so
the second term would be 200 * .4
= 80.
The calculation for U depends on the rest of the equation. If we are
only 90% sure of our Mitigation
protection, the Uncertainty
for this calculation is 10%, but what do we do with it? We
multiply the uncertainty factor (10%) times the rest of the
equation. So the third term would be (
(V*L) - (V*M) ) * .1 = 40 * .1 = 4
So for this example, Risk = (200 * .6) - (200 * .4) + .1 * ( (200
* .6) - (200 * .4) ) = 44
Another way of looking at this might be to say that V * L is our likely
loss if unprotected. V * M is the amount of the loss that we
are protected against. The difference between the two is our
probable loss, if we protect it. Finally, we add a percentage
to the probable loss to reflect our uncertainty in the figures.
This method will give us a number for each risk, so we can compare
them to each other, and spend the most effort defending the right
assets.
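As a sanity check on that arithmetic, the formula can be written as a small function; this is a minimal sketch using the chapter's example values.

```python
# Risk = (V * L) - (V * M) + U * ((V * L) - (V * M))
# V = asset value, L = likelihood, M = mitigation, U = uncertainty.

def risk(value, likelihood, mitigation, uncertainty):
    """Probable loss after mitigation, inflated by the uncertainty factor."""
    probable_loss = value * likelihood - value * mitigation
    return probable_loss + uncertainty * probable_loss

# The worked example: V = 200, L = 60%, M = 40%, U = 10%.
print(risk(200, 0.6, 0.4, 0.1))  # 44.0, matching the text
```

Because the scores come from one consistent formula, they can be compared to each other to decide which risks deserve the most attention.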
The next step is to identify what controls
we might apply to reduce our risk. A control might be a hardware or software
solution, a change in procedures, or any change we can make to try to
improve the situation. This step is for finding all controls we might
choose to apply. We will make a decision on them in the following steps.
The discussion of control strategies begins on page 146. The text presents
five options for dealing with risks, some known by multiple names:
- avoidance, defense - make
every effort to avoid your vulnerabilities being exploited; make the
attack less possible, make the threat less likely to occur; avoid risk
by performing the activity associated with the risk with greater care
or in a different way
- transference - in general,
letting someone else worry about it
In the ITIL model, this is included in the definition of a service:
"A service is a
means of delivering value to customers by facilitating outcomes customers
want to achieve without the ownership of specific costs and risks."
A reader might misunderstand this statement, thinking that the customer
does not pay anything. That is not the case. An IT service provider
would assume the costs and risks of an operation
in return for the customer's payment for the service.
This can be done in-house or by outsourcing.
- mitigation - this method seeks
to reduce the effects of an
attack, to minimize and contain the damage that an attack can do; Incident
Response plans, Business Continuity plans, and Disaster Recovery plans
are all part of a mitigation plan
- acceptance - this counterintuitive
idea makes sense if the cost of an incident is minimal, and the cost
of all of the other methods is too high to accept; the basic idea here
is that it costs less just to let it happen in some cases
- terminate - simply stop the
business activities that are vulnerable to a given threat; we cannot
be exposed to a threat if we do not do what the threat affects
The text briefly discusses the plans (mentioned above) that are part
of mitigation. We will hit this material in more detail next week:
- Business Impact
Analysis - The green
highlight on this bullet is to show that this step should be done when
times are good and we can examine our systems performing normally. Before
you can plan for what to do, you have to figure out what is normal for
your business, what can go wrong, and what can be done to minimize the
impact of incidents and problems/disasters.
- Incident Response Planning -
The red highlight on this bullet
is to acknowledge that the plans made in this step are used when there
is an emergency for one or more users. (Shields up, red alert? Why were
the shields down?) Incidents are what happen to individual customers.
Incident response is what we do about it.
- Business Continuity Planning - The orange
highlight is meant to indicate that these plans are not concerned with
fighting the fire, but with conducting business while the fire is being
fought. Business continuity means keeping the
business running, typically while the effects of a disaster are still
being felt. A disaster has a larger scope than an incident.
- Disaster Recovery Planning -
The yellow highlight here is
to indicate that the crisis should be over and we are cleaning up the
crime scene with these plans. For something to be called a disaster,
it must have widespread effects that must be overcome. Your text says
multiple incidents can become a disaster, or may lead us to realize
that there is one, especially if there is no plan to overcome them.
The choice of a strategy should be made for a good reason. The best method
presented in the text for making your decision is Cost
Benefit Analysis. To do it, we need several values for a formula.
- Asset Value (AV):
the value that an asset has for the next several calculations; this
value may be different depending on the context of its use
- Exposure Factor (EF):
the percentage of the value that would be lost
in a single successful attack/exploit/loss; this accommodates the idea
that an entire asset is not always lost to an attack; note that this
value is the inverse of the
Mitigation value used to calculate Risk
- Single Loss Expectancy (SLE):
this is a number that can be obtained by multiplying AV
times EF. In the first chart
on page 169, the column labeled Cost per Incident corresponds to SLE
- Frequency of Occurrence (FO):
this number tells you how many
attacks to expect in some time period;
this is ambiguous if we are not told whether this is the rate for all
such attacks, or the rate for all such successful
attacks.
In the second chart on page 169, for example, we might assume that the
number given is the rate at which successful
attacks occur.
- Annualized Rate of Occurrence
(ARO): the known frequency of
occurrence may be expressed in days or hours, but the executive you
report to wants the numbers expressed in years.
This is understandable if, for example, we are talking about establishing
a yearly budget for IT Security. Reporting is often done based on calendar
or fiscal years, which is another argument for making this conversion.
So, as an example, if your FO is once a month, your ARO is 12. An FO
of once a week is an ARO of 52.
- Annualized Loss Expectancy
(ALE): the final number explained
on page 154 stands for the dollar value of our expected loss for a given
asset in one year; provided you have calculated the numbers so far,
ALE equals SLE times ARO (ALE = SLE * ARO)
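The chain of values above can be tied together in a short sketch. The asset value, exposure factor, and frequency below are made-up numbers for illustration, not figures from the text.

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = AV * EF: the expected cost of one successful attack."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE = SLE * ARO: the expected loss from this threat per year."""
    return sle * aro

# Hypothetical asset: worth $10,000, losing 25% of its value per incident,
# with one successful attack per month (ARO of 12).
sle = single_loss_expectancy(10_000, 0.25)
aro = 12
print(sle)                                   # 2500.0
print(annualized_loss_expectancy(sle, aro))  # 30000.0
```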
Take a deep breath, children, we are not home yet.
All of the figures above are needed to begin the Cost
Benefit Analysis described on page 155. The text tells us that
there are several ways to determine a Cost Benefit Analysis. It recommends
that we calculate a value for
CBA with regard to two values
of ALE and a new concept, Annualized
(or Annual) Cost of Safeguard (ACS).
The safeguard in question is a procedure, a process, a control, or another
solution that will provide some measure of protection to our asset from
the threat under consideration.
CBA = ALE
(without the safeguard) - ALE
(with the safeguard) - ACS
(of the safeguard)
The value of CBA is defined as the ALE if we do not use
the control, minus the ALE if we do use the control,
minus the annualized cost of the control.
If the pre-safeguard ALE is 5000, and the post-safeguard ALE is 4000,
how much can the safeguard cost and still justify the new safeguard?
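The question above can be answered by plugging numbers into the formula; a quick sketch with the stated ALE values:

```python
def cba(ale_without, ale_with, acs):
    """CBA = ALE (no safeguard) - ALE (with safeguard) - ACS.
    A positive result means the safeguard saves more than it costs."""
    return ale_without - ale_with - acs

# Pre-safeguard ALE is 5000 and post-safeguard ALE is 4000, so the
# safeguard is justified only while its annualized cost stays under 1000.
print(cba(5000, 4000, 800))   # 200: worth doing
print(cba(5000, 4000, 1200))  # -200: costs more than it saves
```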
CBA may also be called economic
feasibility. The text mentions some other types as well that may
be considerations or limiting factors when considering safeguards and
controls. Each may be a factor in deciding whether a project request may
be approved.
- organizational - Will the new solution fit the way
our company works? Will this solution make us more effective or efficient?
- operational/behavioral - Will the new system work
for us? Can we use it, can the users use it, is there any problem that
will prevent it from being of value to us? Does our corporate culture
pose a problem for this solution?
- technical - Do we have the hardware or software to
use this system? Can we upgrade as needed to use it? Does the system
limit our future choices or expand them?
- economic - What will the system cost to build, implement,
and use? What associated costs, such as training and personnel, are
needed for it?
The chapter continues with a discussion about benchmarking,
which it defines as establishing a security blueprint by modeling it
on what another organization, similar to yours, has already done. Benchmarking
can also be done by adopting some reputable set of standard practices.
The text discusses a company having to adopt
a security standard, perhaps to meet a contract or legal obligation. When
this is done, the company can argue in court that it adopted a security
standard of due care. As with
several other business practices in the text, this is presented as a defensible
legal position, one that a company following reasonable precautions would
be able to take.
A related concept is presented next, that the maintenance
of such security standards must be pursued, or the company can still be
found at fault. Maintaining such a standard can be called performing
due diligence. That phrase is often used in common business discussions
to mean that a company is conducting an investigation of some sort. The
meaning is different here, although it should be expected that one would
have to investigate and inspect a system in order to maintain it properly.
You may wonder why the text is telling us about pursuing a goal that
does not sound like very good protection. It suggests that organizations
often must make decisions that are based on less than optimal funding.
As such, you should still take care to make a choice that provides the
best protection you can reasonably obtain, and that still proves that
you showed due care and due diligence.
Moving on from a set of standards that are merely adequate, the text
discusses recommended practices
and best practices. We should
remember while reading this discussion that "best" is a relative term
that can only be applied to something until something better comes along.
With that understood, the URL on page 158 is valid, and it will lead you
to the Federal
Agency Security Practices page on the NIST web site. A quick
look at some of the documents on that site will show you that most are
several years old, which may indicate that new best practices take a while
to emerge.
The FASP site is not just for posting standards that have been accepted
at the federal level. It allows organizations to post their own standards
for consideration. There is a section on that web site for submissions
from public, private,
and academic institutions, which
is not very densely populated. It seems likely that the people running
security services for any organization might have second thoughts about
posting their security practices for the whole world to see, unless they
were confident that such a posting would not increase the risk of an attack.
We should, therefore, expect that standards posted on this site will be
somewhat general, to avoid providing a script book to attackers.
On page 159, the text presents some
diagnostic questions to ask when you are considering adopting a
recommended security practice. They really ask the same question with
regard to six different parameters: are we similar enough to the recommending
organization on these scales to think their practices will work for us?
The questions are:
- Is the organization organized in a similar way?
- Is it part of the same business/industry/service?
- Is it in a similar state
of security program development?
- Are the organizations of similar size?
- Are the budgets for security similar?
- Are the lists of threats we must deal
with similar?
The discussion brings up an interesting point about the last bullet.
We may face similar threats to those faced by the organization that posted
a recommended practice. If, however, our environment and technology, or
the skills or methods of the threat agents have changed significantly
since the recommendation was made, that practice may no longer be of any
use to us.
On page 161, the text describes baselining,
which is the practice of determining what a system is like under normal
conditions. If you were put in charge of a network you knew nothing about,
you would not be able to tell which events were abnormal if you had no
idea what the system looked like when it was operating normally.
Don't believe me?
Situation: there is a coworker about five cubicles away who has a service
dog. She remarks that the dog feels hot to her. Quickly, how hot is hot?
By some method, she determines the dog's temperature is 101.5 degrees
F. Are we alarmed? What is the baseline we are comparing the current data
to? You may know that a normal temperature for a human runs about 98.6
degrees F, but you may not know
that a normal temperature for a dog can be 101 or 102 degrees F. ("Happiness
is a warm puppy", Charles Schulz) If you do not know what is normal for
the system you are protecting, you can't know if current measurements
are abnormal. (If you are worried about the dog, get
more information here.)
Of course, some things are obvious: it is not
normal for the dog or the web server to be on fire. It is also true that
the "normal" state of a system may be unacceptable. Baselining may tell
us where the system is not under control, instead of telling us that everything
is fine. When we find such information, baselining should lead into an
evaluation of our current controls and a recommendation for change.
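The point of the dog story can be put into code: without a baseline, "abnormal" has no meaning. Here is a toy check that flags a reading far from the mean of known-normal samples; the metric and the two-standard-deviation threshold are illustrative choices, not from the text.

```python
from statistics import mean, stdev

def is_abnormal(baseline, current, tolerance=2.0):
    """Flag a reading more than `tolerance` standard deviations
    from the mean of the known-normal baseline samples."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(current - mu) > tolerance * sigma

# Known-normal samples, e.g. CPU load percentages measured on quiet days.
normal_load = [22, 25, 21, 24, 23, 26, 22]
print(is_abnormal(normal_load, 24))  # False: within the normal range
print(is_abnormal(normal_load, 95))  # True: investigate
```

As the closing paragraph notes, the same comparison can cut the other way: if the "normal" readings themselves are unacceptable, baselining has told us where controls need to change.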