Chapter 9, User Domain Policies
Chapter 10, IT Infrastructure Security Policies
Objectives:
This lesson covers chapters 9 and 10. It discusses policies
that relate to users and to the portions of our network that were
introduced in earlier chapters. Objectives important to this lesson:
Weak links in the security chain
Types of users
User policies
Acceptable use policies
Privileged-level access agreement
Security awareness policies
Least privilege or best fit
Part of an infrastructure policy
Workstation policies
LAN policies
LAN to WAN policies
WAN policies
Remote access policies
System/Application policies
Telecommunications policies
Concepts:
Chapter 9
This chapter begins with the popular concept that the biggest
problems we have in IT security are associated with people. The text
tells us that people are subject to problems that automated defenses do
not share:
People have different skill
levels, causing their actions to vary; identical devices with
the same software and configuration do not have this problem
People get tired
or distracted and do things
they should not; systems undergoing a Denial of Service attack may seem
tired or distracted, but they won't open a door or tell someone a
password because of it
People need sleep
and vacations; systems
typically do not need these, but they do need to be patched and rebooted from time to time (Have you
ever had a reboot yourself? I would like to ask a question about it.)
People, on the other hand, can think and
react to a new situation by devising a new solution; machines typically
do not do this, nor do some people
The text continues with a discussion of three areas in which
humans cause IT security breaches. These are not the only areas to be
concerned about, but they are a good start.
Social engineering is
a label that is applied to any attempt to convince someone to do
something that is to your benefit. In the context of IT security, a
social engineer is often a con artist who is asking, fooling,
convincing, or otherwise manipulating people into revealing secrets or
granting access to systems. The author lists some classic social
engineering methods:
Make a friend -
Friends tend to confide in friends, do favors for them, and show off
what they know or can do. A hacker may try to become a friend to
someone with the next level of access to harvest information from them.
Pretext - I think
the author is a little off the mark with this one. A pretext is a pretense, a lie of some sort. A
pretexting attacker might pretend to be from the IT department, as the
author says, but he/she might instead pretend to be a new user, an
assistant to a high level executive, or any other role that seems to
fit the situation. Think of Leonardo DiCaprio in Catch Me If You Can,
interviewing an airline official to get the information he needed to
impersonate a pilot. He was pretexting with the airline official when
he pretended to be a reporter for a student newspaper. He then
pretended to be a pilot in order to pass bad checks at banks, hotels,
and airline counters, which we could say was the real exploit that his
initial pretexting led to.
Ask for information
- The author describes a social engineer asking a user to log in to a
"test page", which in reality exists to collect the user's
ID and password. This is similar to phishing:
sending users email that asks them to do the same or similar things.
The author remarks that social engineering is often preferred
to more difficult hacking, because it is usually easy, fast, and
effective. That is true for someone with the right skill set. Many
hackers are not accomplished actors, but social engineers need to be.
Think about it the next time someone calls your home "from Microsoft"
and tells you they have noticed problems on your computer. Then hang up
the phone; there is no point in talking to them.
Human mistakes come from many causes. The text lists several,
but only discusses mistakes that it collects under the heading of carelessness. In fact, the examples
of carelessness the author gives us might easily be called by
different names.
Leaving credentials in
plain sight - This one is pretty careless, if we assume the
users have been told to keep
their passwords secret.
Failing to read a screen,
and just clicking OK - This may be carelessness, but it may also
be a behavior we have taught
to the users. If our log on
process, our print process, or
any other frequently used process
typically requires this behavior to continue with one's work day, users
can fall prey to a hacker whose trigger event looks like one of the
expected screens.
Giving in to a superior's
request to counter security - This one is not carelessness, it
is coercion on the part of the higher up who thinks security rules only
apply to other people. In fact, bullying
is a classic social engineering technique.
Lack of common computer
knowledge - If the person in question has been taught better, then it is careless
not to think and use that training. If, as the author describes,
the person was never taught what to do, that is a failure on management's part, not
carelessness on the employee's part.
Insiders is the title
of the third item about users. The text refers to a study that said 31%
of data breaches in 2013 were due to employees, consultants,
contractors, or vendors. The text points out that
most of these people are in a position to know the information in the
security framework, so they might each know about some of the weak
spots in the defenses of an organization. They are more informed, and
more likely to avoid detection should they decide to attack the
organization. Every disgruntled employee has their own motivation. The
text offers an example, stating that the average experienced
help desk agent in the Philippines may make as little as $4000 per
year. Selling company data may be very tempting. Stolen credit card
information may be worth a very large sum to a reseller.
The text proceeds with a discussion of seven types of users.
The author presents some thoughts about each group, and adds two more
in the list on page 240.
Employees - Salaried and hourly employees typically
need access to business applications and related information.
Their access should be determined by the needs of their work.
The text points out that access should be reviewed and changed
with each change in a person's job or duties. This is one of the areas
that typically receives less care. It is more common that an employee
who moves from department to department will accumulate the
access rights of each new department, but may not lose the
rights necessary for the old department. The text refers to this as
"privilege creep". It sneaks up on you. This phenomenon may be due to
lack of oversight, to the higher priority of assigning
the new rights, or to a need to finish the workload from the
last job. It is an area that presents some risk because an ID may have
rights that the current manager is not aware of. The better practice
would be to analyze all rights a person has from time to time,
and to clear out those that are no longer needed.
System administrators - Need access to most aspects
of the systems, and may be the staff who create and manage user
accounts. This kind of job typically requires all rights to some
systems from time to time. The text recommends that such staff be given
only those ongoing rights they need for their usual duties, and that
they be given elevated rights when the situation requires it. This is a
good idea, but it is not practical as stated. In an emergency it may
not be possible to access the administrator's account to grant such
rights. A common practice is to have a second ID which has been given
elevated rights proactively. I can be Clark Kent as long as that
identity serves my purpose, but sometimes, this is a job for Superman.
That special ID is used only when elevated rights are required, and may
require a report about its use. Some environments do not require this
kind of reporting. The text presents a different concept:
assigning rights related to a specific trouble ticket. Again, this is a
more ideal situation, but it means a system must be accessible to
assign new rights, and that the ticket creator must have correctly
diagnosed the problem the caller is having. The diagnosis may be wrong,
and the administrator will be better off having more rights than
initially predicted.
Security personnel - Need access to infrastructure
devices to monitor and control operations and incidents; may be
the staff who create and manage user accounts. These are the staff who
monitor our systems for attacks, and who analyze them for
vulnerabilities. Staff in these positions may need elevated rights for
most of their work.
Contractors - Need the same access as other
staff in whatever category they are hired, but typically on a temporary
basis. Some contractors continue without specific termination dates.
Even so, contractor accounts should be given access rights that match
their jobs, and their user accounts should have expiration dates which
will trigger a review of their rights and of the status of their
accounts.
Vendors - The text means employees stationed
with our organization by a vendor, such as a specialist from
Microsoft who is assigned to help us roll out a new product, or upgrade
an existing one. They may need access to specific systems, to
an email account, to a shared data area, or other resources that allow
them to work with our organization's staff. The text points out that
this kind of employee is really the employee of another entity, and we
need to know when there is a status change for this person, such as a
promotion, a transfer, or a dismissal. Any of those actions on the part
of the real employer should trigger a change or suspension of the
rights that person's ID has been granted in our environment.
Guests and the public - May need access to ordering
and querying applications, restricted to data that is located
in the DMZ of the organization, or to data that applications in that
zone are allowed to access. The text reminds us that such public facing
devices need to be hardened, to have all unnecessary software
and functionality removed from them. The text also proposes that all
user inputs to our systems should be constrained to only the kind of
input we wish to process, to avoid users breaking into other programs
by SQL injection or other common means. The Wikipedia article on SQL
injection provides a few examples that make the warning clearer.
Control partners - Auditors and other people
tasked with reviewing logs and records will need
extensive read access to systems.
Contingent IDs - IDs with extensive rights
that are only used in an emergency or in the case of restoring
a system. May require full access to various systems in order
to complete the restoration of a system or several of them. The text
says that it may be customary to keep these IDs and their passwords
completely secret until they are needed. Perhaps they are stored in
sealed orders, to be opened only by authorized personnel in case of
emergency.
System accounts - Systems, networks,
and applications need rights sufficient to call other programs
or processes that support their functions. A user is never
meant to log in with such credentials, even if such
an event is theoretically possible. When logging in as such an entity is
allowed, the account is called interactive. When it is not
allowed (not possible to do so) the account is called noninteractive.
This is a safer way to set up such an account, because it is very
unlikely that an attacker could use it as desired.
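The SQL injection warning in the "Guests and the public" entry above is easier to appreciate with a concrete example. The sketch below uses Python's built-in sqlite3 module with a hypothetical users table: the unsafe query splices user input directly into the SQL text, while the safe one passes it as a parameter.

```python
import sqlite3

# Hypothetical example table; the names here are illustrations only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "nobody' OR '1'='1"   # classic injection payload

# Unsafe: the payload becomes part of the SQL, so the WHERE clause
# matches every row in the table.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + attack + "'").fetchall()

# Safe: the driver treats the whole payload as a literal string value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attack,)).fetchall()

print(unsafe)  # [('alice',)] -- the injection worked
print(safe)    # []           -- no user has that literal name
```

Constraining input, as the text recommends, means the second form: user-supplied data never gets to rewrite the query itself.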
The text makes the point that if we cannot determine which
user takes an action on a system, we should at least be able to narrow
the search down to the user IDs that had the rights to take the action
in question. This makes us more aware that we need to control the
number of people who have access and control over sensitive information.
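Knowing which IDs hold which rights also makes the periodic review recommended under "Employees" practical. The sketch below compares the rights an ID has accumulated against the rights its current role actually requires; the role names and rights are hypothetical examples, not anything from the text.

```python
# Sketch: flag access rights that exceed what a user's current role
# requires, i.e. detect "privilege creep".

ROLE_RIGHTS = {
    "accounting": {"gl_read", "gl_post", "ap_read"},
    "helpdesk": {"ticket_read", "ticket_write", "pw_reset"},
}

def excess_rights(current_rights, role):
    """Return rights a user holds beyond those the role calls for."""
    return set(current_rights) - ROLE_RIGHTS.get(role, set())

# A user who moved from accounting to the help desk but kept an old right:
user = {"id": "jsmith", "role": "helpdesk",
        "rights": {"ticket_read", "ticket_write", "pw_reset", "gl_post"}}

creep = excess_rights(user["rights"], user["role"])
print(sorted(creep))  # ['gl_post'] -- the leftover accounting right
```

A real review would pull the current rights from a directory or identity management system, but the comparison itself is this simple.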
The text returns to the idea of an Acceptable Use Policy
(AUP), which it has presented before. Such a policy might best
be viewed as a statement of a principle, and a starting point for
ongoing discussion. No such policy can be exhaustive; it cannot
cover all possible abuses of company equipment, because someone is
always finding a new way to use company equipment for a purpose that it
is not meant for. When new circumstances come up, the text recommends
that they be discussed in awareness training, which could be
a video moment, a newsletter, or a topic to bring up at staff meetings.
These methods are examples, and the nature of the business
should lead to an appropriate choice that will allow discussion without
threat or boredom.
The text provides a list of topics you may want to cover in an
AUP on page 252.
Computer defense measures
Password requirements
Software license standards
Use of email
Privacy policy
Noncompliance policy
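Topics like password requirements eventually have to be stated precisely enough to enforce. Below is a minimal sketch of such a check; the specific rules are hypothetical examples, not anything the text prescribes.

```python
import re

def check_password(pw, min_length=12):
    """Return a list of hypothetical AUP password rules the candidate fails."""
    failures = []
    if len(pw) < min_length:
        failures.append("too short")
    if not re.search(r"[A-Z]", pw):
        failures.append("needs an uppercase letter")
    if not re.search(r"[0-9]", pw):
        failures.append("needs a digit")
    return failures

print(check_password("password"))          # fails all three rules
print(check_password("Str0ngpassphrase"))  # [] -- passes
```

The point is that the policy names the requirement, while a standard like this pins down the testable details.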
Your organization may require that users who are to be assigned
elevated rights must sign or agree to a Privileged-Level Access Agreement (PAA). This agreement should lay out
the duties to be carried out when using elevated access, the company
policy on handling sensitive information, and other rules as deemed
necessary. The text presents a list of statements and promises that the
employee may be expected to agree to on page 253. These are some
highlights from that list:
Understand the risk
to the company if this set of credentials is breached or stolen
Promise to use the credentials only as required by the company
Promise not to violate
any other security policy with these credentials
Promise to protect
the information created or gathered with these credentials
Another policy-related area covered in this chapter is a Security Awareness Program (SAP). Page 254 lists several laws we
have covered already, and some new ones, that require that companies
must have an SAP to instruct their employees in security issues.
Following one or more of the schedules required by these laws will give
a company the defense that they tried to follow the rules, should there
be a breach of their data.
Page 255 returns to several recommendations the text has
already made, and some standard recommendations that may or may not
help. For example, the idea that you should never open an email
attachment unless you know the person who sent it. That is not really
useful, since email accounts can be spoofed, and business email often
comes from someone we have not met or do not know. The advice about
encrypting all sensitive data is more useful, regardless of the medium
used to store or send it.
The last topic in the chapter seems like a small point. The
author draws a distinction between two methods of managing access that
are almost identical:
Least access
privileges - only the
privileges necessary to perform a job are assigned, user by user
Best fit access
privileges - the privileges necessary to perform a job are assigned, to
classes or groups, sometimes giving more privileges for simplicity or job coverage
In the second case, as in the example in the text, more
privileges are assigned to users who do not always need them so that
they can all be assigned to the same group. The benefit to the
administrator is obvious. The benefit to the manager of these employees
is that they can all use the same resources on the network if the need
arises for them to do so. This method should not be used when there is
high security on the data that might be accessed by the users, or when
trust in the employees is not certain.
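The difference between the two methods can be shown with a small sketch; the users and rights below are hypothetical examples.

```python
# Least privilege: rights assigned user by user, exactly as needed.
least_privilege = {
    "ann": {"orders_read"},
    "bob": {"orders_read", "orders_write"},
}

# Best fit: everyone in the group gets the union of what the members
# need, trading precision for simpler administration.
group_rights = set().union(*least_privilege.values())
best_fit = {user: group_rights for user in least_privilege}

# Under best fit, ann holds a right she does not strictly need:
extra = best_fit["ann"] - least_privilege["ann"]
print(sorted(extra))  # ['orders_write']
```

That leftover right is exactly why the text warns against best fit for high-security data: the convenience cost is paid in unneeded access.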
Chapter 10
Chapter 10 takes us to some hardware related material. First
the author discusses three ways
to organize policies. He seems
to like organizing by functional area,
but warns us that this is disrupted with every reorganization a company
goes through. He considers the second method, organizing by layers of security, a
difficult one to use. Companies are free to classify various kinds of
security as falling in various layers of their own choosing. There is
no standard for naming the layers or placing products in them.
That leaves the third method, organizing
policies by the seven domains the author introduced us to
earlier. The author acknowledges that this method has a problem as
well: some security issues cross over several domains. This requires
policies that address the differences between those domains, and the
similarities between them. The author recommends adopting a
compliance-oriented approach by choosing an industry standard model,
such as the ISO or COBIT models.
The author presents a topic outline of a standards document on
pages 266 and 267. This outline is similar, but not identical to the
template he gave us in chapter 7. This material does not seem to fit
the topic of this chapter, so we should move on.
The author begins a discussion of Workstation Domain policies on page
267. He reminds us that encryption
is typically required for company information on portable devices, and
that this is an example of a workstation policy. Other policies for portable devices might include remote wiping if lost or stolen, and
lockout or data wipe on too many failed login
attempts. The methods to attain each of these results should be
documented in control standards
that relate to such policies. Standards define how something will be
done that a policy requires. As you should recall, that also leads to procedures and guidelines where they are needed.
The text reminds us that baseline
standards are the basic requirements for device types, such as
workstations, which may be modified by specific
standards for users who require more advanced features.
Some features to
expect in baseline standards:
Secure VPN software
Antivirus protection
Patch management and device management processes
Hardening standards
Encryption standards (portable device, or all devices)
The text continues with a section on LAN Domain policies, which deal with
connectivity and traffic flow. This includes
policies, standards, and procedures about firewalls, switches, DoS protection, and WiFi Security. The text presents a
list of control standards for this domain on page 274. Note that it
includes security controls for routers
and configuration change
controls. A list of baseline standards for this domain appears on page
276.
The LAN-to-WAN Domain
section includes policies, standards, and procedures about DMZ controls, Internet Proxy controls (do we use
them? which devices are they?), content
blocking and filtering
controls, and intrusion detection and
prevention controls.
In the section on the WAN
Domain, a flaw emerges in the plan for this chapter:
there is some crossover between this domain and the last one. We
find switches and routers in both, so we are told that
policies relating to those components may
be handled in the LAN-to-WAN Domain, or in another domain, instead of
this one. The text suggests that this domain may contain policies on DNS, on WAN management, on router security, and on web services. DNS policies may
include creation of domain names
in our registered domain.
Moving ahead, the Remote
Access Domain concerns remote
connections, security
and encryption of devices and data, and remote authentication. Standards
should include VPN software
and gateways, VPN IDs, and RADIUS server issues.
The last domain in the list of seven is the System/Application Domain. This
domain has some unique issues, among them determining who is the owner of programs and data, who will
grant access to them, and
who is responsible for their functioning.
Oddly, the text includes both cryptography
standards and physical security
standards in this domain. Maybe we need another domain or two?
The text adds one more domain, this one on Telecommunications
policies. This can include telephone
and data traffic, the wiring
that supports both of those technologies, the end
user devices and infrastructure
devices that interact with those technologies, and crossover technologies
like Voice over IP (VoIP).
Week 5 Assignments:
Turn in answers to the questions on the posted Review for
Test 2 that relate to these chapters: numbers 11 through 23.
Assignment
2: Consider the list of security topics below. Write an acceptable
use policy that relates to a hypothetical company which you may
invent. This is an individual assignment.
email
Internet use
system configurations (of workstations and other equipment, such as Point of Sale devices)
rules about hacking, including rules about installing unapproved software
approved use of company equipment at home
allowed use of personal equipment on company networks
allowed use of networks/telephones for company or personal business
allowed use of photocopiers
prohibited uses of company resources
State the topic from the list above.
State the general uses that are acceptable to the company. Give at least one example of an appropriate, specific use.
State the general uses that are unacceptable to the company. Give at least one example of an inappropriate, specific use.
State the business reason for the policy to exist.
State
the outcome of an employee being found in violation of the policy.
(Note: not all policy violations require capital punishment.)