NET 102 - Networking Essentials II
Chapter 16, Protecting Your Network; Chapter 17, Virtualization; Chapter 18, Network Management
This lesson covers three more chapters, and includes several objectives
important to this lesson:
- Network threats
- Securing accounts
- Using firewalls
- Reasons to virtualize
- Virtualization methods
- Managing configuration
- Monitoring and optimizing performance
The actual material for the chapter begins on page 459, where the author begins a discussion of eight common kinds of network threats. He discusses some items at length.
- system crashes and hardware failures
It may be hard to think about hardware failure as a threat, since there is no villain involved, but a threat is simply something bad that could happen, which does not require a villain. Systems can fail because components wear out, reach their predicted end of life, or suffer some kind of damage. The text reminds us that critical systems need fault tolerance, the ability to continue working despite the failure of one or more components.
- administrative access control weakness
The administrator of a system is intended to be a trusted, knowledgeable operator. When an attacker obtains administrator level access to a system, they have the ability to change almost anything they want to change. Default accounts should be renamed or removed, accounts with admin rights should be protected, and default passwords should be changed as soon as a system is brought up.
The text has a long discussion about many kinds of malware. You should know the basics, and will discuss them in other classes as well.
- A virus typically requires a carrier to infect a system, like an email, an instant message, or a program that the user runs. A virus typically has two tasks: replicate and damage.
- There is a major difference between worms and viruses: once it is started, a worm can replicate itself across connected computer systems by itself. It does not need a carrier once the initial program has been executed. A worm can attack any running computer that is connected to a network that an infected computer is on: it does not require cooperation from the user.
- A macro virus is not really a macro, which is a saved procedure in some application. A macro virus uses or is written in the macro language of an application, which means it can be sent as part of an actual document used in or by an application.
- Trojan horse programs are named for the myth of a wooden horse that was used to smuggle Greek soldiers inside the walls of Troy. A program of this sort has two aspects: what we are told it does, and what it actually does. In some cases, Trojans may do what they say, but they also have a hidden malicious purpose which is what puts them in this category.
- At first, a rootkit sounds like a resident virus that replaces operating system files with its own. There are similarities, but one difference is that a rootkit is much more extensive, and another is that the rootkit obtains elevated privileges to carry out its stealth actions.
- Spyware is defined as software that violates a user's security. Spyware typically has one of three missions: advertising, collection of personal information, or changing configuration settings. Adware is spyware of the first type.
- Malware may be handled by any of several solutions. The text gives the impression that the four programs named on page 462 are all you need, which may not be true. For instance, what is wrong with downloading the newest version of Malwarebytes to fix your client's computer? Nothing, unless they are a commercial entity, in which case it is unlawful to use it without paying for it.
- social engineering
We have talked about social engineering, the art of getting someone to tell you things they should not.
Many attackers begin their research on a target by trying to harvest information with social engineering techniques. The text mentions phishing, a specific form of social engineering that asks a target for specific information while pretending to be a trusted partner or authority.
- man in the middle attacks
The text discusses a Man-in-the-Middle attack. Students should be able to find information about this kind of attack online with regard to voting machines. A passive attack intercepts messages, saves and transmits them to an attacker, then passes the messages on to the intended recipient right away. An active attack would intercept a message, change it, and then send the changed version along. You can see how this kind of attack on election data could have serious consequences.
- denial of service attacks
A Denial of Service (DoS) attack tries to tie up a system so that it cannot respond to legitimate requests. Multiple computers are typically used to tie up all available connections to a system, preventing real users from making a connection or receiving service. When a botnet (a number of enslaved/coopted devices) is used, the attack can be called a Distributed Denial of Service (DDoS) attack. A famous version of a DoS attack is called a smurf attack. The attacker ties up the target system with ping requests. Typically, pings are sent to many other systems, with the return address of the target system. This floods the target with ping replies. (Why a smurf? Are pings blue?)
- physical intrusion
The author talks about restricting physical access to equipment by standard means like doors and locks, passwords, and screen savers. It should be noted that most workstations can be locked with a simple key sequence (Windows key + L, for example) that most users neglect to use before they walk away for a few minutes.
- wireless attacks
The text lists four methods for attacking a wireless access point:
- leeching - stealing wireless bandwidth; the text says to be aware of the terms war driving (driving around looking for unprotected access points) and war chalking (marking found access points for other intruders)
- cracking wireless encryption - This is the reason we don't use WEP any more, remember? The text mentions that WPA can be cracked as well, so you should use longer, mixed-character passphrases when setting them up.
- rogue access point - a wireless access point that a user has added to the network because he or she wanted to have wireless access to the company network. The label "rogue" means that it is unauthorized. The problem is that it is unprotected, unsecured, and provides access to the network like an open network jack would.
- evil twin - a rogue access point that masquerades as a real, legitimate access point; the text calls this a wireless phishing attack; it seems to me that it is more like a man in the middle attack
The text turns to user accounts. Keeping accounts and the resources that they access secure is what passwords and authentication are about.
- authentication - proving your identity to a system
- multifactor authentication - providing more than one kind of proof, such as using a password and swiping an ID card; each factor must be of a different kind; the standard kinds of factors are something you know (password or PIN), something you have (an ID card, a dongle, a one-time password like an RSA SecurID), and something you are (biometric proof, like a fingerprint, hand print, retina scan, or iris scan)
- Most systems depend on passwords alone, so they should be changed regularly, and should be sufficiently complex that they won't fail with a dictionary attack or a simple guess.
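As a rough illustration (this is my sketch, not a method from the text), a complexity check like the one below rejects the passwords that a dictionary attack or a simple guess would catch first. The rules and the tiny "dictionary" here are hypothetical examples:

```python
import re

# A tiny sample "dictionary" of common passwords, for illustration only.
COMMON_PASSWORDS = {"password", "letmein", "123456", "qwerty"}

def is_strong(password: str, min_length: int = 12) -> bool:
    """Return True if the password meets basic complexity rules."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:   # trivial dictionary check
        return False
    # Require at least three of the four character classes.
    classes = [
        re.search(r"[a-z]", password),
        re.search(r"[A-Z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    return sum(1 for c in classes if c) >= 3

print(is_strong("password"))        # False: too short, and in the dictionary
print(is_strong("Tr4il-Mix-Rope"))  # True: long, with mixed character classes
```

Real password policies add expiration and reuse rules on top of this, but the core idea is the same: make both guessing and dictionary lookups unprofitable.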
The text discusses using Active Directory in a Windows network to assign rights to network assets. This subject takes more time to discuss if you want to understand it well. For the purposes of this class, be aware that users can be assigned rights as individuals, or as members of a group. Adding and removing members from authorized groups is an effective way to make sure that new staff are given the same rights as the people who have been around for a while.
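The group idea can be shown with a toy model (this is not the Active Directory API, just an illustration with made-up group names): a user's effective rights are the union of the rights granted to every group they belong to.

```python
# Rights granted to each group (hypothetical names for illustration).
group_rights = {
    "Interns":   {"read"},
    "Engineers": {"read", "write"},
    "Admins":    {"read", "write", "manage"},
}

# Group membership per user; a new hire just gets added to the right groups.
user_groups = {
    "alice": ["Engineers"],
    "bob":   ["Interns", "Engineers"],
}

def effective_rights(user: str) -> set:
    """Union of rights from every group the user belongs to."""
    rights = set()
    for group in user_groups.get(user, []):
        rights |= group_rights.get(group, set())
    return rights

print(sorted(effective_rights("bob")))  # ['read', 'write']
```

This is why group management scales: changing what "Engineers" can do updates every engineer at once, instead of editing each account.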
The text talks about firewalls for a page or so, and confuses the topic by discussing Network Address Translation, which we talked about in the chapter about assigning IP addresses. This really does not relate to firewalls, but the author wants you to recall that material.
The author does talk about firewalls eventually, telling us that they can work in several ways.
- Traffic on a network is broken into packets, smaller message units. Each packet must hold at least two addresses: that of the sender and that of the recipient. A packet-filtering firewall will hold a database of rules that tell it what to do with packets. Often the rules are based on the addresses mentioned above and the protocol (network rules) the packet is being sent under.
- Packets can be stopped based on the port that they are being sent to. We might set a rule to block all traffic to a port that is not supposed to be in service.
- Packets can be stopped if a port is in the wrong state. This means that a port on a server may be ready to receive legitimate traffic, but we would reject any traffic from any source that had not opened a legitimate session with the server.
- Traffic may be filtered based on the MAC address or the IP address of the source device.
- A network may be broken into zones, the most common example being a DMZ (Demilitarized Zone), which is a poor name for the part of the network that we expose to public web access. Firewalls are configured at the DMZ border to prevent access to the secure zones. (DMZ is a poor name because we do protect that part of the network as well as the rest of it.)
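The rule-matching idea behind packet filtering can be sketched in a few lines. This is a toy filter, not a real firewall engine, and the rules and addresses are made up for illustration: rules are checked in order, the first match decides, and anything unmatched is denied by default.

```python
import ipaddress

# (action, source network, destination port, protocol) -- "*" matches anything.
RULES = [
    ("allow", "*",          443, "tcp"),   # public HTTPS into the DMZ
    ("allow", "10.0.0.0/8",  22, "tcp"),   # SSH only from inside addresses
    ("deny",  "*",           23, "tcp"),   # block telnet from everywhere
]

def permitted(src_ip: str, dst_port: int, protocol: str) -> bool:
    """First matching rule wins; no match means default deny."""
    for action, src, port, proto in RULES:
        if src != "*" and ipaddress.ip_address(src_ip) not in ipaddress.ip_network(src):
            continue
        if port != "*" and port != dst_port:
            continue
        if proto != "*" and proto != protocol:
            continue
        return action == "allow"
    return False  # default deny: no rule matched

print(permitted("203.0.113.5", 443, "tcp"))  # True  (public HTTPS is allowed)
print(permitted("203.0.113.5", 22, "tcp"))   # False (SSH only from 10.0.0.0/8)
```

Stateful filtering (the "wrong state" bullet above) adds a step this sketch omits: the firewall also tracks which sessions are already open and rejects packets that don't belong to one.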
The chapter ends with a brief mention of vulnerability scanners. It tells us that Nmap is a well-known program that is often used by network administrators (and hackers) to find open ports in a network that might be vulnerable to attack.
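At its simplest, a port scan just tries to open a TCP connection to each port and notes which ones accept. The sketch below uses plain Python sockets to show the idea that a tool like Nmap automates (Nmap itself is far more capable, with stealthier scan types and service fingerprinting):

```python
import socket

def scan_ports(host: str, ports) -> list:
    """Try a TCP connection to each port; open ports accept the connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Check a few well-known ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Only scan machines you are authorized to test; an unexpected scan looks exactly like the reconnaissance phase of an attack.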
This chapter is about virtualization, which means running a program on a computer that acts like a separate computer. On a large server, you might do this several times, allowing each virtual machine to act like a separate device that will not affect the others if anything goes wrong.
Skipping ahead to page 492, the text lists some reasons for running virtual machines.
- reduced power cost - running several virtual machines on one device takes far less power than running a separate physical machine for each of them
- reduced hardware cost - this is debatable, but the concept is that we buy one good computer that will serve as several slightly lesser computers
- system recovery - the best thing about a virtual device is that it can be reloaded quickly if it fails or is compromised; if it is corrupted or taken over, just kill the virtual device, start it again, and you are back in business; unlike a virus-ridden computer, the virtual computer is saved as an image file that should have no error, problem, or infection
- quick setup - once you have an image file for a virtual device, you can copy the file to as many other real machines as you like and use it there, provided there is no problem with licensing
The text mentions Microsoft's Virtual PC, which is free, and VMware, which is not. I am surprised that it does not mention VirtualBox, from Oracle, which is also free.
Virtual devices require management software to run them. The text mentions two variations.
- The management software of a server may be a hypervisor from VMware called ESX or one from Microsoft called Hyper-V. These run virtual servers.
- The management software on a workstation is intended to run a virtual machine, and it may be VMware, Virtual PC, KVM, or VirtualBox.
- The virtual software for a workstation can run a virtual server, but this is typically something we do in a class, not in the real world. In the real world, we would want a high end server to act as several servers.
The last chapter we will cover is about network management. The discussion begins with what most IT staff consider the most hated topic: documentation.
If you don't document how tasks are to be done, how a network and workstations are to be configured, and how devices should be added to and removed from the network, you have chaos. Technicians will all do things differently from each other, which may not matter day to day, but that inconsistency is exactly what makes it dangerous. When someone finally does what they think is right, what they think they always do, and it fails, they will have nothing to refer to that tells them what they forgot, what they missed, or what they should have done, because you never gave them any documentation. What matters is having a repeatable procedure that everyone can follow, which you will never have if you don't write it down!
That is the lesson the author forgot to include in this chapter. He begins his lesson on page 505. The lesson begins with a thrilling set of lists of various sorts of things you should have documentation about. The short answer is "everything". The lists may be more informative:
network connectivity documents
- wiring schemes, wire schemes - notes about which standard we use with our RJ-45 connectors, the type of cable used on what runs, and what devices those runs should connect to
- network diagrams - a schematic diagram of runs, switches, printers, servers, and other devices connected to our network; note the icons shown on page 507 for a firewall (a brick wall), a router (a cylinder with two arrows meeting in the center, and two arrows hitting opposite ends of a diameter), and a switch (a Starburst candy, with two arrows pointing northwest, and two arrows pointing southeast). The icons for workstations, servers, and printers are more recognizable. Take a look at the slides a kind person has posted for us.
- network maps - the difference between a diagram and a map is a matter of detail: a map has more
The text mentions baselines, but provides no list for them. (Thank you, Mr. Meyers). What you keep track of will vary depending on what you can track. The text suggests keeping records of CPU usage, network utilization, and whatever else you can measure regularly.
policies, procedures, configurations
- acceptable use policy - statements about what users are allowed to do with our equipment, and on our networks
- security policy - including password complexity and expiration rules, general rules about keeping data safe and avoiding social engineering
- configuration - rules about how to set up devices; history of patches and software installed; copies of profiles, copies of routing tables and firewall rules
The text spends several pages talking about the use of Windows Performance Monitor. It also notes in the margin on page 512 that the Network+ test does not cover it except to know what its baseline functions are for. Be aware that you want to monitor activity on the network and to record data in log files.
The text moves on to improving performance on a network. There are some bold headings that give you a clue to three areas to study for the test:
- caching - The author writes about the main virtue of caching web content: the ability to deliver a stored copy to the requester faster than it can be retrieved over the Internet; the downside that he does not mention is that some sites change frequently, such as news sites, in which case caching is something we must overcome
- controlling throughput - Caching has another benefit when using streaming media: if we store data in a sufficiently large buffer, storing it more rapidly than we deliver it, we can seem to never have a pause in our delivery; this is the principle of a bagpipe: blow up the bag, and continue to refill it while you play, using it as a buffer to hide the pauses and dropouts caused by breathing (as you watch the video, think of the bag as a buffer on the network, and the pipe as playing video from YouTube or Netflix)
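The bagpipe analogy can be simulated: bursty network arrivals fill a buffer, playback drains it at a steady rate, and a buffer that stays ahead of playback hides the gaps. This is a toy model I've added to illustrate the principle, with made-up numbers:

```python
def simulate(fill_pattern, drain_per_tick=1):
    """Count playback stalls: ticks where the buffer runs dry."""
    buffer_level = 0
    stalls = 0
    for arriving in fill_pattern:
        buffer_level += arriving              # chunks arriving this tick
        if buffer_level >= drain_per_tick:
            buffer_level -= drain_per_tick    # smooth, steady playback
        else:
            stalls += 1                       # buffer ran dry: playback pauses
    return stalls

# Bursty arrivals (3, 0, 0 repeating) still average 1 chunk per tick,
# so an early burst keeps the buffer full through the gaps.
print(simulate([3, 0, 0] * 4))  # 0 stalls: the buffer absorbs the gaps
print(simulate([0, 0, 3] * 4) > 0)  # True: nothing buffered yet, so it stalls
```

The blown-up bag is the buffered chunks; the steady drone from the pipe is the uninterrupted playback, even though the "breathing" (the network) delivers in bursts.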
- keeping resources available - This topic is worded oddly so that it covers hardware and software ideas.
- back up your data - There are several methods that are commonly used for making backup copies of network data, and our author confuses the issue by calling one by an odd name. We need to understand a couple of terms first.
- Archive bit - a bit in a file that is turned ON when the file is changed; in backup strategies, it is used to flag files that have changed since the last backup
- Target - the device, volume, folder, or group of files being backed up
Okay now the strategies:
- Full, Normal - a backup of all files in the target; sets the archive bit of each file to OFF; the text calls this method Normal, but everyone else calls it Full
- Copy - like a Full backup, but it does not change the archive bits of files it copies. This is typically not part of a standard backup strategy, but an option to work around the system. The text says it is used to make extra copies of a full backup. Why don't you just copy the backup?
- Incremental - a backup of target files that are new or changed since the last backup; depends on the fact that programs that change files typically set the archive bit to ON when a change is made; sets the archive bit to OFF for all files it copies
- Differential - a backup of all files new or changed since the last Full backup; copies all files whose archive bit is set to ON; does not change the archive bit of files it copies
- Daily - makes copies of only the files changed on a particular day (the current day); essentially, this is just running an incremental every day, so it is strange to have it as a separate bullet.
The text is a bit confusing, so review this list, ordered by how many archive bits are reset:
- a Full backup copies everything. Resets all archive bits.
- an Incremental backup copies everything different from the last backup. Resets the archive bits of files it copies.
- a Differential copies everything "different from Full". (Different from the last Full backup.) Does not reset any archive bits.
- a Copy makes a Full backup, and does not reset any archive bits.
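The strategies above can be simulated with a dictionary of archive bits (True means changed since the last backup). This is my sketch of the mechanics, not code from the text:

```python
def full(files):
    """Copy everything; reset every archive bit."""
    copied = list(files)
    for name in files:
        files[name] = False
    return copied

def incremental(files):
    """Copy only flagged files; reset the bits of files it copies."""
    copied = [name for name, bit in files.items() if bit]
    for name in copied:
        files[name] = False
    return copied

def differential(files):
    """Copy flagged files, but leave every archive bit alone."""
    return [name for name, bit in files.items() if bit]

# a.doc and b.xls have changed since the last backup; c.txt has not.
files = {"a.doc": True, "b.xls": True, "c.txt": False}
print(differential(dict(files)))   # ['a.doc', 'b.xls'] -- bits untouched
print(incremental(files))          # ['a.doc', 'b.xls'] -- bits now reset
print(incremental(files))          # [] -- nothing changed since the last run
```

Running `incremental` twice shows why differentials grow until the next Full while incrementals stay small: the incremental resets the bits, so the next run starts from a clean slate.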
- UPS - no, not the brown-truck people; Uninterruptible Power Supplies are used to prevent damage in brownouts, and to allow graceful shutdowns in blackouts
- Backup Generators - despite the name, they have nothing to do with backing up data; backup generators are used when the normal power supplier can't supply power. A lesson learned by the state a few years ago: you should run your backup generators for a while every week to find potential problems and to prevent your stored fuel from going bad (use a bit of the fuel, then top off the tank).
- RAID - The text discusses RAID, which has been defined several ways. Eventually, all hard drives fail, and RAID allows a system to continue in spite of that, in most cases. One common meaning is Redundant Array of Independent Drives. The word "independent" seems unnecessary, and is in fact misleading. Hard drives set up in a RAID array perform functions that relate to each other. Several kinds of RAID exist to provide for redundant storage of data or to provide for a means to recover lost data. The text discusses three types. Follow the link below to a nice summary of RAID level features
not listed in these notes, as well as helpful animations to show how they work. Note that RAID 0 does not provide fault tolerance, the ability to survive a device failure. It only decreases the time required to save or read a file.
RAID levels and features:
- RAID 0: Disk striping - writes to multiple disks; does not provide fault tolerance. Performance is increased, because each successive block of data in a stream is written to the next device in the array. Failure of one device will affect all data: striping improves performance, but it actually decreases fault tolerance.
- RAID 1: Mirroring and Duplexing - provides fault tolerance by writing the same data to two drives. Two mirrored drives use the same controller card. Two duplexed drives each have their own controller card. Aside from that difference, mirroring and duplexing are the same: two drives are set up so that each is a copy of the other. If one fails, the other is available.
- RAID 5: Parity saved separately from data - provides fault tolerance by a different method. Data is striped across several drives, but parity data for each stripe is saved on a drive that does not hold data for that stripe. Workstations cannot use this method. It is only supported by server operating systems.
- RAID 6: An improvement on RAID 5; it uses another parity block to make it possible to continue if two drives are lost
- RAID 0+1: Striping and Mirroring - uses a striped array like RAID 0, but mirrors the striped array onto another array, kind of like RAID 1
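The parity trick behind RAID 5 and 6 is just XOR, which can be shown in a few lines. This is an illustration of the math, not a real RAID implementation; the block contents are made up:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR same-sized blocks together byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple)
                 for byte_tuple in zip(*blocks))

# One stripe of data spread across three drives, plus a parity drive.
stripe = [b"DATA", b"MORE", b"INFO"]
parity = xor_blocks(stripe)            # stored on the fourth drive

# The drive holding b"MORE" fails; XOR the survivors with the parity
# block and the lost block comes back.
rebuilt = xor_blocks([b"DATA", b"INFO", parity])
print(rebuilt)  # b'MORE'
```

Because XOR-ing a value twice cancels it out, any single missing block equals the XOR of everything that survived. RAID 6 stores a second, independently computed parity block so the array can survive two failures instead of one.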
The chapter ends with a mention of clustering servers. This technique creates failover servers that take over if a critical server goes down. Clustering can also allow a system to react to heavy loads by offloading tasks to various servers in a cluster.