We left off with this section of the chapter last week. I will remind you that this section of the chapter contains an unrelated nugget of information. The author defines a bandwidth domain as any set of devices that share bandwidth or compete for access to it. In a classic wired Ethernet, there was one bandwidth domain, because all devices on that LAN competed for access with each other. In an Ethernet with switches, the text tells us that each device that is wired to a switch is on its own bandwidth domain, but this is a little specious, since there is no point in communicating only with the switch.
The text also defines a broadcast domain as the set of devices that can receive each other's broadcast frames. This is a better definition than we usually see. We are reminded (or told, if we did not know) that the broadcast address for layer 2 is a MAC address that is all Fs: FF:FF:FF:FF:FF:FF
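As a small illustration (the function name and examples are mine, not the text's), we can sort MAC addresses into the three delivery classes. The all-Fs address is the broadcast address, and the low-order bit of the first octet (the I/G bit) is what marks an address as a group address rather than an individual one:

```python
def classify_mac(mac: str) -> str:
    """Classify a MAC address as broadcast, multicast, or unicast.

    The low-order bit of the first octet (the I/G bit) distinguishes
    group (multicast/broadcast) addresses from individual (unicast) ones.
    """
    octets = [int(b, 16) for b in mac.replace("-", ":").split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"   # FF:FF:FF:FF:FF:FF reaches every device in the domain
    if octets[0] & 0x01:
        return "multicast"   # I/G bit set: a group address
    return "unicast"

print(classify_mac("FF:FF:FF:FF:FF:FF"))  # broadcast
print(classify_mac("01:00:5E:00:00:01"))  # multicast
print(classify_mac("00:1B:44:11:3A:B7"))  # unicast
```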
The text turns a corner and steers toward VLANs. Users anywhere on your network can be made members of a common Virtual LAN, which lets them communicate as easily as if they were on the same LAN. This was the original use of VLANs. They are not often used for this purpose any longer. Usually, VLANs are used, as the text says, to make a large switch act as though it were really several switches, so that it can be used to separate groups of ports into different VLANs. This has the benefit of having each VLAN act as a separate broadcast domain, minimizing broadcast intrusions for all devices plugged into ports on that switch. A virtual router on the switch connects the separate VLANs the same way a real router would.
As you might imagine from the description of VLAN users being anywhere in your network, a VLAN can exist on specific ports of multiple switches. When this is done, the connections between the switches that contain the parts of a VLAN are called trunks or trunk links. Frames traveling from one such switch to another are given a header identifying the VLAN they belong to. The header is called a VLAN tag. As the illustration on page 144 shows, we can place multiple VLANs on a switch, and they can all span to other switches across trunk links.
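The standard form of that tag is the four-byte 802.1Q tag: a fixed marker value (0x8100) followed by two bytes holding a 3-bit priority, a 1-bit drop-eligible flag, and a 12-bit VLAN ID. A sketch of building and reading one (helper names are mine):

```python
import struct

TPID = 0x8100  # EtherType value that marks an 802.1Q-tagged frame

def build_vlan_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID, then PCP(3) | DEI(1) | VID(12)."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN IDs 0 and 4095 are reserved")
    tci = (priority << 13) | vlan_id   # tag control information
    return struct.pack("!HH", TPID, tci)

def parse_vlan_id(tag: bytes) -> int:
    """Extract the VLAN ID from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    if tpid != TPID:
        raise ValueError("not an 802.1Q tag")
    return tci & 0x0FFF   # the low 12 bits are the VLAN ID

tag = build_vlan_tag(100)
print(tag.hex())           # 81000064
print(parse_vlan_id(tag))  # 100
```

The 12-bit VLAN ID field is why a trunk can carry up to 4094 usable VLANs.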
The text offers some general advice about placing wireless access points and positioning antennas. The text mentions that most WAP antennas are isotropic antennas, also called omnidirectional antennas. In theory, this means that they should radiate signals in a spherical pattern, equally in all directions. The reality is that the patterns are not perfect spheres. Think of an antenna as a stick that points up. Think of the signal as a disk with a hole in its center that the antenna has been pushed through. The strongest signal coming from such an antenna will radiate like a disk that is centered on the antenna. If the antenna is mounted vertically, the plane of that disk will be strongest horizontally. This article on the Cisco web site has several illustrations of antenna patterns, showing us that they can vary a great deal, and that they often have a complicated signal radiance. The disk will not actually be flat. It will be more like a bagel, and sometimes a very misshapen bagel.
The text also mentions that mobile devices like cell phones and laptops have a variety of antenna types and alignments. It mentions that the antennas of those devices may be large or small, and may be oriented in any direction. The text suggests that a given WAP may offer connections to too many wireless devices if the WAP's signal is too strong. It recommends that we may want to reduce signal strength to limit the operating distance, which will limit the number of stations that can connect, which may improve the user experience of those who can connect.
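To get a feel for why turning down transmit power shrinks the usable cell, the standard free-space path loss formula is a rough guide (real buildings attenuate much more, so this is a best-case sketch; the function name is mine):

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for a distance in meters and a frequency in MHz."""
    # Standard FSPL formula with distance in km and frequency in MHz:
    # FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

# Doubling the distance always costs about 6 dB of signal:
for d in (10, 20, 40, 80):
    print(f"{d:3d} m: {fspl_db(d, 2437):.1f} dB")  # 2437 MHz is Wi-Fi channel 6
```

Every 3 dB you shave off the transmit power roughly halves the area the WAP covers, which is the text's point about limiting how many stations can associate.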
The text also recommends that when we set up multiple WAPs on a campus, we should make all wireless users members of a particular VLAN, which will simplify subnet addressing for those devices, and may provide an advantage when roaming from one wireless cell to another.
The text also recommends redundant WAPs when high availability is desired. When using Cisco equipment, the second WAP in each pair would be placed in Hot Standby mode, which monitors the primary WAP in the pair, and causes the standby WAP to take over if the primary WAP fails.
Redundancy and Load Sharing
In a previous section, the text discussed redundant switches and spanning tree protocol to control which switch is active in a redundant situation. It tells us here that this solution does not support load sharing. It recommends a newer protocol from Cisco, Per VLAN Spanning Tree+ (PVST+), which constructs a separate logical tree for every VLAN.
The illustration on page 148 shows a recommended model for VLAN redundancy. Note that each switch holds elements of two VLANs, and each switch is linked to both switches in the hierarchical layer above it. This matches the conceptual models on pages 123 and 126.
The text lists several kinds of servers that should be considered for redundancy:
Workstation to router redundancy
Workstations will typically need access to routers for any information not on their own networks. Routers, like other devices, go down from time to time, so redundant routers are recommended.
The text ponders how the workstations will find the redundant routers, once their default routers are down. One recommendation, on page 151, is the use of the Router Discovery Protocol (RDP) which causes routers to multicast their addresses and services every 7 to 10 minutes.
This is a Cisco book, so a Cisco-specific solution to default gateway failure appears on page 152. Hot Standby Router Protocol (HSRP) is explained as a protocol that allows for a primary and a backup router, both of which can act on requests sent to a virtual router (also called a phantom router). The virtual router's IP address would be delivered as the default router for a network by a DHCP server; hosts then learn its MAC address, a well-known HSRP value, through ARP.
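That well-known MAC is part of what makes the failover invisible to the hosts: in HSRP version 1, every standby group uses a virtual MAC of the form 00:00:0C:07:AC:XX, where XX is the group number, and whichever router is currently active answers to it. A small sketch (the function name is mine):

```python
def hsrp_virtual_mac(group: int) -> str:
    """Return the well-known HSRP version 1 virtual MAC for a standby group.

    HSRPv1 virtual MACs have the form 00:00:0C:07:AC:XX, where XX is the
    group number (0-255). The active router in the group answers to this
    MAC, so hosts never need to re-ARP when the standby router takes over.
    """
    if not 0 <= group <= 255:
        raise ValueError("HSRPv1 group numbers are 0-255")
    return f"00:00:0c:07:ac:{group:02x}"

print(hsrp_virtual_mac(1))   # 00:00:0c:07:ac:01
print(hsrp_virtual_mac(10))  # 00:00:0c:07:ac:0a
```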
An article on the Cisco web site illustrates this protocol. In that illustration, each of the hosts would have the address of the virtual router designated as its default gateway by DHCP.
Designing the enterprise edge
The text suggests that our WAN links should have backup connections that are actually redundant. We are warned that we should ask for circuit diversity from our data carriers, so that a backup circuit actually follows a different path than our primary connection to their data service. It would not do any good to have a backup system that is taken down by the same threats that could take down our primary system.
The text continues the idea of redundancy and diversity. On page 155, we see a table of four options that an enterprise might be able to choose for access to the Internet. It would be nice to have all the choices in that table. Be aware that your choices may be limited.
The text presents some terms you may know, but uses some of them in different ways than you might expect:
Virtual Private Networking
You probably know that VPNs are used to make secure connections over the Internet, over leased data lines, and over regular network lines. The text provides some background on VPN functions. It mentions that VPNs often use tunneling, the practice of encapsulating packets in other kinds of packets so they can pass across a network that does not understand their native packet type.
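Tunneling in miniature looks like this: the original packet becomes the payload of an outer packet, transit routers only ever examine the outer header, and the far tunnel endpoint strips it off. This is a toy illustration of the idea, not any real tunneling protocol; the placeholder header and names are mine:

```python
OUTER_HEADER = b"TUNNEL-HDR"  # stands in for a real outer header (e.g., an outer IP header)

def encapsulate(inner_packet: bytes) -> bytes:
    """Wrap the original packet behind an outer header; the network the
    tunnel crosses only has to understand the outer format."""
    return OUTER_HEADER + inner_packet

def decapsulate(tunneled: bytes) -> bytes:
    """At the far end of the tunnel, strip the outer header to recover
    the original packet byte-for-byte."""
    if not tunneled.startswith(OUTER_HEADER):
        raise ValueError("not a tunneled packet")
    return tunneled[len(OUTER_HEADER):]

original = b"native-protocol-packet"
assert decapsulate(encapsulate(original)) == original
print("inner packet survives the tunnel unchanged")
```

Real VPNs add encryption and authentication around this same wrap-and-unwrap pattern.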
That seems like enough from chapter 5. Let's move on.
The chapter opens with two examples of customer requirements that can lead to optimization problems: high bandwidth requirements and delay-sensitive applications.
A multicast transmission is intended to go to several addresses on a network. The delivery of such data streams can be done by sending a separate stream to each intended recipient. This duplicates the stream, and may take too much bandwidth. It might be done by sending a broadcast across a network, which would take up all its bandwidth, and which would also send the data to many hosts that have no interest in it.
The text tells us on page 369 that multicasting is limited to a specific range of IP addresses. Devices are configured to share a particular address in this range. The text almost avoids telling you that the range is defined as all Class D addresses, which means that the first (leftmost) byte in the address will be between 224 and 239, inclusive, regardless of the IP addressing scheme used in your network. The text explains that sending messages to multicast addresses improves network performance because devices whose addresses are outside the Class D block can ignore those transmissions. This works on MAC addresses as well. IANA has reserved MAC addresses 01:00:5E:00:00:00 through 01:00:5E:7F:FF:FF as addresses that can stand for multicast addresses on layer 2 of the OSI model. (Only 23 bits are available for the mapping, which is why the range stops at 7F rather than FF.)
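The mapping from Class D IP address to multicast MAC is mechanical: copy the low-order 23 bits of the IP address into the reserved 01:00:5E prefix. A sketch (function names are mine):

```python
import ipaddress

def is_multicast(ip: str) -> bool:
    """True for Class D addresses, 224.0.0.0 through 239.255.255.255."""
    return ipaddress.IPv4Address(ip) in ipaddress.IPv4Network("224.0.0.0/4")

def multicast_mac(ip: str) -> str:
    """Map an IPv4 multicast address to its Layer 2 multicast MAC address.

    The low-order 23 bits of the IP address are copied into the fixed
    prefix 01:00:5E:00:00:00, so the MAC range tops out at 01:00:5E:7F:FF:FF.
    """
    if not is_multicast(ip):
        raise ValueError("not a multicast address")
    low23 = int(ipaddress.IPv4Address(ip)) & 0x7FFFFF
    mac = 0x01005E000000 | low23
    return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

print(is_multicast("224.0.0.5"))         # True
print(multicast_mac("239.255.255.250"))  # 01:00:5e:7f:ff:fa
```

Since 28 bits of IP address are squeezed into 23 bits of MAC, 32 different multicast IP addresses share each multicast MAC, which is one reason hosts still filter at layer 3.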
On page 372, the text returns to the concept of transmission (serialization) delay, reminding us that it is the time it takes to output a packet onto the link, which will change with the size of the packets an application uses and with the speed of the link. The author addresses two methods of dealing with serialization delay.
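The arithmetic behind serialization delay is simple enough to show directly: packet size in bits divided by link speed in bits per second (the function name is mine):

```python
def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# A 1500-byte packet ties up a slow serial link far longer than a fast one:
for bps in (56_000, 1_544_000, 100_000_000):   # modem, T1, Fast Ethernet
    print(f"{bps:>11,} bps: {serialization_delay_ms(1500, bps):8.3f} ms")
```

On a 56 kbps line that one packet occupies the link for over 200 ms, which is why delay-sensitive applications on slow links favor smaller packets.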
Meeting QoS requirements
The text discusses some methods to meet Quality of Service requirements that your customer may have.
The chapter ends with a discussion of features of Cisco equipment that may be used to improve traffic flow on a network. They are all good ideas, but we will leave them for you to research for your projects.