NET 226 - Designing Internetwork Solutions

Chapter 5, Designing a Network Topology, part 2; Chapter 13, Optimizing Your Network Design

Objectives:

This lesson continues the discussion of designing a network from the perspective of its components and connections. Objectives important to this lesson:

  1. Virtual LANs
  2. Wireless LANs
  3. Redundancy and load sharing
  4. Server redundancy
  5. Workstation to router redundancy
  6. Designing the enterprise edge

  7. Multicasting
  8. Serialization delay

Concepts:

Chapter 5

Virtual LANs

We left off with this section of the chapter last week. I will remind you that this section of the chapter contains an unrelated nugget of information. The author defines a bandwidth domain as any set of devices that share bandwidth or compete for access to it. In a classic wired Ethernet, there was one bandwidth domain, because all devices on that LAN competed for access with each other. In an Ethernet with switches, the text tells us that each device that is wired to a switch is in its own bandwidth domain, but this is a little specious, since there is no point in communicating only with the switch.

The text also defines a broadcast domain as the set of devices that can receive each other's broadcast frames. This is a better definition than we usually see. We are reminded (or told, if we did not know) that the broadcast address for layer 2 is a MAC address that is all Fs: FF:FF:FF:FF:FF:FF

The text turns a corner and steers toward VLANs. Users anywhere on your network can be made members of a common Virtual LAN, which lets them communicate as easily as if they were on the same LAN. This was the original use of VLANs. They are not often used for this purpose any longer. Usually, VLANs are used, as the text says, to make a large switch act as though it were really several switches, so that it can be used to separate groups of ports into different VLANs. This has the benefit of having each VLAN act as a separate broadcast domain, minimizing broadcast intrusions for all devices plugged into ports on that switch. A virtual router on the switch connects the separate VLANs the same way a real router would.

As you might imagine from the description of VLAN users being anywhere in your network, a VLAN can exist on specific ports of multiple switches. When this is done, the connections between the switches that contain the parts of a VLAN are called trunks or trunk links. Frames traveling from one such switch to another are given a header identifying the VLAN they belong to. The header is called a VLAN tag. As the illustration on page 144 shows, we can place multiple VLANs on a switch, and they can all span to other switches across trunk links.
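As a concrete (if simplified) view of what travels across a trunk link, here is a minimal Python sketch that pulls an 802.1Q VLAN tag out of a raw Ethernet frame. The field layout (a TPID of 0x8100, then 3 priority bits, 1 drop-eligible bit, and a 12-bit VLAN ID) comes from the 802.1Q standard; the frame bytes below are fabricated for illustration.

```python
import struct

def parse_vlan_tag(frame: bytes):
    """Parse an 802.1Q VLAN tag from a raw Ethernet frame, if present.

    After the 6-byte destination and source MAC addresses, a tagged
    frame carries a 2-byte TPID of 0x8100 followed by a 2-byte TCI:
    3 bits of priority (PCP), 1 drop-eligible bit (DEI), and a
    12-bit VLAN ID.
    """
    tpid, tci = struct.unpack_from("!HH", frame, 12)
    if tpid != 0x8100:
        return None  # untagged frame
    return {"pcp": tci >> 13, "dei": (tci >> 12) & 1, "vid": tci & 0x0FFF}

# A fabricated tagged frame: zeroed MACs, TPID 0x8100, VLAN 100, priority 5
frame = bytes(6) + bytes(6) + struct.pack("!HH", 0x8100, (5 << 13) | 100)
print(parse_vlan_tag(frame))  # {'pcp': 5, 'dei': 0, 'vid': 100}
```

Switches on each end of the trunk read the 12-bit VLAN ID to decide which broadcast domain the frame belongs to, then strip the tag before delivering the frame to an access port.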

Wireless LANs

The text offers some general advice about placing wireless access points, and positioning antennas. The text mentions that most WAP antennas are isotropic antennas, also called omnidirectional antennas. This means that they should radiate signals in a spherical pattern, equally in all directions. The reality is that the patterns are not perfect spheres. Think of an antenna as a stick that points up. Think of the signal as a disk with a hole in its center that the antenna has been pushed through. The strongest signal coming from such an antenna will radiate like a disk that is centered on the antenna. If the antenna is mounted vertically, the plane of that disk will be strongest horizontally. This article on the Cisco web site has several illustrations of antenna patterns, showing us that they can vary a great deal, and that they often have a complicated signal radiance. The disk will not actually be flat. It will be more like a bagel, and sometimes a very misshapen bagel.

The text also mentions that mobile devices like cell phones and laptops have a variety of antenna types and alignments. It mentions that the antennas of those devices may be large or small, and may be oriented in any direction. The text suggests that a given WAP may offer connections to too many wireless devices if the WAP's signal is too strong. It recommends that we may want to reduce signal strength to limit the operating distance, which will limit the number of stations that can connect, which may improve the user experience of those who can connect.

The text also recommends that when we set up multiple WAPs on a campus, we should make all wireless users members of a particular VLAN, which will simplify subnet addressing for those devices, and may provide an advantage when roaming from one wireless cell to another.

The text also recommends redundant WAPs when high availability is desired. When using Cisco equipment, the second WAP in each pair would be placed in Hot Standby mode, which monitors the primary WAP in the pair, and causes the standby WAP to take over if the primary WAP fails.

Redundancy and Load Sharing

In a previous section, the text discussed redundant switches and Spanning Tree Protocol to control which switch is active in a redundant situation. It tells us here that this solution does not support load sharing. It recommends a newer protocol from Cisco, Per VLAN Spanning Tree+ (PVST+), which constructs a separate logical tree for every VLAN.

The illustration on page 148 shows a recommended model for VLAN redundancy. Note that each switch holds elements of two VLANs, and each switch is linked to both switches in the hierarchical layer above it. This matches the conceptual models on pages 123 and 126.

Server redundancy

The text lists several kinds of servers that should be considered for redundancy:

  • file servers
  • web servers
  • DHCP servers - The text reminds us that DHCP requests are broadcast requests. If your DHCP server serves more than one network or subnet, you must configure the routers between them to forward this kind of traffic (on Cisco routers, this is done with the ip helper-address command).
  • name servers - Servers for DNS, WINS, and NetBIOS Name Service (NBNS).
  • database servers

Workstation to router redundancy

Workstations will typically need access to routers for any information not on their own networks. Routers, like other devices, go down from time to time, so redundant routers are recommended.

The text ponders how the workstations will find the redundant routers, once their default routers are down. One recommendation, on page 151, is the use of the Router Discovery Protocol (RDP) which causes routers to multicast their addresses and services every 7 to 10 minutes.

This is a Cisco book, so a Cisco-specific solution to default gateway failure appears on page 152. Hot Standby Router Protocol (HSRP) allows a primary and a backup router to share a virtual router (also called a phantom router). The virtual router's IP address is the one a DHCP server hands out as the default gateway for the network, and whichever physical router is currently active answers for both the virtual IP address and its associated MAC address.
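The phantom router works because its MAC address is a well-known value derived from the HSRP group number, so hosts never need to re-learn an ARP entry when the active router changes. A small sketch of the HSRP version 1 convention:

```python
def hsrp_virtual_mac(group: int) -> str:
    """Return the well-known HSRPv1 virtual MAC for a standby group.

    HSRP version 1 derives the phantom router's MAC address from the
    reserved Cisco prefix 00:00:0C:07:AC followed by the group number
    (0-255). Hosts ARP for the virtual IP and always get this MAC,
    no matter which physical router is currently active.
    """
    if not 0 <= group <= 255:
        raise ValueError("HSRPv1 group numbers range from 0 to 255")
    return f"00:00:0c:07:ac:{group:02x}"

print(hsrp_virtual_mac(10))  # 00:00:0c:07:ac:0a
```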

Cisco HSRP

Cisco's web site has an article about this protocol. In its illustration, each of the hosts would have the address of the virtual router designated as its default gateway by DHCP.

Designing the enterprise edge

The text suggests that our redundant WAN links should be genuinely redundant. We are warned that we should ask for circuit diversity from our data carriers, so that a backup circuit actually follows a different physical path from our primary connection to their data service. It would not do any good to have a backup system that is taken down by the same threats that could take down our primary system.

The text continues the idea of redundancy and diversity. On page 155, we see a table of four options that an enterprise might be able to choose for access to the Internet. It would be nice to have all the choices in that table. Be aware that your choices may be limited.

The text presents some terms you may know, but uses some in different ways than you might know:

  • multihoming - the practice of providing multiple routes to the Internet
  • default route, gateway of last resort - the router to use when no other router is specified
  • best route - the most efficient and fastest route to a destination, which the text warns us we cannot expect over the Internet, because Border Gateway Protocol (BGP) chooses routes by policy attributes rather than by measured speed or efficiency

Virtual Private Networking

You probably know that VPNs are used to make secure connections over the Internet, over leased data lines, and over regular network lines. The text provides some background on VPN functions. It mentions that VPNs often use tunneling, the practice of encapsulating packets in other kinds of packets so they can pass across a network that does not understand their native packet type.

That seems like enough from chapter 5. Let's move on.

Chapter 13

The chapter opens with two examples of customer requirements that can lead to optimization problems, high bandwidth requirements and delay-sensitive applications.

Multicasting

A multicast transmission is intended to go to several addresses on a network. The delivery of such data streams can be done by sending a separate stream to each intended recipient. This duplicates the stream, and may take too much bandwidth. It might be done by sending a broadcast across a network, which would take up all its bandwidth, and which would also send the data to many hosts that have no interest in it.

The text tells us on page 369 that multicasting is limited to a specific range of IP addresses. Devices are configured to share a particular address in this range. The text almost avoids telling you that the range is defined as all Class D addresses, which means that the first (leftmost) byte in the address will be between 224 and 239, inclusive, regardless of the IP addressing scheme used in your network. The text explains that sending messages to multicast addresses conserves network capacity, because devices whose addresses are outside the Class D block can ignore those transmissions. This works on MAC addresses as well. The text tells us that IANA has reserved MAC addresses 01:00:5E:00:00:00 through 01:00:5E:FF:FF:FF as addresses that can stand for multicast addresses on layer 2 of the OSI model.
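The mapping between a Class D address and its reserved MAC can be computed directly: the low-order 23 bits of the IP address are copied into the 01:00:5E prefix. A short Python sketch (the example addresses are arbitrary):

```python
import ipaddress

def multicast_mac(ip: str) -> str:
    """Map an IPv4 multicast (Class D) address to its layer-2 MAC.

    The low-order 23 bits of the IP address are copied into the
    IANA-reserved prefix 01:00:5E. Because a Class D address has 28
    significant bits, 32 different multicast IPs share each MAC.
    """
    addr = ipaddress.IPv4Address(ip)
    if not addr.is_multicast:  # first byte outside 224-239
        raise ValueError(f"{ip} is not a Class D address")
    low23 = int(addr) & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_mac("224.1.1.1"))    # 01:00:5e:01:01:01
print(multicast_mac("239.129.1.1"))  # 01:00:5e:01:01:01 -- same MAC!
```

Note the second example: because 5 of the 28 significant bits are discarded, 32 different multicast groups can land on the same MAC address, and hosts must still filter at layer 3.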

Serialization delay

On page 372, the text returns to the concept of transmission (serialization) delay, reminding us that it is the time it takes to output a packet onto a link, which changes with the size of the packets an application uses and with the speed of the link. The author addresses two methods of dealing with serialization delay.
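Serialization delay is simple arithmetic: packet size in bits divided by link speed in bits per second. A quick sketch with illustrative numbers:

```python
def serialization_delay_ms(packet_bytes: int, link_bps: int) -> float:
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# A 1500-byte packet ties up a 64 kbps line for 187.5 ms, long enough
# to ruin a voice call queued behind it; at 100 Mbps the same packet
# takes only 0.12 ms.
print(serialization_delay_ms(1500, 64_000))       # 187.5
print(serialization_delay_ms(1500, 100_000_000))  # 0.12
```

The slow-link number is why the two techniques below exist: on low-speed WAN circuits, a single large packet can stall time-sensitive traffic for longer than voice applications can tolerate.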

  • Link-layer fragmentation and interleaving (LFI) - This technique breaks large data packets into smaller fragments at the link layer, then slips packets from time-sensitive applications in between those fragments, interleaving them in one stream, like shuffling the cards in a deck and dealing them out to players who are each interested in reconstructing one particular suit. A small voice packet no longer has to wait for an entire large packet to finish transmitting.
  • Compressed Real-time Transport Protocol (CRTP) - Where the method above is a workaround for sending time-sensitive data across a network, this one is focused on the delivery of that data. RTP is meant to be paired with UDP, which provides quick but unreliable delivery of packets. RTP adds the sequence numbers and timestamps that receivers need to play a stream back in order, functions associated with layer 4, the Transport layer. It also works with IP, which is responsible for finding routes to networks. Compressed RTP shrinks the combined RTP, UDP, and IP headers to reduce overhead on slow links.
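To make the LFI idea concrete, here is a toy Python sketch (the fragment size and queue contents are invented for illustration) that fragments one large data packet and interleaves small voice packets between the fragments:

```python
from collections import deque

def fragment(packet: bytes, size: int):
    """Split a large packet into link-layer fragments of at most `size` bytes."""
    return [packet[i:i + size] for i in range(0, len(packet), size)]

def interleave(voice: deque, data_fragments: deque):
    """Yield frames for the link, slipping a voice packet in between
    data fragments so voice never waits behind a whole large packet."""
    while voice or data_fragments:
        if voice:
            yield voice.popleft()
        if data_fragments:
            yield data_fragments.popleft()

voice = deque([b"v1", b"v2", b"v3"])
data = deque(fragment(b"D" * 1500, 500))  # one big packet -> 3 fragments
print([len(f) for f in interleave(voice, data)])  # [2, 500, 2, 500, 2, 500]
```

In the output, each 2-byte voice frame waits behind at most one 500-byte fragment instead of the whole 1500-byte packet, which is the entire point of the technique.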

Meeting QoS requirements

The text discusses some methods to meet Quality of Service requirements that your customer may have.

  • IP Precedence and Type of Service - This part is historical, so bear with it.
    The text explains that IP packets have always had bits in their headers to tag their service types, so some packets could be given precedence over others. This link will take you to a page that diagrams the header portion of several types of packets. The bottom line is that if you used routers and applications that could handle this data, you could hope to prioritize packets from applications that needed time sensitive delivery.

    Note the discussion in the text, explaining that the Type of Service field is subdivided into a Precedence (priority level) field and a Type field. Note also that the value in the Precedence field lets the router make choices between packets that are queued for the same interface (port). What does this mean? Packets queued for different interfaces are not in competition with each other, which gives a router more incentive to have multiple routes to the same destination, starting with the port at which a packet leaves the router. This may be more important than the rest of the details in this part of the discussion, since the author ends it by telling us that no protocols made good use of this information, and the next discussion is more important.

  • IP Differentiated Services Field - The text explains that this was the next evolution of the Type of Service field in IP packets. The confusing illustration on page 377 uses two methods to number the bits in a packet. The lower part of the illustration numbers them consecutively from the beginning of the packet. The upper part of the illustration numbers them from the beginning of the Differentiated Services Codepoint (DSCP) field. The purpose of this field is the same as it always was; this is just the newer way of coding it.

  • Resource Reservation Protocol (RSVP) - RSVP is not a very good acronym. It is a protocol that can be used by a host to request a quality of service from a network. Routers can make this request to other routers to set up channels of a particular service level. The text explains that using RSVP is an example of an out-of-band request for a service level. Using the bits in the DSCP field to mark packets for a service level is an example of an in-band request for service.
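The bit positions discussed above are easy to demonstrate: IP Precedence is the top 3 bits of the old ToS byte, and DSCP is the top 6 bits of that same byte. A small sketch using the standard Expedited Forwarding codepoint as the example value:

```python
def precedence(tos_byte: int) -> int:
    """Original IP Precedence: the top 3 bits of the ToS byte."""
    return tos_byte >> 5

def dscp(ds_byte: int) -> int:
    """Differentiated Services Codepoint: the top 6 bits of the same byte."""
    return ds_byte >> 2

# DSCP EF (Expedited Forwarding, value 46) occupies the byte 0xB8; the
# old precedence reading of the same bits is 5, so a legacy router that
# only understands IP Precedence still sees a high-priority packet.
print(dscp(0xB8), precedence(0xB8))  # 46 5
```

This backward compatibility is deliberate: the DiffServ designers kept the three high-order bits in place so the two numbering schemes on page 377 describe the same wire format.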

The chapter ends with a discussion of features of Cisco equipment that may be used to improve traffic flow on a network. They are all good ideas, but we will leave them for your research for your projects.


Week 5 Assignment: Chapter 5

  • From Chapter 5:
    • Review Questions 1 - 4 on page 165
  • From Chapter 13:
    • Review Questions 1 - 5 on page 390
  • Read Chapters 6 and 7