NET 224: Advanced Routers and Routing

Chapter 7: Layer 2 Switching

Objectives:
 

This chapter introduces concepts specific to switches:

  1. Comparing collision and broadcast domains
  2. Advantages to using switches
  3. Layer 2 switch functions
  4. Spanning Tree Protocol
  5. LAN Switching methods
Concepts:

The text opens with a few comments about switches: they operate at Layer 2 of the OSI model, because they use MAC addresses (Layer 2 addresses) to decide where to deliver data. Routing protocols have methods to stop routing loops, but those methods do not operate at this layer, so switches must use other means to prevent switching loops.

The text discusses what networks were like before switches: essentially a single collision domain, limited to one transmitting device at a time. The text offers another definition of a collision domain: "one network segment with two or more devices contending for bandwidth". The solution to this problem is to break the single collision domain into as many separate domains as possible, each with as few contending devices as possible.

Bridges break network segments into separate collision domains, but the number of domains created is limited to the small number of ports on a bridge. Several devices are connected to each port, and those devices are still in a single collision domain. A switch has the virtue of establishing a separate collision domain on each of its several ports. When switches were first marketed, they were expensive, and often were used in conjunction with hubs. Remember, devices plugged into a hub are on a single collision domain. Now, switches are commonly used in place of hubs, providing multiple collision domains without the congestion of hubs.

Switches can run faster than routers because they do not rewrite the address data in the frames they process. Remember this discussion of routers from chapter 5:

Routers pass signals from one network to another. Routers use software addresses instead of hardware addresses. This makes them independent of protocols used at lower layers. Almost. Example: a transmission is sent from a station on network 1 to a station on network 50. It could pass along any number of routes. What happens is like this:

  • The Network Layer header of the outgoing message has a place to write information about the sender and the intended receiver. Assume we are talking about IP addresses. The sender's IP address is saved in the Network Layer header, along with the IP address for the recipient. This data stays in the Network Layer header until the intended recipient breaks down the header.
    Layer             Source info           Destination info
    Network layer     Sender's IP           Receiver's IP
    Data Link layer   (not yet set)         (not yet set)
  • The Data Link layer header also has a place to write down the address of the sender and the receiver, the difference being that this layer uses MAC addresses. Since the intended recipient is not on the sender's network, the sending station sets the Data Link Layer address of the recipient to the MAC address of the router (default gateway) on his network, and sends the message as a frame to that router. If necessary, an ARP signal is sent to determine the MAC address of the default gateway router.
    Layer             Source info           Destination info
    Network layer     Sender's IP           Receiver's IP
    Data Link layer   Sender's MAC          Default Gateway MAC
  • The router on the sender's network gets the frame, erases the sender and recipient addresses in the Data Link Layer, and decides on a route to the recipient's network (which is written on the header of the Network layer, remember?). The next router in a logical chain is selected. If necessary, ARP is used to find the MAC address of the next router. The next router's MAC address is written in the Data Link Layer header as the "recipient", and the current router's MAC address is written to the Data Link Layer header as the "sender". The frame is forwarded to the next router.
    Layer             Source info           Destination info
    Network layer     Sender's IP           Receiver's IP
    Data Link layer   Default Gateway MAC   Next router's MAC

  • The process in the step above is repeated until a router on the intended recipient's network gets the frame. Then, the final router's MAC address and the receiver's MAC address are written to the Data Link Layer header, and the frame is delivered to the recipient, where it is unpacked and handed to the IP protocol on the Network layer.
    Layer             Source info           Destination info
    Network layer     Sender's IP           Receiver's IP
    Data Link layer   Final router's MAC    Receiver's MAC

Switches do not have to do any of this kind of readdressing when the data they pass does not leave their network. They simply forward each frame out the port that leads to the destination MAC address originally written in the Data Link header.

The text explains that bridges create address tables in RAM, using the software they run. Switches do a similar thing, but they build their address tables with hardware circuits: application-specific integrated circuits, called ASICs. The text lists this as one of the virtues of a switch, along with related characteristics:

  • ASICs provide "hardware-based bridging"
  • wire speed - the speed of transmission is the speed of the medium used
  • low latency - no processing of data or rewriting of addresses, so no waiting for such things
  • low cost - switches cost little more than hubs, and less than routers.

The text reviews bridges again, reminding us of the 80/20 rule: a network segment is properly bridged when at least 80% of the traffic originating on a segment stays on that segment, so that no more than 20% needs to cross the bridge. Bridges do not break up broadcast domains, they only break up collision domains, so broadcast traffic can still congest the network. Multicasts also travel across bridges and switches, adding more congestion. These types of transmissions are limited by routers, not by Layer 2 devices.

The text tells us, repetitively, that switches are like bridges with many more ports. The author continues his irritating habit of needless foreshadowing: he tells us that bridges run one spanning-tree instance per bridge while switches can run many, but he does not explain what spanning tree is or why we would care. In a few pages, he will reveal all, once we have hungered enough for his knowledge.

Switches perform three functions on Layer 2 that summarize their purpose:

  • Address learning - switches learn the MAC address of the sender of each frame they receive, and they record that address, associated with the port on which it was received (see the command sketch after this list)
  • Forward/filter decisions - switches look at the destination MAC address of received frames, and forward the frame only to a port associated with that address, if such an association has been recorded. If no port is associated with the destination address, the switch sends the frame out every port except the one it was received on.
  • Loop avoidance - switches can be linked to one another redundantly, which can cause data transmission loops. This is the same kind of looping problem that bridges have: the devices can be continuously informed that a device is on one port, then another, then the first again, causing constant updating of the address table and multiple passing of the same frame. Such looping behavior is avoided with Spanning Tree Protocol. (Oh joy, another foreshadowing...)
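To see the result of the first two functions on a real switch, you can display the address table the switch has built. A minimal sketch, assuming a 2950-class switch in privileged mode (the exact command name varies a little between platforms and IOS versions):

show mac-address-table

Each entry pairs a learned MAC address with the port it was learned on. A frame addressed to a MAC address that is not yet in the table is flooded out every port except the one it arrived on, as described above.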

Spanning Tree Protocol (STP) was invented by Digital Equipment Corporation (DEC). The IEEE developed a version of it called IEEE 802.1D, and Cisco devices use this version of STP. STP avoids loops by building a topology model of the network and placing redundant links in a blocking state, so that only one active path exists between any two segments. STP uses a method called the Spanning Tree Algorithm (STA) to determine which ports will be set to which state (see below).

The text explains spanning tree protocol in terms of bridges. The discussion in the text wanders back and forth in terminology, so we must assume that Cisco switches act just like this kind of bridge, as far as the Spanning Tree Protocol goes. We might consider a switch to be a bridge with many more ports than usual. So, in the discussion about STP, every time the author uses the word "bridge", assume that the word "switch" can be used instead. Typically, bridges that use this protocol are called transparent or learning bridges. They have the following characteristics:

A bridge is transparent to a sending device if the sending device is unaware of the bridge, or unaware that the receiving device may be across a bridge. This type of bridge requires little setup: it learns which segment devices are on when they send frames through it. Transparent bridges are also called learning bridges because they learn which segments devices are on by receiving traffic from them, and they store that knowledge in a filtering database. While a transparent bridge is still learning, it forwards frames to all segments except the segment they come from. Once it learns which segment a device is on, traffic to that device is forwarded only to the proper segment (unless the traffic is already on the right segment, in which case it is not forwarded at all). Bridges may connect more than two segments. Connection is made through physical ports on the bridge. The ports can be in one of five states, described below; a way to check these states on a live switch follows the list.

  • Disabled (off) - the port is off line
  • Blocking (mostly off) - a standby mode, used by backup bridges. Frames are ignored unless they are addressed to the multicast address of this bridge (which gives us a way to change the state). The text also refers to a port in this state as a nondesignated port.
  • Listening (get ready...) - a waiting state; the bridge is preparing to learn or forward, but assumes that traffic may have misinformation in it at this time. This should only be in effect for a specific amount of time.
  • Learning (get set...) - the bridge is paying attention to traffic, modifying its filtering database, but not forwarding. This is a timed state.
  • Forwarding (go!) - normal operation, frames are forwarded, based on the filtering database. Learning (modifying the filtering database) also takes place in this state. Note that this is the ONLY port state in which frames are forwarded. The text also refers to a port in this state as a designated port.
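A quick way to check which of these states a port is actually in, assuming a 2950-class switch (older platforms such as the 1900 use a somewhat different command):

show spanning-tree

The output lists each port with its current STP state (blocking, listening, learning, or forwarding) and identifies the root bridge, which ties into the election process described below.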

The intelligent part of the bridge, the part that decides whether to forward a frame to a specific segment, is called the Relay Entity. In order for a frame to be forwarded, this list of requirements must be met:

  • the frame must be addressed to a device on a segment other than the one it started on - no trick here, every cable in a star-wired network is a separate segment
  • the frame must have data in it from a layer above the MAC sublayer
  • there must be a CRC in the frame
  • the frame must not be addressed to the bridge

Transparent bridges store the incoming frames, check the above criteria, check the CRC for errors, and forward frames that need forwarding. Because the frame is stored and processed this way, the bridges are operating in a store-and-forward mode. There is an inevitable delay while the checks are made, referred to as the latency of the bridge. The larger the frames, the longer the latency.

Now for the problem: a bridging loop. First, you need to know that redundant bridges can be put between segments, in case one breaks. In the case of switches, the various switches in a network may have redundant connections to each other, so that there are alternate paths available should a port stop working.

A bridging loop can be created when frames pass endlessly from one segment to the next across the two bridges. It can also happen if the bridges generate a broadcast storm of new frames. An example: assume two segments are connected by two bridges. A frame is generated on Segment A by workstation W1 and hits both bridges. Both bridges copy the frame, learn that W1 is on Segment A, and forward the frame to Segment B. However, each bridge will then receive the copy that the other bridge forwarded to Segment B. This causes the bridges to update their databases to show workstation W1 as being on Segment B, and they will forward each of these frames back to Segment A. Then the process repeats, again and again and again. This is not good.

To avoid the bridging loop problem, IEEE (Institute of Electrical and Electronics Engineers) standard 802.1D gives us the Spanning Tree Protocol. This says that in each redundant pair of bridges, one is the designated bridge and the other is the backup bridge. Bridges communicate with bridge protocol data units (BPDUs) to determine which is the designated bridge and when the backup bridge must take over. The network should be diagrammed like a tree. One bridge is chosen to be the root bridge, which sends configuration messages to the designated bridges.

The root bridge is chosen by its bridge ID, which is an eight-byte (16 hex digit) number composed of two bytes assigned by the administrator and all six bytes of the MAC address of the port adapter (NIC). The portion assignable by the administrator is referred to in the text as the priority value of that bridge. The default priority value for all devices using Cisco STP is 32,768. This value can be set with the command:
spanning-tree vlan vlan-id priority value
The first number is the number of the virtual LAN that the switch represents. The second number is the priority assigned to that switch. The text provides an example of this command which shows that the priority value must be a multiple of 4096. Setting one switch to a value lower than every other switch's priority gives it the lowest bridge ID on your network, making it the root bridge.
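As a concrete sketch, with a VLAN number and priority that are example values rather than values taken from the text, you might enter on a 2950-class switch:

config t
spanning-tree vlan 1 priority 4096
exit

Because 4096 is lower than the default 32,768, this switch's bridge ID is now lower than that of any switch still using the default, so it should win the root bridge election. If two switches have equal priority values, the one with the lower MAC address wins.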

The bridge in the tree with the lowest bridge ID number becomes the root bridge. Bridges elect a root bridge by sending BPDUs out all ports, each proclaiming itself to be the root bridge. If BPDUs arrive from bridges with lower IDs, each bridge acknowledges this by changing its opinion and sending BPDUs that identify the new candidate as the root bridge. (Note that these BPDUs contain the address of the bridge sending them in one field, and the address of the bridge it believes to be the root bridge in another field.) The root bridge continues to send BPDUs every two seconds, per the IEEE standard, even after the election is over. The text compares BPDUs to hello packets, which makes it clearer why they continue for as long as the device is running.

The text describes several LAN switching methods, characterized by different behaviors that lead to different latencies (a worked latency comparison follows the list). Be aware that an Ethernet frame can be as large as 1,518 bytes:

  • Cut-through - also called FastForward and real time; this kind of switch reads the destination MAC address in an incoming frame, looks up the port to send the message out on, and sends it. This means the switch can begin forwarding after receiving only the destination address, the first six bytes of the frame (about 14 bytes if you count the preamble).
  • Fragment free - also called modified cut-through; this kind of switch waits until it receives 64 bytes of a frame before sending it out. The reason is that collisions, and the frame fragments they produce, show up within the first 64 bytes, so a frame that gets past 64 bytes is unlikely to be a damaged fragment. Model 1900 series switches use this method by default.
  • Store-and-forward - this type of switch saves an entire frame in internal buffers, checks for errors with a Cyclic Redundancy Check, and sends the frame on. This is the most error-free method, but it also has the longest latency.
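A rough worked example of why these methods differ in latency (my own arithmetic, ignoring the preamble and inter-frame gap): a maximum-size 1,518-byte frame is 12,144 bits, which takes about 1.2 milliseconds to receive in full at 10 Mbps (about 121 microseconds at 100 Mbps). A store-and-forward switch must wait that long before it can begin transmitting, while a fragment-free switch waits only for the first 64 bytes (about 51 microseconds at 10 Mbps), and a cut-through switch starts after the 6-byte destination address (about 5 microseconds at 10 Mbps).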

The text describes configuring a 1900 switch and a 2950 switch. Your simulation software will allow you to experiment with both.

The steps for the two switches run in parallel:

  • Power on / POST
    1900: POST runs and the LEDs turn green. If all is well, the LEDs then turn off. If there are bad ports, the System LED and the LEDs for those ports turn amber.
    2950: POST runs; the System LED is off. If all is well, the SYST and STAT LEDs turn green.
  • After booting
    1900: A menu appears asking whether you want to go to M - Menu interface, K - Command Line Interface, or I - IP Configuration.
    2950: The switch is ready to use. You can enter configuration mode, but it is not necessary.
  • enable
    Both switches: until the enable password is set, you enter enable (privileged) mode with just this command.
  • config t
    Both switches: enters global configuration mode.
  • Setting passwords
    1900: enable password level 1 password sets the level 1 password for user mode; enable password level 15 password sets the level 15 password for the enable mode.
    2950: passwords are set as they are on a router: enable password, and enable secret (the password for the privileged mode).
  • hostname name
    Both switches: you should set a name for each switch as a visual check for which one you are connected to.
  • exit
    Both switches: use the exit command to leave configuration mode.
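Putting the 2950 side of that comparison together, here is a minimal configuration sketch; the host name and passwords are made-up example values, not values from the text:

enable
config t
hostname Switch2950A
enable password letmein
enable secret s3cret
exit

As on a router, the enable secret takes precedence over the enable password when both are set.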

The text explains that you do not have to configure IP settings for a switch, but you will want to do so if you want to do any of the following:

  • manage the switch with Telnet (or Hyperterminal)
  • use management software
  • configure VLANs

To view the switch's IP settings use the command show ip.

Set a 1900 switch's IP address:
ip address xxx.xxx.xxx.xxx mmm.mmm.mmm.mmm
In the command above, xxx.xxx.xxx.xxx stands for the IP address, and mmm.mmm.mmm.mmm stands for the subnet mask you are assigning to the switch.

Set a 1900 switch's default gateway:
ip default-gateway xxx.xxx.xxx.xxx
In the command above, xxx.xxx.xxx.xxx stands for the IP address of the router to use to send signals outside your network.
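For example, a sketch using made-up values (192.168.1.10, a 255.255.255.0 mask, and a gateway of 192.168.1.1 are assumptions, not addresses from the text), entered in global configuration mode on a 1900:

ip address 192.168.1.10 255.255.255.0
ip default-gateway 192.168.1.1

Afterward, show ip (mentioned above) should display both settings.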

To set the IP address of a 2950 switch, you must configure a VLAN interface. Silly, isn't it? VLANs are not explained until the next chapter. The command sequence below sets the address for VLAN 1, which is the default VLAN on any switch.
config t
int vlan1
ip address xxx.xxx.xxx.xxx mmm.mmm.mmm.mmm
no shut
exit

As above, xxx.xxx.xxx.xxx stands for the IP address, and mmm.mmm.mmm.mmm stands for the subnet mask you are assigning to the switch. The no shut command brings the VLAN 1 interface up (takes it out of the shutdown state).
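A concrete sketch using the same sort of made-up addresses as above (again, assumptions, not values from the text):

config t
int vlan1
ip address 192.168.1.11 255.255.255.0
no shut
exit
ip default-gateway 192.168.1.1
exit

The first exit returns you to global configuration mode, where ip default-gateway is entered just as it is on the 1900; the second exit leaves configuration mode.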

Names for interfaces can be created with the description command. On a 1900 switch, you cannot use spaces in the descriptive names, but you can use underscores as visual separators. 2950 switches allow spaces in descriptive names.
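For instance, on a 2950 (the interface number and the description text are made up for illustration):

config t
int fa0/4
description Uplink to Library switch
exit

On a 1900, the same idea would have to be written without spaces, for example description Uplink_to_Library_switch.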

Ports on Cisco switches can be configured so that only devices with specific MAC addresses may be plugged into those ports. Use the command
switchport port-security mac-address address
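A fuller sketch for a single port on a 2950 follows; the interface number and MAC address are made up, and on most IOS versions the port must also be set to access mode with port security enabled before a static address takes effect:

config t
int fa0/3
switchport mode access
switchport port-security
switchport port-security mac-address 0000.0c12.3456
exit

Once this is in place, frames arriving on that port from any other source MAC address violate the port's security setting.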

When you make changes to a 1900 switch, those changes are automatically stored in NVRAM for the next boot. The text reminds us that routers do not work this way: they require us to copy the running-config file to the startup-config file if we want to save the running configuration.

Like a router, a 2950 switch has running-config and startup-config files. To copy the running configuration to the startup file, the command is
copy run start

You can delete the stored configuration from a 1900 switch with the command delete nvram. If you do, the switch loses the running configuration as well and runs with factory default settings. On a 2950 switch, the command erase startup-config deletes the stored configuration, but the running configuration stays in effect until you restart the switch without saving.