This chapter discusses many things that happen in the Central Processing Unit of a computer. First, we are reminded that a CPU will contain at least three components: the Arithmetic Logic Unit (which does the computation and comparisons), the Control Unit (which moves data and instructions to and from secondary storage, RAM, and registers), and registers (volatile memory in the CPU itself).
We are told that the CPU performs various actions that belong to one of two cycles: fetch and execution. The fetch cycle is also called the instruction cycle because its primary task is to load instructions. Page 107 lists the four tasks performed by the Control Unit in a fetch cycle, as well as the tasks performed by the ALU in an execution cycle.
Those task lists are summarized in the diagram on page 108. As the text notes, the CPU alternates between fetch and execution cycles while any program is running.
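The alternation between the two cycles can be sketched as a loop: fetch an instruction, advance the pointer, separate the op code from the operands, then execute. This is a toy illustration (the op codes and register layout are invented for the example, not taken from the text):

```python
# Toy fetch/execute loop. Each "instruction" is a tuple:
# the op code first, then its operands.

def run(program):
    registers = [0] * 4       # a small register file
    pc = 0                    # instruction pointer
    while True:
        # --- fetch cycle: load the instruction, advance the pointer ---
        instruction = program[pc]
        pc += 1
        op, *operands = instruction     # separate op code from operands
        # --- execution cycle: act on the operands ---
        if op == "LOAD":                # LOAD value, dest
            value, dest = operands
            registers[dest] = value
        elif op == "ADD":               # ADD src1, src2, dest
            a, b, dest = operands
            registers[dest] = registers[a] + registers[b]
        elif op == "HALT":
            return registers

program = [
    ("LOAD", 2, 0),
    ("LOAD", 3, 1),
    ("ADD", 0, 1, 2),
    ("HALT",),
]
print(run(program))   # register 2 ends up holding 2 + 3 = 5
```

A real CPU does this in hardware, of course; the point of the sketch is only the shape of the loop: fetch, increment, decode, execute, repeat.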
As the text stated in chapter 2, an instruction is a command to the computer to do something. Chapter 4 explains that an instruction may contain more than the command itself. The part of the instruction that is the command is called the Op Code. The instruction may also contain values that are to be processed by the command; these portions of the instruction are called the Operands. That's the math word for a quantity that an operation acts on.
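Concretely, the op code and operands occupy different bit positions within the instruction. Here is a sketch that pulls them apart with shifts and masks, using a made-up 16-bit layout (4-bit op code, two 6-bit operands) rather than any real instruction set:

```python
# Hypothetical 16-bit instruction: 4-bit op code, then two 6-bit operands.
def decode(instruction):
    op_code  = (instruction >> 12) & 0xF    # top 4 bits
    operand1 = (instruction >> 6)  & 0x3F   # next 6 bits
    operand2 = instruction         & 0x3F   # low 6 bits
    return op_code, operand1, operand2

# 0b0001_000101_000011 -> op code 1, operands 5 and 3
print(decode(0b0001_000101_000011))   # (1, 5, 3)
```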
The kinds of instructions that can be processed by a CPU are determined by the type of CPU. The collection of instructions that any particular CPU can process is called its Instruction Set. As the text explains a few pages later, processors can be classed as using Complex Instruction Set Computing (CISC) or Reduced Instruction Set Computing (RISC). Before discussing this point, the text explains some basic instructions that would be found in either kind of set.
The text describes three sequence control operations on page 115 that support the idea of executing commands out of their linear sequence in a program.
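The point of sequence control is that the next instruction is not always the next one in memory: a branch can rewrite the instruction pointer. The sketch below illustrates this with an invented conditional-branch op code (the specific operations and their names are assumptions for the example, not the text's list):

```python
# Sketch of sequence control: a branch overrides the default
# "step to the next instruction" behavior by rewriting the pointer.
def run_with_branches(program):
    pc, counter = 0, 0
    while pc < len(program):
        op, *args = program[pc]
        pc += 1                        # default: linear sequence
        if op == "INC":
            counter += 1
        elif op == "BRANCH_IF_LT":     # conditional branch
            limit, target = args
            if counter < limit:
                pc = target            # jump out of linear sequence
        elif op == "HALT":
            break
    return counter

program = [
    ("INC",),
    ("BRANCH_IF_LT", 3, 0),   # loop back to instruction 0 until counter == 3
    ("HALT",),
]
print(run_with_branches(program))   # 3
```

This is also how loops exist at the machine level: a conditional branch back to an earlier address.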
The text offers a few examples of combining simple instructions to create more complex instructions. The examples are not terribly clear, but the concept is.
On page 119, the text begins a discussion of instruction formats, which are patterned templates for the various parts of instructions. Note the two examples on page 120, in which an Op Code of a set length is followed by three operands. The size of the third operand varies between the two examples. This illustrates the idea that instructions may be fixed-length for a processor or they may be variable-length.
Fixed-length instructions make it easy to increment the pointer that points to the next instruction: it is always incremented by the same amount. Variable-length instructions require that the current instruction be measured, and the pointer incremented by that instruction's length.
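The difference in pointer arithmetic can be shown in a few lines. The byte layout here is invented for illustration (assume the high nibble of an instruction's first byte encodes its total length), not a real encoding:

```python
# Fixed length: always step the pointer by the same amount.
def next_pointer_fixed(pointer, instruction_size=4):
    return pointer + instruction_size

# Variable length: the current instruction must be measured first.
# Assumption for this sketch: the first byte's high nibble holds the length.
def next_pointer_variable(memory, pointer):
    length = memory[pointer] >> 4
    return pointer + length

memory = bytes([0x21, 0x05,               # a 2-byte instruction
                0x40, 0x01, 0x02, 0x03])  # a 4-byte instruction
p = 0
p = next_pointer_variable(memory, p)      # steps to offset 2
p = next_pointer_variable(memory, p)      # steps to offset 6
print(p)   # 6
```

Real variable-length processors (x86, for instance) must do exactly this kind of measuring before they know where the next instruction begins.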
On page 124, the text brings up a new subject that affects system performance: the system clock. The system clock is like a pacemaker for the heart of a computer, or a conductor for the orchestra that the computer's hardware makes up. It sends pulses that serve as time synchronization signals to the devices inside the computer. The system clock runs at a particular frequency (its clock rate), which is measured in hertz; one hertz is one cycle per second. A system clock's rate is more likely to be measured in megahertz (millions of hertz) or gigahertz (billions of hertz). Instead of counting the number of cycles per second, we can also express the frequency of the system clock by the length of time it takes to run one cycle. The math to calculate this value, the cycle time, is to take the multiplicative inverse of the clock rate (1 divided by the clock rate).
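The cycle-time calculation is a one-liner. The 3.2 GHz figure below is just an example value:

```python
# Cycle time is the multiplicative inverse of the clock rate.
def cycle_time_seconds(clock_rate_hz):
    return 1 / clock_rate_hz

rate = 3.2e9                        # a 3.2 GHz clock (example value)
t = cycle_time_seconds(rate)
print(f"{t * 1e9:.4f} ns")          # 0.3125 ns per cycle
```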
The text tells us that these basic measures of a system are important, but others are more important. A system might be measured by how many millions of instructions per second (MIPS) it can complete, or how many millions of floating-point operations per second (MFLOPS or megaflops) it can perform. FLOPS can also be measured in billions (GFLOPS, gigaflops), trillions (TFLOPS, teraflops), or quadrillions (PFLOPS, petaflops). These measures will differ from the clock rate of a system because most operations take multiple cycles; the more complex an operation is, the more clock cycles it is likely to take to complete. To complicate the picture further, the text tells us that it can take 2 to 10 cycles to access RAM, and thousands or millions of cycles to access secondary storage. Each of these factors slows down the throughput of the system. Each cycle the processor spends waiting for another component to complete a request is called a wait state.
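A rough calculation shows why MIPS is lower than the clock rate would suggest, and how wait states drag it down further. The cycle counts below are illustrative figures, not measurements of any real processor:

```python
# Why MIPS differs from clock rate: each instruction takes several cycles,
# and wait states (cycles spent waiting on memory) add more on top.
def mips(clock_rate_hz, cycles_per_instruction, wait_states_per_instruction=0):
    total_cycles = cycles_per_instruction + wait_states_per_instruction
    return clock_rate_hz / total_cycles / 1e6

print(mips(2e9, 4))      # 500.0 MIPS: a 2 GHz clock, 4 cycles per instruction
print(mips(2e9, 4, 6))   # 200.0 MIPS: same clock, but 6 wait-state cycles each
```

A 2 GHz clock does not mean 2,000 MIPS; dividing out the cycles per instruction, plus any cycles wasted waiting on RAM, gives a much smaller number.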
The text has discussed registers in the CPU for many pages. We are reminded on page 128 that a CPU uses two types of registers. General-purpose registers are used by running programs as we have already discussed. Special-purpose registers are used for three other purposes, two of which have been discussed.
A concept that relates to other points in the chapter is word size. It relates to the system bus size because the bus size limits the number of bits that can be delivered to the CPU at once, and the word size is the number of bits the CPU can process at once. The word size of the CPU also determines the number of addresses the CPU can manage, and the number of bits that can be transferred to memory at once.
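The link between word size and the number of manageable addresses is just a power of two: an n-bit address can name 2^n distinct locations. A quick sketch (assuming, for simplicity, that the full word is used as an address and each address names one byte):

```python
# An n-bit address reaches 2**n distinct memory locations.
def addressable_locations(address_bits):
    return 2 ** address_bits

print(addressable_locations(16))   # 65536 (64 KiB of byte-addressed memory)
print(addressable_locations(32))   # 4294967296 (4 GiB)
```

This is why, for example, 32-bit processors famously topped out at 4 GiB of directly addressable memory.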
The text discusses some methods of enhancing the performance of a processor.
The chapter spends the next several pages discussing electronic notation and physical factors that affect all electrical equipment, such as heat, electrical resistance, and circuit length. It also discusses some possible improvements in computing that have in fact been discussed for years but have yet to be realized.