Concepts: Chapter 11

The chapter begins with a basic description of an operating system. It is typically a suite of programs that provide services to the system users:
At some time in the past it was decided that using an operating system for these services is more efficient than having to include them in every application you load on a system. This approach saves time for developers and makes it possible to port applications from one operating system to another with less recoding. On page 405, there is a more detailed chart of five functional areas of an operating system, and several major concerns within each functional area. Unfortunately, the chart in the text is barely legible, so I have reproduced it here.
There are relationships across the four rows that I have used colors to show, and relationships down the five columns as well. For example, the text tells us that the items in the Resource Allocation column relate to users, processes started by them and for them, and resources used by those processes. It is important for the operating system to allocate time cycles, memory, storage, input, and output, and to release those resources for reallocation when they are no longer needed.

The text mentions Interrupt Processing and remarks that it is discussed in chapter 6. You may want to review that chapter if you are not familiar with the concept. An interrupt is like a direct channel to the processor, which can be used for communications that take precedence over whatever the processor might be doing. The four types of interrupts in the chart above are four kinds of events that require the processor's immediate attention.

Starting on page 406, the text talks about an operating system being organized in layers, which is another way of saying that it is modular: it is made of lots of parts that interact with each other. The advantage of a layered system is that you can update various parts of it without having to update the whole thing each time an update is needed. The diagram in Figure 11.3 may be confusing to you, so let's discuss it for a moment:
The text backtracks on page 409, discussing the history of resource allocation. At one time, computers could only run one application at a time (in addition to the operating system), which gave that one application access to all of the system's resources: processor cycles, memory, and devices. Computers improved and so did operating systems, which led to several methods of sharing a computer's resources among several running programs. Sometimes a resource is allocated for a short time to each application, and sometimes a portion of a resource (like a section of RAM, or a part of the processor) is allocated to each of several running programs, which is an example of true multitasking. More often, resource allocation is more like sharing and taking turns for short time intervals. Memory, for example, can be allocated to a program (committed), then deallocated (released) when it is no longer needed, making it available for allocation to the next program that needs such a resource.

On page 411, the text describes real and virtual resources. In short, a system's real resources are those that are actually installed or connected to it: memory, processor abilities, storage capacity, input devices, and output devices are examples. Each of the system's resources may appear to a running application as one hundred percent available to that application, regardless of the number of applications actually competing for resources. That is an example of virtual resources. Every resource appears to be available to every application at all times, and it is up to the operating system to allocate and deallocate the real resources so that they are shared appropriately. The text talks about the positive aspects of this approach, but there are negative ones as well.
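The commit/release cycle described above can be sketched with a toy first-fit allocator. This is my own illustration (the class and method names are not from the text); it also shows how repeated allocation and release can leave enough total memory free, but not in one contiguous chunk:

```python
# Minimal sketch of commit/release over a small address space.
# A first-fit allocator: the names here are illustrative only.
class Allocator:
    def __init__(self, size):
        self.free = [(0, size)]          # list of (start, length) free blocks
        self.used = {}                   # start address -> allocated length

    def allocate(self, length):
        for i, (start, flen) in enumerate(self.free):
            if flen >= length:           # first free block big enough
                self.free[i] = (start + length, flen - length)
                if self.free[i][1] == 0:
                    del self.free[i]
                self.used[start] = length
                return start             # memory "committed" at this address
        return None                      # no contiguous block is large enough

    def release(self, start):
        length = self.used.pop(start)    # "released" for reallocation
        self.free.append((start, length))

alloc = Allocator(100)
a = alloc.allocate(40)   # committed at address 0
b = alloc.allocate(40)   # committed at address 40
alloc.release(a)         # first 40 bytes are free again, but fragmented
c = alloc.allocate(50)   # fails: 60 bytes free in total, largest chunk is 40
```

The failed final allocation is the fragmentation problem the chapter returns to later: the total free memory is sufficient, but no single contiguous range is.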
You may have turned on a computer at some time and been confronted with several windows proclaiming that there were updates you needed to install for several programs, all of which wanted attention before the system was actually communicating on your network. This is an artifact of granting applications virtual access to system resources. The concept works smoothly as long as no application wants a resource for very long, and as long as the resource is actually available.

In a Technology Focus sidebar on pages 412 and 413, the text discusses running virtual machines, which means using specialized software to make your computer act like two or more computers. The advantage to doing this is that you can run a different operating system in each virtual machine, and that if anything causes one of them to crash, you just restart it from the saved image that it boots from. On a large server, you might do this several times, allowing each virtual machine to act like a separate device that will not affect the others if anything goes wrong. The generic term used in the text for an OS feature that can run virtual machines is hypervisor.

Back to the concept of resource management, the text describes process control blocks (PCBs) on page 413. These are data structures that hold information about resources that are allocated to each process running in memory. We are told that a process is a unit of software that is being executed. When a process is loaded in memory and run, a PCB is created for it. The PCB is updated every time resources are allocated or deallocated to that process. The PCB also holds the current state of the process and the name of the process's owner. All current PCBs can be held in a process queue (process list) that is searched each time a user logs off, so processes owned by that user can be stopped. Processes can start (spawn) other processes, which creates parent/child relationships between them.
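The PCB and process list can be sketched as a simple data structure. The field names here are my own illustration, not the text's exact layout:

```python
from dataclasses import dataclass, field

# Hedged sketch of a process control block (PCB) and a process list.
@dataclass
class PCB:
    pid: int
    owner: str                                     # name of the process's owner
    state: str = "ready"                           # current state of the process
    resources: list = field(default_factory=list)  # resources allocated to it
    children: list = field(default_factory=list)   # PIDs of spawned (child) processes

process_list = [
    PCB(pid=1, owner="alice"),
    PCB(pid=2, owner="bob"),
    PCB(pid=3, owner="alice"),
]

# When a user logs off, the process list is searched so that
# processes owned by that user can be stopped, as the text describes.
def stop_user_processes(user):
    for pcb in process_list:
        if pcb.owner == user:
            pcb.state = "terminated"

stop_user_processes("alice")
```

A real OS keeps far more in each PCB (saved registers, program counter, open file handles), but the search-and-stop logic at logoff works on the same principle.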
Processes can also be subdivided into threads, which is done when those threads can be managed and run independently. When this is done, data on each thread is kept in a thread control block (TCB). A process that splits into multiple threads is called a multithreaded process. Figure 11.7 on page 415 shows how several threads might share time slices on a single processor, depending on the priority assigned to each thread. The text refers to this method as concurrent execution and as interleaved execution. The second phrase seems more apt, since no two threads are actually being executed at the same time in this case. On page 416, the text discusses thread states. The three shown are important:
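(The three states themselves do not appear in my notes above; in most operating systems texts they are ready, running, and blocked.) A minimal Python sketch of a multithreaded process: several threads share one process, and with CPython's global interpreter lock they genuinely run interleaved rather than simultaneously, matching the "interleaved execution" phrasing:

```python
import threading

# Sketch of a multithreaded process: three threads of one process
# taking turns (interleaved execution) on a shared list.
results = []
lock = threading.Lock()

def worker(name):
    with lock:                 # threads take turns updating shared data
        results.append(name)

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()                  # each thread becomes ready, then runs
for t in threads:
    t.join()                   # wait until each thread terminates
```

A thread waiting on the lock is a small example of the blocked state: it cannot run until the resource it needs is released.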
Scheduling of threads and processes is discussed on page 418. The author talks about three types:
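The three types themselves are not reproduced in my notes, but one widely used policy, round-robin time slicing, can be sketched; this is my own illustration and not necessarily one of the text's three:

```python
from collections import deque

# Hedged sketch of round-robin (preemptive, time-sliced) scheduling.
def round_robin(jobs, quantum):
    """jobs: dict of name -> remaining time units; returns the run order."""
    queue = deque(jobs.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                     # job receives one time slice
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))    # preempted; back of the queue
    return order

print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))
# → ['A', 'B', 'C', 'A', 'C', 'A']
```

Each job gets a fixed quantum of processor time and is then preempted, which is the "sharing and taking turns for short time intervals" idea from the resource allocation discussion.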
On page 427, the text discusses memory allocation, which is the last major topic in the chapter. It is defined as assigning specific memory addresses to specific programs (OS programs or applications) and data. The text notes that this activity applies to RAM and to storage devices as well.

The text begins by reminding us that we can imagine a computer's memory as a series of adjacent, numbered cells, and that the numbers are applied sequentially. Data elements, such as variables, usually occupy more than one byte. We can say that an object stored in a sequence of bytes has a most significant byte and a least significant byte. The least significant byte is usually stored at the lowest memory address allocated to the object; this is called the little endian method. The reverse, storing the most significant byte at the lowest available address, is called the big endian method.

Addressable memory is the total series of addresses that your operating system can use. It is defined by the highest address your operating system supports. The text says that your computer may contain less installed memory (physical memory) than this number, but it cannot contain more. It would be more accurate to say that the operating system cannot use more memory than its address space supports.

Once it has established these basics, the text tells us that the OS tracks the memory allocations it makes to processes and threads in tables, which is reminiscent of the method it uses to track the states of processes and threads. The text has given us the impression that memory is always allocated in sequential (contiguous) address ranges, but this is not always possible. Memory is allocated, used, and released for reallocation constantly, which leads to a state in which enough memory for a purpose is available, but only in separate chunks. Even when contiguous memory is available, it may not be available in the range that the program prefers, so the OS makes the allocation to the process, makes it look like it is located in low memory, but uses an offset value that refers to the real starting memory address. The text refers to this value as the process offset for that process. The text also discusses virtual memory management, which uses
storage device memory as swappable space for RAM. This is often done on devices running Windows. A process is divided into parts called pages. Space in RAM that holds a page (part of a process) is called a page frame. Pages that are not currently in use can be copied to space on a hard drive called swap space, swap files, or page files. This allows the OS to free up the page frame that the copied page was using, and to load a page that is needed next into that page frame.
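The paging and swap mechanism can be sketched as follows. This is a toy page table of my own (evicting the least-recently-used page), not the text's design:

```python
from collections import OrderedDict

# Hedged sketch of demand paging: a fixed number of page frames in
# "RAM", with the least-recently-used page evicted to a "swap file".
class PageTable:
    def __init__(self, num_frames):
        self.frames = OrderedDict()    # page -> contents, in LRU order
        self.swap = {}                 # pages copied out to swap space
        self.num_frames = num_frames

    def access(self, page):
        if page in self.frames:        # page already in a frame: a hit
            self.frames.move_to_end(page)
            return "hit"
        if len(self.frames) >= self.num_frames:
            victim, data = self.frames.popitem(last=False)
            self.swap[victim] = data   # copy the evicted page to swap,
                                       # freeing its page frame
        # load the needed page into the free frame (from swap if it
        # was paged out earlier, otherwise as a fresh page)
        self.frames[page] = self.swap.pop(page, f"data-{page}")
        return "fault"

pt = PageTable(num_frames=2)
print([pt.access(p) for p in [1, 2, 1, 3, 2]])
# → ['fault', 'fault', 'hit', 'fault', 'fault']
```

Each "fault" is a page fault: the needed page was not in a frame, so a frame had to be freed (by copying its page to swap) before the needed page could be loaded, exactly the cycle described above.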