CS 231 - Micro Electronics

Chapter 7: Input/Output Technology

Objectives:

This lesson discusses several input and output devices and technologies. Objectives important to this lesson:

  1. Text and image representation
  2. Video display devices
  3. Printers
  4. Manual input technologies
  5. Optical input devices
  6. Audio devices

Concepts:

This chapter begins by engaging the interest of the reader: it is about systems that take input from the user and others that communicate output to the user. It is about how we talk to and control a computer, and how a computer talks to us.

The text tells us that we have inherited characteristics of the publishing industry that have evolved over the last six hundred years. The printing industry is actually older, but the invention of the printing press coincides with the origin of some of the measurements and terms still used in printing to paper and to screens.

The text describes some features that are common to displays and to printing technologies. We can say that display screens and printer output can both be considered as areas in which images of various sorts are placed. The quality of the output of a display or a printer can be described by the size of the smallest element used to create that image, and by the number of such elements used. The text tells us to think of the output area as a field divided into rows and columns, which it calls a matrix. The intersection of a row and a column can be called a cell, as it is in a spreadsheet, but since we are creating pictures, we will call each cell in the output a picture element, or pixel. We would normally think of pixels being associated with displays, but the concept applies to printer output as well.

We see a formula on page 241 that could be a bit more useful. The text tells us that we can calculate the area of a printing or display output, then divide that answer by the area of a pixel in that output to get the number of pixels in the output. The math is cleanest when we measure the output area and the pixels in the same units.

number of pixels = (height × width) ÷ (area of one pixel)
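
As a quick sketch of that arithmetic (the 4-by-6-inch output area and the 1/300-inch pixel are invented example values, not from the text):

    # Number of pixels = (height x width) / (area of one pixel),
    # with everything measured in the same units (inches here).
    height_in, width_in = 6.0, 4.0     # assumed output area
    pixel_side_in = 1.0 / 300.0        # assumed pixel size

    pixel_area = pixel_side_in ** 2
    num_pixels = (height_in * width_in) / pixel_area
    print(int(num_pixels))             # 2160000 pixels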

The more pixels in the output, the smoother the output can look. Smaller pixels and more pixels increase the size of the file that is used to store an image, but make that image more detailed. Note the two examples of print output on page 241. The upper example has fewer pixels, causing the curved edges in its letters to be shown as jagged lines (jaggies).

The text mentions that displays (and printers, for that matter) can be described as having a particular number of pixels per linear, horizontal measure, often expressed as a number of dots per inch. Printed material can also be described in terms of points; a point is 1/72 of an inch in the system the text describes, though other point systems have existed historically.
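
A minimal sketch of how points and dots per inch relate (the 12-point size and the 600-dpi device are assumed example values):

    # A point is 1/72 of an inch, so a character's height in device
    # dots is (point size / 72) * the device's dots per inch.
    point_size = 12        # assumed character size
    dpi = 600              # assumed device resolution

    height_dots = (point_size / 72) * dpi
    print(height_dots)     # 100.0 dots tall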

The text moves on to depicting characters on display or on printed output. It describes alphabetic characters, but it could just as well mean characters in non-alphabetic languages like Mandarin. On page 242, we are shown selected characters in five different fonts, collections of characters that stylistically resemble each other. Note that a font is a particular style, not a size; size is measured separately, by a character's point size (its height) or its pitch (characters per horizontal inch in fixed-width printing).

The text moves on to discuss colors. Color on a screen is created by generating combinations of red, green, and blue light. Color in ink may be created with combinations of red, yellow, and blue ink, or combinations of cyan, magenta, yellow, and black ink. The difference comes from the way we see the color in question. Color that is created with light is generative color, what the text calls additive color. We see the combination of the actual frequencies that come from the light source.

The color we see that is associated with anything other than a light source is reflected color, which the text calls subtractive color. A beam of light strikes an object, and we see the light that is reflected from the object. We do not see the light that is absorbed by the object, which is the component of the light that is subtracted. If they taught you to make green paint in kindergarten by mixing blue and yellow paint, you were using subtractive color to do so. You may never have mixed light to make colors; web pages that simulate mixing light and mixing paint can give you a feel for the difference.
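
As a toy model of the difference, treating colors as (R, G, B) triples from 0 to 255 (a simplification for illustration, not real paint physics):

    # Additive (light): combining sources adds the channels together.
    def add_mix(c1, c2):
        return tuple(min(255, a + b) for a, b in zip(c1, c2))

    # Subtractive (ink): each pigment absorbs light; roughly, only what
    # both pigments reflect survives, so take the channel-wise minimum.
    def sub_mix(c1, c2):
        return tuple(min(a, b) for a, b in zip(c1, c2))

    red, green = (255, 0, 0), (0, 255, 0)
    print(add_mix(red, green))      # (255, 255, 0): red + green light = yellow

    cyan, yellow = (0, 255, 255), (255, 255, 0)
    print(sub_mix(cyan, yellow))    # (0, 255, 0): cyan + yellow ink = green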

The actual methods used to show us colors can be more complex.

  • The text tells us that a monitor can be monochrome (one color), which means it can display any pixel as that color or black. Ooh, cool.
  • The next most complicated kind of monitor is a grayscale monitor that can typically show a pixel as black, white, or any attainable shade of gray in between. In the system described by the text, there are 256 (numbered 0 through 255) total shades, including black and white, so there are 254 shades of gray possible on it. This is similar to older Macintosh systems. It was cool at the time, too.
  • Color monitors have gone through several evolutions. The text mentions, out of order, that early color-capable systems could only present a few colors at a time, using a palette, which was a table of colors. An example of this was the Color Graphics Adapter (CGA) system. You could only use the colors that were present on whichever palette was active at a given moment. You could switch from one palette to another when you repainted the screen, but you were still limited to the palettes supported by your adapter and monitor. In systems that could only present 8 or 16 separate colors at a time, dithering was sometimes used. Dithering places two or more colors in neighboring pixels to give the impression of the color that would result if those colors had actually been mixed in a single pixel.
  • The text talks about a common characteristic of using three values, each expressed in one byte, to control how much Red, Green, and Blue each pixel generates. This is the RGB system, which we are told is also called 24-bit color, because of the three bytes used. (See the sketch after this list.)
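
A minimal sketch of the three-bytes-per-pixel idea:

    # 24-bit color: one byte (0-255) each for Red, Green, and Blue.
    r, g, b = 255, 128, 0              # a shade of orange

    # Pack the three bytes into one 24-bit value...
    pixel = (r << 16) | (g << 8) | b
    print(hex(pixel))                  # 0xff8000

    # ...and unpack them again.
    print((pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF)   # 255 128 0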

The text moves ahead on page 245 to describe the size of a file that holds an image. We can calculate file size by first taking the height and width dimensions of the image (in pixels), and multiplying them to get the total number of pixels. Then we multiply that answer by the number of bits used to specify the color of each pixel, then divide that answer by eight to get the number of bytes in the file. This general kind of image file is called a bitmap, because every pixel in the image is specified (mapped) separately.

Example: In an 800 by 600 pixel image, we have 480,000 pixels. If we assume 24-bit color, we multiply by 24, then divide by 8 to get 1,440,000 bytes, or 1.44 MB.
This is the method used in the text. We could have divided 24 by 8 to get 3 bytes per pixel, then multiplied 480,000 by 3 to get 1.44 MB.
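
The same calculation as a sketch:

    # Bitmap size: total pixels * bits per pixel, divided by 8 bits per byte.
    width, height = 800, 600
    bits_per_pixel = 24                # 24-bit RGB color

    total_pixels = width * height      # 480,000 pixels
    size_bytes = total_pixels * bits_per_pixel // 8
    print(size_bytes)                  # 1440000 bytes, or 1.44 MB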

File sizes grow very large by this method, so graphic image file types were created that used compression to reduce file size. The text mentions the Graphics Interchange Format (GIF) and Joint Photographic Experts Group (JPEG) file types for still images, and Moving Picture Experts Group (MPEG) for moving images. These are commonly used standards, but the text warns us that they are lossy methods. (Strictly speaking, GIF's compression itself is lossless, but reducing an image to GIF's 256-color palette can discard color information.) We can also say that these methods are raster methods, which means that they specify what to do with each pixel in the image.

An alternative to raster images is to use vector images, which specify starting positions on a screen, then specify how far to draw a line or a curve from that position, as well as at what angle with reference to a standard coordinate system, like the Cartesian coordinates in the image on page 246. By specifying how to draw objects with vectors, we can specify the shape of letters or other graphics as well as their scale on any output device. Vector graphics generally result in smaller file sizes than raster graphics.
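
A minimal sketch of why vector images rescale cleanly: the shape is stored as coordinates, so changing its scale is just arithmetic on those coordinates (the triangle is an invented example):

    # A vector 'drawing': three endpoints instead of a grid of pixels.
    triangle = [(0, 0), (4, 0), (2, 3)]

    def scale(points, factor):
        # Multiply every coordinate; no pixels exist yet to turn jagged.
        return [(x * factor, y * factor) for x, y in points]

    print(scale(triangle, 10))         # [(0, 0), (40, 0), (20, 30)]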

In addition to the method used to display images on a screen, the rate at which images on a screen change is important. The text refers to this rate of change in two ways.

  • a refresh cycle takes place each time a video controller changes (or refreshes) an entire screen
  • a refresh rate is the number of times a refresh cycle takes place in a second, expressed in Hertz (Hz); see the sketch after this list
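
A quick sketch relating the two terms (60 Hz and 144 Hz are just common example rates):

    # The refresh rate in Hertz is cycles per second, so the time
    # available for one refresh cycle is its reciprocal.
    for hz in (60, 144):
        print(f"{hz} Hz -> {1000 / hz:.2f} ms per refresh cycle")
    # 60 Hz -> 16.67 ms per refresh cycle
    # 144 Hz -> 6.94 ms per refresh cycle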

The text mentions several technologies that have been used over time in video displays:

  • CRT - A Cathode Ray Tube was common until the late 1990s. It used a glass screen coated with phosphor pixels that were excited (activated) by scanning electron guns at the back of the tube.
    CRT diagram from Wikipedia
  • LCD - Liquid Crystal Displays work with a layer of liquid crystals between polarizing filters, and may be reflective or backlit. The image below shows a reflective system.
    Reflective LCD image from Wikipedia
  • Plasma - A plasma display works by emitting ultraviolet light that excites color phosphors, which combine to create an image. Plasma displays can be grainy and short-lived.
    Plasma screen schematic from Wikipedia
  • LED - Light Emitting Diode (LED) screens. The material in the text is a bit dated because this was a newer technology when the book was published. It is correct in that there is a distinction between LED-LCD screens, which are LCDs backlit by LEDs, and OLED screens, whose pixels are organic compounds that emit light directly; the text regarded OLED screens as rare. The more common version for monitors and for televisions is the LED-lit LCD.

On page 255, the text turns to printers. It briefly mentions impact printers, which operated like the typewriters that the early models resembled. Most later impact printers used dot matrix technology, in which some number of pins on a print head were pushed out from a rest position to form a pattern that was struck against an inked ribbon, leaving an image on the paper. The print head might make multiple passes across a page to produce acceptable output. This technology has been replaced by the other two technologies discussed: inkjet printers and laser printers. Both kinds of printers often use an Image Description Language (IDL) to compress the signals sent to them by applications.

An inkjet printer sprays ink onto a page to create characters and graphics. The spray may be done mechanically, or by heating the ink to cause it to burst onto the paper. Resolution for this kind of printer is determined by the smallest dot of ink it can produce, which is often 1/600th of an inch. This is the same as saying it can produce 600 dots per inch (600 dpi).

Laser printers are explained on pages 257 and 258. The explanation is correct, except that in step 4, the image placed on the rotating drum should be described as a mirror image of the desired final output. When the image is placed on the charged paper, it becomes a true image as sent by the application from which you are printing.

The text also mentions plotters, which are printers often used by draftsmen and architects to print large-scale drawings. Older models often used pens to draw lines, but current models are more similar to inkjet printers that simply use large-format paper.

We change topics to discuss input devices. The first group is manual input devices, by which the text means devices that typically take input from a user's hand.

  • keyboards - Electronic devices that may connect to a computer by a cable or a wireless connection. These devices generate scan codes that are interpreted by the computer as input to a data stream or as commands to be followed at once. Keyboards come in many styles, which have different feature sets, supporting specific applications or specific alphabets.
  • pointing devices - Mice, trackballs, and joysticks are all devices that allow a user to move an icon on a screen, to activate actions by clicking, and to select and manipulate data shown on the screen. Laptops often feature integrated touchpads which serve the same purpose.
  • input pads - this is a generic category for several device types that support signatures and direct interaction with data on a screen:
    • digitizing tablets - often used by artists, these devices allow a person to create drawings, to manipulate art, and to use applications that support handwriting
    • infrared detectors - an older technology used with early touchscreens that placed infrared emitters and sensors around the edges of a screen, sensing when and where a user broke the beams when the screen was touched
    • photosensors - a screen equipped with photosensors will react to light from a laser or a light pen (this is also an older technology)
    • pressure-sensitive pads - may use pressure-sensitive ribbons under the touch pad; note, though, that most touch screens you are likely to have seen or used sense touch through a capacitive layer over the display, and the Thin-film Transistor (TFT) technology the text mentions here is a display technology rather than a touch-sensing one.

Another category of devices is optical input devices.

  • mark sensors and bar-code scanners - As the text explains, mark sensors are used for special purposes, such as sensing the edge of a sheet of paper that has special coding, so that the paper is aligned properly and the proper ink is detected in the test mark before an expensive print process takes place. Bar code scanners are used at most Point of Sale (POS) locations. As the text says, they are also used for inventory control and package and mail routing.
  • optical scanners - An optical scanner is often used to scan a document (like a receipt or a vital record) in order to create an electronic version of it, typically as a PDF file. The text also mentions using an optical scanner with Optical Character Recognition (OCR) software to create electronic documents that are editable or searchable.
  • digital cameras - Digital cameras are the most common kind of camera now. They are found in most cell phones as well as being used as standalone devices. They typically capture stills, video, or both and they tend to save their captured images as compressed files.
  • portable data capture devices (scanners) - This should not be a separate category. The text describes devices whose only special characteristic is that they can be used wirelessly in environments where it is useful to have a portable scanner available instead of one that is wired to a computer or a cash register.

The text also discusses audio data capture and generation. We are told that sound is an analog signal or event. It is created by vibrations that pass through air (or another medium) to be sensed by an ear or an electronic sensor, such as a microphone. For most of its history, sound recording and reproduction was done on analog devices, but when we want to save sound files as digital information, we need to digitize that information. Digitizing begins by sampling a live or recorded sound. Sampling means measuring an analog wave at regular intervals and attempting to represent that wave as a series of digital values. The text explains that our sampling device needs to sense sounds between 20 Hz and 20 kHz, which is the range that most people can hear.

Sound waves change frequently, so we are advised that sampling must happen at least 40,000 times per second (twice the highest frequency we want to capture), and more frequent sampling is better. Sometimes we can make do with less fidelity (accuracy) in our sampled sound file, so the sampling rate can be lower, or the range of frequencies being sampled can be smaller.
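
A sketch of what those sampling choices cost in storage, using common CD-style values (44,100 samples per second, 16 bits, stereo) that are assumptions rather than numbers from the text:

    # Uncompressed audio size = rate * bytes per sample * channels * seconds.
    sample_rate = 44_100      # samples per second, above the 40,000 minimum
    bytes_per_sample = 2      # 16-bit samples
    channels = 2              # stereo
    seconds = 60

    size_bytes = sample_rate * bytes_per_sample * channels * seconds
    print(size_bytes)         # 10584000 bytes for one minute of sound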

The text tells us that sampling is done with an analog-to-digital converter (ADC) and that reproduction of sound from a digital file is done with a digital-to-analog converter (DAC).

Some specific applications of sound-based technology are discussed. You should be familiar with these:

  • speech recognition - hardware and software that allow a user to control a computer or to use programs with their voice in place of a keyboard and mouse
  • speech generation - hardware and software that allow a computer to respond to or alert a user with speech, which may be preprogrammed voice clips, output specific to an event, or the ability to read a file to the user
  • phonemes - generated or interpreted speech sounds are represented by sets of sounds, not by the letters of an alphabet; these sounds are called phonemes
  • typical sound card components - on page 268, we see a graphic that is a generic version of hardware components that you would expect to be included on most add-on sound cards, or on motherboards that support sound functions
    • microphone input
    • line input - different from a microphone input, this one expects to receive a signal processed through an amplifier
    • speaker output
    • line output
    • ADC and DAC processors
    • MIDI input and output - MIDI stands for Musical Instrument Digital Interface, which is a standard that was developed in the 1980s, and continues to be a standard interface for musicians and composers who also work with computers (and who doesn't?)