This chapter begins by engaging the interest of the reader: it is about
systems that take input from the user and others that communicate output
to the user. It is about how we talk to and control a computer, and how
a computer talks to us.
The text tells us that we have inherited characteristics of the publishing industry that have evolved over the last six hundred years. The printing industry is actually older than that, but the invention of the printing press gave rise to some of the measurements and terms still used in printing to paper and screens.
The text describes some features that are common to displays and to printing technologies. We can say that display screens and printer output can both be considered as areas in which images of various sorts are placed. The quality of the output of a display or a printer can be described by the size of the smallest element used to create that image, and by the number of such elements used. The text tells us to think of the output area as a field divided into rows and columns, which it calls a matrix. The intersection of a row and a column can be called a cell, as it is in a spreadsheet, but since we are creating pictures, we will call each cell in the output a picture element, or pixel. We would normally think of pixels being associated with displays, but the concept applies to printer output as well.
We see a formula on page 241 that could be a bit more useful. The text tells us that we can calculate the area of a printing or display output, then divide that answer by the area of a single pixel to get the number of pixels in the output. The math is cleanest when we measure the output area and the pixels in the same units: height times width, divided by the area of one pixel, equals the number of pixels in the output.
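As a quick sanity check on that formula, here is a minimal sketch in Python. The function name and the example dimensions are my own choices for illustration, not from the text:

```python
def pixel_count(height, width, pixel_size):
    """Number of pixels in an output area.

    height, width: dimensions of the output, in inches.
    pixel_size: side length of one (square) pixel, in inches.
    All three measurements must use the same units.
    """
    output_area = height * width          # height times width
    pixel_area = pixel_size * pixel_size  # area of one pixel
    return output_area / pixel_area

# Example: an 8 x 10 inch print whose pixels are 1/300 inch across.
print(pixel_count(8, 10, 1 / 300))  # 7,200,000 pixels
```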
The more pixels in the output, the smoother the output can look. Smaller and more numerous pixels increase the size of the file used to store an image, but make that image more detailed. Note the two examples of print output on page 241. The upper example has fewer pixels, causing the curved edges in its letters to be shown as jagged lines (jaggies).
The text mentions that displays (and printers, for that matter) can be described as having a particular number of pixels per linear, horizontal measure. This is often expressed as a number of dots per inch. Printed material can also be described in terms of points. In the system described in the text, a point is 1/72 of an inch; other point systems have existed historically.
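A short sketch of those unit relationships, assuming the 1/72-inch point the text describes (the function names are my own):

```python
POINTS_PER_INCH = 72  # the point system described in the text

def points_to_inches(points):
    """Convert a point measurement to inches."""
    return points / POINTS_PER_INCH

def dots_for_length(inches, dpi):
    """How many printer dots span a given length at a given resolution."""
    return inches * dpi

# A 12-point character is 1/6 inch tall; at 600 dpi that is 100 dots.
height_in = points_to_inches(12)
print(dots_for_length(height_in, 600))
```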
The text moves on to depicting characters on display or on printed output. It is describing alphabetic characters, but it could just as well mean characters in non-alphabetic languages like Mandarin. On page 242, we are shown selected characters in five different fonts, collections of characters that stylistically resemble each other. Note that a font is a particular style, not a size. Size is determined by a character's pitch, or point size.
The text moves on to discuss colors. Color on a screen is created by generating combinations of red, green, and blue light. Color in ink may be created with combinations of red, yellow, and blue ink, or combinations of cyan, magenta, yellow, and black ink. The difference comes from the way we see the color in question. Color that is created with light is generative color, what the text calls additive color. We see the combination of the actual frequencies that come from the light source.
The color we see that is associated with anything other than a light source is reflected color, which the text calls subtractive color. A beam of light strikes an object, and we see the light that is reflected from the object. We do not see the light that is absorbed by the object, which is the component of the light that is subtracted. If they taught you to make green paint in kindergarten by mixing blue and yellow paint, you were using subtractive color to do so. You may have never mixed light to make colors; web pages that simulate mixing light and mixing paint can give you a feel for the difference.
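A rough sketch of the two mixing models in Python. Treating each color as an RGB triple (0 to 255 per channel) and using per-channel max/min is a simplifying assumption for illustration; real pigments behave less tidily than this:

```python
def mix_additive(c1, c2):
    """Combine two lights: channel intensities add, clamped at full."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

def mix_subtractive(c1, c2):
    """Combine two pigments: only light that both pigments reflect
    survives (a crude per-channel minimum model of absorption)."""
    return tuple(min(a, b) for a, b in zip(c1, c2))

RED, GREEN = (255, 0, 0), (0, 255, 0)
YELLOW, CYAN = (255, 255, 0), (0, 255, 255)

print(mix_additive(RED, GREEN))       # (255, 255, 0) -- yellow light
print(mix_subtractive(YELLOW, CYAN))  # (0, 255, 0)   -- green pigment
```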
The actual methods used to show us colors can be more complex.
The text moves ahead on page 245 to describe the size of a file that holds an image. We can calculate file size by first taking the height and width dimensions of the image (in pixels), and multiplying them to get the total number of pixels. Then we multiply that answer by the number of bits used to specify the color of each pixel, then divide that answer by eight to get the number of bytes in the file. This general kind of image file is called a bitmap, because every pixel in the image is specified (mapped) separately.
Example: In an 800 by 600 pixel image, we have 480,000 pixels. If we assume 24-bit color, we multiply by 24, then divide by 8 to get 1,440,000 bytes, or 1.44 MB.
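The example above can be sketched as:

```python
def bitmap_size_bytes(width_px, height_px, bits_per_pixel):
    """Uncompressed bitmap size: total pixels times bits per pixel,
    divided by eight to convert bits to bytes."""
    total_pixels = width_px * height_px
    total_bits = total_pixels * bits_per_pixel
    return total_bits // 8

size = bitmap_size_bytes(800, 600, 24)
print(size)              # 1440000 bytes
print(size / 1_000_000)  # 1.44 MB
```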
File sizes grow very large by this method, so graphic image file types were created that used compression to reduce file size. The text mentions Graphic Interchange Format (GIF) and Joint Photographic Experts Group (JPEG) file types for still images, and Moving Picture Experts Group (MPEG) for moving images. These are commonly used standards, but the text warns us that they are all lossy methods. We can also say that these methods are raster methods, which also means that they specify what to do with each pixel in the image.
An alternative to raster images is to use vector images, which specify starting positions on a screen, then specify how far to draw a line or a curve from that position, as well as at what angle with reference to a standard coordinate system, like the Cartesian coordinates in the image on page 246. By specifying how to draw objects with vectors, we can specify the shape of letters or other graphics as well as their scale on any output device. Vector graphics generally result in smaller file sizes than raster graphics.
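A minimal sketch of the vector idea: a line is stored as a start point, an angle, and a length, and that one description can be rendered at any scale. The function and parameter names here are my own, not from the text:

```python
import math

def endpoint(x, y, angle_degrees, length, scale=1.0):
    """Compute where a vector-described line ends.

    The stored description (start point, angle, length) never changes;
    only the scale factor does, so the shape stays smooth at any size.
    """
    rad = math.radians(angle_degrees)
    return (x + math.cos(rad) * length * scale,
            y + math.sin(rad) * length * scale)

# The same stored line, drawn at two different scales.
print(endpoint(0, 0, 0, 10))           # (10.0, 0.0)
print(endpoint(0, 0, 0, 10, scale=3))  # (30.0, 0.0)
```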
In addition to the method used to display images on a screen, the rate at which images on a screen change is important. The text refers to this rate of change in two ways.
The text mentions several technologies that have been used over time in video displays:
On page 255, the text turns to printers. It briefly mentions impact printers, which operated like the typewriters that the early models resembled. Most later impact printers used a dot matrix technology, in which some number of pins on a print head were raised from a rest position to form a pattern that was struck against an inked ribbon, leaving an image on paper as its output. The print head might make multiple passes across a page to make an acceptable output. This technology has been replaced by the other two technologies discussed: inkjet printers and laser printers. Both kinds of printers often use an Image Description Language (IDL) to compress the signals sent to them by applications.
An inkjet printer sprays ink onto a page to create characters and graphics. The spray may be done mechanically, or by heating the ink to cause it to burst onto the paper. Resolution for this kind of printer is determined by the smallest dot of ink it can produce, which is often 1/600th of an inch. This is the same as saying it can produce 600 dots per inch (600 dpi).
Laser printers are explained on pages 257 and 258. The explanation is correct, except that in step 4, the image placed on the rotating drum should be described as a mirror image of the desired final output. When the image is placed on the charged paper, it becomes a true image as sent by the application from which you are printing.
The text also mentions plotters, which are printers often used by draftsmen and architects to print large-scale drawings. Older models often used pens to draw lines, but current models are often more similar to inkjet printers that simply use large-format paper.
We change topics to discuss input devices. The first group is manual
input devices, by which the text means devices that typically take
input from a user's hand.
Another category of devices is optical input devices.
The text also discusses audio data capture
and generation. We are told that sound is an analog
signal or event. It is created by vibrations that pass through air (or
another medium) to be sensed by an ear or an electronic sensor, such as
a microphone. For most of its history, sound recording and regeneration
was done on analog devices, but when we want to save sound files as digital
information, we need to digitize
that information. Digitizing begins by sampling
a live or recorded sound. Sampling means to measure an analog wave and
to attempt to represent that wave as a series of digital values. The text
explains that our sampling device needs to sense sounds between 20 Hz
and 20 kHz, which is the range that most people can hear.
Sound waves change frequently, so we are advised that sampling must happen at least 40,000 times per second, and more frequent sampling is better. Sometimes, we can do with less fidelity (accuracy) in our sampled sound file, so the sampling rate can be lower, or the range of frequencies being sampled may be smaller.
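A minimal sketch of sampling in Python: measure a sine wave at regular intervals to get the series of digital values the text describes. The 440 Hz tone and the 44,100-per-second rate are my own example choices (44,100 satisfies the at-least-40,000 advice above):

```python
import math

def sample_tone(freq_hz, sample_rate, duration_s):
    """Measure an analog sine wave at regular intervals, producing
    a list of digital values approximating the wave."""
    n_samples = int(sample_rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n_samples)]

# One hundredth of a second of a 440 Hz tone at 44,100 samples/second.
samples = sample_tone(440, 44_100, 0.01)
print(len(samples))  # 441 digital values
```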
The text tells us that sampling is done with an analog-to-digital converter (ADC) and that reproduction of sound from a digital file is done with a digital-to-analog converter (DAC).
Some specific applications of sound-based technology are discussed. You should be familiar with these: