Mxsmanic said:
Then one must wonder why the original video standards such as CGA were
digital, but were replaced by more "advanced" standards that were
analog, such as VGA.
Simplicity. The graphics systems used in the early days
of the PC had only a very limited "bit depth" (number of
shades per color), and the easiest interface to implement for
such a simple system was one or two bits per color, run directly
from the graphics output to the input stage of the CRT - i.e.,
just run the bits out through a TTL buffer and be done with
it. With a greater number of bits/color, putting a D/A converter
at the output, transmitting the video information in "analog" form
(which is what the CRT's going to want at the cathode, anyway),
and shipping it up on three coaxes becomes a more cost-effective
solution than trying to deliver the information as a
parallel-digital signal (which is basically what the "CGA" style
of interface was). If it were simply a question of which method were more
"advanced," it would also be legitimate to ask why television
is now moving from "analog" to "digital" transmission, and
computer interfaces are beginning to do the same.
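To put rough numbers on the wiring side of that trade-off, here is a
trivial Python tally. It assumes a generic one-wire-per-bit-per-color
scheme, not the actual pinout of any particular standard, and ignores
sync and ground lines:

# Generic illustration: one line per bit per color for a parallel
# "digital" interface, versus three coaxes for the analog route.
for bits_per_color in (1, 2, 4, 6, 8):
    parallel_lines = 3 * bits_per_color   # R, G, B, one line per bit
    print(f"{bits_per_color} bits/color: {parallel_lines:2d} "
          f"parallel data lines vs. 3 analog coaxes")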
The answer to just about ANY question in engineering which
begins with "Why did they do THIS...?" is generally "because
it was the most cost-effective means of achieving the desired
level of performance." The first computer graphics system I
myself was responsible for used what I suppose would be
called a "digital" output in this discussion, for this very reason.
It was the first "high-resolution" display system in our product
line, way, way back around 1982 - all of 1024 x 768 pixels
at 60 Hz, and we didn't have the option of a D/A to make
"analog" video for us simply because we couldn't put enough
memory into the thing for more than 2 bits per pixel. So we used
a couple of open-collector outputs at the computer end of the
cable, and used the signal coming from those to switch a few
resistors in the monitor (which was also custom) and get a cheap
"D/A" effect - at all of four levels of gray! (Monochrome -
we weren't working on the color version, which would follow
a bit later.)
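For what it's worth, the arithmetic behind that resistor trick is just
a binary-weighted divider. Below is a minimal Python sketch of the
idea, using made-up component values (a 470/940 ohm pair summing into
an assumed 75 ohm video load) rather than anything from the actual
1982 design:

V_HIGH = 5.0     # assumed TTL "high" level, volts
R_MSB  = 470.0   # assumed resistor on the most-significant bit, ohms
R_LSB  = 940.0   # assumed resistor on the least-significant bit, ohms
R_LOAD = 75.0    # assumed video load in the monitor, ohms

def gray_level(msb, lsb):
    # Voltage at the summing node (Millman's theorem): each bit drives
    # 0 V or V_HIGH through its resistor into the shared load.
    g_msb, g_lsb, g_load = 1 / R_MSB, 1 / R_LSB, 1 / R_LOAD
    injected = (msb * V_HIGH) * g_msb + (lsb * V_HIGH) * g_lsb
    return injected / (g_msb + g_lsb + g_load)

for code in range(4):
    msb, lsb = (code >> 1) & 1, code & 1
    print(f"code {code:02b}: {gray_level(msb, lsb):.3f} V")

# Four roughly evenly spaced levels between 0 V and about 1 V -
# four shades of gray from two "digital" lines and a few resistors.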
Mxsmanic said:
Digital is less flexible than analog--conceivably a VGA cable can
carry just about any resolution or color depth. Digital provides
fewer errors in exchange for reduced bandwidth and flexibility.
This is incorrect. For one thing, it assumes that simple binary
encoding is the only thing that could possibly be considered under
the heading "digital" (which is wrong, and in fact several examples
of other fully-digital systems exist even in the area of video
interfaces; for instance, the 8-VSB or COFDM encodings
used in broadcast HDTV). The other point where the above
statement is incorrect is the notion that the VGA analog interface
could carry "just about any resolution or color depth." The
fundamental limitations of the VGA specification, including
an unavoidable noise floor (given the 75 ohm system impedance
requirement) and the overall bandwidth, constrain the data
capacity of the interface, as is true for ANY practical interface
definition. Over short distances, and at lower video frequencies,
the VGA system is undoubtedly good for better than 8 bits
per color; it is very unlikely in any case that it could exceed,
say, an effective 12 bits/color or so; I haven't run the math to
figure out just what the limit is, though, so I wouldn't want anyone
to consider that the final word.
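As a back-of-the-envelope illustration only - a Python sketch with
made-up noise figures, not measured VGA numbers - the effective bit
depth of a 0.7 V p-p analog channel falls out of the signal-to-noise
ratio roughly like this:

import math

V_SWING = 0.7    # nominal analog video swing, volts peak-to-peak

def effective_bits(noise_rms_volts):
    # Rough estimate: log2 of (signal range / RMS noise).
    return math.log2(V_SWING / noise_rms_volts)

for noise_uv in (100.0, 500.0, 2500.0):  # assumed RMS noise, microvolts
    bits = effective_bits(noise_uv * 1e-6)
    print(f"noise {noise_uv:6.0f} uV rms -> ~{bits:.1f} effective bits/color")

# With these (invented) noise levels the answer lands in the same
# 8-to-12-bit neighborhood mentioned above; the real limit depends on
# the actual noise floor and bandwidth in play.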
But the bottom line is that the information capacity of any real-world
channel is limited, in terms of effective bits per second. (Note
that stating this limit in "bits per second" actually says nothing
about whether the channel in question is carrying "analog" or
"digital" transmissions; this is bit/sec. in the information theory
usage of the term. In practice, this limit (called the Shannon
limit) is generally more readily achieved in "digital" systems
than "analog," due to the simple fact that most analog systems
give more margin to the MSBs than the LSBs of the data.
Whatever data capacity is achieved may be used either for
greater bits/component (bits/symbol, in the generic case) or
more pixels/second (symbols/second, generically), but the limit
still remains.
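The formula behind that limit is the Shannon capacity,
C = B * log2(1 + S/N). A quick Python sketch - with arbitrary example
numbers, not figures for any real interface - shows both the ceiling
and the bits-per-symbol vs. symbols-per-second trade:

import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    # Channel capacity in bits per second: C = B * log2(1 + S/N).
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 400e6        # assumed channel bandwidth, Hz (example value only)
snr_db = 48.0    # assumed signal-to-noise ratio, dB (example value only)
C = shannon_capacity_bps(B, 10 ** (snr_db / 10))
print(f"capacity ~= {C / 1e9:.2f} Gbit/s")

# The same capacity budget can be spent on deeper symbols or faster ones:
for bits_per_symbol in (4, 8, 12):
    print(f"{bits_per_symbol:2d} bits/symbol -> at most "
          f"{C / bits_per_symbol / 1e6:.0f} Msymbols/s")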
The actual difference between "analog" and
"digital" systems here is not one of "errors" vs."bandwidth,"
but rather where those errors occur; as noted, the typical
analog system preserves the most-significant-bit data vs.
least-significant (e.g., you can still make out the picture, even
when the noise level makes it pretty "snowy") - or in other words,
analog "degrades gracefully." Most simple digital encodings
leave all bits equally vulnerable to noise, which makes for a
"cliff effect" - digital transmissions tend to be "perfect" up to a
given noise level, at which point everything is lost at once.
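A toy Python simulation makes the difference visible: send an 8-bit
pixel value either as an analog level plus Gaussian noise, or as eight
equally vulnerable bits, each flipped with some probability. (The
pairing of noise levels with flip probabilities below is an arbitrary
assumption made only to put the two on a common scale; it is not a
model of any real link.)

import random
import statistics

def analog_error(value, noise_sigma):
    # Received analog level, clipped to 0..255; the error grows
    # smoothly with the noise, eating the LSBs first.
    rx = min(255, max(0, value + random.gauss(0, noise_sigma)))
    return abs(rx - value)

def digital_error(value, flip_prob):
    # Every bit is equally exposed; one flipped MSB costs 128 at once.
    rx = 0
    for bit in range(8):
        b = (value >> bit) & 1
        if random.random() < flip_prob:
            b ^= 1
        rx |= b << bit
    return abs(rx - value)

random.seed(1)
trials = 5000
for sigma, p in [(2, 1e-6), (8, 1e-4), (32, 0.2)]:  # arbitrary pairings
    a = statistics.mean(analog_error(random.randrange(256), sigma)
                        for _ in range(trials))
    d = statistics.mean(digital_error(random.randrange(256), p)
                        for _ in range(trials))
    print(f"sigma={sigma:2d}, flip p={p:.0e}: "
          f"analog err ~{a:5.1f}, digital err ~{d:5.1f}")

# The analog error climbs gradually (the picture just gets snowier),
# while the digital error sits near zero until the flip rate matters,
# then jumps all at once - the cliff.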
Bob M.