Mxsmanic said:
But all digital systems are simply analog systems operated in a
predefined way that declares anything below a certain threshold to be
noise. So the capacity of a digital system is always inferior to that
of an analog system with similar components and bandwidth.
No. Fundamentally, there is no such thing as an "analog"
system or a "digital" system - there is just electricity, which
operates according to the same principles whether it is
carrying information in either form. The capacity of a given
communications channel is limited by the bandwidth of the
channel and the level of noise within that channel, per Shannon;
but Shannon's theorems do NOT say what the optimum form
of encoding is, in the sense of whether it is "analog" or
"digital." My favorite example of a device which pushes channel
capacity limits about as far as they can go is the modem - and
do you call the signals that such a device produces "analog"
or "digital"? The answer is that they are purely digital - there
is absolutely nothing in the transmitted signal which can be
interpreted in an "analog" manner (i.e., the level of some
parameter in the signal is directly analogous to the information
being transmitted). The signal MUST be interpreted as symbols,
or in simplistic terms "numbers," and that alone makes it
"digital." The underlying electrical current itself is neither analog
nor digital from this perspective - it is simply electricity.
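To put rough numbers on that (illustrative figures of my own,
not anything out of the standards): a voice-grade phone line
gives you something like 3 kHz of usable bandwidth and perhaps
35-40 dB of SNR, which puts the Shannon limit somewhere around
38 kbit/s - and a V.34 modem actually pushes 33.6 kbit/s of
purely "digital" symbols through that same channel. A quick
back-of-the-envelope check, sketched in Python with those
assumed figures:

# Shannon capacity of a voice-grade phone line, using rough,
# illustrative bandwidth and SNR figures (not measured values).
import math

bandwidth_hz = 3100.0   # usable voice band, roughly 300-3400 Hz
snr_db = 37.0           # assumed signal-to-noise ratio
snr_linear = 10 ** (snr_db / 10)

capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"Shannon limit: about {capacity_bps / 1000:.1f} kbit/s")
print("V.34 modem payload: 33.6 kbit/s")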
Furthermore, the physical interface at either end of any system is
_always_ analog, so the system as a whole is never better than the
analog input and output components.
This is also an incorrect assumption; I can give several examples
of I/O devices which behave in a "digital" manner.
Yes. If the channel is analog, the limit of the channel's capacity is
equal to the limit imposed by Shannon. But if the channel is digital,
the limit on capacity is always below the theoretical limit, because you
always declare some portion of the capacity to be noise, whether it
actually is noise or not. This is the only way to achieve error-free
transmission, which is the advantage of digital.
No. Shannon's theorems set a limit which is independent of the
signal encoding, and in fact those real-world systems which come
the closest to actually meeting the Shannon limit (which can never
actually be achieved; you just get to come close) are all currently
digital. (The digital HDTV transmission standard is an excellent
example.) Conventional analog systems do not generally approach
the Shannon limit; the reason for this becomes apparent once the
common misconception that an analog signal is "infinitely"
variable is disposed of.
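The same arithmetic applies to the broadcast case. Using
illustrative figures - a 6 MHz channel and the roughly 15 dB
SNR usually quoted as the 8-VSB reception threshold - the
Shannon limit comes out around 30 Mbit/s, and the 19.39 Mbit/s
ATSC payload is a very respectable fraction of that:

# Illustrative comparison of a 6 MHz terrestrial TV channel's
# Shannon capacity with the ATSC 8-VSB payload rate; the SNR is
# an assumed threshold figure, not a measurement.
import math

bandwidth_hz = 6.0e6    # US broadcast channel width
snr_db = 15.0           # assumed threshold SNR for 8-VSB reception
snr_linear = 10 ** (snr_db / 10)

capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"Shannon limit: about {capacity_bps / 1e6:.1f} Mbit/s")
print("ATSC 8-VSB payload: 19.39 Mbit/s")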
In analog systems, there is no lower threshold for noise,
And THAT is the fancy version of the above misconception.
There is ALWAYS a noise floor in any real-world channel,
and there is always a limit, set by those various noise/error
floors, to the accuracy with which an analog signal can be
produced, transmitted, and interpreted. It is simply NOT
POSSIBLE, for example, for common analog video systems over
typical bandwidths to deliver much better than something around
10-12 bit accuracy (fortunately, that's about all that is ever
NEEDED, so
we're OK there). I defy you, for instance, to point to an example
in which analog information is transmitted at, say, 24-bit (per
component) accuracy over a 200 MHz bandwidth.
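To see why, just work out what thermal noise alone allows.
Assuming nothing more than a 1-volt signal from an ideal 75-ohm
source at room temperature - no amplifiers, no losses - the
noise floor over 200 MHz already limits you to roughly 15-16
bits, nowhere near 24, and real systems with real active
components do considerably worse. A sketch of the arithmetic,
using those assumed figures:

# How many bits of accuracy thermal noise alone permits over a
# 200 MHz bandwidth. The 75-ohm source, 1 V signal level, and
# room temperature are illustrative assumptions.
import math

k_boltzmann = 1.38e-23   # J/K
temperature_k = 300.0    # room temperature
resistance_ohm = 75.0    # typical video source impedance
bandwidth_hz = 200e6
signal_v = 1.0           # assumed full-scale signal level

noise_v = math.sqrt(4 * k_boltzmann * temperature_k
                    * resistance_ohm * bandwidth_hz)
snr_db = 20 * math.log10(signal_v / noise_v)
effective_bits = (snr_db - 1.76) / 6.02   # SNR-to-bits rule of thumb
required_db = 6.02 * 24 + 1.76            # SNR needed for 24-bit accuracy

print(f"Thermal noise: {noise_v * 1e6:.1f} uV rms, SNR: {snr_db:.1f} dB")
print(f"Equivalent accuracy: about {effective_bits:.1f} bits")
print(f"SNR needed for 24 bits: {required_db:.1f} dB")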
but you can
use the full capacity of the channel, in theory, and in practice you're
limited only by the quality of your components.
But even in theory, those components cannot be "perfect." A
transistor or resistor, for example, MUST produce a certain
level of noise at a minimum. You can't escape this; it's
built into the fundamental laws of physics that govern these
devices. The short form of this is There Is No Such Thing As
A Noise Free Channel EVER - not even in theory.
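The classic case is Johnson-Nyquist noise: any resistance at
any temperature above absolute zero generates noise all by
itself. For instance (illustrative values), an ordinary 1 kohm
resistor across a 20 kHz audio bandwidth at room temperature:

# Johnson-Nyquist noise of a single, otherwise perfect resistor.
# Resistance, temperature, and bandwidth are illustrative values.
import math

k_boltzmann = 1.38e-23   # J/K
temperature_k = 300.0    # room temperature
resistance_ohm = 1000.0  # an ordinary 1 kohm resistor
bandwidth_hz = 20000.0   # audio bandwidth

noise_v = math.sqrt(4 * k_boltzmann * temperature_k
                    * resistance_ohm * bandwidth_hz)
print(f"Open-circuit noise voltage: {noise_v * 1e6:.2f} uV rms")

That works out to a bit over half a microvolt - small, but
never zero, no matter how good the part is.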
Not necessarily. Ultimately, audio systems (and imaging systems) depend
on analog devices for input and output. So no system can ever be better
than the best analog system.
That does not logically follow, for a number of reasons. What
DOES logically follow is that no system can ever be better
than the performance of the input and output devices (which
we are assuming to be common to all such systems), but this
says nothing about the relative merits of the intermediate components.
If it is possible for the best "digital" intermediate to better the
best "analog" intermediate, then the digital SYSTEM will be
better overall, unless BOTH intermediates were already so good
that the limiting factors were the common I/O devices. This is
not the case in this situation. (For one thing, it's not really
a case of the chain being only as good as its weakest link -
noise is ADDITIVE, just to note one problem with that model.)
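A toy calculation shows the problem (the per-stage noise
numbers are invented purely for illustration): uncorrelated
noise contributions from cascaded stages add as power, so the
chain as a whole comes out noisier than its single worst stage.

# Uncorrelated noise from cascaded stages adds as power (rms
# values add in quadrature), so the chain is worse than its
# single worst stage. Per-stage figures are made up.
import math

stage_noise_uv = [2.0, 5.0, 3.0]   # rms noise added by each stage, uV
signal_uv = 1000.0                 # assumed 1 mV signal

total_noise_uv = math.sqrt(sum(n ** 2 for n in stage_noise_uv))
worst_stage_uv = max(stage_noise_uv)

print(f"Worst single stage: {worst_stage_uv:.1f} uV of added noise")
print(f"Whole chain:        {total_noise_uv:.1f} uV of added noise")
print(f"Chain SNR: {20 * math.log10(signal_uv / total_noise_uv):.1f} dB")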
Just look at flat panels: they provide defect-free images at a fixed
resolution, but they don't provide any higher resolutions. CRTs have no
fixed upper limit on resolution, but they never provide defect-free
images.
As has already been shown, CRTs most definitely have fixed
upper limits on resolution.
Analog reduces to using the entire channel capacity to carry
information, and tolerating the losses if the channel is not noise-free.
Digital reduces to sacrificing part of channel capacity in order to
guarantee lossless transmission at some speed that is below the maximum
channel capacity.
No. Here, you are actually comparing two particular versions
of "analog" and "digital," not fundamental characteristics of these
encodings per se. And the most common examples of "analog"
signalling do NOT, in fact, use the full channel capacity. (Even
if they did, a "digital" signalling method can also be devised which
does this - again, see the example of the modern modem or
HDTV transmission.)
With digital, you sacrifice capacity in order to
eliminate errors. With analog, you tolerate errors in order to gain
capacity.
"Capacity" is only meaningful if stated as the amount of information
which can be carried by a given channel WITHOUT ERROR.
Any error is noise, and represents a loss of capacity. What I
THINK you mean to say here is probably something like "quality,"
but in a very subjective sense.
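To make that concrete with the simplest textbook case: a binary
channel which flips each transmitted bit with probability p has
an error-free capacity of 1 - H(p) bits per use, where H is the
binary entropy function, so any nonzero error rate is directly
a loss of capacity. A short sketch:

# Capacity of a binary symmetric channel: C = 1 - H(p). Any
# nonzero bit-flip probability p reduces the error-free capacity
# below 1 bit per channel use.
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.0, 0.01, 0.05, 0.11):
    print(f"flip probability {p:.2f}: "
          f"capacity {1 - binary_entropy(p):.3f} bits per use")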
I used to go with the "analogy" explanation for digital vs. analog, but
since everything in reality can be seen as _either_ a digital or analog
representation,
NO. Let me be more emphatic: HELL, NO. Reality is reality;
it is neither "digital" nor "analog." Those words do NOT equate
to "discrete" or "sampled" or "linear" or "continuous" or any other
such nonsense that often gets associated with them. They have
quite well-defined and useful meanings all on their own, and they
have to do with how information is encoded. Nothing more and
nothing less. The world is the world; "analog" and "digital" refer to
different ways in which we can communicate information ABOUT
the world (and they are not "opposites," any more than saying that
"red is the opposite of blue" is a meaningul statement).
Bob M.