Mxsmanic said:
I'm not sure what you mean by "24-bit accuracy." How many bits per
second?
"24-bit accuracy" is not dependent on the data rate. It simply
means that your system can accurately produce, communicate,
and interpret levels, repeatedly and without error, to within
half of the implied LSB value - in this case, whatever the peak
expected signal would be, divided by (2^24-1). For instance,
in a typical analog video system (with 0.7 Vp-p signal swings),
"24-bit accuracy" would mean that you are confident you can
determine the signal amplitude to within about 21 nV - and
yes, that's NANOvolts. But this is simply not possible in any
real-world video system, since the noise in any such system
over the specified bandwidth is significantly higher than this
value. (The thermal noise across a 75-ohm termination
resistor at room temperature alone is about 25 mV RMS.)
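Just to show the arithmetic behind those figures, here is a quick
back-of-the-envelope sketch in Python; the 6 MHz and 500 MHz
bandwidths are illustrative assumptions on my part, not numbers
taken from any particular interface spec:

    # Half-LSB voltage implied by "24-bit accuracy" on a 0.7 Vp-p video
    # signal, and Johnson noise of a 75-ohm termination: v = sqrt(4kTRB).
    from math import sqrt

    VPP, BITS = 0.7, 24                      # signal swing (V), resolution
    half_lsb = (VPP / (2**BITS - 1)) / 2
    print(f"half LSB: {half_lsb * 1e9:.1f} nV")      # ~20.9 nV

    k, T, R = 1.38e-23, 300.0, 75.0          # Boltzmann, kelvin, ohms
    for B in (6e6, 500e6):                   # assumed bandwidths, Hz
        vn = sqrt(4 * k * T * R * B)
        print(f"{B / 1e6:.0f} MHz: {vn * 1e6:.1f} uV RMS "
              f"({vn / half_lsb:.0f}x the half-LSB)")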
You can always maintain at least the accuracy of the equivalent digital
system.
Sure; but that's just it - you can always build an EQUIVALENT
digital system. You can't do better than the noise limit in
either case, and the noise limit sets the bound on accuracy -
and so information capacity - no matter how you encode the
information, whether it's in "analog" or "digital" form.
If you can push 200 Mbps through a digital channel, you can also get at
least 200 Mbps through the same channel with analog encoding (and
typically more). However, the analog equipment may cost more.
Sorry - not "typically more". You're still comparing
specific examples of "analog" and "digital"; "digital" does NOT
imply that straight binary coding, with the bits transmitted in
serial fashion on each physical channel, is your only option.
For instance, a standard U.S. TV channel is 6 MHz wide -
and yet, under the U.S. digital broadcast standard, digital
TV transmissions typically operate at an average data rate
of slightly below 20 Mbps. How do you think that happens?
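(If it helps to see the numbers: the ATSC system uses 8-VSB
modulation - eight signal levels, so 3 bits per symbol, at roughly
10.76 Msymbols/s in that 6 MHz channel - with part of the gross
rate given over to forward error correction. A rough sketch of the
rate budget, using those nominal figures:

    # Rough ATSC 8-VSB rate budget (nominal figures; segment/field
    # sync overhead is ignored, so the real payload rate is a bit lower).
    sym_rate = 10.76e6      # symbols/second in a 6 MHz channel
    bits_sym = 3            # 8 levels -> 3 bits per symbol
    trellis  = 2 / 3        # rate-2/3 trellis code
    reed_sol = 187 / 207    # RS(207,187) byte-level FEC
    gross = sym_rate * bits_sym
    net   = gross * trellis * reed_sol
    print(f"gross {gross/1e6:.1f} Mbps, net payload {net/1e6:.1f} Mbps")

So you get well over "one bit per hertz," simply because the coding
puts more than one bit on each transmitted symbol.)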
(You should also not assume that straight binary, serial
transmission is all we will ever see in display interfaces; there
are published standards which employ more efficient coding
methods.)
Information theory proves it.
Information theory proves exactly the opposite; it shows
that the maximum capacity of a given channel is fixed, and
that that limit is exactly the same for all possible general
forms of coding. Specific types within those general forms
may be better or worse than others, but the maximum limit
remains unchanged.
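The relevant result is the Shannon-Hartley theorem: the capacity of
a channel is C = B * log2(1 + S/N), which depends only on bandwidth
and signal-to-noise ratio, not on whether the coding is "analog" or
"digital". A quick illustrative calculation, with the bandwidth and
SNR values picked purely as examples:

    # Shannon-Hartley capacity: C = B * log2(1 + S/N).
    # The limit depends only on bandwidth and SNR, not on the coding.
    from math import log2

    B = 6e6                        # channel bandwidth, Hz (example)
    for snr_db in (15, 30, 45):    # example signal-to-noise ratios
        snr = 10 ** (snr_db / 10)
        capacity = B * log2(1 + snr)
        print(f"SNR {snr_db} dB -> capacity {capacity/1e6:.1f} Mbps")

Raise the SNR or the bandwidth and the capacity goes up; change the
coding scheme and it does not.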
The basic limit is the fact that you declare anything below a certain
level to be noise.
Which is equally true in analog systems. No analog system
can provide "infinite" accuracy, or anything remotely approaching
it, and for the same fundamental reasons that limit digital. You
are also assuming, here again, that a digital system cannot be made
noise-adaptive, which is incorrect even in current practice.
What limits resolution in a monochrome CRT? Scanning electron
microscopes prove that electron beams can be pretty finely focused.
Yes, but there the beam does not have to pack enough punch
to light up a phosphor. There is an inescapable tradeoff between
beam current and spot size, and there is also a point below which
practical phosphors simply cannot be driven to usable brightness.
This, along with the unavoidable degradation in spot size and shape
which results from the deflection system (another little detail that
SEMs don't have to worry about in nearly the same way), results in
some very well-defined limits on the resolution of any practical
CRT.
Bob M.