Barry said:
In that case, there's nothing you can easily do; the DVD was just
recorded low. [Not unusual for CDs or DVDs. I have the CD of "Back to
Titanic", a second "soundtrack" album from the movie Titanic, and on some
songs the PEAK level is less than 20%; the peak level of the entire CD
is only about 60%.]
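For a sense of scale, those percent-of-full-scale figures convert to
decibels below digital full scale (dBFS) with the standard 20·log10
voltage-ratio formula. A quick sketch (the function name is mine):

```python
import math

def percent_to_dbfs(percent):
    """Convert a peak level given as a percentage of digital full
    scale to dBFS (decibels relative to full scale)."""
    return 20.0 * math.log10(percent / 100.0)

# The figures quoted above: a 20% peak on one song, 60% for the whole CD.
print(round(percent_to_dbfs(20), 1))   # about -14.0 dBFS
print(round(percent_to_dbfs(60), 1))   # about -4.4 dBFS
```

So a 20% peak sits roughly 14 dB below full scale, which is quiet enough
to notice immediately.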
If you were fanatical about it and wanted to take drastic measures, you
could "rip" the DVD, then reprocess the audio track to "normalize" it,
bringing the peak level to 100% and everything else up accordingly. It
would be a LOT of trouble. It's also possible that the peak level is
100%, and that everything else is just dramatically (but
correspondingly) lower. This can happen if the peak level occurs during
an explosion or some other momentary but extremely loud event and the
audio processing was sloppy.
This is a very fine and picky technical exposition/discussion, so if such
things make you yawn you can safely skip it. <g>
The low recorded level may be the result of aesthetic preference influencing
technical practice.
Some engineers prefer to record sounds at their "natural" level--that is,
the loudness of the recorded sound matches the loudness the sound would be
if it were heard in the wild. There is no background noise such as tape
hiss, phonograph record surface noise, etc. in the modern digital recording
process[1] (as opposed to background noise in the recorded "sound space,"
which some would argue is just as much a part of the music as the
instrumental sounds), and the available dynamic range of DVDs (and of CDs)
approaches that of human hearing; both of these make this "natural"
approach to recording possible and are considered by some engineers and
producers to be compelling arguments for its use. The idea is this: set
the volume control on your playback system so that a sound recorded at
the 100% peak level of the recording process plays back at the maximum
level your system is capable of (ideally, the same level as the original
sound and, basically, the threshold of pain, and damn the neighbors or
the glassware), and never vary that setting. You are then hearing the
sounds at their natural and proper level, and any variance from that is
considered a distortion of the original sound.
The other approach--and that used in the days of more primitive equipment
with a much narrower dynamic range--is to make test recordings to determine
the maximum level that whatever is being recorded will produce, set the
equipment so that this loudest sound causes the equipment to
register the maximum level permitted by the system (or maybe a little less,
to provide a fudge factor in case something is unexpectedly louder), and
leave that setting in place for the duration of the recording session.
"Riding gain"--continually twiddling the recording level to compensate for
variations in the loudness of what's being recorded--is a no-no because it
distorts the dynamic range, although it often used to be necessary because
the older equipment could not accommodate the entire dynamic range and you
had to fudge in order to prevent over- or under-recording if a limiter was
not available. "Limiting" is a form of electronic signal processing which
compresses the dynamic range, and it was a common and (some argued)
essential part of making phonograph records and in live radio/TV pickups:
it boosts the level of the softest passages so they can be heard above the
background noise, while at the same time reducing the level of the loudest
passages so they don't overload the system. Limiting is still used for
special effects, particularly in pop material.
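That boost-the-quiet, tame-the-loud behavior can be illustrated as a
static gain curve. This is only a sketch: real limiters and compressors
act over time with attack and release smoothing, and the threshold,
ratio, and makeup-gain values here are made-up numbers, not from any
particular device.

```python
def compress(sample, threshold=0.5, ratio=4.0, makeup=1.5):
    """Static compression curve for one sample in -1.0..1.0.
    Above |threshold| the signal's growth is reduced by `ratio`;
    `makeup` gain then lifts everything, which is what raises the
    quiet passages relative to the original."""
    sign = 1.0 if sample >= 0 else -1.0
    level = abs(sample)
    if level > threshold:
        level = threshold + (level - threshold) / ratio
    # Clamp to the legal sample range after makeup gain.
    return max(-1.0, min(1.0, sign * level * makeup))

print(compress(0.1))   # a quiet sample comes out louder
print(compress(1.0))   # a full-scale sample comes out quieter (0.9375)
```

The net effect is exactly what the post describes: soft passages rise
above the noise floor, loud ones no longer overload the system.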
Now about "normalization": that can be good or bad, depending on the
algorithm used. Some normalizers are basically "volume expanders," or
limiters turned backward; they stretch the dynamic range of the material.
The softest sounds in the original remain soft, at their original level,
while the level of the loudest sounds is raised to (or close to) the peak
level permitted by the recording process, and that which lies in between is
raised in level proportionately. This, to me, is a significant distortion
of the original recorded sound. Not only is it quite likely that the
original recording didn't sound at all like that (in some cases it is
quite obvious and even unsettling), but in extreme cases you might have
to turn the volume up to hear the softer parts of something, then turn
it back down to keep your ears from being blasted out when a peak
occurs. Other
normalizers simply raise the level of the entire selection as a block by
the amount required to get the loudest sound to the peak level of the
process, without tampering with the dynamic range. To my mind this is the
right way of doing it, although if it's an older analog recording, there's
the possibility it might raise the level of the original background noise
to the point that it would be distracting ... but then, it probably was
already.
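The whole-block approach is simple enough to sketch. Assuming audio as
floating-point samples in -1.0..1.0 (the function name and target level
are mine, not from any particular tool):

```python
def normalize(samples, target_peak=1.0):
    """Peak-normalize with one uniform gain factor: the dynamic range
    is untouched, only the overall level changes."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # pure silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

# A quiet clip whose loudest sample is 0.6 of full scale...
quiet = [0.0, 0.15, -0.6, 0.3]
loud = normalize(quiet)
# ...now peaks at full scale, with every sample scaled by the same
# factor, so the ratios between soft and loud passages are preserved.
```

Because every sample gets the same gain, soft-to-loud ratios survive
intact, which is what distinguishes this from the "volume expander"
style of normalizer described above.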
While I agree with the aesthetic basis underlying "natural" technique--to a
point, at least--it's still a pain in the ass when you're going back and
forth from one recording to another (doing a radio show or an audio
collage, for example) and have to keep twiddling the levels so that
everything comes out at about the same loudness. And, of course, it will always be
this way, since older recordings were made such that their loudest sound
matched the peak level of the available process, lest the softest sounds be
lost in the background hash.
And The Moral Is: Nothing's perfect.
[1] Slight caveat: yes, there is still the slight hiss produced by the
random thermal motion of electrons in the connecting wiring, which you
can hear if you crank your stereo up really loud, particularly on low-level
inputs such as those used for microphones and magnetic phonograph
cartridges.