Poster said:
What I have gleaned from reading various FAQs and threads is that
interference from other electronics affects the audio quality. My goal
is to get up to 99% accuracy from the speech recognition software. I
already have a high level of accuracy; I just want to get as high as
possible, which people say is currently (with today's technology) about
99%. Even a little electronic interference brings that down a little. So
the advice is not to use an internal sound card, because that can be
housed near electronics which can interfere a little. Instead you use an
external USB sound card and make sure that other USB devices don't
interfere at all, by making sure that the USB port is a 'Primary' one
and is directly connected to the motherboard. If there is no distinction
between 'Primary' and 'Secondary' in the USB standard, might what is
meant be as simple as the wiring passing through one or more USB hubs on
the way to the motherboard (and that this is to be avoided)?
I was hoping to find a quick and easy way to identify which was the
'Primary' port without running a sound test (which the software has
available) for each USB port until I find the lowest interference level.
But from what you say, I think that's what needs to be done: try each
port, run a sound test, and record its levels and final score.
Tedious with 8 ports. Still, it's Sunday tomorrow; I can spare an hour
or two.
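If the machine runs Linux, there may be a shortcut to the port-by-port sound test: the kernel's sysfs names hint at topology. A device named like "1-2" sits on port 2 of root hub 1 (directly on the motherboard's controller), while a dotted name like "1-2.3" means there is a hub in the path. This is only a rough sketch under that assumption (on Windows you would need a USB tree-viewer tool instead), and it identifies topology, not interference:

```python
# Sketch: on Linux, list USB devices attached straight to a root hub
# by reading names under /sys/bus/usb/devices. "1-2" = bus 1, root-hub
# port 2; a dot in the name ("1-2.3") means a hub is in the path.
import os
import re

SYSFS = "/sys/bus/usb/devices"

def direct_ports(sysfs=SYSFS):
    """Return device names attached directly to a root hub (no hub between)."""
    direct = []
    for name in os.listdir(sysfs):
        if re.fullmatch(r"\d+-\d+", name):
            direct.append(name)
    return sorted(direct)

if __name__ == "__main__":
    print(direct_ports())
```

A port that shows up here still needs the sound test; this only rules out ports reached through a hub.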
Thanks very much.
There is an example of a USB audio device here. It uses isochronous
endpoints, one for transmit and one for receive, whose purpose is to
guarantee bandwidth and low latency for the sampling process. (The
650-page USB20.pdf standard, from usb.org, has more details on this.)
http://www.cmedia.com.tw/files/doc/USB/CM108 DataSheet v1.6.pdf
Any analog-to-digital conversion process has the opportunity to add
background noise to the signal, and that wouldn't change whether a
regular PCI sound card or a USB audio solution was used. You would need
to check the quoted noise-floor figures, to see if there is a
significant difference between the various sound input solutions.
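One way to put a number on the noise floor yourself: record a few seconds of silence through each input and compute the RMS level in dBFS (decibels relative to full scale), where lower (more negative) means quieter. A minimal sketch, assuming signed 16-bit samples; the hypothetical `noise_floor_dbfs` helper is not part of any particular library:

```python
# Sketch: compare sound inputs by measured noise floor. Feed in samples
# recorded while the microphone input is silent; the RMS level in dBFS
# (0 dBFS = full scale) is the noise floor. Assumes signed 16-bit audio.
import math

FULL_SCALE = 32768.0  # full scale for signed 16-bit samples

def noise_floor_dbfs(samples):
    """RMS level of the samples, in dB relative to full scale."""
    if not samples:
        raise ValueError("no samples")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")  # pure digital silence
    return 20.0 * math.log10(rms / FULL_SCALE)
```

For scale: a steady residual level of about 328 counts works out to roughly -40 dBFS, since 20*log10(328/32768) is about -40.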
I know that it is possible for the position or shielding of a sound
card to make a difference to audio performance. A poster here, a few
years back, mentioned doing an experiment where he moved a sound
card to the last slot and put a metal cover over it. He got something
like a 10 dB improvement in the noise floor. That would be caused by
reducing the electromagnetic interference reaching the ADC from the
rest of the computer.
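To put that 10 dB figure in linear terms (a rough arithmetic sketch, not from the original experiment): decibels are 10*log10 of a power ratio, or 20*log10 of an amplitude ratio, so 10 dB is 10x less noise power, or about 3.16x less noise amplitude.

```python
# Back-of-envelope: what a 10 dB noise-floor improvement means linearly.
# dB = 10*log10(power ratio) = 20*log10(amplitude ratio).
improvement_db = 10.0

power_ratio = 10 ** (improvement_db / 10)      # 10x less noise power
amplitude_ratio = 10 ** (improvement_db / 20)  # ~3.16x less noise amplitude
```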
Those are the kinds of effects that are harder to predict (there is
no way to know how good or diligent the designer of the USB microphone
is). As an example, there have been motherboards where the built-in
sound was a disaster, which implies no one bothered to verify the audio
performance before the motherboard was put into production.
But as far as the USB transfer protocol goes, about the only thing
that can go wrong is for packets to be corrupted in transit. Reading
the USB20 spec, it mentions that isochronous transfers don't have
provisions for error detection and retransmission (which are used
for normal asynchronous transfers to peripherals). So if a packet
contains bad data (which would sound like a burst of static, perhaps),
USB has no provision to fix it. This is similar to some other media
that have no opportunity to correct corrupted bits. (The DVI interface
to an LCD monitor would be an example: if the signal on the cable is
weak, snow shows up, and the interface is not designed to ask for
retransmission of any corrupted parts of the image, because the
interface is unidirectional.)
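As a rough sketch of what one corrupted packet would cost: a full-speed USB device (like the CM108 above) gets one isochronous transaction per 1 ms frame, so a single bad packet is about 1 ms of audio. Assuming a 48 kHz sample rate:

```python
# Back-of-envelope: audio carried in one full-speed USB isochronous
# packet (one packet per 1 ms frame), i.e. the length of the "burst
# of static" a single corrupted packet could cause. 48 kHz assumed.
SAMPLE_RATE = 48000      # Hz, assumed
FRAME_MS = 1.0           # full-speed USB frame period in milliseconds

samples_per_packet = int(SAMPLE_RATE * FRAME_MS / 1000)
print(samples_per_packet)  # 48 samples, i.e. about 1 ms of audio
```

A 1 ms dropout is short, but for speech recognition even a brief glitch mid-phoneme could plausibly cost a word.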
I think, by all means, you should carry out your test cases, to
see if there is any effect. But you should be careful in your test
design to come up with a way to measure the differences, if they
exist. I would think the design of the microphone itself may be
more significant than the rest of the transfer process.
Paul