kurt said:
- they aren't using the desktop engine but a command-line engine which
can and in some cases does differ significantly from the desktop equivalent
I don't recall using an AV product in which the UI app was the engine.
The engine is separate from the UI. In fact, the UI may be unloaded or
even crash but it doesn't take out the engine (the kernel-mode file
system filter).
- they're using settings they can't reveal due to NDAs
Alas, there isn't much info at VirusTotal regarding their setup for each
AV product. For other tests, the testers usually announce that settings
were at "highest" (which often doesn't match the install-time defaults
of a typical user install).
- the versions of the products aren't necessarily all up to date or even
all equally out-dated
But this subthread started due to Rick's comment that VirusTotal is
using an old version of NOD32, so there must be some indication to the
user as to which version was in use during the summary period for that
test result report. However, I don't see SRI listing the version in
their summary report. I took a very quick peek at VirusTotal's site and
didn't see versions mentioned there, either. The only time that I see
the product version listed is when I submit a file to them and then look
at the scan report.
With a summary report that spans a period of time, it is possible the
version of the product has changed. That's why I submitted a request to
SRI that they either show coverage by product over several of their
summary reports (since a single snapshot alone makes it hard to gauge
how well a product has fared over time), or keep an archive of their
old summary
reports so users could copy them into a spreadsheet to see the
effectiveness of a particular product over repeated snapshots.
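To illustrate the kind of thing I mean, here's a rough sketch in Python
of collating several archived summary reports to track one product's
coverage over repeated snapshots. The file names and column headers
(product, detected, total) are made up for the example, since SRI
doesn't publish their data in that form:

import csv
import glob

# Rough sketch only -- assumes each archived snapshot was saved as a CSV
# named summary_*.csv with hypothetical columns: product, detected, total.
def detection_history(product_name, pattern="summary_*.csv"):
    """Collect (snapshot file, detection rate) pairs for one product."""
    history = []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["product"] == product_name:
                    rate = int(row["detected"]) / int(row["total"])
                    history.append((path, rate))
    return history

# Example: show how NOD32 fared across each archived snapshot.
for snapshot, rate in detection_history("NOD32"):
    print(f"{snapshot}: detected {rate:.1%} of samples")

That's the sort of trend view a single snapshot report can't give you.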
- they may halt engines that take too long for the service virustotal is
providing (and their performance requirements are different than those a
customer would have)
Not sure how timed-out scans for a particular test could be reflected in a
summary report that includes multiple tests. But then users might not
want to use a product, even with high coverage, that takes a really long
time to detect a pest.
- they're using apples along-side oranges (different products meant for
different purposes) which you should never compare (for i hope obvious
reasons)
Yet each product included in the scan *claims* to also detect viruses.
There are few exactly identical products. If they were identical, we
wouldn't need any of these "results" summaries since every product would
fare the same as another. When you shake flour through a sifter, you're
looking for an overall granularity of powder, not an absolutely perfect
consistency. They throw the
suspect at their sifters, one for each product, and see what falls
through. Like PGP, the test is pretty good. Not very good, or
extremely good, or perfectly good, but good enough to provide some gauge
of effectiveness.
not too long ago av-test.org produced a test that included run-time
detection capabilities... here's a link describing the test and how the
proactive portion of it included actually running the samples:
http://www.virusbtn.com/news/2008/09_02
I've never trusted av-test.org. They get commissioned (paid) by AV
vendors to "test" that vendor's product but are given guidelines as to
the test scenarios and sometimes even as to which samples of pests they
are to test against. I'm not convinced they qualify as an *independent*
testing agency. I haven't seen one AV vendor who commissioned
av-test.org to test their product where that product didn't come out
shining like a white knight of security products.
The only free and publicly available "comparison" they offer on their
web site is how often the various AV products provide updates. Oh gee,
golly, big deal.
i gather from the most recent av-comparatives retrospective test that
they plan on implementing similar tests of dynamic detection sometime
this year...
Alas, I remember reading a blog or article from them where they mention
that costs are getting prohibitive to do this testing for free. So they
may go the way of VirusBulletin and others that charge for testing an AV
vendor's product. That means:
- Some vendors won't submit their product for testing, or they will be
selective about who tests their product, choosing whoever presents them
with the best image.
- Vendors can pay to have their product tested but they can also request
the results not be published. So if they did really poorly, you don't
get to see it.
- A bias can creep into the tester's methodology in favor of products
they are paid to test, especially when a vendor pays repeatedly for
subsequent tests, over those whose vendors pay only occasionally or
never pay to have their product tested.