Actually, it now seems this method differs between his tests. Some
of his tests have required at least two of his varying number of "preferred"
(that is, pre-selection) scanners to report _something_ for the file.
However, it seems "corrupt executable" and "intended virus" are more than
sufficient "somethings" to report. (Given no other information than that
F-PROT reported either of those results and some other scanner found what
it claimed was real malware, I know what I would do with such files --
unless there was enough time to analyse them all (F-PROT does _very_
occasionally get such things wrong, though incredibly seldom with its
"intended" ascriptions) I'd bin them...)
The test set also apparently includes ASM source listings and other such
drivel any marginally competent tester would automatically ditch as
irrelevant to a meaningful malware detection test.
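To make the selection flaw concrete, here is a minimal sketch (in
Python, with entirely hypothetical scanner names and report strings --
real scanner output varies by product) of the pre-selection rule as I
understand it: a file makes the test set if at least two "preferred"
scanners report _anything_, even when the report explicitly says the
file is not viable malware:

    # Sketch of the apparent pre-selection rule (hypothetical scanner
    # names and report strings; real scanner output varies by product).
    def include_in_test_set(reports):
        # reports maps scanner name -> report string, or None if no alert.
        # The flaw: ANY report counts as "something", including reports
        # like "corrupt executable" or "intended virus" that explicitly
        # mean "this is NOT viable malware".
        alerts = sum(1 for r in reports.values() if r is not None)
        return alerts >= 2

    # A non-viable file still qualifies if one other scanner alerts on it:
    sample = {"F-PROT": "corrupt executable", "SomeOtherAV": "Virus.X.A"}
    assert include_in_test_set(sample)  # included despite F-PROT's warning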
Would the results have been any different if he had included any and
all files that any scanner alerted on? I venture to say the results
would have been virtually identical.
Yep. The poor but very crud-loaded scanners (I referred to these
elsewhere in my initial response) would have scored even more
disproportionately highly than they "deserve". Some products are
suspected of basically being able to detect virtually any file that has
ever been (publicly) posted on a VX site and given the source of VirusP's
initial test set (his own VX collection) the starting point was bound to
be replete with utter crap that, even by Kaspersky Labs' standards, has
nothing to do with malware detection...
No, that's far from proven. It's simply expected.
Nit pick.
Had I written something like "So, there is compelling evidence that the
worst of the expected detection biases is present in the results" my
meaning would have been the same but you would not have been able to
disagree...
But do they claim that any malware they alert on is definitely viable?
Depends on the product. For example, F-PROT goes to great lengths to
make its detection reports as clear as possible, _particularly_ when
it is reporting non-viable crud that its developers have basically
only included under protest (usually to protect themselves against
stupid, ill-informed and mal-intended tests and publications such
as this one...)
I've never heard of any such claim. And until vendors can honestly
make such claims, demanding the use of only viable samples in tests is
actually quite irrational.
But the flip side is equally true. The test is presented as a test
of "malware detection" yet the test set is replete with unviable crap,
completely non-malicious stuff and huge scads of widely accepted
"grey area" stuff. If the tester and publisher were to be half-way
honest, common decency would, at the least, require them to separate
the non-malware and grey area stuff into separate test sets. Simply
lumping it all into a "malware test" and making snide comments about
how poorly some products do is, at best, grossly disingenuous.
And that reminds me, I meant to mention this earlier... IIRC, Live
Publishing (or some earlier incarnation of it) was the publisher
behind some of the virus distribution via cover CD instances. Their
position at the time was "we test with multiple up-to-date virus
scanners" (which, again IIRC, was provably false in at least one
case) and clearly wrong-headed anyway. You don't think that, as a
publishing group...
Whacha got against super crud detectors?
Personally, I much prefer
them. I like being alerted on all crud. After all, malware is just
"crud" . Anything typical users don't care to have or want to have on
their hard drives is "crud". Whether the crud is a result of botched
disinfections or whatever kind of crud you have in mind, I'd prefer to
have my scanner alert. I couldn't care less whether or not the file in
question is viable. I want to know about it.
I understand this oddly idiosyncratic view of yours, but it largely
misses the point. Much of the crud we are talking about is unable to
get on your machine _unless you specifically and knowingly seek it
out_. Thus there is no point in detecting it _unless you want to
boost the egos of the VX community_, where the driving ethos is a
minor variation on the old "mine's bigger than yours" dick-waving
ceremony...
In fact, had no scanner developer ever insisted on detecting any of
this crud, much of it would not "exist" today, because not "detected"
by AV means not collected by VX morons. (The "partial" infections,
munged disinfections, etc. are another issue -- that sort of crud would
still be with us...)
I acknowledge that you want to know about all that weird stuff. Fine
-- you're a weird curmudgeon like that anyway -- but what is not fine
is VirusP and the publishers presenting his results as a malware
detection test.
Or they may simply have the same attitude I have.
No -- they deliberately ensure they detect all files available from
(public) VX collections. Such products tend to be at the low end
of "real" detection rates too, but this tactic (which is very cheap
to implement) gives them a huge boost in shoddy tests, typically
leap-frogging them square into the middle of the "not top but
respectable looking" bracket and (to those who know better) oddly
putting them ahead of many products that have better "real"
detection rates.
Are you suggesting that F-Prot detects as much as KAV when large scale
virus zoo and Trojan tests of a more "scientific" nature are
conducted? Please show me those test results. I've never seen them.
No, not quite.
What I was suggesting was that there would be little crud that
F-PROT detects and KAV doesn't, _and_ it was unlikely such a poor
collection would contain many real malware samples that KAV detects
and F-PROT would miss. If you find that hard to believe, recall
that these are typically the two preferred scanners of most VXers
(at least historically -- not sure if this is still the case). As
VXers are hugely "protective" of their preferred scanner (because
they believe their own hype that the larger your collection the more
expert you are), F-PROT and KAV tend to be sent stuff from these
collections that the other misses. F-PROT would (based on its
history) have been less likely to add detection of outright rubbish
thus received, whereas it would quite likely be a matter of pride
with AVP/KAV not to allow a VXer to say "F-PROT detects this but KAV
doesn't"... Thus, the detection difference between F-PROT and KAV on
samples not well filtered for crud and (primarily) sourced from VX
will consist almost entirely of crud.
Of course, F-PROT does detect all manner of non-replicating and
otherwise non-malware stuff too. However -- as I said above -- its
developers generally go to great pains to ensure that such stuff is
reported as _not_ viral or _not_ malicious ("intended", "corrupted",
"inactive", etc). The precise level of this stuff in the collection
is unknown unless we can access full scanning logs for the products.
Thus, it is a safe bet that the actual level
of crud in such a test will be quite a bit higher than the detection
difference between F-PROT and KAV. In fact, we should expect the
crud level of such a test set to closely approach the difference
between these scanners' detection rates _added to_ F-PROT's "crud
detection rate" (that is, the rate of stuff F-PROT reported as
non-malicious, inactive, etc).
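To put rough numbers on that reasoning (every figure below is
hypothetical, purely to illustrate the arithmetic, not a claim about
either product's actual rates):

    # Back-of-the-envelope estimate of the crud floor in such a test set.
    # All counts are made up; only the relationship between them matters.
    total_files = 10000

    kav_alerts       = 9200  # KAV alerts of any kind, crud included
    fprot_alerts     = 8700  # F-PROT alerts of any kind
    fprot_crud_flags = 600   # F-PROT reports explicitly labelled
                             # "intended", "corrupted", "inactive", etc.

    # On a poorly filtered, VX-sourced set, the detection difference
    # should be almost entirely crud (per the argument above).
    difference = kav_alerts - fprot_alerts              # 500 files

    # Estimated crud floor: that difference plus what F-PROT itself
    # explicitly flags as non-viable or non-malicious.
    estimated_crud = difference + fprot_crud_flags      # 1100 files
    print("estimated crud: %d of %d (%.0f%%)"
          % (estimated_crud, total_files,
             100.0 * estimated_crud / total_files))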