Re: Kaspersky wins yet another comparison

  • Thread starter: Nomen Nescio

Nomen Nescio

I require that _several_ scanners all agree before a sample is
included.

There is only _one_ way to verify a file as viral, and yours is not it.
Your "requirement" is scientifically unsound.

There was a ruckus on DSLReports last year when a tester reported
_five_ scanners detecting a file as infected by the Magistr virus but
NOD32 reported it as clean.

It turned out the virus was inactive, i.e. not a virus by any criterion.

The five scanners were wrong, and a properly conducted test would
have penalized them for producing a false alarm.

NOD32 was right, but it was unfairly penalized because the testers
followed your flawed "several scanners all agree before a sample is
included" methodology.

No sample should be included in a testbed unless it has been tested
and replicated, preferably for 3 generations. I know you dislike the
Virus Bulletin tests, but VB is the only tester who publicly declares
it has tested and replicated _every_ sample in its testbed. For me,
VB produces the _only_ credible anti-virus test results.
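
A minimal sketch of that admission rule, if it helps (hypothetical
code; replicate_once() is just a placeholder for actually running the
sample against bait files on an isolated machine and collecting any
infected offspring):

    # Hypothetical sketch of the "replicate for 3 generations" rule.
    # replicate_once() is a placeholder: run the sample against bait files
    # in an isolated environment and return any infected offspring.
    def replicate_once(sample):
        raise NotImplementedError("placeholder for a real replication run")

    def is_viable(sample, generations=3):
        # Admit a sample only if every generation yields infected offspring.
        current = [sample]
        for _ in range(generations):
            offspring = []
            for s in current:
                offspring.extend(replicate_once(s))
            if not offspring:      # nothing replicated: treat it as crud
                return False
            current = offspring    # next round tests the children, not the original
        return True

The exact number of generations is a judgment call; the point is that
viability, not scanner consensus, is what admits a sample.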
 
There is only _one_ way to verify a file as viral, and yours is not it.

That's correct. I was talking about what major trends can be found
using an unscientific test bed.
Your "requirement" is scientifically unsound.

That's correct. But not necessarily useless.
There was a ruckus on DSLReports last year when a tester reported
_five_ scanners detecting a file as infected by the Magistr virus but
NOD32 reported it as clean.
It turned out the virus was inactive, i.e. not a virus by any criterion.

Not terribly unusual, since almost all scanners alert on such "crud"
files.
The five scanners were wrong, and a properly conducted test would
have penalized them for producing a false alarm.

Only if FP tests include such non-viable samples or "crud" in their FP
test beds. Not sure how many quality tests do.
NOD32 was right, but it was unfairly penalized because the testers
followed your flawed "several scanners all agree before a sample is
included" methodology.

NOD32 isn't particularly noted for its low FP rates either, as I
recall. Oddly enough, the super crud detectors such as KAV and McAfee
often have the lowest FP rates in tests I've seen.
No sample should be included in a testbed unless it has been tested
and replicated, preferably for 3 generations. I know you dislike the
Virus Bulletin tests, but VB is the only tester who publicly declares
it has tested and replicated _every_ sample in its testbed. For me,
VB produces the _only_ credible anti-virus test results.

It's not that I have any dislike of Virus Bulletin, as such. I very
much dislike the VB100 since it misleads naive users with its
infantile pass/fail criteria. Many people _only_ look at strings of
VB100 tests and don't dig any deeper.

And I believe you're wrong about VB being the only test site that uses
viable samples of malware. Check out the Uni Hamburg VTC test site,
for one.


Art
http://www.epix.net/~artnpeg
 
Nomen said:
There is only _one_ way to verify a file as viral, and yours is not it.
Your "requirement" is scientifically unsound.

no shit... his premise right from the outset was that *unscientific*
tests could still be useful so obviously he's going to describe a
methodology that is scientifically unsound... do try to keep up...
 
Nomen Nescio said:
(e-mail address removed) wrote:
The five scanners were wrong, and a properly conducted test would
have penalized them for producing a false alarm.

Would a properly conducted test have had that sample included
in the test bed of virus samples?

...or was this a crud test?

If so, what sort of methods could be used to make sure that the
included crud is unbiased? Ideally, I think crud has to cover *all*
non-viral programs or files - so what could be used to whittle that
bunch down to a manageable testbed? To ascertain that something *is*
viral, you make certain it is both progeny and grandparent - it was
itself produced by replication and it can produce offspring that
replicate in turn. This is not the case for crud. Crud is that which
can be mistaken for a virus and yet is not a *viable* virus sample.
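
A rough way to state that criterion (hypothetical sketch, with
replicate_once() again standing in for a real replication run in an
isolated environment):

    # Hypothetical sketch of the "progeny and grandparent" test: a sample
    # is viral only if it was itself produced by replication (progeny) and
    # its offspring can replicate in turn (grandparent). Anything a scanner
    # alerts on that fails either condition is "crud".
    def replicate_once(sample):
        raise NotImplementedError("placeholder for a real replication run")

    def is_viral(sample, produced_by_replication):
        if not produced_by_replication:      # not progeny: origin never verified
            return False
        children = replicate_once(sample)    # can it become a parent at all?
        if not children:
            return False
        # ...and do its children replicate, making the original a grandparent?
        return any(replicate_once(child) for child in children)
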
NOD32 was right, but it was unfairly penalized because the testers
followed your flawed "several scanners all agree before a sample is
included" methodology.

Yes - this points out the danger of flawed methods very well.
No sample should be included in a testbed unless it has been tested
and replicated, preferably for 3 generations.

Yes, a viable virus will prove itself to be viral when you allow it
to replicate recursively - but how would one prove that the criterion
for "crud" has been met? Are samples that are indeed offspring of your
original collection sample, yet failed to produce viable offspring
themselves, considered crud just because they fail to meet that
criterion?
I know you dislike the
Virus Bulletin tests, but VB is the only tester who publicly declares
it has tested and replicated _every_ sample in its testbed. For me,
VB produces the _only_ credible anti-virus test results.

I doubt that they are the *only* ones to do so.

...and do you believe it was warranted to penalize f-prot
for not considering container files to be a threat? Do you
think that on-access scanning should be tested, or just
on-demand (where which files to scan can be left up to the
user to decide)?
 