kurt wismer
??? I did.
excuse me, apparently my rhetorical question was not clear... i was
explaining to FRT how using scanners to decide what goes in a test will
leave out samples that should otherwise be in a test and how that can
corrupt the results... *you* then brought up categories, you're correct
about that, but it was a red herring... the samples that the
scanner-filter method leaves out aren't going to magically all belong
to some uninteresting category...
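A minimal sketch of the selection effect being described, assuming
hypothetical scanner verdicts; the sample names, scanner names, and
consensus threshold below are illustrative only, not anything either
participant specified:

# illustrative only: consensus-based testbed selection
# verdicts[sample] maps each scanner name to True (detected) / False (missed)
verdicts = {
    "sample_a.exe": {"scanner1": True,  "scanner2": True,  "scanner3": True},
    "sample_b.exe": {"scanner1": True,  "scanner2": False, "scanner3": True},
    "sample_c.exe": {"scanner1": False, "scanner2": False, "scanner3": False},
}

CONSENSUS = 2  # hypothetical threshold: at least 2 of 3 scanners must flag a sample

testbed = [s for s, v in verdicts.items() if sum(v.values()) >= CONSENSUS]
omitted = [s for s in verdicts if s not in testbed]

# sample_c.exe is excluded even if it is a perfectly valid piece of malware
# that all three selecting scanners happen to miss - the bias in question
print("testbed:", testbed)
print("omitted:", omitted)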
I was talking about looking at categories of malware that several good
scanners do well in (according to quality tests) but that products
tested by my method do not do well in. Or, conversely, I also pointed
out that you could see when a vendor suddenly decided to drop detection
in one of those same categories. It would be obvious using my method
over time when a vendor dropped detection of old DOS viruses, for
example.
that is the ideal scenario, however you cannot blindly hope that
reality will turn out ideally... you have to enumerate the ways in
which things can go wrong - something i tend to be good at...
I don't understand that sentence.
ok, i'll try again - why can't i be talking about samples that are from
all categories...
But in order for me to defend my method, which you attacked as being
worthless, I would hope that you
would stick to that topic and not wander off onto something else.
i am still talking about your methodology, don't worry... i'm just
talking about one of the problems it has...
In the case of checking on a scanner with weak Trojan detection, for
example, that scanner is not used in building up the test bed.
yes, i would assume you don't actually require agreement between all
the scanners - that's why i said "imagine"...
I see
no problem. And a scanner used in building up the bed of old DOS
viruses can be tested later for a significant drop in detection in
this category.
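A rough sketch of the trend check being proposed, assuming hypothetical
per-category detection counts recorded at each test run; the categories,
run labels, numbers, and drop threshold are made up for illustration:

# illustrative only: flag a significant per-category drop between test runs
# history[category] is a list of (run_label, detected, total) tuples
history = {
    "old_dos_viruses": [("run1", 980, 1000), ("run2", 975, 1000), ("run3", 610, 1000)],
    "trojans":         [("run1", 850, 1000), ("run2", 855, 1000), ("run3", 860, 1000)],
}

DROP_THRESHOLD = 0.10  # hypothetical: a 10-point fall in detection rate is "significant"

for category, runs in history.items():
    rates = [(label, detected / total) for label, detected, total in runs]
    for (prev_label, prev_rate), (cur_label, cur_rate) in zip(rates, rates[1:]):
        if prev_rate - cur_rate >= DROP_THRESHOLD:
            print(f"{category}: detection fell from {prev_rate:.0%} ({prev_label}) to {cur_rate:.0%} ({cur_label})")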
i would steer clear of testing for such drops... a significant reduction
*could* be a drop in detection of real viruses, or it could be a drop
in detection of crud... without a better means of determining viability
of samples it's impossible to be sure...
It would be better if you requested clarification before you rejected
my idea outright.
i didn't say you were unclear, you were quite clear... there's a
difference between being unclear and being over general... had you been
unclear then i would have been confused and i would have said to myself
'i think there's something wrong here'... instead i found you making
what i thought was a far-reaching general statement, and since i can't
read your mind i have no way to know when you intend to make a general
statement and when you don't...
You turned off any interest I had in further
discussion or clarification by pontificating and "instructing" me and
insulting me by referring me to a treatise on logic. That pissed me
off.
i'm sorry you feel that way... personally i find that reference (and a
similar one i also have bookmarked) to be quite helpful in getting a
deeper understanding of what can go wrong in a logical argument (both
my own and other people's)...
Omitted population segments??
segments of the population of malware... your methodology will omit a
bunch of viruses, a bunch of worms, a bunch of trojans, etc. from the
final testbed... i'm sorry if statistical jargon terms like
'population' caught you off guard...
Improvement trends get missed?
your stated position is that you can use 'unscientific' tests to
discover trends - trends that presumably indicate the improvement or
decline of a scanner over time... trends that are less likely to
reveal themselves when you use scanners to select the samples that you
later test scanners on...
What in
the hell are you talking about?
things that can go wrong with what i currently understand of your
hypothetical unscientific test methodology...