> But that's not a valid comparison to antivirus manufacturers.
> ... every single computer user needs to have a good, updated
> antivirus program running that dynamically monitors for virus
> Anyone who is running a good, updated antivirus program that meets these
> needs, there is no problem.
SYJ?
I don't *ever* confuse antivirus (exploit detection) with risk
management. The one is NOT a substitute for the other, and it's a
gross disservice to even the newest user to suggest that all they have
to do is keep their av up to date and then they can do what they like.
I explain these concepts to newbies as "small walls and large fences".
Risk management is like the small wall. You can walk around it (i.e.
use some other point of entry that is unmanaged) but you cannot go
through it - no matter how fresh from the tank you are, or how
polymorphic, or even if you are one-off FBI prototypeware.
Virus scanning is like the large fence; there may not be any way
around it, but you may well be able to cut through it (kill the av, or
enjoy the fact that an earlier malware has already done this), climb
over it (use methods that aren't heuristically detected or monitored),
or simply pass through the holes (be too new or rare to be detected).
In fact, traditional av is more like a doorman who has an eidetic
memory for the mugshots of every known perp. "Sorry, I didn't
recognise him in that coat" (re-packaged malware, mutations,
unexpected and thus unmonitored file types) or "I never saw him
before!" (the Day Zero effect).
Traditional av is based on "virus infects computer", where it's
effective to micro-manage each PC to prevent malware's persistence
from one boot up to the next. Adding risk management has value,
because currently unknown malware that relies on points of entry (or
escalation) that are blocked, will fail or be better contained.
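If you want that distinction as toy code (purely illustrative - every
name in it is made up, and it's not a real scanner or firewall), it
comes down to this:

  # Toy model: contrast signature scanning with entry-point blocking.
  KNOWN_SIGNATURES = {"old_worm_2002"}                 # what the av has already seen
  BLOCKED_ENTRY_POINTS = {"udp/1434", "html_script"}   # risk-managed points of entry

  def av_catches(signature):
      # Signature scanning only stops what it already recognises.
      return signature in KNOWN_SIGNATURES

  def entry_blocked(entry_point):
      # Risk management stops anything, known or unknown, that needs this route.
      return entry_point in BLOCKED_ENTRY_POINTS

  # A brand-new ("Day Zero") worm arriving over a blocked point of entry:
  new_worm = {"signature": "never_seen_before", "entry": "udp/1434"}
  print(av_catches(new_worm["signature"]))   # False - the doorman has no mugshot for it
  print(entry_blocked(new_worm["entry"]))    # True  - the wall doesn't care how new it is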
But we are moving to "worm infects infosphere", where the malware
finds it far more useful to simply re-infect unmanaged PCs as soon as
they appear on the Internet than to bother about persisting across
bootups. Servers that boast year-long uptimes can act as fat pipes to
hose the rest of consumerland... in these cases, risk management is
your primary defence; you cannot ever "clean" the infosphere!
"Virus? Impossible! I update my av every week without fail!"
Slammer went global in 10 minutes, doubling the number of infected PCs
every 8 seconds for a while (and no, I don't think that was from 1 PC
to 2 PCs infected <g>). Does anyone *seriously* think that...
- your av vendor will get a sample...
- ...analyse it...
- ...code appropriate sig data...
- ...and let's be generous and assume no engine mods needed...
- ...test the thing...
- ...as you'd want better quality than the spelling in this post?
- ...deploy the fix on their site...
- ...push it to users (malware-spoof risk there!)...
- ...or wait for users to grab it and apply it
...is going to happen within 10 minutes?
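Just to put rough numbers on it (back-of-the-envelope only; the
doubling time is the figure quoted above, and the one-hour vendor
turnaround is my own assumption, a generous one at that):

  # Arithmetic on the figures above - illustrative, not a measurement.
  doubling_time_s = 8                 # "doubling every 8 seconds"
  window_s = 10 * 60                  # "went global in 10 minutes"
  print(window_s / doubling_time_s)   # 75.0 doubling periods inside that window

  vendor_turnaround_s = 60 * 60       # assume one hour from sample to deployed sig
  print(vendor_turnaround_s / doubling_time_s)   # 450.0 - that many doublings late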
I love risk management, and I extend this way beyond MS's patching of
particular holes in a colander (when I'd rather use a bucket instead).
But even buckets can have holes, i.e. something designed not to run
scripts in HTML email "messages" etc. can still facilitate this if the
code is itself flawed. Even good data/program distinctions evaporate.
Historically my risk management approach has tended to assume that if
functionalities are removed or suppressed, one need not bother to
patch them. That's false, because code holes operate at a level below code
design, rendering the latter as meaningless as NTFS file system
protection in defective hardware or raw sector access scenarios.
Finally, there's the difference between pre-emptively killing inactive
malware, and killing active malware that may shoot back.
In Win9x, or in most cases an NT on FATxx, you can formally scan and
clean malware that would be active if the PC was allowed to boot.
You'd still have inactive malware hidden in mailboxes and SR data, and
there'd still be malware outside (LAN, Internet, removable media) and
your safety against that hinges on having a clue and the ability to say No.
Bad design (e.g. auto-running scripts without prompting) or code
defects (allowing direct penetration through unchecked buffers) rob
the user of the opportunity to say No. You can no longer "blame the
victim" for acting in a foolhardy manner, unless you accuse home users
of dereliction of duty as sysadmins (a job description of which they
were not aware) or of poor judgement in using software (I say,
"software" not "MSware" as other OSs have holes too).
When it comes to risk management - and patches are often the only
defence against the Slammers and Lovesans of the world - you face a
similar problem. Before the infosphere got actively infected, you
could stroll down to the update site and pull down patches at your
leisure. Once the war's on, you have to race the malware to pull down
the patch and apply it before you are attacked - and when it's a large
patch from one server vs. tiny attackers actively sent by thousands of
systems, it's a race you will lose.
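A rough sketch of that race, with every number below an assumption
picked purely for illustration:

  # The patch race in arithmetic - all figures assumed, none measured.
  patch_size_bits = 10 * 8 * 10**6        # assume a 10 MB patch
  link_bits_per_s = 512_000               # assume a 512 kbit/s consumer pipe
  time_to_patch_s = patch_size_bits / link_bits_per_s
  print(time_to_patch_s)                  # ~156 s just to pull the patch down

  address_space = 2**32                   # IPv4 addresses probed at random
  aggregate_scans_per_s = 50_000_000      # assume the combined scan rate of all infected hosts
  expected_probe_s = address_space / aggregate_scans_per_s
  print(expected_probe_s)                 # ~86 s before your address is hit
  print(expected_probe_s < time_to_patch_s)   # True: the malware gets there first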
It's not like the old days, where only servers had fat pipes that
could out-gun the modems of infected consumer PCs. These days, Joe
Sixpack is packing server-grade broadband.
My approach, as an OS designer, would be to recognise that no code
should ever be assumed perfect, and thus all risk-relevant code should
be modularised to facilitate "bulkhead" damage control.
If there's no good reason to expose a functionality to the outside
world (why is it that XP Home needs to be remotely administered, when
corporates needing remote admin are supposed to use Pro?) then don't
do it, or at least make it possible to shut down that subsystem.
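A hypothetical sketch of that "bulkhead" shape (every name below is
invented; it's not any real OS API):

  # Risk-relevant subsystems default to off and can each be sealed off alone.
  from dataclasses import dataclass

  @dataclass
  class Subsystem:
      name: str
      network_facing: bool
      enabled: bool = False        # off by default: expose nothing without a reason

  class OsImage:
      def __init__(self):
          self.subsystems = {
              "remote_admin": Subsystem("remote_admin", network_facing=True),
              "file_sharing": Subsystem("file_sharing", network_facing=True),
              "local_search": Subsystem("local_search", network_facing=False, enabled=True),
          }

      def shut_down(self, name):
          # Damage control: seal one compartment without sinking the rest.
          self.subsystems[name].enabled = False

      def exposed_surface(self):
          return [s.name for s in self.subsystems.values()
                  if s.enabled and s.network_facing]

  home_pc = OsImage()
  print(home_pc.exposed_surface())   # [] - a home box exposes nothing it didn't ask for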
The av is your "goalie of last resort", and every unexpected "Virus
detected and blocked" alert is not a reason to feel warm and fuzzy
that your av's working - it's a reason to feel the chill of fear,
because something had an unexpected shot at goal.
------------ ----- --- -- - - - -
Drugs are usually safe. Inject? (Y/n)