You can see how "NDA logic" applies here.
A key thing to look for when signing an NDA (Non-Disclosure Agreement)
is whether it applies to information that is no longer private (for
whatever reason). A "good" NDA will remove restrictions on
information that is already public, whereas one that applies to
information whether it is public or not amounts to a gagging order.
For example, if I got you to sign an NDA that stated "anything you
hear during the following seminar is private and must not be repeated
in public", I could stand up and say "our product sucks" as my
introduction. You would now be gagged from denigrating our product,
because you heard it here, even if you didn't hear it here first.
OTOH a "good" NDA would leave you free to say "their product sucks" -
you just wouldn't be allowed to say "their own CEO says their product
sucks" until someone else made that information public.
In that spirit, a publicly unknown hole can be treated as a private
matter (and should trigger an urgent scramble to fix it - 3 years is
a long time not to do that for something as evil as that RPC hole,
which suggests it was privately unknown too).
Once publicly exploited, the cat is out of the bag and there is IMO
an obligation for the vendor to 'fess up.
If they have a patch, good.
If they don't have a patch, then tell us how to wall out that
functionality for safety - even if it means having to concede that the
design or coding of that functionality is so poor that we should have
second thoughts about ever using it again.
If the functionality is so embedded that it can't be walled off, then
this is urgent product quality information that is crucial to rational
planning - once again, it's information that cannot be withheld.
You can't trumpet market forces as an acceptable referee and then rig
the game. The law recognises that in all sorts of ways - "insider
trading", for example. Continuing to cover up a defect when it is
publicly exploited, and thus a very clear and present danger to
consumers, crosses that line, and spreads beyond the security debate.
Better yet, make sure you never get into that situation. You know
there will always be coding defects, so you have to forgo the hubris
of thinking that old safety standards are fuddy-duddy stuff you can
prance around with impunity. Never eat anything bigger than your own
head; never code a monolithic system that is bigger than your ability
to maintain fine-grain code quality and know/test *exactly* how
everything works in practice, for all possible permutations.
For a long time, modular program design was a big buzzword. There
were good reasons for that, and those reasons haven't gone away.
When was it that viruses started affecting Microsoft OSes and apps?
DOS v3, or was it v1? When was that? *1984* - almost 20 years ago.
And they just started noticing? I don't buy this.
The nature of the problem changed - and the reason isn't simply the
"oh it's so difficult!" cop-out excuse (i.e. that modern code is so
complex, we should abandon expectations that it works out of the box).
In the DOS days, what the user needed to know was this:
1) Files ending in .exe, .com and .bat are programs
2) Don't run programs unless you trust them
3) Don't boot off untrusted diskettes
The frontier was well-defined, and 99.99% of attacks were made at the
SE (social engineering) level. In fact I don't know of any attacks
that breached the
frontier design as enumerated above - not one.
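To labour the point with a toy sketch (Python purely for illustration;
the extension list is just the one enumerated above), the entire
frontier test fitted in a few lines:

  # The old frontier test: a file's name alone told you whether it
  # could act on its own behalf.
  PROGRAM_EXTENSIONS = (".exe", ".com", ".bat")

  def is_program(filename):
      """True if the file can run, by the DOS-era rules."""
      return filename.lower().endswith(PROGRAM_EXTENSIONS)

  print(is_program("COMMAND.COM"))   # True  - only run it if you trust it
  print(is_program("LETTER.TXT"))    # False - plain data, safe to open

Trivial - and that triviality is exactly what made rules 1) to 3)
teachable.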
Now it would have been possible to evolve today's complexity while
maintaining a frontier that was as well-defined as above; perhaps with
even more user friendliness than above, something like...
1) Files with red triangle icons are programs
2) Don't run programs unless you trust them
3) Don't boot off untrusted media
If you could trust data not to act as programs, a whole slew of
problems would go away - web page attacks, document malware, no-click email
attacks, auto-running CD attacks.
If you continued that sense of frontier awareness within the boundaries
of the network, you'd have fewer escalation risks to worry about.
With no scripting inherent in "View As Web Page", every write-shared
folder would no longer be a landmine opportunity. With \AutoRun.inf
processing for HD volumes disabled, a write-shared volume root need
not be a landmine opportunity either. With those dumb-ass "admin shares"
disabled, we could follow good LAN sharing practice and never expose
the startup axis or system code base, and wouldn't have to care about
password discipline or efficacy in that context.
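For what it's worth, two of those switches can be thrown without
waiting on the vendor. A minimal sketch (Python's standard winreg
module; assumes Windows, admin rights, and the documented
NoDriveTypeAutoRun and AutoShareWks values - check them against your
own OS version before relying on this):

  # Sketch only: turn off AutoRun processing for every drive type
  # (0xFF sets all the drive-type bits, fixed disks and CDs included)
  # and stop the Server service recreating the hidden admin shares
  # (C$, ADMIN$) at its next restart.
  import winreg

  def set_dword(path, name, value):
      """Create/open an HKLM key and write a REG_DWORD value."""
      with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0,
                              winreg.KEY_SET_VALUE) as key:
          winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)

  set_dword(r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer",
            "NoDriveTypeAutoRun", 0xFF)
  set_dword(r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
            "AutoShareWks", 0)   # AutoShareServer on server versions

Neither setting bites until the relevant session or service restarts,
and neither touches "View As Web Page" scripting - but the point is
that these are one-value policy decisions, not deep engineering.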
As it is, our struggle slogan "an injury to one is an injury to all!"
applies, and that's not in a *good* way.
That's why the situation is spiralling out of control.
The need for corporates to centrally administer PCs was allowed to
override the need for home users to retain the meaning of the word
"home" (a physical location where safety can be assumed).
The need for web sites to manipulate users for marketing purposes was
allowed to override the user's safety needs, and once HTML was the web
standard, no-one had the clue to see why simply using this as-is as a
system-wide "rich text" standard (including email) was a Bad Idea.
The need to spare CD-ROM vendors from having to explain how to "click
Start, Run, enter ?:\RUN where ? is your CD drive letter" left us with
auto-running CDs; you can't see what it is until it's already run itself.
And the data/program distinction is well and truly hosed, to the point
that extensions are hidden and "leaky" (i.e. even if you can see the
.ext, it can no longer be relied upon). So much nicer for programs
and shortcuts to have their own unique icons, even if it means there's
no replacement way to tell what is a program and what isn't.
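A toy sketch of how leaky it has become (Python again; the shell
behaviour is only roughly emulated here, and the file names are made
up apart from the well-known ILOVEYOU attachment):

  # Roughly what a "hide extensions for known file types" shell does:
  # the trailing extension is dropped from the display, so a
  # double-extension trojan looks like the data file it pretends to be.
  import os

  EXECUTABLE = {".exe", ".com", ".bat", ".pif", ".scr", ".vbs", ".js"}

  def displayed_name(filename):
      base, _ext = os.path.splitext(filename)
      return base                    # what the user sees; extension hidden

  for name in ("README.TXT", "LOVE-LETTER-FOR-YOU.TXT.vbs", "SETUP.EXE"):
      _, real_ext = os.path.splitext(name)
      kind = "a program" if real_ext.lower() in EXECUTABLE else "data"
      print(f"{displayed_name(name):24} <- displayed; actually {kind}")

And it cuts the other way too: a program is free to embed a stock
document icon, so the icon is no defence either.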
The frontier is so fuzzy that it's near impossible for the average
user to practice "safe hex". And there is so much "dancing with
wolves" going on that even if the design doesn't give malware a free
backstage pass, there's such a maze of little band-aids to shore up
the frontier that code defect opportunities will abound.
Today's malware mainly exploits bad software design, i.e. the
opportunities presented by the users' inability to assess the full
risk their actions facilitate. I don't expect "reading message text",
"visiting a web site" and "reading a document" to be conferring
programming rights to those entities, but they do.
Today, user and vendor share responsibility for malware outbreaks; the
user, for not practicing "safe hex", and the vendor, for undermining
the user's ability to practice "safe hex".
Tomorrow's malware may focus mainly on code defects rather than simply
leveraging poor software design. In this case, the user's
responsibility is completely bypassed, and the vendor's responsibility
is manifest. Unless we make special rules for the software industry
to let them off the hook ("oh, it's so difficult!" etc.), there is no
question who to blame where a product defect is the cause of the
problem and the user's ability to manage this is sidelined.
The above is the wider context in which to assess the answer to "if
the vendor knows of a defect that's being exploited In The Wild,
should users be informed and advised how to protect themselves?"
------------ ----- --- -- - - - -
Drugs are usually safe. Inject? (Y/n)