Microsoft hunting down exploits

  • Thread starter: P. Thompson
P. Thompson said:
ftp://ftp.research.microsoft.com/pub/tr/TR-2005-72.pdf
A good read.

Is the only way to discover a mal-page to run IE in a virtual
process and see if it breaks?

Can't a mal-page be discovered programmatically by analyzing the code?

Couldn't search engines perform this sort of discovery as part of
their normal function, and potentially build a business model around
protecting users from being directed to those mal-pages?

Why doesn't MS or Google offer an on-line portal that would allow
users to enter a URL path and have the URL analyzed for mal-behavior?
 
Is the only way to discover a mal-page to run IE in a virtual
process and see if it breaks?

Can't a mal-page be discovered programmatically by analyzing the code?

Couldn't search engines perform this sort of discovery as part of
their normal function, and potentially build a business model around
protecting users from being directed to those mal-pages?

Why doesn't MS or Google offer an on-line portal that would allow
users to enter a URL path and have the URL analyzed for mal-behavior?

Imagine the load put on a server to perform such requests :) There is
nothing stopping them from doing it - more hardware would be needed -
but is there a demand out there for this right now? IMHO there never
will be, as there are existing and cheaper alternatives out there!
After all, if the exploit was known then a patch would be available.
How about an (opt-in) ActiveX control on a search engine that would
detect whether a browser was up to date or not? Also, antivirus
software already employs heuristic and generic detection for most
browser exploits as soon as they are known, and some firewalls employ
IDS.

My point is that a system where a website is checked for malicious
scripts is a reactive solution, as a database of exploits needs to be
maintained. A proactive solution, whereby updates are delivered in a
timely manner to patch vulnerabilities, is better. I think Microsoft
Update has improved dramatically in this department. Pity about WGA
though. I have experienced some horror with this. I guess it's just
teething problems.

I think encouraging users to patch their browser and alerting them
if it is not up-to-date is more practical than detecting mal-pages
prior to browsing to them.
 
Virus said:
Is the only way to discover a mal-page to run IE in a virtual
process and see if it breaks?

Can't a mal-page be discovered programmatically by analyzing the code?

2 words... halting problem...

think about it - replace the word "mal-page" with the word "virus"...
can you discover viruses programmatically by analyzing the code? no, not
unless you already know what the particular virus you're looking for
looks like...

it's not possible to algorithmically determine what a program does for
all possible programs just by looking at their code... it doesn't matter
if you're trying to determine if they do something complex like install
malware through IE or self-replicate or if you're trying to determine if
they do something simple like terminate - you can't write an algorithm
that can make that determination in the general case...

and if you can't do the general case that leaves you with handling
special cases - which requires that you know what you're looking for...
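
To make the general-case point concrete, here is the usual diagonal
argument as a toy Python sketch (all names are invented; the "perfect"
detector is a stub, since no real one can exist, which is the point):

    def claimed_perfect_detector(source: str) -> bool:
        """Stand-in for the impossible oracle: True iff the program
        whose text is `source` does something malicious when run."""
        return True  # any concrete implementation must commit to SOME answer

    CONTRARIAN_SOURCE = "<the contrarian program's own source text>"

    def contrarian() -> str:
        # Consult the oracle about ourselves and do the opposite.
        if claimed_perfect_detector(CONTRARIAN_SOURCE):
            return "behaved harmlessly"   # judged malicious -> oracle wrong
        return "did something malicious"  # judged clean -> oracle wrong

    print(contrarian())  # whichever branch runs, the detector erred here

Swap any real detector in for the stub and the contrarian still wins,
leaving only the special-case (known-pattern) detection described above.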
 
kurt said:
it's not possible to algorithmically determine what a program does
for all possible programs just by looking at their code...

What is a web page doing trying to push executable code to a viewer?

Or trying to send contents designed to cause a buffer overrun?

You don't have to analyze executable code to know that the web site is
trying to push something bad back to the viewer.
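
A crude sketch of the kind of content-level check being suggested, with
no executable-code analysis at all. The threshold and patterns below are
invented for illustration, and the replies that follow explain why such
heuristics only catch what they already anticipate:

    import re

    SUSPICIOUS_LENGTH = 4096  # hypothetical cutoff for one repeated-byte run

    def looks_like_overrun_attempt(html: str) -> bool:
        # Very long runs of a single repeated character often pad an overflow.
        if re.search(r'(.)\1{%d,}' % SUSPICIOUS_LENGTH, html):
            return True
        # Script blocks stuffed with long escaped byte sequences (shellcode
        # spelled as %uXXXX escapes) are another crude tell.
        if re.search(r'(%u[0-9a-fA-F]{4}){200,}', html):
            return True
        return False

    print(looks_like_overrun_attempt('<img src="' + 'A' * 5000 + '">'))  # True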
 
Ian said:
Imagine the load put on a server to perform such requests

How many page-views are initiated in the first place by a search?
Quite a few I bet.

How many search engines, in the course of scanning and archiving the
web, make sure that the pages they're cataloging (and will at some
point present to someone) are "safe" and don't contain known exploits?

Why is it that we can scan files for viral/trojan/worm content, but we
can't scan web URLs for their equivalent form of mal-code?

IE (and other browsers) are constantly getting patched for one
vulnerability or another. Why can't they throw up a message saying:

"hey, I've just detected a threat called "(you-name-it)" in the
web page you're trying to view. The protection for this
threat was installed with update patch kb123456 (July 2005).
I'm going to add that site (or entire domain) to my quarantine
list to prevent it from causing harm. Would you like to take
a look at or edit the quarantine list?"

When are browsers going to get smart enough to basically have their
own version of a built-in anti-mal-code scanner and dynamically
maintain their own site-by-site or domain-by-domain quarantine list?
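
As a thought experiment, the quarantine behavior described above might
look something like this. The signature table, its patch note, and the
naive substring matching are all placeholders, since the detection
itself is the hard part:

    from urllib.parse import urlparse

    quarantine = set()  # domains blocked so far, persisted by the browser
    KNOWN_EXPLOIT_SIGNATURES = {  # hypothetical signature table
        "(you-name-it)": "protection installed with update patch kb123456",
    }

    def should_render(url: str, page_html: str) -> bool:
        host = urlparse(url).hostname or ""
        if host in quarantine:
            print(f"{host} is on the quarantine list; not loading.")
            return False
        for name, note in KNOWN_EXPLOIT_SIGNATURES.items():
            if name in page_html:  # real matching would be far smarter
                quarantine.add(host)
                print(f"Detected threat {name!r} ({note}); quarantined {host}.")
                return False
        return True

    print(should_render("http://example.invalid/", "...(you-name-it)..."))  # False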
 
What is a web page doing trying to push executable code to a viewer?

Hmmm, ActiveX, browser plugins, JavaScript, Java applets...
You don't have to analyze executable code to know that the web site is
trying to push something bad back to the viewer.

Actually, you would definitely have to do this.
 
How many page-views are initiated in the first place by a search?
Quite a few I bet.
yes...

How many search engines, in the course of scanning and archiving the
web, make sure that the pages they're cataloging (and will at some
point present to someone) are "safe" and don't contain known exploits?

I have no idea.
Why is it that we can scan files for viral/trojan/worm content, but we
can't scan web URLs for their equivalent form of mal-code?

We can (and do) at the desktop level. There is no market for this type
of scanning. Who would pay for it? Webhosts? Customers? Hackers - ha
ha?
IE (and other browsers) are constantly getting patched for one
vulnerability or another. Why can't they throw up a message saying:

"hey, I've just detected a threat called "(you-name-it)" in the
web page you're trying to view. The protection for this
threat was installed with update patch kb123456 (July 2005).
I'm going to add that site (or entire domain) to my quarantine
list to prevent it from causing harm. Would you like to take
a look at or edit the quarantine list?"

When are browsers going to get smart enough to basically have their
own version of a built-in anti-mal-code scanner and dynamically
maintain their own site-by-site or domain-by-domain quarantine list?

For the same reason they say antivirus software is only as good as its
latest update. Just like virus writers bypass AV software detection by
modifying/changing the code, they would do the same for this concept.
So inbuilt scanners would just be bypassed by the virus writer.
Generic detection and script blockers such as those employed by the
Kaspersky product, along with HTTP stream scanning in NOD32, protect
the user here in a lot of cases. Buffer overflow attempt detection,
present in McAfee VirusScan Enterprise 8.0i, also prevents some
exploits from having their desired effect by blocking the execution.
So the technology is already in place, and for the most part it is
reactive (relying on signatures). Your concept is also reactive and
would be too costly to implement and a pain in the hole to maintain.
Excuse my French.
 
Ian said:
For the same reason they say antivirus software is only as good
as its latest update.

So why do we bother having AV software then? By your reasoning AV
software should never have been developed, because of all the
arguments you just listed. Same with anti-spyware and anti-adware
tools.

The biggest hole right now is that the web browser has no front-end
mal-ware detection or code-handler in it.

Instead, while the browser is mindlessly loading in any URL that it's
pointed to, other processes running in the background have to make
sure that nothing weird or bad is happening. The first line of
defence would be for the browser to have some ability to know that it
is being presented with a KNOWN EXPLOIT and to neutralize it by not
loading or rendering it in the first place (and telling the user about
it).
 
Virus Guy wrote:
[snip]
Why is it that we can scan files for viral/trojan/worm content, but we
can't scan web URLs for their equivalent form of mal-code?

if you know the particular 'mal-code' you're looking for, you can... the
paper pointed to originally describes a method that doesn't depend on
prior knowledge of specific 'mal-code's...

if you have a catalogue of known 'mal-code's then writing a scanner for
them would be trivial (or at least no more difficult than writing a
virus scanner) - the problem is coming up with the catalogue in the
first place... the method described in the paper could do just that...
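
For what it's worth, here is how trivial the scanning half really is
once the catalogue exists; both patterns below are placeholders, not
real exploit signatures:

    CATALOGUE = {
        "example-exploit-1": b"\x90\x90\x90\x90",  # placeholder byte pattern
        "example-exploit-2": b"%u0c0c%u0c0c",      # placeholder byte pattern
    }

    def scan(page: bytes):
        """Return the name of every catalogued pattern found in the page."""
        return [name for name, pattern in CATALOGUE.items() if pattern in page]

    print(scan(b"<script>...%u0c0c%u0c0c...</script>"))  # ['example-exploit-2']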
 
Virus Guy wrote:
[snip]
Instead, while the browser is mindlessly loading in any URL that it's
pointed to, other processes running in the background have to make
sure that nothing weird or bad is happening. The first line of
defence would be for the browser to have some ability to know that it
is being presented with a KNOWN EXPLOIT and to neutralize it by not
loading or rendering it in the first place (and telling the user about
it).

??? do you realize what you're saying?

you want the browser to have an exploit detector and be updated to
detect new known exploits? why not just fix the vulnerability so the
exploit doesn't work anymore?
 
So why do we bother having AV software then? By your reasoning AV
software should never have been developed, because of all the
arguments you just listed.

The USER has control of it. With your concept the user has no control
over it.
The biggest hole right now is that the web browser has no front-end
mal-ware detection or code-handler in it.

Thus the term exploit!
Instead, while the browser is mindlessly loading in any URL that it's
pointed to, other processes running in the background have to make
sure that nothing weird or bad is happening. The first line of
defence would be for the browser to have some ability to know that it
is being presented with a KNOWN EXPLOIT and to neutralize it by not
loading or rendering it in the first place (and telling the user about
it).

No - the first line of defense is a patched browser that is immune to
this. The patch is installed and the vulnerability is null and void!
 
Virus said:
What is a web page doing trying to push executable code to a viewer?

Or trying to send contents designed to cause a buffer overrun?

You don't have to analyze executable code to know that the web site is
trying to push something bad back to the viewer.

a) websites don't push anything... browsing is entirely a pull type of
system...
b) the complexity of analyzing a web page and all its content is no
different from analyzing an executable program...
c) if you knew that X would cause a buffer overflow in the browser, why
wouldn't you just fix the browser so that the buffer overflow couldn't
happen anymore?
 
Is the only way to discover a mal-page to run IE in a virtual
process and see if it breaks?

A virtual process or a whole virtual *machine*?
Takes bloatware to a whole new level.
Couldn't search engines perform this sort of discovery as part of
their normal function, and potentially build a business model around
protecting users from being directed to those mal-pages?

Oops, you forgot to patent that.

Perhaps research.microsoft.com will release a white paper for a procedure
for scouring usenet for good ideas and patenting them.
 
P. Thompson said:
A virtual process or a whole virtual *machine*?
Takes bloatware to a whole new level.

bloatware? i thought you read it - they aren't talking about a client
side application, it's not meant to be run on customers' computers...
 
Virus Guy said:
So why do we bother having AV software then? By your reasoning AV
software should never have been developed, because of all the
arguments you just listed. Same with anti-spyware and anti-adware
tools.

Except that viruses don't as a rule use exploit code, they use desired
functionality to do undesired (by the victim) things. Exploit code
exists because a flaw creates a vulnerability to exploit. Combatting
exploits is done best by addressing the flaws. With the virus, there
usually is no flaw involved to address - so there is little choice but
to attempt to detect them. The signature based detection method is not
the only way to do this though.
 
P. Thompson said:
A virtual process or a whole virtual *machine*?
Takes bloatware to a whole new level.


It is not run on users' machines, ergo the user will never encounter the
"bloatware"... read the article. It basically involves running a network of
VMs on Microsquash servers; some of the VMs are hardened against exploits,
others are not, and I suspect they can be testing patches on others, though
it's not stated in the article. The idea is to drive these VMs to
less-than-desirable sites while the server monitors the actions on the VM to
see if there is anything going on that shouldn't be. If it detects an
exploit, it logs the site and the results of the exploit, then destroys the
VM and restarts a new one so that there is no remnant of the malware when it
goes on to the next site on the list. The log is then reviewed by
flesh-and-blood techs to try to pattern the exploits so they can be patched
in later versions. Really not a bad idea; it lets the guys at MS detect both
known and unknown exploits that have made it out into the wild as malware,
but it will only work if they are QUICK about releasing patches, which sadly
they are not. Which is why I use Firefox as my web browser. However, I
expect more malware writers to target Firefox now that it is in use by
enough people to make it worthwhile.
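
Reduced to a toy Python simulation, the loop described above is roughly
this (the class and its "anomaly" behavior are invented stand-ins; the
real system drives actual Windows VMs with lower-level monitoring):

    class FakeVM:
        """Stand-in for a real VM restored from a clean snapshot."""
        def __init__(self, patched: bool):
            self.patched = patched
            self.anomalies = []

        def browse(self, url):
            # A real run drives a browser; this fake flags one known-bad site.
            if "bad" in url and not self.patched:
                self.anomalies.append("file written outside the browser sandbox")

        def destroy(self):
            pass  # discard the VM so no malware remnant reaches the next site

    def hunt(sites, patched):
        findings = []
        for url in sites:
            vm = FakeVM(patched)  # fresh, clean VM for every site
            vm.browse(url)
            if vm.anomalies:  # behavior-based: catches unknown exploits too
                findings.append((url, vm.anomalies))  # log site + what happened
            vm.destroy()
        return findings

    print(hunt(["http://good.example/", "http://bad.example/"], patched=False))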

It would be great if the "sites using exploits" list could be released to
the major search engines, where the search engine adds a remark or something
to the search results next to each offending result, warning the user
before clicking. But it should not be in the form of an "Are you sure you
want to click this link?" popup or page; it needs to be inline with the
search results, perhaps an icon at the beginning of the result field, and a
"WARNING: this site is known to utilize spyware/malware" tagged on to the
end of the description.

Still no excuse for not running good protection client-side... keep your AV,
keep your spyware detector (preferably several), keep your firewall, etc...
AND KEEP THEM UPDATED!
 
bloatware? i thought you read it - they aren't talking about a client side
application, it's not meant to be run on customers' computers...

Now where did I say it was going to run on customers' computers?

I was expounding on Virus Guy's marveling at their technique, which in the
paper they coyly describe as "fairly expensive".
 
Now where did I say it was going to run on customers' computers?

I was expounding on Virus Guy's marveling at their technique, which in the
paper they coyly describe as "fairly expensive".

I hate to reply to myself, but I can just see the 'in the box thinkers'
out there saying "why, a virtual machine is cheaper than a real one, ain't
it".

So I will clarify: "expensive" in the computer-science sense of high
computational overhead, compared to a possible alternative that Virus Guy
mentioned of a technology which scans the HTML and extrapolates the result
without starting an OS, starting a browser, injecting the mouse-click
events of the typical porn surfer, and then tearing it all down when
something exploits the machine.
 
P. Thompson said:
I hate to reply to myself, but I can just see the 'in the box thinkers'
out there saying "why, a virtual machine is cheaper than a real one, ain't
it".

That depends on a lot of variables - one could run an expensive virtual
machine on a free PC.
So I will clarify: "expensive" in the computer-science sense of high
computational overhead, compared to a possible alternative that Virus Guy
mentioned of a technology which scans the HTML and extrapolates the result
without starting an OS, starting a browser, injecting the mouse-click
events of the typical porn surfer, and then tearing it all down when
something exploits the machine.

This program is not designed to protect a machine, but to identify as
many malware hosts as possible so that further action can be taken if
warranted. As a bonus, it can detect "new" exploit code through behavior
monitoring of "professional" drive-by malware installers' sites which
are quick to add new exploit code to their sites. Virus Guy was talking
about protection, but his scheme would also flag exploits that the
protected machine, if kept up to date, was immune to anyway.
 