Microsoft !exploitable Crash Analyzer

Recently at CanSecWest 2009, Microsoft released its internal !exploitable Crash Analyzer to the general public under the Microsoft Public License (Ms-PL). This tool plugs into the Windows debugger (WinDbg) as an extension and attempts to both uniquely identify program crashes and assign each an "exploitability" rating. Essentially, the end goal of !exploitable is to group crashes by location in code and classify them by severity. Both the CanSecWest presentation and the tool can be found on the Microsoft CodePlex website at:

Where will this be used?

An effective bug and vulnerability management program is one sign of a mature, security-aware product organization. In a perfect world, resources would be unlimited, developers would always have time for proper security training, every line of code would be peer reviewed, there would be time to validate both the design and the implementation before release, products would scale well as new features were added, and there would be plenty of time to perform a full application security assessment and remediate any issues identified. In reality, security vulnerabilities happen. Furthermore, they are not always easily separated from other bugs, especially in the finite amount of time dedicated to remediation. By classifying program crashes as Exploitable, Probably Exploitable, Probably Not Exploitable, or Unknown, this tool aims to help organizations triage their bug reports. These ratings tie in directly to the Microsoft Exploitability Index now included with security bulletins. More information about this index can be found on Microsoft TechNet at:

How does it perform?

After reading about this tool, I was curious as to how well it performs. I ran a couple of binaries through the tool that were compiled using MinGW (GCC for Windows). The binaries with buffer overflow and format string vulnerabilities had stack protections disabled. The following GCC compilation options were used to disable stack protections:
-fno-pie -fno-stack-limit
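For context, the extension is loaded into WinDbg at the point of the crash and then invoked by name. The session below is a rough sketch; the exact path to msec.dll depends on where the CodePlex package was unpacked:

```
0:000> .load msec.dll
0:000> !exploitable
```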

Buffer Overflow, No Stack Protection, MinGW
A stack-based buffer overflow arises when a program stores data beyond the bounds of a fixed-size buffer allocated on the stack. The excess data overwrites adjacent values on the program stack. Typically, an attacker leverages this to control program flow by modifying local variables, function pointers, or the stack frame's return address.

The following excerpt is the vulnerable code from the application used to force a program crash from a buffer overflow condition.
char buf[8];
strcpy(buf, argv[1]);

The following is the rating and information output by the !exploitable tool.
Description: Read Access Violation at the Instruction Pointer
Short Description: ReadAVonIP
Exploitability Classification: EXPLOITABLE
Recommended Bug Title: Exploitable - Read Access Violation at the Instruction Pointer starting at Unknown Symbol @ 0xa3e1dcffff000a (Hash=0x264d5172.0x1e004834)

Format String, No Stack Protection, MinGW
A format string vulnerability arises when a program uses unfiltered input as the format specifier of a formatted output function such as printf or fprintf. An attacker can supply format specifiers of his or her own to write an arbitrary value to an arbitrary location; many attacks overwrite a function pointer to control program flow.

The following excerpt is the vulnerable code from the application used to force a program crash using format string specifiers.
if (argc > 1)
    printf(argv[1]); /* user input used directly as the format string */

The following is the rating and information output by the !exploitable tool.
Description: Read Access Violation
Short Description: ReadAV
Exploitability Classification: UNKNOWN
Recommended Bug Title: Read Access Violation starting at image00400000+0x1327 (Hash=0x575c4810.0x70226436)

Function Pointer Manipulation, Default Stack Protection Enabled, MinGW
I compiled a program in which a user can directly control a pointer to a function. An attacker could leverage this to overwrite funptr with the address of his or her shellcode, thereby executing arbitrary instructions on the machine. This program was compiled using the standard GCC flags, leaving stack protection enabled. Interestingly, the rating changes based on where the function pointer ends up: offsetting it by a small amount (one) elicits a Probably Exploitable rating, while a larger offset (ten) elicits an Exploitable rating.

The following excerpt shows a vulnerable program moving the function pointer to a user supplied location.
int (*funptr)(void) = &function;
if (argc > 1)
    funptr += atoi(argv[1]);
result = (*funptr)();

Adding one to the function pointer will elicit the following rating.
Description: User Mode Write AV near NULL
Short Description: WriteAV
Exploitability Classification: PROBABLY_EXPLOITABLE
Recommended Bug Title: Probably Exploitable - User Mode Write AV near NULL starting at Unknown Symbol @ 0xa3e1dcffff000a (Hash=0xd667a59.0x9444d1e)

Adding ten to the function pointer will elicit the following rating.
Description: User Mode Write AV
Short Description: WriteAV
Exploitability Classification: EXPLOITABLE
Recommended Bug Title: Exploitable - User Mode Write AV starting at image00400000+0x12ea (Hash=0x575c4810.0x70226436)


It is important to note that this tool relies on analyzing program crashes to generate a rating. By definition, that leaves a huge range of attack vectors uncovered: not every vulnerability crashes a program, so !exploitable can never be treated as a "find-all" tool. Even among the conditions !exploitable can analyze, it did not accurately identify the format string vulnerability. That is worrisome, as it raises the question of what other severe problems the tool may miss.

With these limitations in mind, I would treat !exploitable as a guide for raising awareness of crashes that are likely exploitable. I would not rely on it alone to categorize or assign risk ratings to program crashes. Used simply to move a small selection of vulnerabilities to the top of the list, it can be a great asset to an organization. Hopefully, Microsoft will continue to invest in this tool, as it has the potential to become a good weapon in a security organization's arsenal.

This is the blog post that inspired me to look into this myself. That article goes much more in depth on how different protections (such as Microsoft DEP and GS) affect the ratings.
!exploitable Crash Analyzer homepage on Microsoft CodePlex
Microsoft Security Engineering Center homepage

Security PS Adds Team Members In Kansas City

Continuing with more news of growth and expansion, we've added a small army of new team members in the Kansas City location. Welcome to the team Naithan, Amy, Mike, and James!

Press release: Security PS Adds Team Members In Kansas City

Google Client Redirection Vulnerability

As a part of its search functionality, Google creates redirection links that send users to other sites on the Internet. Although the search engine giant has some simple measures in place to prevent tampering with these links, it's possible to create URLs that appear to go to Google but actually send a user to an arbitrary site on the Internet. Consider this example (link will probably no longer work):

That link starts with Google's own domain, but (if you had clicked on it within the first few minutes after it was created) it would actually have taken you to a page I constructed to look exactly like the iGoogle login page. (Don't worry, it doesn't actually capture any information… but it could!)

Although Google would have a hard time preventing me from trying a phishing attack on their users, allowing me to use their own domain as the phishing URL helps increase the potency of my attack tremendously. Basically, they are letting me use their users' trust in the domain against them.

Their mitigation strategy appears to be that they set a timeout on the link (which is why the above example probably won't work). Of course, the most common phishing attacks are propagated through email. Users who are sitting at their computers when they receive an email warning them of a "serious problem with their iGoogle account" might be enticed to log in immediately to check it out.

This vulnerability is obvious enough that I'm betting I'm not the first one to find and report it, but I notified Google just the same. I'll post an update when I have their response.

Twitter XSS/XSRF Worm

Over the weekend, Twitter was attacked by a JavaScript-based worm that spread by using a cross-site request forgery (XSRF) attack to update the Twitter status of anyone who viewed an infected profile. The update included obfuscated JavaScript that spread the attack (unobfuscated version).

A 17-year-old has admitted to creating the attack to promote his website (and out of boredom). While his site will undoubtedly get more traffic, I wouldn't be surprised if he also gets a felony charge for his trouble. Twitter has an explanation of the event, and several blogs have an explanation of the offending JavaScript.

Google Gadget Login Forms = Not Good

If you're not familiar with iGoogle, it's a Google service that lets you create a customizable home page by including gadgets contributed by the user community. These gadgets do anything from displaying the weather to providing news or stock reports. There are even mini Flash games you can play.

This seems harmless enough, but Google's gadget security model gave user-created (and therefore untrusted) gadgets access to your data at Google, such as Gmail, Google Docs, etc. I brought up some concerns with this model at the OWASP conference in San Jose a few years ago. Since then, Google has updated their security model to remove some of the more blatant weaknesses.

Recently, I came across a particular gadget created by the user community. It provides a login form that looks like this:

This gadget uses JavaScript to open an iframe to the eTrade mobile login form:

function display() {
    var destination_url = "";
    var html = '<iframe id="iframed_iframe_id" name="iframed_iframe" border="0" src="' + destination_url + '" marginwidth="0" marginheight="0" width="100%" frameborder="0" height="' + _gadget_height_pref + 'px"></iframe>';
    ...

So, users of this gadget are allowing their eTrade login form to be controlled by some random bozo on the Internet. The author of this gadget could update it at any time to point the form at a malicious site, and the user wouldn't notice a thing.

Now, is it really Google's responsibility to prevent users from using gadgets like this? I say it is. Phishing attacks come down to an issue of education and trust. A user who knows to check the location bar of their browser, look for the lock icon, and so on may still fall for a phishing attack that comes from Google. After all, the location bar shows a Google domain, the certificate is correct, and the lock icon is right where it should be. If Google wants to host user-provided content, they should prevent gadget writers from abusing users' trust in the Google name.