MySpace and the Case Against Input Scrubbing

If you haven’t heard of MySpace.com, it’s safe to say you’re not a teenager. MySpace and other social networking sites have skyrocketed in popularity recently. The site allows users to create a unique homepage and customize it by adding HTML and style sheets. Each user essentially gets to build his or her own mini-website within the MySpace environment.

This creates some interesting challenges for managing site security. While not responsible for user content, MySpace is still obligated to protect users from each other. To accomplish this, MySpace allows users to enter “safe” HTML and style sheet tags but must block “unsafe” tags to prevent attacks like cross-site scripting (XSS). Their solution has been to compile a list of every bad tag or pattern they can think of and try to “scrub” these patterns out of incoming data. So, if you tried to use a JavaScript command like this on your homepage:

<script>alert('xss');</script>

MySpace would recognize the script tags in that pattern as being unsafe and remove them.

The problem with this approach is that there are a huge number of ways to get a script to run in a user’s browser, and MySpace can’t keep up with them all. In fact, a number of vulnerabilities have been reported about this very problem, and these security issues will continue to crop up as long as MySpace attempts to prevent them by scrubbing bad data out of user input. Security PS has knowledge of at least two previously unknown cross-site scripting vulnerabilities on the MySpace site, and we are working with MySpace to address these issues.

As our clients know, the only way to really remove assumptions about incoming data is to positively match it against a very specific pattern. MySpace should define a pattern for each HTML tag they consider safe, match incoming data against these patterns, and deny any input that doesn’t match a known tag’s pattern. By doing this, they would no longer have to keep up with every new attack in existence: anything that fails to match an approved pattern is denied automatically. Until they do this, they will continue to fight a losing battle against attackers.
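
To make that concrete, here is a minimal sketch in Python of the positive-matching idea. The tag list and patterns are ours for illustration only; they are not MySpace’s actual rules.

import re

# Hypothetical whitelist: one strict pattern per tag the site considers safe.
# Every tag in the submission must fully match a pattern or the input is denied.
SAFE_TAG_PATTERNS = [
    re.compile(r"</?b>", re.IGNORECASE),
    re.compile(r"</?i>", re.IGNORECASE),
]

def tag_is_allowed(tag):
    # Positively match one tag against the known-safe patterns (default deny).
    return any(p.fullmatch(tag) for p in SAFE_TAG_PATTERNS)

def validate_markup(user_html):
    # Accept the submission only if every tag matches a safe pattern.
    return all(tag_is_allowed(tag) for tag in re.findall(r"<[^>]*>", user_html))

print(validate_markup("<b>hello</b>"))                    # True: known-safe tags
print(validate_markup("<script>alert('xss');</script>"))  # False: no pattern matches

Note that new attacks are denied automatically because nothing is allowed unless it matches an approved pattern; there is no blacklist to maintain.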

Examining the protection provided by SSL

If you’ve attended one of our web application security classes or seminars, you’ve probably heard us say “SSL does not provide application security.” It’s true. While SSL provides great protection for data traveling across a network, it does nothing to stop most web application attacks.

That isn’t to say we discourage SSL; quite the opposite. You should use SSL any time you need to protect the transmission of user credentials, authentication session IDs, or other confidential content. SSL goes a long way toward making sure your users can still safely use your application even when connecting over untrusted networks.

One question we are frequently asked is “does SSL protect the contents of URLs as well as the contents of web pages?” The answer is yes. If your application stores sensitive information as parameters in the URL, these values are not exposed to anyone other than the user. The same goes for the rest of the HTTP request and response, headers included. A bad guy sniffing network traffic will only see packets containing basic TCP/IP information and an encrypted data payload traveling between the client and web server.

Another question we hear is “when does SSL start protecting transmitted data?” These folks are typically worried that sensitive URL parameters might be exposed if they are included in the very first HTTPS request to the web server. Luckily, these requests are also protected.

Here’s how SSL works: the client performs a basic TCP/IP handshake with the web server, then completes the SSL certificate and key exchange, and only then requests the URL over the now-encrypted channel. It’s nice to know your data can be encrypted from beginning to end using SSL.
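
To make the ordering concrete, here’s a short Python sketch that mirrors those three steps. The host and the sensitive-looking URL parameter are made up for illustration.

import socket
import ssl

host = "www.example.com"

# 1. Basic TCP/IP handshake: an eavesdropper sees only IP addresses and ports.
sock = socket.create_connection((host, 443))

# 2. SSL certificate and key exchange, before any HTTP data is sent.
context = ssl.create_default_context()
conn = context.wrap_socket(sock, server_hostname=host)

# 3. Only now is the URL requested, so the path, query parameters, and
#    headers all travel inside the encrypted channel.
conn.sendall(b"GET /account?ssn=123-45-6789 HTTP/1.1\r\n"
             b"Host: www.example.com\r\nConnection: close\r\n\r\n")
print(conn.recv(120))  # the response comes back over the same encrypted channel
conn.close()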

Have other questions about SSL? Let us know and we will try to answer them in an upcoming blog post.

Don’t let scammers redirect customer anger at you

A recent edition of the RISKS digest reports on the receipt of an interesting phishing email. Like most phishing attacks, the email informs readers that their bank account status (at Barclays Bank in this case) is in jeopardy. To keep the account in good standing, the reader must log into the online banking service, and to facilitate this process they need only click the provided link.

Normally such a link takes readers directly to the attacker’s site, which is configured to impersonate the legitimate bank site. To add legitimacy to the email, attackers often try to obscure the fact that readers are being directed to their site instead of the real bank’s. They may use an IP address, a slight variation of the legitimate site’s DNS domain, an HTML hyperlink, or another method of obfuscation.

But this particular link actually did take readers to the legitimate Barclays Bank site, at least initially. Here is a safe sample link that mimics what was contained in the phishing email:

http://www.barclays.co.uk/cgi-bin/gotosite.cgi?location=%68%74%74%70%3A%2F%2F%77%77%77%2E%73%65%63%75%72%69%74%79%70%73%2E%63%6F%6D

As you can see, the link does point to Barclays Bank. But it requests a CGI script designed to redirect your browser to the URL contained in the ‘location’ parameter, and that URL is percent-encoded to keep you from seeing that the link really looks like this:

http://www.barclays.co.uk/cgi-bin/gotosite.cgi?location=http://www.securityps.com

So, visiting this link does take you to Barclays, but you are immediately redirected (via an HTTP 302 response) to the Security PS Web site. An attacker can use this feature to convince an only slightly savvy reader that the link is safe to follow.
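
If you want to see through this kind of obfuscation yourself, a few lines of Python will extract and decode the parameter:

from urllib.parse import urlparse, parse_qs

# The sample link from the phishing email above.
link = ("http://www.barclays.co.uk/cgi-bin/gotosite.cgi?location="
        "%68%74%74%70%3A%2F%2F%77%77%77%2E%73%65%63%75%72%69%74%79"
        "%70%73%2E%63%6F%6D")

# parse_qs undoes the percent-encoding when it extracts the parameter.
print(parse_qs(urlparse(link).query)["location"][0])
# -> http://www.securityps.com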

So a good question is: why does Barclays support this feature? Barclays may have intended the CGI script for use in site navigation, in which case the location parameter should only ever contain references to other pages on their site. But by failing to actually constrain the parameter, they end up supporting offsite links as well.

Barclays isn’t alone in having its application’s functionality abused by criminals. Both Visa and eBay fell victim to the same issue last year, and both eventually modified their applications once the abuse received public attention.

To their credit, Barclays does try to educate its customers about phishing and other email scams (see http://www.personal.barclays.co.uk/BRC1/jsp/brccontrol?task=articleFWvi2&value=9190&target=_self&site=pfs). They specifically instruct customers not to click on any links in emails purporting to be from Barclays. But try explaining that to an angry customer who thinks your Web site facilitated fraud against them.

While this certainly isn’t a critical risk (stealing credentials by exploiting a cross-site scripting flaw on your site would be much worse), it is important to recognize that the feature has potential for abuse. Find out whether your Web applications have a similar feature. If they do, we recommend that you eliminate the redirect functionality or constrain it along the lines sketched below.
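
As one illustration of the “constrain” option, here is a minimal Python sketch that limits a redirect parameter to same-site relative paths. The function and fallback behavior are ours; a real fix might instead map short tokens to a server-side table of approved URLs.

from urllib.parse import urlparse

def constrained_redirect(location):
    parsed = urlparse(location)
    # Anything with a scheme or host is offsite; also reject protocol-relative
    # URLs like //evil.example, which browsers treat as absolute.
    if parsed.scheme or parsed.netloc or location.startswith("//"):
        return "/"  # fall back to the home page
    return location

print(constrained_redirect("/accounts/summary"))          # allowed through
print(constrained_redirect("http://www.securityps.com"))  # forced to "/"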

Integrating security into the SDLC

Recently I stumbled upon an article by Gary McGraw about integrating security into the development lifecycle without adversely affecting the normal development process. I found it to be a good read: it discusses some of the problems with the security process as it pertains to the SDLC, and how to address them. I’ll share a few of the notes I took from it.

In his article, McGraw observes three phases of organizational maturity from a security perspective:

  • Organizations that don’t fully understand the security problem
  • Organizations that are in a constant “reactive” mode
  • Organizations that are integrating security best practices into their SDLC

McGraw goes on to describe best practices that can help improve software security. The following outlines the process a mature organization might follow:

  1. Perform a code review using static analysis and black-box testing tools.
  2. Perform a security architecture review.
  3. Have penetration testing performed.
  4. Attack the application as an actual intruder would (“risk-based security testing”).
  5. Understand the application’s “use case” scenarios.
  6. Understand the application’s “abuse cases”: the ways an attacker might misuse it.
  7. Ensure that traditional security measures are in place for the environment (firewalls, IDS, monitoring, patching).

Even if not all of the above are followed, McGraw points out that the most important items are the code and architecture reviews: “I think those are the first two that everybody should be doing today. So if you're only going to do two, do those two.”

In closing, McGraw notes that it is important to have the support of both management and developers to ensure a successful secure development lifecycle. That combined support helps drive more secure applications.

To read the article in its entirety, see http://searchappsecurity.techtarget.com/qna/0,289202,sid92_gci1187360,00.html.

Microsoft releases library to help mitigate cross-site scripting

Many web applications today exhibit security vulnerabilities due to a lack of proper input validation and output encoding. No development platform has a foolproof way to provide complete protection from attacks such as parameter manipulation or cross-site scripting (XSS), and even modern, robust frameworks such as Microsoft .NET are no exception.

However, Web applications written with .NET, in a language such as C#, can take advantage of several built-in defenses against input and output vulnerabilities. The validateRequest attribute, for example, forces a .NET application to check incoming requests for signs of script-based attacks.

The validateRequest functionality checks for the presence of patterns containing an angle bracket followed by an alphabetic character. Under many circumstances, this will prevent an XSS attack. However, when values are written dynamically into HTML (inside an existing script block, for example), angle brackets are not needed and an exploit remains possible. Developers may also choose to disable validateRequest, in which case there is no default protection against XSS attacks at all.
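
As a rough illustration (not Microsoft’s actual implementation), a check of that kind might look like the following Python sketch, along with an example of a payload it misses:

import re

# Flag any opening angle bracket immediately followed by a letter or slash.
SUSPICIOUS = re.compile(r"<[A-Za-z/]")

def looks_dangerous(value):
    return bool(SUSPICIOUS.search(value))

print(looks_dangerous("<script>alert(1)</script>"))  # True: request rejected
print(looks_dangerous("';alert(1);//"))              # False: no angle brackets,
# yet this payload can still execute if echoed inside an existing script block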

To aid in mitigating these threats, Microsoft recently released a programming library to help prevent XSS vulnerabilities. The Microsoft Anti-Cross Site Scripting Library transforms certain special characters into their HTML entity equivalents, or into URL-encoded equivalents for values that must be passed in a URL. For example, when < is run through the HTMLEncode() method, it is converted to the numeric character reference &#60;, which the browser safely renders as a less-than sign rather than the start of a tag.

Some scenarios will still permit XSS attacks; HTML encoding alone does not protect values written into links, for instance, so developers should use the URLEncode() method for information sent via the URL. It is therefore critical to apply the library as another layer of data validation and output encoding, not as your only defense.
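
The library itself is .NET-specific, but the underlying idea is easy to demonstrate. Here is the same concept expressed with Python’s standard library rather than the Anti-XSS API:

import html
from urllib.parse import quote

user_input = "<script>alert('xss');</script>"

# HTML-encode before writing a value into page content, so the browser
# displays the text instead of executing it.
print(html.escape(user_input, quote=True))
# -> &lt;script&gt;alert(&#x27;xss&#x27;);&lt;/script&gt;

# URL-encode before placing a value into a link's query string.
print("/search?q=" + quote(user_input))
# -> /search?q=%3Cscript%3Ealert%28%27xss%27%29%3B%3C/script%3E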

.NET programmers who wish to use the library in their applications as part of a defense-in-depth approach can obtain it for free from the Microsoft website.

Web application attacks on the rise

According to statistics gathered by the Web Application Security Consortium and reported by Information Week, attacks against Web applications are on the rise. In fact, if the trend continues to the end of the year, 2006 will be the worst year on record for Web application security breaches. According to the article, this is happening for two reasons:

1. The prevalence and availability of tools that make it easier to find and exploit vulnerabilities in Web applications.
2. Web applications often aren't designed with security in mind.

There are even more reasons for this trend than those covered in the article, such as the emergence of worms and other automated attacks that target vulnerabilities in Web applications. Furthermore, knowledge of Web application attacks is becoming commonplace, reducing the average attacker's reliance on tools. Many attackers now need only a browser to wreak havoc in a poorly designed Web application.

The latter point, however, is the crux of the problem. Web applications that weren't designed with security in mind are far more likely to have problems later on. Even if those problems are discovered before they make the news, retrofitting an application with security controls is costly and difficult. On the other hand, when security is incorporated into the software development lifecycle from the beginning, the application has fewer vulnerabilities and is much less likely to end up in the news because of an intrusion.

Google spider deletes application content

A recent item in the news (http://www.thedailywtf.com/forums/65974/ShowPost.aspx) reminds us of two important Web application security tips:
1. Don’t fail into an insecure mode by default.
2. Be careful running automated spidering software against your applications.

This story took place during development of a Web content management application. One morning the dev team came in to find that all of the content had been erased. An investigation traced the deletions to an IP address belonging to one of Google’s Web spidering servers. Logs revealed that the spidering software had been indexing the site when it came upon a link for content editing. Like a good spider, it followed the link.

Application access controls should have required authentication at this point, stopping the spider from anonymously changing anything on the site. Instead, this particular application assigned a cookie parameter named “isLoggedOn” with a default value of “false”. Once a user authenticated, the app changed this value to “true”. Unfortunately, the application only denied access if the value was set to “false”; any other value, or the absence of the value altogether, would permit the requested operation.

As you may have guessed, the Google spider successfully entered page-editing mode because it never accepted the original cookie from the application, and it thus passed the badly written authorization test. Once in edit mode, it dutifully continued following all links, including the “Delete Page” option. Any curious hacker could have done the same thing.

Obviously the problem could have been prevented by better programming. Applications should authorize only those requests accompanied by legitimate authentication credentials (like a unique session ID), never treating the absence of a value as proof of identity.
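
Here is a minimal Python sketch contrasting the fail-open check described above with a fail-closed version. The isLoggedOn logic mirrors the story; the session store is a hypothetical stand-in for whatever mechanism the application actually uses.

# The flawed check fails open: anything other than the literal string
# "false" (including no cookie at all) counts as logged on.
def is_authorized_flawed(cookies):
    return cookies.get("isLoggedOn") != "false"

# A fail-closed version: authorize only when a server-side session store
# recognizes the presented session ID.
VALID_SESSIONS = {"d41d8cd98f00b204": "alice"}  # hypothetical session store

def is_authorized(cookies):
    return cookies.get("sessionId") in VALID_SESSIONS

print(is_authorized_flawed({}))  # True: this is how the spider got in
print(is_authorized({}))         # False: no valid credential, no access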

The story also acts as a good reminder that spidering a web application can have unintended consequences. If you use automated vulnerability scanning software that spiders content during testing, it can cause the same kind of damage. This is why Security PS assessments include time for us to manually walk through applications and identify dangerous links, such as one that logs the user out or deletes an account. These links can then be placed on an exception list so any subsequent spidering avoids them.

Which is nice, because it allows you to spend time doing something more exciting than running to grab the latest backup tape for your server.

Welcome to the Security PS Blog

In security assessment after security assessment, we find that organizations that focus on educating employees about security threats and countermeasures do far better than those that don’t. To support this effort, we introduce the Security PS blog.

Our goal for this blog is to supplement our other forms of client communication (like our quarterly newsletter and training sessions) with more frequent tips. Entries will range from links to newly released security reports and standards, to brief commentary on computer attacks covered by the media.

As always, we’re interested in your feedback on blog entries or requests for more information. Feel free to contact us at info@securityps.com or visit our website at www.securityps.com.