Forms Authentication Token Termination in ASP.NET WCF Services

In my last post (Session Fixation & Forms Authentication Token Termination in ASP.NET), I talked about ways to mitigate two types of session-related vulnerabilities in an ASP.NET MVC 4 application. One of these vulnerabilities is also present in many WCF web services. In one mode of operation, WCF web services can authenticate users and issue forms authentication cookies. Since this token contains an encrypted set of values and resides only on the client-side, the server cannot choose to invalidate that token and end a user’s authenticated session. This allows attackers to continue using stolen tokens, even after the user logs out.

One solution for fixing this vulnerability is to issue an ASP.NET_SessionId cookie and to tightly couple it with the forms authentication cookie as described previously. Whenever a web service request is issued, the ASP.NET_SessionId value should be used to look up the username in the session store and to confirm that it matches the value stored in the forms authentication token. This approach is sound; however, my implementation is experimental. I’m not a WCF or ASP.NET expert.
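The check itself is conceptually small. As a language-neutral illustration (the actual implementation is C#/WCF; the dictionary standing in for the ASP.NET session store and the function names here are mine, not the framework’s):

```python
# Illustrative sketch of the per-request coupling check. In the real
# application this runs server-side in C#; session_store stands in for
# ASP.NET's server-side session state.

# Hypothetical server-side session store: session id -> session variables.
session_store = {}

def validate_request(session_id, forms_auth_username):
    """Return True only if a server-side session exists for this id and
    its stored username matches the identity carried in the forms
    authentication token."""
    session = session_store.get(session_id)
    if session is None:
        return False  # no server-side session: the token alone is not enough
    return session.get("username") == forms_auth_username
```

Because logout deletes the server-side session entry, a stolen forms authentication token fails this check as soon as the victim logs out, even though the token itself is still cryptographically valid.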

In my first attempt to implement this model, I tried using the built-in Windows Communication Foundation Authentication Service (System.Web.ApplicationServices.AuthenticationService). There were some critical modifications I needed to make to this service for it to satisfy all my security needs; however, due to its design and scope, I couldn’t find a good way to extend it or to use a decorator pattern to wrap it. Instead, I chose to write my own authentication service. The code can be found below:


Next, I extended the ServiceAuthorizationManager class to provide the capability to validate users’ session and forms authentication cookies for web service calls. It ensures the user has authenticated, and that the identity in the session store matches the identity in the forms authentication token.


Finally, in the web.config file, I created two service behaviors: one for unauthenticated access to the login service (anonymousServiceBehavior), and one for authenticated access to all other web services (authenticatedServiceBehavior). Then, in each service definition, I applied the appropriate behavior using the behaviorConfiguration attribute.


The result is that all WCF calls to the IngredientsService and the ShoppingListService require authentication and ensure each user’s forms authentication token is tightly coupled with the ASP.NET_SessionId.

Session Fixation & Forms Authentication Token Termination in ASP.NET

ASP.NET applications commonly have one or more vulnerabilities associated with the use of ASP.NET_SessionId cookies and forms authentication cookies. This article briefly discusses those common vulnerabilities and explains one method of mitigating them in an ASP.NET MVC 4 application. Explanations of the exploits are not included, but I linked many of the keywords to OWASP or MSDN articles that provide more details. Security best practices for session cookies, and for the use of sessions in general, are provided in the OWASP Session Management Cheat Sheet.

ASP.NET_SessionId cookies and forms authentication cookies can be used alone or together to maintain state with a user’s browser. Each cookie works a little bit differently. The ASP.NET_SessionId cookie value is an identifier used to look up session variables stored on the server-side; the cookie itself does not contain any data. The forms authentication cookie, named .ASPXAUTH by default, contains encrypted data, stored only on the client-side. When it is submitted in a request to the server, it is decrypted and used by custom application code to make authorization decisions.

ASP.NET_SessionId Alone: Session Fixation
There are three common ways to use these cookies that result in risk. First, when the ASP.NET_SessionId cookie is used alone, the application is vulnerable to session fixation attacks. The root cause of this vulnerability is that the ASP.NET_SessionId cookie value isn’t changed or regenerated after users log in (or cross any kind of authentication boundary). In fact, Session IDs are intentionally reused in ASP.NET. If an attacker steals an ASP.NET_SessionId prior to a victim authenticating, then the attacker can use the cookie value to impersonate the victim after he or she logs in. This gives the attacker unauthorized access to the victim’s account.

Forms Authentication Cookie Alone: Can’t Terminate Authentication Token on the Server
Second, when a forms authentication cookie is used alone, applications give users (and potentially attackers) control over when to end a session. This occurs because the forms authentication ticket is an encrypted set of fields stored only on the client-side. The server can only request that users stop using the value when they log out. The ASP.NET framework does not have a built-in feature to invalidate the cookie on the server-side. That means clients (or attackers) can continue using a forms authentication ticket even after logging out. This allows an attacker to continue using a stolen forms authentication token even after the victim logs out to protect him or herself.

Loosely Coupled ASP.NET_SessionID and Forms Authentication Cookies: Still Vulnerable
Lastly, applications can combine both strategies and use forms authentication and sessions together. In this arrangement, the forms authentication cookie is used for authentication and authorization decisions, and the session cookie is used to store additional state information. In practice, I’ve seen roughly a 50/50 split between applications that retrieve identity information from session variables and those that use the forms authentication ticket. Either way, the ASP.NET framework does not explicitly couple a specific forms authentication cookie to an ASP.NET_SessionId. Any valid forms authentication cookie can be used with any other valid session cookie. Depending on the implementation, this results in a session fixation vulnerability (for the ASP.NET_SessionId cookie), the inability to terminate authenticated sessions on the server side (for the forms authentication cookie), or both vulnerabilities.

One Possible Solution: Tightly Couple ASP.NET_SessionIDs to Forms Authentication Identities
There are a variety of potential solutions to mitigate these risks. One of those solutions is to use both the ASP.NET_SessionId cookie and a forms authentication cookie, AND to tightly couple them using the user’s identity as the link. In an application that uses forms authentication, this means that the identity of the user is stored in session variables (you have to do this manually) AND the forms authentication ticket (occurs through the normal use of forms authentication). Then, on each request to the application, the identity associated with each cookie should be compared. If they do not match, invalidate the user’s session and log them out.

This solution prevents session fixation by ensuring that an ASP.NET_SessionId cookie (vulnerable to session fixation) MUST be coupled to the user’s own forms authentication token (not vulnerable to session fixation) rather than just any individual’s forms authentication token. Additionally, it allows forms authentication tokens to be indirectly invalidated on the server-side by destroying the session associated with them. Since both cookies have to be present and linked by the user’s identity, each protects against the weaknesses of the other.

Solution Implementation in an ASP.NET MVC 4 Application
I created an ASP.NET MVC 4 application to test this solution and to be able to demonstrate code for it. My goal was to create a global function that would execute for every controller and action to ensure the user identity referenced in session variables matched the one stored by the forms authentication ticket. For unauthenticated users, the session should not reference a user identity at all. If either of these two conditions is violated, the user is logged out, their session is destroyed, and they are redirected back to the login page.
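In outline, the decision logic is as follows (Python used purely for illustration; the real implementation is a C# action filter, and these function and parameter names are hypothetical):

```python
# Sketch of the per-request decision: compare the identity in the forms
# authentication token against the identity stored in session variables.

def check_request(is_authenticated, token_username, session_username):
    """Return 'ok' when the request is consistent with the coupling rules,
    'logout' when it is not."""
    if not is_authenticated:
        # Unauthenticated users must not have an identity in session.
        return "ok" if session_username is None else "logout"
    # Authenticated users must have matching identities in both places.
    if session_username is not None and session_username == token_username:
        return "ok"
    return "logout"
```

A "logout" result corresponds to the behavior described above: sign the user out, abandon the session, and redirect to the login page.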

I wrote CoupleSessionAndFormsAuth.cs, an MVC Action Filter Attribute, to accomplish the validation. The comments in the code explain the filter’s goals and how it works.


Next, I added the filter to the global filters list, within Global.asax.cs, so it would be executed for every controller and action.


Lastly, I added one line of code to the AccountController to ensure the user’s identity was added to session variables when the user logs in.

Burp Suite Plugin: WCF Binary SOAP Scanner Insertion Point

In a previous post, I showed how the Burp Suite API can be used to view and modify WCF Binary SOAP messages to assist with manual testing and analysis.  Next, I wanted to allow Burp to perform automated scans on WCF Binary SOAP requests. This post demonstrates use of the Scanner Insertion Point Provider to accomplish that goal.

There are two roles that the plugin must fulfill.  First, the plugin must identify potential insertion points for scanner payloads. Second, the plugin must accept attack payloads from Burp and construct valid requests containing them. Insertion points are identified by decoding the WCF Binary SOAP message (into an XML format) and iterating through the DOM to identify nodes that contain text. Each of these nodes is then sent back to Burp as a WCFBinaryInsertionPoint instance. Once Burp is ready to scan the URL, it passes attack payloads to each instance of WCFBinaryInsertionPoint it received.  The original request is then decoded, the payload is inserted into the correct XML node, and the request is re-encoded and returned to Burp.
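Stripped of the Burp extender plumbing, the insertion-point logic reduces to finding text nodes in the decoded XML and rebuilding the message with a payload placed in one of them. A simplified, self-contained sketch of that core idea (standard-library XML only; the real plugin operates on decoded WCF Binary messages through Burp’s IScannerInsertionPoint interface):

```python
import xml.etree.ElementTree as ET

def find_insertion_points(xml_text):
    """Return the indices (in document order) of elements whose text
    could hold a scanner payload."""
    root = ET.fromstring(xml_text)
    points = []
    for index, node in enumerate(root.iter()):
        if node.text and node.text.strip():
            points.append(index)
    return points

def build_request(xml_text, point_index, payload):
    """Re-create the message with the payload placed in the chosen node."""
    root = ET.fromstring(xml_text)
    for index, node in enumerate(root.iter()):
        if index == point_index:
            node.text = payload
            break
    return ET.tostring(root, encoding="unicode")
```

In the actual plugin, the rebuilt XML is then re-encoded into the WCF Binary format before being returned to Burp.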

I verified the solution by chaining two instances of Burp together and watching scanner traffic.  In the second Burp instance, I could watch WCF Binary SOAP requests being sent to the server containing the attack payload in the correct XML node.  I then ensured that the response did not contain an exception indicating that the server could not understand the message. One item I did not verify was whether or not Burp could successfully identify vulnerabilities in the responses, since they are still in a WCF Binary SOAP format.  My guess would be that any signatures that match error messages like “System.Data.SqlClient.SqlException: Unclosed quotation mark” would still succeed. Results could likely be improved by registering an HTTP Listener within Burp that automatically decodes responses for the Scanner when they have a content type of “application/msbin1”.

Previously, I mentioned that stack traces for run-time exceptions in Burp only referenced obfuscated class names. Before writing this plugin, I upgraded to the latest version (1.5.04 at the time) and noticed that this has changed.  Stack traces are now very helpful and greatly reduce the amount of debugging time required.

The plugin code is available as a gist at:

Previous Burp Plugin: Burp Suite Plugin: View and Modify WCF Binary SOAP Messages

Non-Negotiable Elements of a Secure Software Development Process: Part 3 - Validation Criteria

In September, I gave a presentation focused on helping quality assurance professionals understand how they fit into a secure software development process (SSDP) and how they can take an active role in improving software security.  In that presentation, I discussed essential elements that make up a successful SSDP.  These elements are: security requirements (expectations); secure architecture, configuration, and coding patterns (how to satisfy an expectation); and validation criteria (verification that expectations have been met).  These elements allow an organization to be transparent regarding its security goals and performance.  They also facilitate communication with customers, developers, managers, and other project stakeholders.

This article is part 3 in the Non-Negotiable Elements of a Secure Software Development Process series and focuses on defining validation criteria. In part 1 of the series, we discussed how security requirements set clear and reasonable expectations that development teams can plan for and meet to satisfy a specific level of security assurance.  Part 2 discussed how to use secure architecture, configuration, and coding patterns to satisfy security requirements and reduce the ongoing cost of developing secure code. All three articles are listed below:

Part 1: Security Requirements

Part 2: Secure Architecture, Configuration, and Coding Patterns

Part 3: Validation Criteria

Why Validation Criteria?
Many organizations measure the security of their applications and infrastructure using assessments.  They might use application security assessments, penetration tests, design reviews, threat models, or many other fault-finding activities.  These can be good risk indicators, and they are often important activities to include within an application security program, but they fall short of actually telling us whether an application is secure. They focus on finding problems, telling us when an asset is insecure, and remain silent about everything else.

With security requirements and secure architecture, configuration, and coding patterns (secure patterns) declared, we now have a list of positive characteristics about the application that we would like to affirm. If we can validate these elements, then we gain a more comprehensive understanding of how our application is secured.  We can then focus on using assessments to identify missing security requirements and improve our overall process as a whole, rather than a single application.

What to Validate?
If we try to validate the code in a vacuum, we will fall back into the same old process:
  1. Use a code review tool
  2. Find problems
  3. Fix them
  4. Repeat
Instead, we need to validate that we have satisfied the security requirements defined for a specific application.  This simplifies and narrows our validation scope and approach, because the entire team should be using the predefined secure patterns to satisfy requirements.  Each secure pattern should be linked to specific validation criteria (including test cases, desired results, and the reporting format).  Before the development team ever implements a secure pattern, these criteria must already be defined.

Validation Criteria Approaches
Validation steps can focus on testing code, application behavior, or both.  Which is more effective depends on the security requirement or secure pattern being tested. For example, verifying the code/configuration of an application to determine whether it uses SSL/TLS would fail to test whether the content delivery network (CDN) requires SSL/TLS.  In this case, testing the behavior of each infrastructure element would be more effective than testing the code or configuration.  On the other hand, validating that an application uses only prepared statements is more effective in code. The slides at the end of the presentation demonstrate this further.

Additionally, validation criteria can affirm controls are in place (positive testing) or find deviations from the defined secure pattern (negative testing).  Positive testing should be the primary approach used for all validation criteria.  Negative testing should simply be used to provide additional confidence in test results.

Test Case Techniques
Teams can define test cases within their validation criteria in a number of ways.  A few examples are listed below.  Each is demonstrated in the presentation slides.
  • Manual test cases
    • Step by step instructions
  • Automated test cases
    • Grep or Findstr
    • Selenium
  • Exploratory Testing
  • Security testing tools
    • Free/Open Source
    • Commercial (Not Recommended!)
I recommend starting simply and then adding automation and sophistication as the need grows.  If teams start out with manual test cases, they can gradually acclimate to new requirements and the process as a whole. Eventually a large number of requirements will be defined and it will be too time consuming to manage and test all of them manually.  Automation will be the natural next step.  Teams can start out using a simple grep tool and Selenium to complete most automated test cases. Custom scripting may follow.
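As an example of the grep-style automation, a negative test case can be a few lines of script. The sketch below (Python; the banned-pattern regex is purely illustrative and not exhaustive) flags source lines that look like SQL built by string concatenation instead of the team’s approved parameterized pattern:

```python
import re

# Hypothetical negative test case: flag lines that appear to concatenate
# user input into a SQL string. Real rules would be tuned to the team's
# language and coding conventions.
BANNED = re.compile(r'"\s*SELECT .*"\s*\+', re.IGNORECASE)

def scan_source(lines):
    """Return (line_number, line) pairs that deviate from the pattern."""
    return [(n, line) for n, line in enumerate(lines, start=1)
            if BANNED.search(line)]
```

A check like this runs in seconds across a codebase, which is why grep/Findstr-based cases are a practical first step before investing in heavier tooling.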

I don’t recommend starting with commercial automated security testing tools like static code analyzers or dynamic application testing platforms.  Teams often spend a lot of time and money getting them set up and are disappointed with the results.  Before investing in one of these tools, it's important to have clear goals, realistic cost and personnel expectations, and a strategy for how the tool will fit in with a larger application security program. If the team is moving toward using a commercial security tool, consider starting with free tools first, like FxCop for .NET applications, and write your own custom rules.

The key advantage to writing your own manual or automated test cases is that you can customize them for the application and its business context. This isn't something a commercial, general purpose security scanning/code review tool is designed to do.

Once a team has a fairly comprehensive set of security requirements, is effective at implementing its secure patterns, and desires to grow further, exploratory testing can become valuable.  Exploratory testing should be used to identify vulnerabilities that do not yet have a defined security requirement.  This testing approach improves the overall secure software development process, expands the list of requirements, and improves all of the organization’s applications - not just one single application.

How to Win at Software Security
If all of these components are put in place:
  1. Security and Compliance Requirements
  2. Secure Architecture, Configuration, and Coding Patterns that Satisfy the Requirements
  3. Security Test Cases and Validation Criteria that Affirm Patterns are Implemented
AND you have a specific well defined business goal you are trying to satisfy with these requirements, you can provide compelling evidence that your application is "secure enough" by the standards or goals you have set.

Burp Suite Plugin: View and Modify WCF Binary SOAP Messages

Microsoft’s WCF Web Services have a binary encoded SOAP messaging mode available that Silverlight, WPF, and other thick client applications can use to communicate with an application server.  This format cannot be digested natively by Burp Suite, making it time consuming to analyze requests and responses. This post describes how the new extension API for Burp was used to overcome this challenge.

Several years back, Brian Holyfield wrote a plugin to add support for binary SOAP messages, but the extension framework for Burp at that time was limiting.  He was forced to use two Burp instances to accomplish encoding and decoding.  Now that a new extension framework has been released for Burp, I have reused some of his code and the NBFS.exe .NET console application to encode and decode WCF binary SOAP requests in a single Burp tab. The code and several screenshots can be found below.

WCF Binary SOAP Request:

SOAP Binary -> XML Request Body:

The plugin code is available as a gist at:

Previous Burp Plugin: New Burp Suite (>= 1.5.01) Extensibility and an Example Editor Tab Plugin

Practical Analysis of New Password Cracker

Just before the holidays, I saw a press release regarding some state-of-the-art hash cracking hardware and the headlines made it sound like it was a big deal:
“New 25 GPU Monster Devours Passwords In Seconds”
Well, along with my general interest in hashing and cryptography, we had discussed salted-MD5 password hashing that week with a client. The press release provided an interesting aside to that discussion, so I did some analysis on the benchmarks from which we could draw practical conclusions about its impact. In this article, I’ll provide a high-level overview of why you should be moving from salted hashes to Key Derivation Functions (KDFs), and then plunge into the mathematics and information theory analysis I performed so we can all understand exactly what these benchmarks mean from a practical point of view.

The High-Level

After all is said and done, hash algorithms like MD5, SHA-1, SHA-256, SHA-512, and SHA-3 are designed to be as fast as possible while not producing collisions or reversible outputs. Speed may sound like a good thing, but it’s a problem for password hashing. Speed works in the attacker’s favor because his job is to recompute hashes one-by-one until he lands on the correct value. And while the complete keyspace of even MD5 is well beyond the realm of plausible brute-force attack, the limited keyspace we tend to use makes brute-force attacks extremely plausible. Moving from MD5 to SHA-3 will increase your possible keyspace, but if you’re still using 8-character passwords, the only security benefit you get is that SHA-3 is marginally slower than MD5.

Well, custom password cracking machines like the one in the article will keep getting faster, and we likely still have 6-12 character passwords. There are strong arguments for coming up with something other than passwords for authentication purposes, and people are working on that, but there is a current solution to the brute-force problem. For those of us who still have passwords, the cryptographic solution is to use a key derivation function, like S/I S2K (PGP), PBKDF2 (RSA), BCrypt (OpenBSD), or SCrypt (Tarsnap), in place of a salted hash. All of these functions are designed to be deliberately slow in order to thwart brute-force calculation. The idea is that the authentication system only has to compute the hash once, given the correct password, but the attacker has to compute the hash, literally, 6 million billion times.
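To make "deliberately slow" concrete, here is a minimal PBKDF2 sketch using Python's standard library (the salt size and iteration count are illustrative; tune the iteration count to your own hardware and threat model):

```python
import hashlib, hmac, os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a slow, salted hash suitable for password storage."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    _, _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, expected)  # constant-time compare
```

Each verification costs the server one slow derivation, but it multiplies the attacker's per-guess cost by the iteration count, which is exactly the asymmetry the article's benchmark numbers ignore.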

Mathematics & Information Theory

The headline of the article refers to LM (Microsoft’s LAN Manager) hashes, but as I mentioned above, I was more interested in the MD5 benchmark. If you’re interested in the other benchmarks, I’ve provided all of the formulas and logic you’ll need to perform the same analysis against them. The benchmark the author provides for MD5 is 180 billion (180x10^9) guesses per second. If the entire keyspace of MD5 were used, meaning all 2^128 possible values, the attacker would face 2^128 possible hashes to calculate for every password. To determine the relationship between 180x10^9 and 2^128, we need to do some math, which I’ll go over in the next section.

Practical Speed Analysis

First, we’ll simplify everything greatly by converting the base 2 number (2^128) into a base 10 number because it’s easy for most people to convert other numbers to base 10 notation in their heads for comparison.

Here is the formula:
Let x be the starting exponent, a the starting base, b the target base, and y the resulting exponent:
y = x*ln(a)/ln(b)

To convert 2^128 to base 10, we do: 128*ln(2)/ln(10) = 38.5
Therefore, 2^128 = 10^38.5 = 1x10^38.5 = 1e38.5

So, the password cracker can perform 180x10^9 hashes per second, and it has to calculate 1x10^38.5 hashes for every password. The following calculation shows how long that will take:

1x10^38.5 / 180x10^9 ~= 1.75x10^27 seconds

Given 1 year = 3.15569x10^7 seconds
1.75x10^27 / 3.15569x10^7 ~= 5.5x10^19 years

So, with state-of-the-art custom hardware, it would take 5.5x10^19 (tens of billions of billions) years to crack one salted MD5 hash. That seems pretty secure, even allowing for the fact that, on average, the correct value is found after searching only half the keyspace. It WOULD be, if the entire keyspace could legitimately be used. Unfortunately, the passwords that people can effectively type into a prompt are pretty limited.

Entropy & Other Limitations

A standard ASCII character consumes 1 byte (8 bits), and each byte can theoretically hold 2^8 (256) unique values. Because each character consumes 8 bits, a 16-character password spans the same 128 bits as the MD5 keyspace. To figure out the work needed to guess all of the combinations, we’ll consider the typeable characters: a-z, A-Z, 0-9, and 32 special characters, for a total of 94 possible values per byte. 94 possible values is VERY generous. In practice, I see only 4-6 permitted special characters, for a total of 68 possible values per byte, but we’ll work with 94 to analyze the best-case scenario. So, in reality, rather than having to try 2^128 possible values, we only have to try 94^16 (~2^105) possible values.

On top of the ASCII entropy limitation, people have a hard time remembering passwords, so most password policies allow between 6 and 12 characters. This further reduces the work from 94^16 to between 94^6 and 94^12.

For comparison, the conversions are:
6 Characters: 94^6 = 2^(6*ln(94)/ln(2)) ~= 2^39 ~= 7x10^11
8 Characters: 94^8 = 2^(8*ln(94)/ln(2)) ~= 2^52 ~= 6x10^15
10 Characters: 94^10 = 2^(10*ln(94)/ln(2)) ~= 2^65.5 ~= 5x10^19
12 Characters: 94^12 = 2^(12*ln(94)/ln(2)) ~= 2^79 ~= 5x10^23

Due to practical limitations, the key space of a 128 bit hash has been reduced from 1x10^38.5 to between 7x10^11 and 5x10^23. If we factor in fewer special characters, it is much lower.

Final Analysis

Recall that the hash cracker is capable of generating 180x10^9 MD5 hashes per second. Below is the table of time to crack a salted MD5 password with this technology:

6 Characters: 7x10^11 / 180x10^9 ~= 3.9 seconds
8 Characters: 6x10^15 / 180x10^9 ~= 9 hours
10 Characters: 5x10^19 / 180x10^9 ~= 9 years
12 Characters: 5x10^23 / 180x10^9 ~= 88 thousand years
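These figures are easy to recompute. A short Python sketch, using the exact 94-character keyspace rather than the rounded powers of ten:

```python
RATE = 180e9                    # benchmark: MD5 guesses per second
SECONDS_PER_YEAR = 3.15569e7

def time_to_crack(charset_size, length):
    """Worst-case seconds to exhaust a charset_size^length keyspace."""
    return charset_size ** length / RATE

for length in (6, 8, 10, 12):
    years = time_to_crack(94, length) / SECONDS_PER_YEAR
    print(f"{length} characters: {years:.3g} years")
```

Using the exact values (94^12 rather than the rounded 5x10^23), the 12-character case works out to roughly 84,000 years - a long time, but tens of thousands of years rather than millions.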


So, there you have it. If you are using 6 or even 8 character passwords, your salted MD5 hashes are well within the practical realm of being cracked if they were to fall into the hands of an attacker.

As I mentioned above, moving to a different fast hashing algorithm isn’t the answer as all of them will eventually be in the same boat. Consider moving to a key derivation function for your password hashing needs.

Non-Negotiable Elements of a Secure Software Development Process: Part 2 - Secure Architecture, Configuration, and Coding Patterns

In September, I gave a presentation focused on helping quality assurance professionals understand how they fit into a secure software development process (SSDP) and how they can take an active role in improving software security.  In that presentation, I discussed essential elements that make up a successful SSDP.  These elements are: security requirements (expectations), secure architecture, configuration, and coding patterns (how to satisfy an expectation), and validation criteria (verification that expectations have been met).  These elements allow an organization to be transparent regarding its security goals and performance.  They also facilitate communication with customers, developers, managers, and other project stakeholders.

This article is part 2 in the series discussing non-negotiable elements of a secure software development process. In part 1 of the series, we discussed how security requirements set clear and reasonable expectations that development teams can plan for and meet to satisfy a specific level of security assurance.  This article focuses on secure architecture, configuration, and coding patterns that equip development teams to meet those requirements.
All three articles are listed below:

Part 1: Security Requirements

Part 2: Secure Architecture, Configuration, and Coding Patterns

Part 3: Validation Criteria

What are Secure Architecture, Configuration, and Coding Patterns?
Secure architecture, configuration, and coding patterns are language-specific implementations of code, frameworks, configuration, and application designs that satisfy a security requirement.  They provide development teams with positive examples and instructions for successfully adhering to security practices without requiring every developer to be a security expert.

For example, if a team chose to use Hibernate as its data persistence layer, a secure pattern would demonstrate how the team should define domain objects, map those objects to the database, and securely retrieve objects from the database programmatically. Specific instructions, code and configuration examples, and discussion should be provided in this pattern to ensure the reader understands the proper implementation.  One element this pattern would include is how to retrieve objects programmatically using a parameterized Hibernate query, shown below.

Query safeHQLQuery = session.createQuery("from Inventory where productID=:productid");
safeHQLQuery.setParameter("productid", userSuppliedParameter);

A similar approach could be used for ASP.NET parameterized queries without the use of an ORM framework:

string sql = "SELECT * FROM Customers WHERE CustomerId = @CustomerId";

SqlCommand command = new SqlCommand(sql);

command.Parameters.Add(new SqlParameter("@CustomerId", System.Data.SqlDbType.Int));
command.Parameters["@CustomerId"].Value = 1;

Examples and instructions should cover all relevant cases of the pattern. In the examples above, this would include INSERT, UPDATE, DELETE, SELECT, and stored procedure calls.

Once these patterns have been defined and accepted, the lead developer should communicate them to the rest of the team and train members in how to apply them successfully.

In general, architecture and configuration patterns will be far more efficient and effective than coding patterns.  If the architecture and configuration of the application are secure by default, or force developers to adhere to coding conventions that are secure by default, then fewer mistakes will be made and less time will be spent writing secure code.  As a general recommendation, try to satisfy security requirements by choosing secure designs, frameworks, libraries, services, or configurations.  If those options aren’t available, then define specific coding patterns for the team to implement.  If a coding pattern will be written many times, consider writing a reusable module to implement it.

The Cost of Writing Secure Patterns
I want to briefly discuss the cost associated with this approach. The implementation of secure architecture, configuration, and coding patterns is the most expensive SSDP element. This cost is significantly greater than the cost of developing the original security requirements. Security requirements are practices that can be defined once and applied to all projects in the organization; whereas patterns must be defined for each group of projects that use similar technologies. To write security requirements, an individual with a security background is necessary; secure patterns, however, may require a developer with a deeper understanding of security in order to select optimal solutions to satisfy requirements. Ideally, this upfront cost greatly reduces the ongoing cost of writing secure code.

In some cases, an organization may not have the expertise necessary to create security requirements or secure patterns.  In these cases, teams can use outside resources such as The Open Web Application Security Project (OWASP) or an application security specific consulting organization to kick start the process.

Benefits of Writing Secure Patterns
The primary benefit of defining secure architecture, configuration, and coding patterns is that every team member is equipped to satisfy security requirements without needing a deep application security background.  There’s no question about how to prevent SQL injection or cross-site scripting; in fact, specific vulnerabilities may not even have to be mentioned.  Instead, developers have a list of “answers” or a guide that describes how the team has chosen to architect and develop their application.  This guide naturally leads the team to write code free of the vulnerabilities the organization selected against when writing its security requirements. Additionally, the development team can reference these patterns throughout the life of the application rather than relearning or researching how to avoid or remediate vulnerabilities. To be clear, education is important, but this approach reduces the burden on developers to recall information from classes in order to implement secure code.

Because one unified pattern is repeated across the entire application, secure patterns also become easy to test and verify.  If one pattern is used throughout an entire application, then one set of test cases can be applied to validate those practices (these test cases are discussed in the next article).  In some cases, it may be possible to certify that an application is free of a particular vulnerability.
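As a rough illustration of what a shared, pattern-level test case might look like, the hypothetical sketch below scans data-access code for SQL built by string concatenation or interpolation, assuming the team's pattern mandates placeholder parameters. The regex and helper are mine, not from any of the articles, and a real check would need to be tuned to the team's actual conventions:

```python
import re

# Hypothetical check: flag lines that build SQL by concatenating or
# interpolating variables instead of binding placeholder parameters.
SQL_CONCAT = re.compile(
    r'''["'](SELECT|INSERT|UPDATE|DELETE)[^"']*["']\s*(\+|%|\.format)''',
    re.IGNORECASE,
)

def violations(source):
    """Return the 1-based line numbers that appear to concatenate SQL."""
    return [n for n, line in enumerate(source.splitlines(), 1)
            if SQL_CONCAT.search(line)]

good = 'cursor.execute("SELECT email FROM users WHERE name = ?", (name,))'
bad = 'cursor.execute("SELECT email FROM users WHERE name = " + name)'
print(violations(good), violations(bad))  # [] [1]
```

A pattern-specific check like this can run in continuous integration for every project that adopts the pattern, which is what makes the "one set of test cases" economy possible.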

Secure Architecture, Configuration, and Coding Patterns Wrap-up
Language and framework specific secure architecture, configuration, and coding patterns equip development teams to satisfy security requirements for their project.  This unified approach to preventing application security vulnerabilities can be verified through test cases customized for each pattern.  These test cases and verification criteria are the topic of the next article.

New Burp Suite (>= 1.5.01) Extensibility and an Example Editor Tab Plugin

Burp Suite has a new extensibility API! In December, I wrote a plugin that uses the new API to speed up a security assessment of a Silverlight application using WCF web services. The code and explanation below helps demonstrate some of the new features in Burp.

The Silverlight application interface communicated with a SOAP based web service; however, the web service responses weren’t ordinary XML.  Instead, they contained a Base 64 encoded value.  After digging into the application, we discovered that the web services zipped and then Base 64 encoded XML response data.  

Initially, I wrote a Python script to decode and unzip the data; however, it was time consuming to copy, paste, and unzip each response over and over again.  My solution was to use the new API to add an editor tab to Burp.  This editor tab automatically detects whether an HTTP response needs to be processed and then unzips the value right in the proxy tool.  The plugin source code and several screenshots are available below.  The code may not be the most elegant solution, but it met my needs for being fast to develop and functional.
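The core decode step can be sketched roughly as follows. I'm using zlib with automatic gzip/zlib header detection here for illustration; the actual compression framing and XML content in the assessed application may have differed:

```python
import base64
import zlib

def decode_response(encoded_value):
    """Base64-decode the payload, then decompress it back to XML.

    wbits=47 (32 + 15) lets zlib auto-detect gzip or zlib framing.
    """
    compressed = base64.b64decode(encoded_value)
    return zlib.decompress(compressed, 47).decode("utf-8")

# Round trip: compress and encode some sample XML, then recover it.
xml = "<Envelope><Body><Result>42</Result></Body></Envelope>"
blob = base64.b64encode(zlib.compress(xml.encode("utf-8"))).decode("ascii")
print(decode_response(blob) == xml)  # True
```

Wiring a function like this into Burp's editor tab API is what removed the copy-paste step: the tab calls the decoder whenever the response body looks like a Base 64 blob.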

As I wrote the plugin, I noticed that Burp provides a useful stack trace if an exception occurs while loading the plugin; however, once the plugin is fully loaded, stack traces only show obfuscated class names for Burp.  You can work around this challenge by mocking up any data needed, writing all code that doesn’t rely on the Burp API separately, and running it with the Jython interpreter.  When integrating it as a plugin, my approach was to use a lot of println statements. I had a general idea of how to use the framework, but I still needed to work out the details.

HTTP SOAP Request:

Base 64 Encoded SOAP Response:

Base 64 Decoded, Unzipped XML Content:

The plugin code is available as a gist at:

Non-Negotiable Elements of a Secure Software Development Process: Part 1 - Security Requirements

In September, I gave a presentation focused on helping quality assurance professionals understand how they fit into a secure software development process (SSDP) and how they can take an active role in improving software security.  In that presentation, I discussed essential elements that make up a successful SSDP.  These elements are: security requirements (expectations); secure architecture, configuration, and coding patterns (how to satisfy an expectation); and validation criteria (verification that expectations have been met). These elements allow an organization to be transparent regarding its security goals and performance.  They also facilitate communication with customers, developers, managers, and other project stakeholders.

This is part 1 in a series of articles discussing Non-Negotiable elements of a secure software development process. This article focuses on security requirements. All three articles are listed below:

Part 1: Security Requirements

Part 2: Secure Architecture, Configuration, and Coding Patterns

Part 3: Validation Criteria

What are Security Requirements?
Security requirements are intended to be language and framework agnostic statements that communicate the organization’s expectation around a security practice.  Security requirements are applicable for any project or team regardless of whether they are using ASP.NET MVC, J2EE with Spring MVC, or Ruby on Rails.  They use positive statements to describe the type of behavior desired, and use negative statements to provide additional clarity.  The ideal is to whitelist specific approaches to developing software.

An example security requirement intended to prevent SQL injection that meets these criteria might be:

“Applications that use an SQL database must use parameterized queries or prepared statements for all transactions, including SELECT, INSERT, UPDATE, DELETE, and stored procedure calls. Define the SQL query and use placeholders to denote the location in which parameter values will be added later. Then, add each value to the statement as a parameter. All variables must be added as parameters rather than being concatenated with the SQL query.”

This requirement is clear and understandable, it is easy to validate (more on this later), and it leaves the implementation up to the team.  Developers can choose to write dynamic queries that use prepared statements, or they can use a framework, like Hibernate or LINQ to SQL, that follows this practice but abstracts away the actual queries.
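As a concrete sketch of what satisfying this requirement looks like, here is a minimal example using Python's sqlite3 module (the table, column names, and data are hypothetical, chosen purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.com"))

# The query is defined with a placeholder; the user-supplied value is
# always bound as a parameter, never concatenated into the SQL string.
username = "alice' OR '1'='1"  # a typical injection attempt
rows = conn.execute(
    "SELECT email FROM users WHERE username = ?", (username,)
).fetchall()
print(rows)  # [] -- the hostile input is treated as data, not as SQL
```

The same shape applies regardless of language or driver: define the statement with placeholders, then bind every variable as a parameter.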

An example of a well intentioned but unsuccessful security requirement that also seeks to address SQL injection is:

“Write queries that do not allow untrusted user input to be interpreted by the database as SQL commands to prevent SQL injection.”

While this example is probably too simplistic, the reality is that most organizations start with requirements that are just as ill-defined. The typical starting point is to focus on writing requirements that state what not to do instead of identifying the positive practice that will eliminate the vulnerability.  Ill-defined requirements like this one do not equip the development team with the means to succeed at meeting security expectations.

Benefits of Defining Security Requirements
Developers, customers, managers, and the organization at large benefit from security requirements in several ways.  First, security requirements can be discussed during the project planning stages with stakeholders.  Teams can decide how expensive it might be to satisfy a specific requirement and build that time into the software development process.  If these security requirements are linked with real business related consequences, customers or stakeholders can decide how much to spend on securing the application based on their budget, the sensitivity of data, or how critical the application is to the business.  

Developers benefit by having well defined requirements, clear expectations, and timelines that account for the implementation of agreed upon security components.  The team should also be able to articulate which types of threats the application is designed to repel as well as those that it is not, making security much more transparent.

Organizations benefit by being able to define collections of security requirements as assurance standards.  These collections of security requirements can be designed to satisfy PCI requirements, contractual obligations, or minimum security baselines.  Next, the assessors or evaluators of these standards can quickly and cheaply identify how an application meets the requirements.  In addition, organizations can leverage the security story of a specific application for marketing purposes or as a competitive advantage.

Security Requirements Wrap-up
Ideally, organizations should define realistic, understandable, and measurable security requirements. By explicitly stating these requirements and using positive statements to define achievable development practices, the team is able to plan for, communicate, and meet the organization's security goals.  These software language and framework agnostic requirements naturally give way to specific implementation details for each project in the form of secure architecture, configuration, and coding patterns.  These patterns will be discussed in part 2.