OAuth Resource Owner Password Credentials Grant Implementation in WebAPI 2

A few customers have been asking about the proper implementation of an OAuth server using Microsoft's WebAPI 2. I spent some time implementing one (partly to become more knowledgeable about both OAuth and WebAPI) and struggled to find really good resources for using the OWIN OAuth 2.0 Authorization Server (and middleware). I was able to piece together information from a variety of blogs, forum posts, and other sources, but I realized partway through that there was a need to publish additional information to help others. I have provided the source code for a Visual Studio 2013 Express project implementing the Resource Owner Password Credentials Grant, Refresh Token Grant, and an endpoint for revoking access tokens.
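The heart of such an implementation is a class derived from OWIN's OAuthAuthorizationServerProvider. The sketch below is not the project's code, just a minimal illustration of the resource owner password credentials hook; the ValidateUser helper and the claims it issues are placeholders you would replace with your own identity store.

    using System.Security.Claims;
    using System.Threading.Tasks;
    using Microsoft.Owin.Security.OAuth;

    // Minimal sketch of a resource owner password credentials provider.
    public class SimpleAuthorizationServerProvider : OAuthAuthorizationServerProvider
    {
        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            // A public client is simply accepted; confidential clients should be
            // authenticated against a client store here.
            context.Validated();
            return Task.FromResult<object>(null);
        }

        public override Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {
            // Replace with a real credential check (membership provider, identity store, etc.).
            if (!ValidateUser(context.UserName, context.Password))
            {
                context.SetError("invalid_grant", "The user name or password is incorrect.");
                return Task.FromResult<object>(null);
            }

            var identity = new ClaimsIdentity(context.Options.AuthenticationType);
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            context.Validated(identity);
            return Task.FromResult<object>(null);
        }

        private static bool ValidateUser(string userName, string password)
        {
            return false; // placeholder: look the user up and verify the password
        }
    }

The provider gets wired up in the OWIN Startup class through OAuthAuthorizationServerOptions (token endpoint path, token lifetime, and, for the refresh token grant, a refresh token provider).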

Before you dig into the code, I want to stress that I'm not done! Because of project work and a period of vacation, I will not be able to continue working on it for a month or so. But, I wanted to provide what I had so far. Currently, the code is functional and the example requests and documentation on the Google code page (linked to at the bottom) work. It's ready to be used as a platform to learn on.

I am not a full time developer; I just happen to like writing C# code. That means, I may not have the prettiest, most efficient code. Also, this code may not be secure. I used it to learn with, and yes I considered security requirements while developing it, but I haven't had the chance to review it for security vulnerabilities.

The Google Code Project Home page contains:
  • Request and response examples for each endpoint
  • A sequence diagram showing which methods are called on each provider
  • A list of top blogs, videos, or other resources I used 
  • A list of all the files I remember modifying when implementing the OAuth server
I hope the following resource helps others learn to write OAuth servers using WebAPI 2:

Forms Authentication Token Termination in ASP.NET WCF Services

In my last post (Session Fixation & Forms Authentication Token Termination in ASP.NET), I talked about ways to mitigate two types of session-related vulnerabilities in an ASP.NET MVC 4 application. One of these vulnerabilities is also present in many WCF web services. In one mode of operation, WCF web services can authenticate users and issue forms authentication cookies. Since this token contains an encrypted set of values and resides only on the client-side, the server cannot choose to invalidate that token and end a user’s authenticated session. This allows attackers to continue using stolen tokens, even after the user logs out.

One solution for fixing this vulnerability is to issue an ASP.NET_SessionId cookie and to tightly couple it with the forms authentication cookie as described previously. Whenever a web service request is issued, the ASP.NET_SessionId value should be referenced to determine if the session store contains the username and it matches the value stored by the forms authentication token. This approach is sound; however, my implementation is experimental. I’m not a WCF or ASP.NET expert.

In my first attempt to implement this model, I tried using the built-in Windows Communication Foundation Authentication Service (System.Web.ApplicationServices.AuthenticationService). There were some critical modifications I needed to make to this service for it to satisfy all my security needs; however, due to its design and scope, I couldn’t find a good way to extend it or to wrap it with a decorator. Instead, I chose to write my own authentication service. The code can be found below:

AuthenticationService.cs:
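The gist embed is not reproduced here. The following is only a rough sketch of the shape such a service can take, assuming ASP.NET compatibility mode, a membership-based credential check, and an illustrative session key name; it is not the code from the gist.

    using System.ServiceModel;
    using System.ServiceModel.Activation;
    using System.Web;
    using System.Web.Security;

    [ServiceContract]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
    public class AuthenticationService
    {
        [OperationContract]
        public bool Login(string userName, string password)
        {
            // Validate the credentials against the membership store.
            if (!Membership.ValidateUser(userName, password))
                return false;

            // Issue the forms authentication cookie (the client-side token)...
            FormsAuthentication.SetAuthCookie(userName, false);

            // ...and record the identity in the server-side session so the two
            // can be compared on every subsequent request.
            HttpContext.Current.Session["UserName"] = userName;
            return true;
        }

        [OperationContract]
        public void Logout()
        {
            // Clear the cookie and destroy the server-side session.
            FormsAuthentication.SignOut();
            HttpContext.Current.Session.Abandon();
        }
    }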



Next, I extended the ServiceAuthorizationManager class to provide the capability to validate users’ session and forms authentication cookies for web service calls. It ensures the user has authenticated, and that the identity in the session store matches the identity in the forms authentication token.

MyServiceAuthorizationManager.cs
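Again, the gist itself is not embedded here; the sketch below only captures the core check, using the same illustrative session key as above.

    using System.ServiceModel;
    using System.Web;

    // Sketch of a ServiceAuthorizationManager that rejects calls unless the identity in the
    // forms authentication ticket matches the identity stored in the server-side session.
    public class MyServiceAuthorizationManager : ServiceAuthorizationManager
    {
        protected override bool CheckAccessCore(OperationContext operationContext)
        {
            HttpContext httpContext = HttpContext.Current;
            if (httpContext == null || httpContext.Session == null)
                return false;

            // The forms authentication module populates User from the .ASPXAUTH cookie.
            if (httpContext.User == null || !httpContext.User.Identity.IsAuthenticated)
                return false;

            // The login service stored the username in session state; require a match.
            string sessionUser = httpContext.Session["UserName"] as string;
            return sessionUser != null && sessionUser == httpContext.User.Identity.Name;
        }
    }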



Finally, in the web.config file, I created two service behaviors: one for unauthenticated access to the login service (anonymousServiceBehavior), and one for authenticated access to all other web services (authenticatedServiceBehavior). Then, in each service definition, I applied the appropriate behavior using the behaviorConfiguration attribute.

Web.config
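The actual file is not embedded here. Trimmed down, the two behaviors could look roughly like this; the behavior and service names follow the post, while the namespace, assembly, and endpoint details are illustrative.

    <system.serviceModel>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
      <behaviors>
        <serviceBehaviors>
          <!-- Anonymous access: only the login/authentication service uses this -->
          <behavior name="anonymousServiceBehavior">
            <serviceMetadata httpGetEnabled="true" />
          </behavior>
          <!-- Authenticated access: plugs in the custom authorization manager -->
          <behavior name="authenticatedServiceBehavior">
            <serviceMetadata httpGetEnabled="true" />
            <serviceAuthorization
                serviceAuthorizationManagerType="MyApp.MyServiceAuthorizationManager, MyApp" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <services>
        <service name="MyApp.AuthenticationService"
                 behaviorConfiguration="anonymousServiceBehavior">
          <endpoint address="" binding="basicHttpBinding" contract="MyApp.AuthenticationService" />
        </service>
        <service name="MyApp.IngredientsService"
                 behaviorConfiguration="authenticatedServiceBehavior">
          <endpoint address="" binding="basicHttpBinding" contract="MyApp.IIngredientsService" />
        </service>
        <!-- ShoppingListService is configured the same way as IngredientsService -->
      </services>
    </system.serviceModel>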

The result is that all WCF calls to the IngredientsService and the ShoppingListService require authentication and ensure each user's forms authentication token is tightly coupled with his or her ASP.NET_SessionId.

Session Fixation & Forms Authentication Token Termination in ASP.NET

ASP.NET applications commonly have one or more vulnerabilities associated with the use of ASP.NET_SessionId cookies and forms authentication cookies. This article briefly discusses those common vulnerabilities and explains one method of mitigating them in an ASP.NET MVC 4 application. Explanations of the exploits are not included, but I linked many of the keywords to OWASP or MSDN articles that provide more details. The security best practices for session cookies and use of sessions in general are provided in the OWASP Session Management Cheat Sheet.

Background
ASP.NET_SessionId cookies and forms authentication cookies can be used alone or together to maintain state with a user’s browser. Each cookie works a little bit differently. The ASP.NET_SessionId cookie value is an identifier used to look up session variables stored on the server-side; the cookie itself does not contain any data. The forms authentication cookie, named .ASPXAUTH by default, contains encrypted data, stored only on the client-side. When it is submitted in a request to the server, it is decrypted and used by custom application code to make authorization decisions.

ASP.NET_SessionId Alone: Session Fixation
There are three common ways to use these cookies that result in risk. First, when the ASP.NET_SessionId cookie is used alone, the application is vulnerable to session fixation attacks. The root cause of this vulnerability is that the ASP.NET_SessionId cookie value isn’t changed or regenerated after users log in (or cross any kind of authentication boundary). In fact, Session IDs are intentionally reused in ASP.NET. If an attacker steals an ASP.NET_SessionId prior to a victim authenticating, then the attacker can use the cookie value to impersonate the victim after he or she logs in. This gives the attacker unauthorized access to the victim’s account.

Forms Authentication Cookie Alone: Can’t Terminate Authentication Token on the Server
Second, when a forms authentication cookie is used alone, applications give users (and potentially attackers) control over when to end a session. This occurs because the forms authentication ticket is an encrypted set of fields stored only on the client-side. The server can only request that users stop using the value when they log out. The ASP.NET framework does not have a built-in feature to invalidate the cookie on the server-side. That means clients (or attackers) can continue using a forms authentication ticket even after logging out. This allows an attacker to continue using a stolen forms authentication token even after the victim logs out to protect him or herself.

Loosely Coupled ASP.NET_SessionID and Forms Authentication Cookies: Still Vulnerable
Lastly, applications can combine both strategies and use forms authentication and sessions. In this arrangement, the forms authentication cookie is used for authentication and authorization decisions, and the session cookie is used to store additional state information. In practice, I’ve seen about a 50/50 split between applications that retrieve identity information from session variables and those that use the forms authentication ticket. Either way, the ASP.NET framework does not explicitly couple a specific forms authentication cookie to an ASP.NET_SessionId. Any valid forms authentication cookie can be used with any other valid session cookie. Depending on the implementation, this results in a session fixation vulnerability (for the ASP.NET_SessionId cookie), the inability to terminate authenticated sessions on the server side (for the forms authentication cookie), or both vulnerabilities.

One Possible Solution: Tightly Couple ASP.NET_SessionIDs to Forms Authentication Identities
There are a variety of potential solutions to mitigate these risks. One of those solutions is to use both the ASP.NET_SessionId cookie and a forms authentication cookie, AND to tightly couple them using the user’s identity as the link. In an application that uses forms authentication, this means that the identity of the user is stored in session variables (you have to do this manually) AND the forms authentication ticket (occurs through the normal use of forms authentication). Then, on each request to the application, the identity associated with each cookie should be compared. If they do not match, invalidate the user’s session and log them out.

This solution prevents session fixation by ensuring that an ASP.NET_SessionId cookie (vulnerable to session fixation) MUST be coupled to the user’s own forms authentication token (not vulnerable to session fixation) rather than just any individual’s forms authentication token. Additionally, it allows forms authentication tokens to be indirectly invalidated on the server-side by destroying the session associated with them. Since both cookies have to be present and linked by the user’s identity, each protects against the weaknesses of the other.

Solution Implementation in an ASP.NET MVC 4 Application
I created an ASP.NET MVC 4 application to test this solution and to be able to demonstrate code for it. My goal was to create a global function that would execute for every controller and action to ensure the user identity referenced in session variables matched the one stored in the forms authentication ticket. For unauthenticated users, the session should not reference a user identity at all. If either of these two conditions is violated, the user is logged out, their session is destroyed, and they are redirected back to the login page.

I wrote CoupleSessionAndFormsAuth.cs, an MVC Action Filter Attribute, to accomplish the validation. The comments in the code explain the filter’s goals and how it works.

CoupleSessionAndFormsAuth.cs:
https://gist.github.com/sekhmetn/5629190
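The gist holds the full filter; the sketch below only captures the core comparison it performs. The session key and the redirect handling are illustrative, not necessarily what the gist does.

    using System.Web.Mvc;
    using System.Web.Security;

    // Sketch of a global action filter that ensures the identity stored in session state
    // matches the identity in the forms authentication ticket on every request.
    public class CoupleSessionAndFormsAuthAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var httpContext = filterContext.HttpContext;
            var sessionUser = httpContext.Session["UserName"] as string;   // set at login
            var isAuthenticated = httpContext.User.Identity.IsAuthenticated;

            // Unauthenticated users must not have an identity in session;
            // authenticated users must have a matching one.
            bool valid = isAuthenticated
                ? sessionUser == httpContext.User.Identity.Name
                : sessionUser == null;

            if (!valid)
            {
                // Tear everything down and send the user back to the login page.
                FormsAuthentication.SignOut();
                httpContext.Session.Abandon();
                filterContext.Result = new RedirectResult(FormsAuthentication.LoginUrl);
            }

            base.OnActionExecuting(filterContext);
        }
    }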


Next, I added the filter to the global filters list, within Global.asax.cs, so it would be executed for every controller and action.

Global.asax.cs:
https://gist.github.com/sekhmetn/5629201
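Registration amounts to one extra line in the global filter collection; roughly the following, assuming the attribute name used above (whether the method lives directly in Global.asax.cs or in the MVC 4 FilterConfig helper is a detail):

    using System.Web.Mvc;

    public class FilterConfig
    {
        public static void RegisterGlobalFilters(GlobalFilterCollection filters)
        {
            filters.Add(new HandleErrorAttribute());
            // Run the session/forms authentication coupling check for every controller and action.
            filters.Add(new CoupleSessionAndFormsAuthAttribute());
        }
    }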


Lastly, I added one line of code to the AccountController to ensure the user’s identity was added to session variables when the user logs in.
https://gist.github.com/sekhmetn/5629290

Burp Suite Plugin: WCF Binary SOAP Scanner Insertion Point

In a previous post, I showed how the Burp Suite API can be used to view and modify WCF Binary SOAP messages to assist with manual testing and analysis.  Next, I wanted to allow Burp to perform automated scans on WCF Binary SOAP requests. This post demonstrates use of the Scanner Insertion Point Provider to accomplish that goal.

There are two roles that the plugin must fulfill.  First, the plugin must identify potential insertion points for scanner payloads. Second, the plugin must accept attack payloads from Burp and construct valid requests containing them. Insertion points are identified by decoding the WCF Binary SOAP message (into an XML format) and iterating through the DOM to identify nodes that contain text. Each of these nodes is then sent back to Burp as a WCFBinaryInsertionPoint instance. Once Burp is ready to scan the URL, it passes attack payloads to each instance of WCFBinaryInsertionPoint it received.  The original request is then decoded, the payload is inserted into the correct XML node, and the request is re-encoded and returned to Burp.
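The plugin itself is written in Java against the Burp extender API, but the transformation each insertion point performs is easy to sketch. In simplified C#, with DecodeBinaryXml and EncodeBinaryXml standing in for the msbin1 conversion described in the previous post:

    using System.Xml;

    // Conceptual sketch of what each WCFBinaryInsertionPoint does when Burp hands it a payload.
    // DecodeBinaryXml and EncodeBinaryXml are placeholders for the msbin1 conversion.
    public static class WcfBinaryInsertionPointSketch
    {
        public static byte[] BuildRequest(byte[] originalBody, int nodeIndex, string payload)
        {
            var doc = new XmlDocument();
            doc.LoadXml(DecodeBinaryXml(originalBody));

            // Every element that directly contains text is a candidate insertion point.
            XmlNodeList textNodes = doc.SelectNodes("//*[text()]");

            // Drop the scanner payload into the node this insertion point represents.
            textNodes[nodeIndex].InnerText = payload;

            return EncodeBinaryXml(doc.OuterXml);
        }

        // Placeholder decode: returns a tiny envelope so the sketch runs end to end.
        static string DecodeBinaryXml(byte[] body)
        {
            return "<s:Envelope xmlns:s=\"http://www.w3.org/2003/05/soap-envelope\">" +
                   "<s:Body><GetData><value>test</value></GetData></s:Body></s:Envelope>";
        }

        // Placeholder encode: a real implementation would re-encode to application/msbin1.
        static byte[] EncodeBinaryXml(string xml) { return new byte[0]; }
    }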

I verified the solution by chaining two instances of Burp together and watching scanner traffic.  In the second Burp instance, I could watch WCF Binary SOAP requests being sent to the server containing the attack payload in the correct XML node.  I then ensured that the response did not contain an exception indicating that the server could not understand the message. One item I did not verify was whether Burp could successfully identify vulnerabilities in the responses, since they are still in a WCF Binary SOAP format.  My guess would be that any signatures that match error messages like “System.Data.SqlClient.SqlException: Unclosed quotation mark” would still succeed. Results could likely be improved by registering an HTTP Listener within Burp that automatically decodes responses for the Scanner if they have a content type of “application/msbin1”.

Previously, I mentioned that stack traces for run-time exceptions in Burp only referenced obfuscated class names. Before writing this plugin, I upgraded to the latest version (1.5.04 at the time) and noticed that this has changed.  Stack traces are now very helpful and greatly reduce the amount of debugging time required.

The plugin code is available as a gist at:


Previous Burp Plugin: Burp Suite Plugin: View and Modify WCF Binary SOAP Messages

Non-Negotiable Elements of a Secure Software Development Process: Part 3 - Validation Criteria

In September, I gave a presentation focused on helping quality assurance professionals understand how they fit into a secure software development process (SSDP) and how they can take an active role in improving software security.  In that presentation, I discussed essential elements that make up a successful SSDP.  These elements are: security requirements (expectations); secure architecture, configuration, and coding patterns (how to satisfy an expectation); and validation criteria (verification that expectations have been met).  These elements allow an organization to be transparent regarding its security goals and performance.  They also facilitate communication with customers, developers, managers, and other project stakeholders.

This article is part 3 in the Non-Negotiable Elements of a Secure Software Development Process series and focuses on defining validation criteria. In part 1 of the series, we discussed how security requirements set clear and reasonable expectations that development teams can plan for and meet to satisfy a specific level of security assurance.  Part 2 discussed how to use secure architecture, configuration, and coding patterns to satisfy security requirements and reduce the ongoing cost of developing secure code. All three articles are listed below:

Part 1: Security Requirements
http://blog.securityps.com/2013/01/non-negotiable-elements-of-secure.html

Part 2: Secure Architecture, Configuration, and Coding Patterns
http://blog.securityps.com/2013/01/non-negotiable-elements-of-secure_15.html

Part 3: Validation Criteria
http://blog.securityps.com/2013/03/non-negotiable-elements-of-secure.html


Why Validation Criteria?
Many organizations measure how secure their applications and infrastructure are by using assessments.  They might use application security assessments, penetration tests, design reviews, threat models, or many other fault-finding activities.  These can be good risk indicators, and they are often important activities to include within an application security program, but they fall short of actually telling us whether our application is secure. They focus on finding problems and telling us when an asset is insecure, and they remain silent about everything else.

With the declaration of security requirements and secure architecture, configuration, and coding patterns (secure patterns), we now have a list of positive characteristics about the application that we would like to affirm. If we can validate these elements, then we can get a more comprehensive understanding of how our application is secured.  We can then focus on using assessments to identify missing security requirements and improve our overall process as a whole, instead of just a single application.

What to Validate?
If we try to validate the code in a vacuum, we will fall back into the same old process:
  1. Use a code review tool
  2. Find problems
  3. Fix them
  4. Repeat
Instead, we need to validate that we have satisfied the security requirements defined for a specific application.  This simplifies and narrows our validation scope and approach, because the entire team should be using the predefined secure patterns to satisfy requirements.  Each secure pattern should be linked to specific validation criteria (including test cases, desired results, and the reporting format).  Before the development team ever implements a secure pattern, these criteria must already be defined.

Validation Criteria Approaches
Validation steps can focus on testing code, application behavior, or both.  The effectiveness of testing code versus application behavior will depend on the security requirement or secure pattern that is being tested. For example, if a team were to review an application's code and configuration to determine whether it used SSL/TLS, it would fail to test whether the content delivery network (CDN) required SSL/TLS.  In this case, testing the application behavior of each infrastructure element would be more effective than testing the code or configuration.  On the other hand, validating that an application uses only prepared statements would be more effective in code. The final series of slides at the end of the presentation demonstrates this further.

Additionally, validation criteria can affirm controls are in place (positive testing) or find deviations from the defined secure pattern (negative testing).  Positive testing should be the primary approach used for all validation criteria.  Negative testing should simply be used to provide additional confidence in test results.

Test Case Techniques
Teams can define test cases within their validation criteria in a number of ways.  A few examples are listed below.  Each is demonstrated in the presentation slides.
  • Manual test cases
    • Step by step instructions
  • Automated test cases
    • Grep or Findstr
    • Selenium
  • Exploratory Testing
  • Security testing tools
    • Free/Open Source
    • Commercial (Not Recommended!)
I recommend starting simply and then using more automation and sophistication as the need builds.  If teams start out with manual test cases, they can gradually acclimate to new requirements and the process as a whole. Eventually a large number of requirements will be defined and it will be too time-consuming to manage and test all of them manually.  Automation will be the natural next step.  Teams can start out using a simple grep tool and Selenium to complete most automated test cases. Custom scripting may follow.
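As a deliberately crude example of the "simple grep tool" style of automated test case, the sketch below flags C# files that appear to build SQL commands through string concatenation; the pattern and paths are illustrative only.

    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    // Crude "grep-style" validation check: flag any C# file that appears to build a
    // SqlCommand from concatenated strings instead of using a parameterized query.
    class FindConcatenatedSql
    {
        static void Main(string[] args)
        {
            string root = args.Length > 0 ? args[0] : ".";
            var pattern = new Regex(@"new\s+SqlCommand\s*\(\s*"".*""\s*\+", RegexOptions.Compiled);

            foreach (string file in Directory.EnumerateFiles(root, "*.cs", SearchOption.AllDirectories))
            {
                foreach (string line in File.ReadLines(file))
                {
                    if (pattern.IsMatch(line))
                        Console.WriteLine("Possible concatenated SQL: {0}: {1}", file, line.Trim());
                }
            }
        }
    }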

I don’t recommend starting with commercial automated security testing tools like static code analyzers or dynamic application testing platforms.  Many times teams spend a lot of time and money to get them set up and are disappointed with the results.  Before investing in one of these tools, it's important to have clear goals, realistic cost and personnel expectations, and a strategy for how the tool will fit in with a larger application security program. If the team is moving towards using a commercial security tool, consider starting with free tools like FxCop for .NET applications and writing your own custom rules.

The key advantage to writing your own manual or automated test cases is that you can customize them for the application and its business context. This isn't something a commercial, general purpose security scanning/code review tool is designed to do.

Once a team has a fairly comprehensive set of security requirements; is effective at implementing their security patterns; and desires to grow further, exploratory testing could become valuable.  Exploratory testing should be used to identify vulnerabilities that do not yet have a security requirement already defined.  This testing approach is used to improve the overall secure software development process, expand the list of requirements, and improve all the organization’s applications - not to focus on the faults of one single application.

How to Win at Software Security
If all of these components are put in place:
  1. Security and Compliance Requirements
  2. Secure Architecture, Configuration, and Coding Patterns that Satisfy the Requirements
  3. Security Test Cases and Validation Criteria that Affirm Patterns are Implemented
AND you have a specific, well-defined business goal you are trying to satisfy with these requirements, you can provide compelling evidence that your application is "secure enough" by the standards or goals you have set.

Burp Suite Plugin: View and Modify WCF Binary SOAP Messages

Microsoft’s WCF Web Services have a binary encoded SOAP messaging mode available that Silverlight, WPF, and other thick client applications can use to communicate with an application server.  This format cannot be digested natively by Burp Suite, making it time consuming to analyze requests and responses. This post describes how the new extension API for Burp was used to overcome this challenge.

Several years back, Brian Holyfield wrote a plugin to add support for binary SOAP messages, but the extension framework for Burp at that time was limiting.  He was forced to use two Burp instances to accomplish encoding and decoding.  Now that a new extension framework has been released for Burp, I have reused some of his code and the NBFS.exe .NET console application to encode and decode WCF binary SOAP requests in a single Burp tab. The code and several screenshots can be found below.

WCF Binary SOAP Request:


SOAP Binary -> XML Request Body:


The plugin code is available as a gist at: https://gist.github.com/4420532
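For reference, the decoding half of the work reduces to .NET's binary XML reader once the WCF static dictionary is available. A minimal sketch follows; building that dictionary, which is most of what NBFS.exe does, is assumed here.

    using System.Text;
    using System.Xml;

    static class Msbin1Decoder
    {
        // Decode an application/msbin1 body into readable XML. The XmlDictionary passed in
        // must contain WCF's standard static dictionary strings (rebuilding that table is
        // the bulk of what NBFS.exe does).
        public static string Decode(byte[] body, XmlDictionary wcfStaticDictionary)
        {
            using (var reader = XmlDictionaryReader.CreateBinaryReader(
                body, 0, body.Length, wcfStaticDictionary, XmlDictionaryReaderQuotas.Max))
            {
                var doc = new XmlDocument();
                doc.Load(reader);

                var sb = new StringBuilder();
                using (var writer = XmlWriter.Create(sb, new XmlWriterSettings { Indent = true }))
                {
                    doc.Save(writer);
                }
                return sb.ToString();
            }
        }
    }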

Previous Burp Plugin: New Burp Suite (>= 1.5.01) Extensibility and an Example Editor Tab Plugin




Practical Analysis of New Password Cracker

Just before the holidays, I saw a press release regarding some state-of-the-art hash cracking hardware and the headlines made it sound like it was a big deal:
“New 25 GPU Monster Devours Passwords In Seconds”
http://securityledger.com/new-25-gpu-monster-devours-passwords-in-seconds/
Along with my general interest in hashing and cryptography, we had discussed salted MD5 password hashing with a client that same week. The press release provided an interesting aside to that discussion, so I did some analysis on the benchmarks so we could draw practical conclusions about their impact. In this article, I’ll provide a high-level overview of why you should be moving from salted hashes to Key Derivation Functions (KDFs), and then plunge into the mathematics and information theory analysis I performed so we can all understand exactly what these benchmarks mean from a practical point of view.

The High-Level

 
After all is said and done, hash algorithms like MD5, SHA-1, SHA-256, SHA-512, and SHA-3 are designed to be as fast as they possibly can while not producing collisions or reversible outputs. Speed may sound like a good thing but it’s a problem for password hashing. Speed works in the attacker’s favor because his job is to recompute the hashes one-by-one until he lands on the correct value. And while the complete keyspace of even MD5 is well beyond the realm of plausible brute-force attack, the limited keyspace we tend to use makes brute-force attack extremely plausible. Moving from MD5 to SHA-3 will increase your possible keyspace but if you’re still using 8-character passwords, the only security benefit you get is that SHA-3 is marginally slower than MD5.

Well, custom password cracking machines like the one in the article will keep getting faster and we likely still have 6-12 character passwords. There are strong arguments for coming up with something other than passwords for authentication purposes, and people are working on that, but there is a current solution to the brute-force problem. For those of us who still have passwords, the cryptographic solution is to use a key derivation function like S/I S2K (PGP), PBKDF2 (RSA), BCrypt (OpenBSD), or SCrypt (Tarsnap), in place of a salted hash. All of these functions were designed to be deliberately slow in order to thwart brute-force calculation. The idea is that the authentication system only has to compute the hash once given the correct password, but the attacker has to compute the hash, literally, 6 million billion times.
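In .NET, for example, PBKDF2 is available out of the box through Rfc2898DeriveBytes. A minimal sketch is below; the iteration count and output length are illustrative and should be tuned upward as hardware improves.

    using System.Security.Cryptography;

    static class PasswordHasher
    {
        // Sketch: derive a password hash with PBKDF2 (Rfc2898DeriveBytes) instead of a salted MD5.
        // The iteration count is the "deliberately slow" knob; raise it as hardware gets faster.
        public static byte[] Hash(string password, byte[] salt, int iterations = 100000)
        {
            using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations))
            {
                return kdf.GetBytes(32); // 256-bit derived key
            }
        }

        public static byte[] NewSalt()
        {
            var salt = new byte[16];
            using (var rng = new RNGCryptoServiceProvider())
            {
                rng.GetBytes(salt);
            }
            return salt;
        }
    }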

Mathematics & Information Theory


The headline of the article refers to LM (Microsoft’s LAN Manager) hashes but as I mentioned above, I was more interested in the MD5 benchmark. If you’re interested in the other benchmarks, I’ve provided all of the formulas and logic you’ll need to perform the same analysis against those. The benchmark that the author provides for MD5 is 180 billion (180x10^9) guesses per second. If the entire keyspace of MD5 were used, meaning all 2^128 possible values, the attacker would face 2^128 possible hashes to calculate for every password. To determine the relationship between 180x10^9 and 2^128, we need to do some math, which I’ll go over in the next section.

Practical Speed Analysis

First, we’ll simplify everything greatly by converting the base 2 number (2^128) into a base 10 number because it’s easy for most people to convert other numbers to base 10 notation in their heads for comparison.

Here is the formula:
Let x be the exponent, let a be the starting base, and b the target base, and y be the result:
y = x*ln(a)/ln(b)

To convert 2^128 to base 10, we do: 128*ln(2)/ln(10) = 38.5
Therefore, 2^128 ~= 10^38.5 = 1x10^38.5 = 1e38.5

So, the password cracker can perform 180x10^9 hashes per second and it has to calculate 1x10^38.5 hashes for every password. The following formula  shows how long that will take:

1x10^38.5 / 180x10^9 ~= 1.75x10^27 seconds

Given 1 year = 3.15569x10^7 seconds
1.75x10^27 / 3.15569x10^7 ~= 5.5x10^19 years

So, with state-of-the-art custom hardware, it would take 5.5x10^19 (55 billion billion) years to crack one salted MD5 hash. That seems pretty secure, even accounting for the fact that, on average, an attacker only needs to search half the keyspace before finding the correct value. It WOULD be secure if the entire keyspace could legitimately be used. Unfortunately, the passwords that people can effectively type into a prompt are pretty limited.

Entropy & Other Limitations

A standard ASCII character consumes 1 byte (8 bits) and each byte can theoretically have 2^8 (256) unique values. Because each character consumes 8 bits, 16 characters are enough to cover the full 128-bit keyspace of MD5. To figure out the work needed to guess all of the combinations, we’ll consider the typeable characters: a-z, A-Z, 0-9, and 32 special characters, for a total of 94 possible values per byte. 94 possible values is being VERY generous. In practice, I see only 4-6 permitted special characters, for a total of 68 possible values per byte, but we’ll work with 94 to analyze the best-case scenario. So, in reality, rather than having to try 2^128 possible values, we only have to try 94^16 (~2^105) possible values.

On top of the ASCII entropy limitation, people have a hard time remembering passwords, so most password policies allow between 6 and 12 characters. This further reduces the work from 94^16 to between 94^6 and 94^12.

For comparison, the conversions are:
6 Characters: 94^6 = 2^(6*ln(94)/ln(2)) ~= 2^39 ~= 7x10^11
8 Characters: 94^8 = 2^(8*ln(94)/ln(2)) ~= 2^52 ~= 6x10^15
10 Characters: 94^10 = 2^(10*ln(94)/ln(2)) ~= 2^65.5 ~= 5x10^19
12 Characters: 94^12 = 2^(12*ln(94)/ln(2)) ~= 2^79 ~= 5x10^23

Due to practical limitations, the key space of a 128 bit hash has been reduced from 1x10^38.5 to between 7x10^11 and 5x10^23. If we factor in fewer special characters, it is much lower.

Final Analysis

Recall that the hash cracker is capable of generating 180x10^9 MD5 hashes per second. Below is the table of time to crack a salted MD5 password with this technology:

6 Characters: 7x10^11 / 180x10^9 = 3.8 seconds
8 Characters: 6x10^15 / 180x10^9 ~= 9 hours
10 Characters: 5x10^19 / 180x10^9 ~= 8 years
12 Characters: 5x10^23 / 180x10^9 ~= 90,000 years
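These figures are easy to reproduce; the short program below recomputes them from the exact keyspace sizes, so the results differ slightly from the rounded values in the table (and sub-year durations show as 0 years).

    using System;

    // Reproduces the keyspace and time-to-crack figures above from first principles.
    class CrackTimeEstimate
    {
        static void Main()
        {
            const double guessesPerSecond = 180e9;   // benchmark rate from the article
            const double charsetSize = 94;           // typeable ASCII characters
            const double secondsPerYear = 3.15569e7;

            foreach (int length in new[] { 6, 8, 10, 12 })
            {
                double keyspace = Math.Pow(charsetSize, length);
                double seconds = keyspace / guessesPerSecond;
                Console.WriteLine("{0} characters: {1:E2} guesses -> {2:E2} seconds ({3:N0} years)",
                    length, keyspace, seconds, seconds / secondsPerYear);
            }
        }
    }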


Conclusion


So, there you have it. If you are using 6 or even 8 character passwords, your salted MD5 hashes are well within the practical realm of being cracked if they were to fall into the hands of an attacker.

As I mentioned above, moving to a different fast hashing algorithm isn’t the answer as all of them will eventually be in the same boat. Consider moving to a key derivation function for your password hashing needs.