OAuth Is Not Meant For Authentication!

As we work with software development teams to help them apply security principles and practices to their applications, we commonly identify gaps in the team's understanding of the security features, APIs, or frameworks they are using. It's important to identify and correct these misunderstandings as early as possible. When such security elements are misused, systemic security flaws are introduced into the application that are difficult to resolve without significant reworking of the code or architecture.

One such example is the use of OAuth. As useful as OAuth is, it must be used for its intended purpose. If we try to make it do things it wasn't designed or intended to do, we get into trouble. Let's clarify the fundamental purpose and use of OAuth and in doing so, clear up a common misunderstanding with it.

OAuth is not meant for authentication. OAuth is for authorization.

Here are a few points demonstrating why:

OAuth 2.0 defines four grant types:
  • Authorization Code
  • Implicit
  • Resource Owner Password Credential
  • Client Credentials
For "Authorization Code" and "Implicit" grants the specification doesn't govern the submission of a username or password. It's something totally outside of the scope of OAuth. This is a great warning flag that OAuth is not intended to be used directly for authentication.

"Client Credentials" does have a username and password. It is sent as a Basic Authorization Header (Base64 encoded "username:password"). BUT, it's not a grant used by users. Here's what the specification says:
"Client credentials are used as an authorization grant typically when the client is acting on its own behalf (the client is also the resource owner) or is requesting access to protected resources based on an authorization previously arranged with the authorization server." - https://tools.ietf.org/html/rfc6749#section-1.3.4
An example client could be a third-party service that a user has granted an "offline" token; that service may then make requests without the user interacting with it.
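The token request for this grant (adapted from RFC 6749, Section 4.4.2) carries only the client's own credentials in the Basic header; no user is involved anywhere:

```http
POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
```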

Now for "Resource Owner Password Credentials." Yes, you can use it to login with a username and password, but you probably shouldn't. Not because it's insecure, but because it doesn't scale well and isn't flexible. The specification says:
"The resource owner password credentials (i.e., username and password) can be used directly as an authorization grant to obtain an access token.  The credentials should only be used when there is a high degree of trust between the resource owner and the client (e.g., the client is part of the device operating system or a highly privileged application), and when other authorization grant types are not available (such as an authorization code)."
So why shouldn't you use the Resource Owner Password Credentials grant for authentication? Let's start by looking at the login request and response (adapted from RFC 6749, Section 4.3):
POST /token HTTP/1.1
Host: server.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=johndoe&password=A3ddj3w

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
  "access_token": "2YotnFZFYr1WkXVTlanWT",
  "token_type": "example",
  "expires_in": 3600,
  "refresh_token": "tGzv3JOkF0XG5Qx2TlKWIA"
}
You submit a username and password and you get back an access token. The access token can then be used to call an API. Sounds OK, right? Let's add some complexity. First, OAuth is often used with a stateless REST service: there's no session on the server side, just the access token sent by the client, which is often a Base64-encoded set of claims with a signature (like a JWT). With that in mind, what if you need to do multi-factor authentication? What about security questions? What if there are several different ways a user can log in? How do you integrate all of those options with the Resource Owner Password Credentials grant?
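That "claims plus signature" point is easy to see in code. The sketch below (plain Python, illustrative only) builds a toy unsigned token in the JWT shape and reads the claims back without any key, which is why possession of the token is all a client needs to call the API:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # Base64url-encode and strip the padding, as JWTs do
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(seg: str) -> bytes:
    # Restore the stripped padding before decoding
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

# Build a toy, unsigned token in the JWT shape: header.payload.signature
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "alice", "role": "user"}).encode())
token = header + "." + payload + "."

# Anyone holding the token can read the claims without any key or server
# round trip; only the signature (checked server-side) protects integrity.
claims = json.loads(b64url_decode(token.split(".")[1]))
```

The server trusts whatever a valid signature covers, so any "extra step" bolted on after token issuance is purely advisory to a hostile client.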

One common approach is to implement the secondary step as an API call and have the mobile or web application force the user to complete it. But if the application is stateless and the client already holds an access token, why not just call any other API method directly with that token and skip the secondary authentication step? That bypass is trivial in a client-side application (mobile, thick client, web page). It means attackers can sidestep the very multi-factor system that helps meet compliance and regulatory requirements.

OK, how about making it part of the login process itself? Well, that's not really OAuth anymore. You have to add fields, add steps, and/or go through more process before issuing an access token. Are you going to write your own custom OAuth client library and server to do it? At that point, you might as well write a normal forms-based authentication process instead.


How do others do it, then? They use the Authorization Code or Implicit grant with a separate login server (or identity provider) that handles all the authentication and hands the application back an authenticated user with an access token. In fact, that's exactly what the Authorization Code and Implicit grants are for. The identity server can offer as many options and schemes for authenticating users as it wants. The authentication process is centralized and isolated from the applications that rely upon it; when it finishes authenticating the user, it passes the user back to the application fully authenticated. With this in mind, you can see that this is exactly what the OAuth specification authors had in mind when you read the introduction here: https://tools.ietf.org/html/rfc6749#section-1

This issue comes up in assessments more and more often lately. I keep seeing software development teams download a copy of Thinktecture's IdentityServer (a great open source product, by the way) and implement it just for their application using the Resource Owner Password Credentials grant. Then they later bolt on security questions, fingerprint scanners, multi-factor authentication, and "remember me" features. As a result, their stateless application has easily bypassable authentication controls that are very time consuming to fix (or they have to compromise on keeping the API stateless).

If you are considering implementing OAuth or you already have, reach out to Security PS to help with the design and architecture. You could also watch some of these videos to help avoid common mistakes:

Improving User Acceptance of Account Lockout Responses for Login Processes

The purpose of a login process is to identify a particular individual and validate their identity before granting them access to an application. It's critical that the process only allows the owner of an account to log in, and it must prevent an attacker from logging in as another user. This post discusses one aspect of protecting authentication processes: using an account lockout response. It specifically focuses on decreasing the frustration users experience as a result of that control.

An account lockout response is a security control developers apply to all of the application's authentication processes to limit the number of times an individual can enter the wrong credentials consecutively. For example, if an attacker incorrectly guesses another user's password five times in a row, the application will disable the user's account and notify the user by email. Organizations must choose an appropriate lockout threshold and choose how accounts are unlocked.
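The control described above boils down to a counter of consecutive failures. A minimal sketch (plain Python; the threshold, class, and function names are illustrative, not from any particular framework):

```python
# Minimal account-lockout bookkeeping: lock after N consecutive failures.
MAX_FAILED_ATTEMPTS = 5

class Account:
    def __init__(self):
        self.failed_attempts = 0
        self.locked = False

def record_login_attempt(account, success):
    """Update lockout state after a login attempt; return True if locked."""
    if success and not account.locked:
        account.failed_attempts = 0  # reset the consecutive-failure counter
        return False
    if not success:
        account.failed_attempts += 1
        if account.failed_attempts >= MAX_FAILED_ATTEMPTS:
            account.locked = True    # disable the account; notify the owner
    return account.locked
```

A real implementation would persist this state and trigger the email notification at the moment `locked` flips to true.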

When should an organization use an account lockout response? That's difficult to answer unless a company is compelled to implement the control due to a regulation or compliance requirement. The development team, security team, and marketing or user experience groups really need to discuss the pros and cons of such a process. On one hand, the application will have significantly more resistance to password guessing attacks, protecting users' accounts from being compromised. On the other hand, it may frustrate users, raise customer support costs, or even drive customers away from using the application. If an account lockout response is implemented (which Security PS generally encourages), it must be carefully designed to increase user acceptance.

One of the frustrations users experience with account lockout responses is that they may not know their password (or sometimes their username) and lock out their account accidentally. On top of that, the user doesn't know their account is locked out, because the application cannot display a notification on the login page saying so. If it did, the process would tell an attacker that a particular username is valid, resulting in a username harvesting vulnerability. This is one of the key challenges to solve in order to increase user acceptance of the account lockout control.

To address the notification challenge, Security PS recommends several user experience improvements that don't expose the application to additional risk. First, the application can email the user when a failed login attempt occurs. Additionally, if the account is locked out, the application can immediately email the user instructions for unlocking it. These notifications do not cause username harvesting vulnerabilities, because only the account owner, not the attacker, will receive them.

Email notifications are helpful, but what if the user doesn't check their email while using the application? They can still get frustrated easily. So, developers should consider sending SMS notifications when a user's account is locked out, or potentially just before the lockout occurs. The message can be short and direct, and can point the user to the emailed instructions for unlocking their account or resetting their password. The hope is that the user receives this notification before getting frustrated that they can't log in.

Finally, the messaging in the application itself can remind users that a lockout response is present and that they can check their email if they believe their account is locked out. This messaging can be displayed all the time, or after a specific number of failed attempts per session. The key here is that the count is per session, not per username or account; otherwise, username harvesting vulnerabilities are introduced.
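That per-session counting can be sketched in a few lines (plain Python, illustrative only; the threshold is arbitrary). Note the counter is keyed to the session, never to the username submitted:

```python
# Show a generic reminder after several failed attempts in THIS session.
# Counting per submitted username instead would let an attacker infer
# which usernames are valid (username harvesting).
REMINDER_THRESHOLD = 3

def record_failed_login(session: dict) -> None:
    # Increment the failure counter stored in the visitor's session
    session["failed_logins"] = session.get("failed_logins", 0) + 1

def login_page_message(session: dict) -> str:
    # Generic wording: it never confirms that any specific account exists
    if session.get("failed_logins", 0) >= REMINDER_THRESHOLD:
        return ("Reminder: this site locks accounts after repeated failed "
                "logins. If you believe your account is locked, check your "
                "email for unlock instructions.")
    return ""
```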

Authentication processes, especially complex, multi-step, multi-credential ones, are difficult to get right. It's easy to introduce vulnerabilities in the user creation/registration step, the forgot username/password step, and the login process itself. If you are designing an authentication process, whether it uses OAuth 2.0, OpenID Connect, or custom forms-based authentication, contact Security PS to have a partner come alongside you and help ensure the design and implementation are secure.

5 Things to Avoid When Implementing the CSF

In my last post, I gave a quick recap of what the Cybersecurity Framework is, how it differs from other standards, and the importance it carries with both regulated and non-regulated organizations.  This week, I wanted to provide some quick lessons learned by many organizations, not only with the CSF itself, but with many of the standards used within the categories of the framework.  Listed below are 5 quick things your organization should consider when implementing any security framework or standard.

  1. Don’t assume the CSF is only for “Critical Infrastructure” or Federally regulated organizations: Although the Executive Order is titled as such, the framework is meant for all organizations, in both the public and private sectors.  The same can be said for the NIST 800-53 controls; they’re not just for Federal agencies.

  2. Don’t try to do it all yourself: Implementing the CSF requires the input and collaboration of almost every vertical within the organization.  It cannot be done by one person alone, and it often requires outside help from subject matter experts to implement various requirements.

  3. Don’t adopt controls, just to adopt controls: This is one of the most common pitfalls.  The informative references in the CSF are not a list of mandated controls which must be adopted for each category.  They are to be considered as examples or possible suggestions.  Each category must be carefully examined and the organization must ultimately decide which controls fit and which ones do not. When gaps exist, a risk assessment should be conducted to determine if the control is even necessary. All successful information security programs are built on risk management, not controls.

  4. Don’t assume there is only one way to implement it: Every organization has its own business goals, risk levels, and security requirements.  One size does not fit all, and neither does the implementation of the CSF.  The NIST web site, along with many others, offers unique approaches to implementing the framework.  Security PS recommends that each organization carefully weigh the many options and decide which method, or combination of methods, is right for its environment.

  5. Don’t ever consider it “Finished”: Risk management, and information security in general, follows an iterative lifecycle, and the CSF is designed to evolve in the same way. Requirements change, new technologies and vulnerabilities emerge, and risk levels shift over time, all of which requires constant improvement of the organization’s program.

What challenges have you faced when implementing the CSF or another framework?  We’d like to hear from you!  Please let us know in the comments below.

Manual Application-Layer Security Testing AND Automated Scanning Tools

There are many automated application security tools available on the market. They are useful for identifying vulnerabilities in your company's applications, but they shouldn't be used alone as part of a risk identification process. This post discusses the advantages of automated tools and identifies gaps that need to be filled with manual testing techniques for a more comprehensive view of application risk.

For my purposes here, I'm going to consider automated scanning tools like WebInspect, AppScan, and Acunetix. These are all automated dynamic analysis tools, but there are quite a few other options, such as automated code review, binary analyzers, and even newer technologies that instrument application code and analyze it at runtime. The capabilities of each of these types of tools differ, but many of the pros and cons are similar.

Automated tools require at least a one-time setup step to configure them for your application. Once configured, the tools can run on a scheduled basis or even as part of a continuous integration build process. They can scan an application and deliver results very quickly, often in hours, and they can scan large numbers of applications. They are great at identifying vulnerabilities that can be found by sending attack input and analyzing the application's output for vulnerability signatures. The tools can detect popular vulnerabilities like SQL injection, cross-site scripting, disclosure of stack traces or error messages, disclosure of sensitive information (like credit card numbers or SSNs), open redirects, and more. They generally perform best at identifying non-complex to moderately complex vulnerabilities. This makes automated tools great for use cases such as:
  • A first time look at the security of a web application
  • Scanning all of an organization's web applications for the first time or on a periodic basis
  • Integration with other automated processes, such as the build step of a continuous integration server (probably on a schedule, e.g. every night)
After understanding the value that automated tools provide, it's also important to understand their limitations. The primary limitation is that they aren't human. They are written to find a concrete, specific set of issues and to identify those issues based on signatures or algorithms. An experienced application security tester's knowledge and expertise will far outshine a tool, allowing them to identify tremendously more issues and to interpret complex application behavior to understand whether a vulnerability is present. This typically means manual testing is required to identify vulnerabilities related to:
  • Authentication process steps including login, forgot username/password, and registration
  • Authorization, especially determining if data is accessed in excess of a user's role or entitlements or data that belongs to another tenant
  • Business logic rules
  • Session management
  • Complex injection flaws, especially those that span multiple applications (for example, a customer application accepts and stores a cross-site scripting payload, but the exploit executes in the admin application)
  • Use of cryptography
  • The architecture and design of the application and related components
The issues listed above are extremely important! For example, it's unacceptable for an attacker to be able to read and modify any other user's data, but an automated tool isn't going to be able to identify that type of flaw. These tools also tend to perform poorly against web services, REST services, thick clients, mobile applications, and single-page applications. For these reasons, manual testing is absolutely essential for identifying risk in an application.
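Horizontal authorization flaws illustrate why. In the toy sketch below (plain Python, hypothetical data model), both functions return a syntactically "valid" response, so a scanner comparing output against vulnerability signatures sees nothing wrong; only a tester who knows record 1002 belongs to someone else spots the flaw:

```python
# Hypothetical data store: record id -> owning user and sensitive data
records = {
    "1001": {"owner": "alice", "ssn": "xxx-xx-1111"},
    "1002": {"owner": "bob", "ssn": "xxx-xx-2222"},
}

def get_record_vulnerable(record_id, current_user):
    # Flaw: ownership is never checked, so any authenticated user can
    # read any record (an insecure direct object reference)
    return records.get(record_id)

def get_record_fixed(record_id, current_user):
    rec = records.get(record_id)
    if rec is None or rec["owner"] != current_user:
        return None  # deny access to data the user does not own
    return rec
```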

If manual testing can identify all the same issues as an automated scanning tool and more, why bother with the tool? Sometimes you don't need it. But most of the time it's still very helpful, and the key factors are speed and scale. You can scan a lot of web applications very quickly, receive results, and fix them, THEN follow up with manual testing. The caution is that scanning alone and postponing manual testing may leave critical-risk vulnerabilities undiscovered in the application, so don't wait too long.

If your organization needs assistance choosing and adopting automated scanning tools or would like more information about manual application-layer security testing, please contact Security PS. Security PS does not sell automated tools, but we have advised many of our clients regarding how to choose an appropriate tool, prepare staff for using that tool, and update processes to include its usage.

ASP.NET Core Basic Security Settings Cheatsheet

When starting a new project, looking at a new framework, or fixing vulnerabilities identified by an assessment or tool, it's nice to have one place to refer to for fixes to common security issues. This post provides solutions for some of the more basic issues, especially those around configuration. Most of these answers can be found in Microsoft's documentation or with a quick Google search, but hopefully having them all in one place will save others some time.

Enabling An Account Lockout Response

To enable the account lockout response for ASP.NET Identity, first modify the Startup.cs file to choose appropriate settings. In the ConfigureServices method, add the following code:
services.Configure<IdentityOptions>(options =>
{
    options.Lockout.AllowedForNewUsers = true;
    // TimeSpan.MaxValue effectively requires a manual unlock
    options.Lockout.DefaultLockoutTimeSpan = TimeSpan.MaxValue;
    // Three failed attempts before lockout
    options.Lockout.MaxFailedAccessAttempts = 3;
});
With the settings configured, lockout still needs to be enabled in the login method of the account controller. In AccountController -> Login(LoginViewModel model, string returnUrl = null), change lockoutOnFailure from false to true as shown below:
var result = await _signInManager.PasswordSignInAsync(model.Email, model.Password, model.RememberMe, lockoutOnFailure: true);

Customizing the Password Validation Policy

ASP.NET Identity comes with a class that validates passwords. It is configurable and lets one decide whether passwords should require a digit, uppercase letters, lowercase letters, and/or a symbol. The policy can be further customized by implementing the IPasswordValidator interface or extending Microsoft.AspNetCore.Identity.PasswordValidator. The code below extends PasswordValidator and ensures the password does not contain the individual's username.
using ASPNETCoreKestrelResearch.Models;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using System.Threading.Tasks;

namespace ASPNETCoreKestrelResearch.Security
{
    public class CustomPasswordValidator<TUser> : PasswordValidator<TUser> where TUser : IdentityUser
    {
        public override async Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password)
        {
            // Run the built-in complexity checks first
            IdentityResult baseResult = await base.ValidateAsync(manager, user, password);

            if (!baseResult.Succeeded)
                return baseResult;

            // Reject passwords containing the username (case-insensitive)
            if (password.ToLower().Contains(user.UserName.ToLower()))
            {
                return IdentityResult.Failed(new IdentityError
                {
                    Code = "UsernameInPassword",
                    Description = "Your password cannot contain your username"
                });
            }

            return IdentityResult.Success;
        }
    }
}
Next, ASP.NET Identity needs to be told to use that class. In the ConfigureServices method of Startup.cs, find services.AddIdentity and chain on .AddPasswordValidator<CustomPasswordValidator<ApplicationUser>>() as shown below.
services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddPasswordValidator<CustomPasswordValidator<ApplicationUser>>();

Choosing a Session Timeout Value

Developers can choose how long a session cookie remains valid and whether a sliding expiration should be used by adding the following code to the ConfigureServices method of Startup.cs:
services.Configure<IdentityOptions>(options =>
{
    options.Cookies.ApplicationCookie.ExpireTimeSpan = TimeSpan.FromMinutes(10);
    options.Cookies.ApplicationCookie.SlidingExpiration = true;
});

Enabling the HTTPOnly and Secure Flag for Authentication Cookies

First, if you are using Kestrel, HTTPS (TLS) is not supported directly. Instead, it is handled by HAProxy, Nginx, Apache, IIS, or some other web server placed in front of the application, and the Secure flag cannot be enabled properly from the application code. If you are hosting the application in IIS directly, however, it will work. The following code demonstrates enabling both the HttpOnly and Secure flags for the cookie middleware in ASP.NET Identity through the ConfigureServices method in Startup.cs.
services.Configure<IdentityOptions>(options =>
{
    options.Cookies.ApplicationCookie.CookieHttpOnly = true;
    options.Cookies.ApplicationCookie.CookieSecure = CookieSecurePolicy.Always;
});

Enabling Cache-Control: no-store

When an application contains sensitive information that should not be stored on a user's local hard drive, the Cache-Control: no-store HTTP response header can provide that guidance to browsers. To enable that feature, add the following code to the ConfigureServices method in Startup.cs.
services.Configure<MvcOptions>(options =>
{
    options.CacheProfiles.Add("DefaultNoCacheProfile", new CacheProfile
    {
        NoStore = true,
        Location = ResponseCacheLocation.None
    });
    options.Filters.Add(new ResponseCacheAttribute
    {
        CacheProfileName = "DefaultNoCacheProfile"
    });
});

Disabling the Browser's Autocomplete Feature for Login Forms

ASP.NET's razor views make this super simple. Just add the autocomplete="off" attribute as if it were a normal HTML input field:
<input asp-for="Email" class="form-control" autocomplete="off"/>
<input asp-for="Password" class="form-control" autocomplete="off"/>

Modify The Iterations Count for the Password Hasher's Key Derivation Function

First, I believe the current default is 10,000 iterations and the algorithm is PBKDF2. The code below won't change that default iteration count, but it shows how it can be done. In ConfigureServices in Startup.cs, add the following code.
services.Configure<PasswordHasherOptions>(options =>
{
    options.IterationCount = 10000;
});

Enforcing HTTPS and Choosing Appropriate TLS Protocols and Cipher Suites

As mentioned above, if you are using Kestrel you won't be able to use HTTPS directly, so you won't enforce it in code; you will need to look up how to do it in HAProxy, Nginx, Apache, IIS, etc. If you are hosting your application using IIS directly, then you can enforce the use of HTTPS using something like https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/RequireHttpsAttribute.cs. BUT, it will only be applied to your MVC controllers/views; it will not be enforced for static content (see https://github.com/aspnet/Home/issues/895). If you want to handle this in code, you will need to write middleware that enforces it across the entire application. Finally, the choice of cipher suites offered cannot be changed in code.

Enabling a Global Error Handler

A custom global error handler is demonstrated by the Visual Studio template. The following relevant code can be found in the Configure method of Startup.cs.
if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Home/Error");
}

Removing the Server HTTP Response Header

All responses from the server are going to return "Server: Kestrel" by default. To remove that value, modify UseKestrel() in Program.cs to include the following settings change:
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel(options =>
        {
            // Suppress the default "Server: Kestrel" response header
            options.AddServerHeader = false;
        })
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    host.Run();
}


X-Frame-Options, Content-Security-Policy, and Strict-Transport-Security HTTP Response Headers

The following post covers most of these headers well: http://andrewlock.net/adding-default-security-headers-in-asp-net-core/. I haven't evaluated its design, but I did verify that I can install it and that the headers are added successfully. Since Kestrel does not support HTTPS, consider whether it's appropriate to implement the Strict-Transport-Security header in code or by configuring the web server placed in front of the application.

I installed this nuget package using "Install-Package NetEscapades.AspNetCore.SecurityHeaders". Then, I made sure to have the following imports in Startup.cs:
using NetEscapades.AspNetCore.SecurityHeaders;
using NetEscapades.AspNetCore.SecurityHeaders.Infrastructure;
After registering the package's services in the ConfigureServices method of Startup.cs, I added this code to the Configure method:
app.UseCustomHeadersMiddleware(new HeaderPolicyCollection()
    //.AddCustomHeader("Content-Security-Policy", "somevaluehere")
    //.AddCustomHeader("X-Content-Security-Policy", "somevaluehere")
    //.AddCustomHeader("X-Webkit-CSP", "somevaluehere")
);
Make sure you add this code BEFORE app.UseStaticFiles(); otherwise, the headers will not be applied to your static files.

Why Use the NIST CSF?

You may have heard about a framework that has been gaining traction since its inception a few years ago: the Cybersecurity Framework (CSF).  If not, I’ll give you a quick recap.  The framework was drafted by the Commerce Department’s National Institute of Standards and Technology (NIST) beginning in February 2013, in response to an Executive Order from the President entitled “Improving Critical Infrastructure Cybersecurity”.  Following almost a year of collaborative discussions with thousands of security professionals across both public and private sectors, NIST developed a framework of guidelines that can help organizations identify, implement, and improve cybersecurity practices as well as their overall security program.  The framework is architected as a continuous process that grows in sync with the constant changes in cybersecurity threats, processes, and technologies, and it is designed to be revised periodically to incorporate lessons learned and industry feedback.  At its core, the framework conceives of cybersecurity as a progressive, continuous lifecycle that identifies and responds to threats, vulnerabilities, and solutions. The CSF gives organizations the means to determine their current cybersecurity state and capabilities, set goals for desired outcomes, and establish a plan for improving and maintaining the overall security program. The framework itself is available here.

So, what makes the CSF different from NIST 800-53 or ISO 27001/27002?  By design, those are detailed documents which provide requirements for adhering to specific control standards. In comparison, the CSF provides a high-level framework for how to assess and prioritize functions within a security program using those existing standards.  Due to its high-level scope and common structure, the CSF is also much more approachable for those with non-technical backgrounds and C-level executives.  It was created with the realization that many of the required controls and processes for a security program have already been created and duplicated across these standards.  In effect, it provides a common structure for the industry that allows any organization to drive growth and maturity of its cybersecurity practices and to shift from a reactive to a proactive state of risk management.

For organizations that are Federally regulated, the CSF may be of particular importance.  Many top-level directors have expressed that an industry-driven cybersecurity model is much preferred over prescriptive regulatory approaches from the Federal government.  Even though the CSF is currently voluntary for both public and private sectors, it is important to realize that this will, with a high degree of probability, not remain the case.  Discussions have already taken place amongst Federal regulators and Congressional lawmakers suggesting that this voluntary framework be used as the baseline for best security practices, including assessing legal or regulatory exposure and for insurance purposes. If these suggestions become reality, implementing the CSF now could give organizations much more flexibility and cost savings in how it is implemented.

In addition to staying ahead of possible new laws and federal mandates, the CSF provides any organization, regulated or not, a number of other benefits, all of which support a stronger cybersecurity posture.  Some of these benefits include:
  • A common language and structure across all industries
  • Opportunities for collaboration amongst public and private sectors
  • The ability to demonstrate due-diligence and due-care by adopting the framework
  • Greater ease in adhering to compliance regulations or industry standards
  • Improved cost efficiency
  • Flexibility in using any existing security standards, such as HITRUST, NIST 800-53, ISO 27002, etc.
Though it is difficult to cover all the possible benefits in this short post, Security PS highly recommends that every organization take a good look at the CSF and consider both its options for implementation and the future laws that may influence its use.


If you have more questions, please consider contacting us for additional details.  We’ll be glad to assist you and your organization.

ASP.NET Core and Docker Update: Docker Compose!

Previously in ASP.NET Core, PostgreSQL, Docker, and Continuous Integration with Jenkins, I wrote about my experience getting started with ASP.NET Core and Docker. In that post, I was running a script to stop and remove my previous containers each time I deployed, and I had separate scripts to build and run each Docker container. In a recent conference video, a speaker referenced Docker Compose, and I'm so glad he did. It simplifies my setup greatly! This post describes the changes I made to my project to use Docker Compose.

I reused all my previous Dockerfiles; I just needed to reference them from a docker-compose.yml file in the root of my project. Here's that file:
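As a rough sketch only (the web1/web2 and HAProxy arrangement is inferred from the description below, and the PostgreSQL settings are placeholders), such a docker-compose.yml might look like:

```yaml
version: '2'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: changeme   # placeholder only
  web1:
    build: .                        # Dockerfile for the ASP.NET Core app
    depends_on:
      - db
  web2:
    build: .
    depends_on:
      - db
  haproxy:
    build: ./haproxy                # Dockerfile that copies in haproxy.cfg
    ports:
      - "80:80"
    depends_on:
      - web1
      - web2
```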

After trying to run this (which I will get to later), I discovered that there were some differences in how linking containers works. Instead of automatically providing environment variables that reference the linked containers, Docker Compose sets up hostnames that match the service names. So, I needed to make a few changes to my configuration files. First, I updated my appsettings.json file to change the host referenced by the ConnectionString.

In the haproxy.cfg configuration file, I referenced the web1 and web2 hosts. I also learned how to correctly configure the proxy to set a cookie and direct users to the same Kestrel instance each time.

With those changes, I was ready to build and run all the containers.
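With the docker-compose.yml in the project root, building and starting everything comes down to the standard Docker Compose commands:

```shell
# Build (or rebuild) the images for every service in docker-compose.yml,
# then start all the containers in the background
docker-compose build
docker-compose up -d
```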

Since I'm deploying a new instance of the database server and an empty database, I also need to run my Entity Framework Migrations. So the next step is to run "docker-compose exec -d web1 dotnet ef database update".

Finally, when I need to stop and remove the containers to make way for a newer build, Docker Compose takes care of that as well.
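Tearing everything down is a single standard command:

```shell
# Stop and remove the containers (and the default network)
# that docker-compose up created
docker-compose down
```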