Manual Application-Layer Security Testing AND Automated Scanning Tools


There are many automated application security tools available on the market. They are useful for identifying vulnerabilities in your company's applications, but they shouldn't be used alone as part of a risk identification process. This post discusses the advantages of automated tools and identifies gaps that must be filled with manual testing techniques for a more comprehensive view of application risk.

For my purposes here, I'm going to consider automated scanning tools like WebInspect, AppScan, and Acunetix. These are all automated dynamic analysis tools, but there are quite a few other options, such as automated code review tools, binary analyzers, and even newer technologies that instrument application code and analyze it at runtime. The capabilities of each of these types of tools differ, but many of the pros and cons are similar.

Automated tools require at least a one-time setup step to configure them for your application. Once configured, the tools can run on a scheduled basis or even as part of a continuous integration build process. Automated tools can scan an application and deliver results very quickly, often within hours, and they can scan large numbers of applications. They are great at identifying vulnerabilities that can be found by sending attack input and analyzing the application's output for vulnerability signatures. The tools can detect popular vulnerabilities like SQL injection, cross-site scripting, disclosure of stack traces or error messages, disclosure of sensitive information (like credit card numbers or SSNs), open redirects, and more. They generally perform best at identifying vulnerabilities of low to moderate complexity. This makes automated tools great for use cases such as:
  • A first time look at the security of a web application
  • Scanning all of an organization's web applications for the first time or on a periodic basis
  • Integration with other automated processes, such as the build step of a continuous integration server (probably on a schedule, e.g., every night)
After understanding the value that automated tools can provide, it's also important to understand their limitations. The primary limitation is that they aren't human. They are written to find a concrete, specific set of issues and to identify those issues based on signatures or algorithms. An experienced application security tester's knowledge and expertise will far outshine a tool's, allowing them to identify tremendously more issues and to interpret complex application behavior to determine whether a vulnerability is present. This typically means manual testing is required to identify vulnerabilities related to:
  • Authentication process steps including login, forgot username/password, and registration
  • Authorization, especially determining if data is accessed in excess of a user's role or entitlements or data that belongs to another tenant
  • Business logic rules
  • Session management
  • Complex injection flaws, especially those that span multiple applications (for example, a customer application accepts and stores a cross-site scripting payload, but the exploit executes in the admin application)
  • Use of cryptography
  • The architecture and design of the application and related components
The issues listed above are extremely important! For example, it's unacceptable for an attacker to be able to read and modify any other user's data, but an automated tool isn't going to be able to identify this type of flaw. These tools also tend to perform poorly on web services, REST services, thick clients, mobile applications, and single-page applications. For these reasons, manual testing is absolutely essential for identifying risk in an application.

If manual testing can identify all the same issues as an automated scanning tool and more, why bother with the automated scanning tool? Well, sometimes you don't need it. But most of the time, it's still very helpful. The key factors are speed and scale: you can scan a lot of web applications very quickly, receive results, and fix the findings, THEN follow up with manual testing. The caution is that scanning alone and postponing manual testing may leave critical-risk vulnerabilities undiscovered in the application, so don't wait too long to follow up.

If your organization needs assistance choosing and adopting automated scanning tools or would like more information about manual application-layer security testing, please contact Security PS. Security PS does not sell automated tools, but we have advised many of our clients regarding how to choose an appropriate tool, prepare staff for using that tool, and update processes to include its usage.

ASP.NET Core Basic Security Settings Cheatsheet

When starting a new project, looking at a new framework, or fixing vulnerabilities identified during an assessment or by a tool, it's nice to have one place to refer to for fixes to common security issues. This post provides solutions for some of the more basic issues, especially those around configuration. Most of these answers can be found in Microsoft's documentation or with a quick Google search, but hopefully having it all right here will save others some time.

Enabling an Account Lockout Response

To enable the account lockout response for ASP.NET Identity, first modify the Startup.cs file to choose appropriate settings. In the ConfigureServices method, add the following code:
services.Configure<IdentityOptions>(options =>
{
  // also apply lockout to newly registered users (optional)
  options.Lockout.AllowedForNewUsers = true;
  // TimeSpan.MaxValue keeps accounts locked until manually unlocked by an admin
  options.Lockout.DefaultLockoutTimeSpan = TimeSpan.MaxValue;
  // three failed attempts before lockout
  options.Lockout.MaxFailedAccessAttempts = 3;
});
With the settings configured, lockout still needs to be enabled in the login method of the account controller. In AccountController -> Login(LoginViewModel model, string returnUrl = null), change lockoutOnFailure from false to true as shown below:
var result = await _signInManager.PasswordSignInAsync(model.Email, model.Password, model.RememberMe, lockoutOnFailure: true);
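The sign-in result then indicates when an account is locked out. As a point of reference, the default Visual Studio template's Login action handles it roughly like this (view names are the template's defaults):
if (result.Succeeded)
{
    return RedirectToLocal(returnUrl);
}
if (result.IsLockedOut)
{
    // Returned once MaxFailedAccessAttempts is exceeded
    return View("Lockout");
}
ModelState.AddModelError(string.Empty, "Invalid login attempt.");
return View(model);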

Enforcing Password Complexity Requirements

ASP.NET Identity comes with a class that validates passwords. It is configurable and lets one decide whether passwords must include digits, uppercase letters, lowercase letters, and/or symbols. This policy can be further customized by implementing the IPasswordValidator interface or by extending Microsoft.AspNetCore.Identity.PasswordValidator. The code below extends the PasswordValidator and ensures the password does not contain the individual's username.
using ASPNETCoreKestrelResearch.Models;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using System.Threading.Tasks;

namespace ASPNETCoreKestrelResearch.Security
{
    public class CustomPasswordValidator<TUser> : PasswordValidator<TUser> where TUser : IdentityUser
    {
        public override async Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password)
        {            
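            // Run the framework's built-in password checks first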
            IdentityResult baseResult = await base.ValidateAsync(manager, user, password);

            if (!baseResult.Succeeded)
                return baseResult;
            else
            {
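                // Reject any password that contains the username, ignoring case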
                if (password.ToLower().Contains(user.UserName.ToLower()))
                {                    
                    return IdentityResult.Failed(new IdentityError
                    {
                        Code = "UsernameInPassword",
                        Description = "Your password cannot contain your username"
                    });
                }
                else
                    return IdentityResult.Success;
            }
        }
    }
}
Next, ASP.NET Identity needs to be told to use that class. In the ConfigureServices method of Startup.cs, find services.AddIdentity and add ".AddPasswordValidator<CustomPasswordValidator<ApplicationUser>>();" as shown below.
services.AddIdentity<ApplicationUser, IdentityRole>()
  .AddEntityFrameworkStores<ApplicationDbContext>()
  .AddDefaultTokenProviders()
  .AddPasswordValidator<CustomPasswordValidator<ApplicationUser>>();
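If the built-in complexity rules mentioned above are all you need, they can be tuned without a custom validator. A minimal sketch (the values here are illustrative, not recommendations) for the ConfigureServices method:
services.Configure<IdentityOptions>(options =>
{
  // Built-in rules checked by the default PasswordValidator
  options.Password.RequiredLength = 12;
  options.Password.RequireDigit = true;
  options.Password.RequireUppercase = true;
  options.Password.RequireLowercase = true;
  options.Password.RequireNonAlphanumeric = true;
});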

Choosing a Session Timeout Value

Developers can choose how long a session cookie remains valid and whether a sliding expiration should be used by adding the following code to the ConfigureServices method of Startup.cs:
services.Configure<IdentityOptions>(options =>
{
  options.Cookies.ApplicationCookie.ExpireTimeSpan = TimeSpan.FromMinutes(10);
  options.Cookies.ApplicationCookie.SlidingExpiration = true;
});

Enabling the HTTPOnly and Secure Flags for Authentication Cookies

First, if you are using Kestrel, HTTPS (TLS) is not supported; instead, it is handled by HAProxy, Nginx, Apache, IIS, or another web server you place in front of the application. Because of that, the Secure flag cannot be enabled properly from the application code when using Kestrel. However, if you are hosting the application directly in IIS, it will work. The following code demonstrates enabling both the HTTPOnly and Secure flags for the cookie middleware in ASP.NET Identity through the ConfigureServices method in Startup.cs.
services.Configure<IdentityOptions>(options =>
{
  options.Cookies.ApplicationCookie.CookieHttpOnly = true;
  options.Cookies.ApplicationCookie.CookieSecure = CookieSecurePolicy.Always;
});

Enabling Cache-Control: no-store

When applications contain sensitive information that should not be stored on a user's local hard drive, the Cache-Control: no-store HTTP response header can provide that guidance to browsers. To enable that feature, add the following code to the ConfigureServices method in Startup.cs.
services.Configure<MvcOptions>(options =>
{
  options.CacheProfiles.Add("DefaultNoCacheProfile", new CacheProfile
  {
    NoStore = true,
    Location = ResponseCacheLocation.None
  });
  options.Filters.Add(new ResponseCacheAttribute
  {
    CacheProfileName = "DefaultNoCacheProfile"                    
  });
});
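Because the filter references a cache profile, an individual action can still override the global no-store default where caching is clearly safe. A hypothetical example for a static, non-sensitive page:
[ResponseCache(Duration = 3600, Location = ResponseCacheLocation.Any)]
public IActionResult About()
{
  // Overrides the global no-store filter for this action only
  return View();
}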

Disabling the Browser's Autocomplete Feature for Login Forms

The changes to ASP.NET's Razor views make this super simple. Just add the autocomplete="off" attribute as if it were a normal HTML input field:
<input asp-for="Email" class="form-control" autocomplete="off"/>
<input asp-for="Password" class="form-control" autocomplete="off"/>

Modifying the Iteration Count for the Password Hasher's Key Derivation Function

First, I believe the default right now is 10,000 iterations and the algorithm is PBKDF2. The code below won't change that default iteration count, but it shows where it can be done. In the ConfigureServices method of Startup.cs, add the following code.
services.Configure<PasswordHasherOptions>(options =>
{                
  options.IterationCount = 10000;
});

Enforcing HTTPS and Choosing Appropriate TLS Protocols and Cipher Suites

As mentioned above, if you are using Kestrel, you won't be able to use HTTPS directly, so you won't enforce it in your code; instead, you will need to look up how to do this in HAProxy, Nginx, Apache, IIS, etc. If you are hosting your application directly in IIS, you can enforce the use of HTTPS using something like https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/RequireHttpsAttribute.cs, BUT it will only be applied to your MVC controllers/views; it will not be enforced for static content (see https://github.com/aspnet/Home/issues/895). If you want to do this in code, you will need to write some middleware to enforce it across the entire application, as sketched below. Finally, the cipher suites offered cannot be changed in code.
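As a starting point, here is a minimal middleware sketch for the beginning of the Configure method in Startup.cs. It assumes the application sees the original request scheme (i.e., it is not behind a TLS-terminating proxy) and, for brevity, ignores non-default port mapping:
app.Use(async (context, next) =>
{
  if (!context.Request.IsHttps)
  {
    // Redirect every HTTP request, including static files, to HTTPS
    var httpsUrl = "https://" + context.Request.Host.Host +
                   context.Request.Path.Value + context.Request.QueryString.ToString();
    context.Response.Redirect(httpsUrl, permanent: true);
  }
  else
  {
    await next();
  }
});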

Enabling a Global Error Handler

A custom global error handler is demonstrated by the Visual Studio template. The following relevant code can be found in the Configure method of Startup.cs.
if (env.IsDevelopment())
{
  app.UseDeveloperExceptionPage();
  app.UseDatabaseErrorPage();
  app.UseBrowserLink();
}
else
{
  app.UseExceptionHandler("/Home/Error");
}
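The /Home/Error path maps to an ordinary MVC action whose job is to return a generic page without leaking exception details; the template's version is essentially:
public IActionResult Error()
{
  // Intentionally renders a generic error view with no exception details
  return View();
}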

Removing the Server HTTP Response Header

All responses from the server are going to return "Server: Kestrel" by default. To remove that value, modify UseKestrel() in Program.cs to include the following settings change:
public static void Main(string[] args)
{
  var host = new WebHostBuilder()
    .UseKestrel(options =>
    {
      options.AddServerHeader = false;
    })
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .UseUrls("http://0.0.0.0:5000")
    .Build();

  host.Run();
}

X-Frame-Options, Content-Security-Policy, and Strict-Transport-Security HTTP Response Headers

The following post seems to cover most of these headers well: http://andrewlock.net/adding-default-security-headers-in-asp-net-core/. I haven't evaluated its design, but I did verify that it installs and that the headers are added successfully. Since Kestrel does not support HTTPS, consider whether it's appropriate to implement the Strict-Transport-Security header in code or by configuring the web server placed in front of the application.

I installed the NuGet package using "Install-Package NetEscapades.AspNetCore.SecurityHeaders". Then, I made sure to have the following imports in Startup.cs:
using NetEscapades.AspNetCore.SecurityHeaders;
using NetEscapades.AspNetCore.SecurityHeaders.Infrastructure;
I added the following code to the ConfigureServices method of Startup.cs:
services.AddCustomHeaders();
Last, I added this code to the Configure method of Startup.cs:
app.UseCustomHeadersMiddleware(new HeaderPolicyCollection()
  .AddContentTypeOptionsNoSniff()
  .AddFrameOptionsDeny()
  //.AddStrictTransportSecurityMaxAge()
  .AddXssProtectionBlock()
  //.AddCustomHeader("Content-Security-Policy", "somevaluehere")
  //.AddCustomHeader("X-Content-Security-Policy", "somevaluehere")
  //.AddCustomHeader("X-Webkit-CSP", "somevaluehere")
);
Make sure you add this code BEFORE app.UseStaticFiles(); otherwise, the headers will not be applied to your static files.

Why Use the NIST CSF?

You may have heard about a framework that has been gaining traction since its inception a few years ago called the Cybersecurity Framework (CSF).  If not, I’ll give you a quick recap.  The framework was developed by the Commerce Department’s National Institute of Standards and Technology (NIST) in response to the President’s February 2013 Executive Order entitled “Improving Critical Infrastructure Cybersecurity”.  Following almost a year of collaborative discussions with thousands of security professionals across both the public and private sectors, NIST published version 1.0 in February 2014: a framework of guidelines that can help organizations identify, implement, and improve cybersecurity practices as well as their security program as a whole.  The framework is architected as a continuous process that grows in sync with the constant changes in cybersecurity threats, processes, and technologies, and it is designed to be revised periodically to incorporate lessons learned and industry feedback.  At its core, the framework conceives of cybersecurity as a progressive, continuous lifecycle that identifies and responds to threats, vulnerabilities, and solutions. The CSF gives organizations the means to determine their current cybersecurity state and capabilities, set goals for desired outcomes, and establish a plan for improving and maintaining the overall security program. The framework itself is available here.

So, what makes the CSF different from NIST 800-53 or ISO 27001/27002?  By definition, those are detailed control documents that provide requirements for adhering to specific control standards. In comparison, the CSF provides a high-level framework for assessing and prioritizing functions within a security program drawn from those existing standards.  Due to its high-level scope and common structure, the CSF is also much more approachable for those with non-technical backgrounds and for C-level executives.  It was created with the realization that many of the required controls and processes for a security program have already been created and duplicated across these standards.  In effect, it provides a common structure for the industry that allows any organization to drive growth and maturity of its cybersecurity practices and to shift from a reactive state to a proactive state of risk management.

For organizations that are federally regulated, the CSF may be of particular importance.  Many top-level directors have expressed that an industry-driven cybersecurity model is far preferable to prescriptive regulatory approaches from the Federal government.  Even though the CSF is currently voluntary for both the public and private sectors, it is important to realize that, with a high degree of probability, this will not remain the case.  Federal regulators and Congressional lawmakers have already discussed using this voluntary framework as the baseline for best security practices, including for assessing legal or regulatory exposure and for insurance purposes. If these suggestions become reality, adopting the CSF now could give organizations much more flexibility and cost savings in how they implement it.

In addition to staying ahead of possible new laws and federal mandates, the CSF provides any organization, regulated or not, a number of other benefits, all of which support a stronger cybersecurity posture.  Some of these benefits include:
  • A common language and structure across all industries
  • Opportunities for collaboration amongst public and private sectors
  • The ability to demonstrate due-diligence and due-care by adopting the framework
  • Greater ease in adhering to compliance regulations or industry standards
  • Improved cost efficiency
  • Flexibility in using any existing security standards, such as HITRUST, NIST 800-53, ISO 27002, etc.
Though it is difficult to express all the possible benefits in this short post, Security PS highly recommends that any organization take a good look at the CSF and consider its options for implementation as well as future laws that may influence its use.

Questions?

If you have more questions, please consider contacting us for additional details.  We’ll be glad to assist you and your organization.