My KringleCon 2018 Experience!

Hello World

Hi, my name is Christian, and I joined the Security PS team back in July as an Associate Application Security Engineer. As an associate, I get to spend a significant amount of time training to build out my application security knowledge and experience so I can grow in my technical testing skills and consulting ability. I know there are many others out there who are working to grow their security knowledge as well, so I've decided to blog a bit of what I'm learning in hopes that my "aha" moments may help and spur on others on a similar journey.

In December, I participated in the SANS Holiday Hack Challenge. In this post, I want to share the top four things I learned from the challenge. They are broken down into the two concepts and two tools I found most exciting to learn about. These concepts and tools apply to web app security, network security, and general best practices. The first concept I would like to share is a network best practice: keep credentials out of system commands.

The Event

At the gates of Santa’s Castle
For each of the last eleven years, the SANS Institute has hosted a holiday-themed hacking challenge. For 2018, SANS asked the hacking-challenge attendees to help Santa save the North Pole from a malicious cyber takeover. To assist attendees, SANS put together a free virtual conference that went hand-in-hand with the hacking challenge. This virtual conference was known as KringleCon, and all of the conference videos were hosted on YouTube. In addition to the in-game hints and resources, the conference presenters delivered talks on relevant topics, concepts, and tools, with several of the videos containing hints for solving the objectives in the hackfest. Beyond the fun of the hacking challenges, KringleCon proved an extremely effective way to teach new tools and security concepts. Here are a few of those concepts I encountered.

Concept 1: Keeping Credentials Out of System Commands

As I was helping to secure the North Pole’s networks, I approached one of Santa’s elves, who needed to access a networked SMB share to upload a job report. Simple enough, except the elf forgot his password and needed assistance in recovering his forgotten credentials. Thankfully, the elf in question provided some resources to assist in triaging the network. With a little elbow grease and some nifty output formatting, I was able to help the elf retrieve his credentials.
What I learned is that, when running instructions via the command line, it is important not to enter plaintext credentials as part of the command, because those commands are viewable by anyone who can list the running processes. This is a vulnerable practice: malicious actors with access to the system may be able to retrieve command line credentials for restricted resources simply by viewing the process list.

The most significant lesson I took from this exercise is to never enter credentials on the command line. However, this isn't always straightforward. If a tool in a corporate environment seems to require command line credentials, the best solution is to find a way to invoke the tool differently so the credentials never appear in the command. One common workaround is to store the credentials in a file and have the tool read them from that file using its input/output options. Of course, this makes it imperative to apply the appropriate protections to the file itself, such as strict read/write permissions or an encrypted storage method.
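To make the workaround concrete, here is a minimal Python sketch for a POSIX system (the file name and the "username:password" format are my own invention) that creates a credentials file with owner-only permissions and refuses to read it if other users can access it:

```python
import os
import stat

def read_credentials(path):
    """Read 'username:password' from a file restricted to its owner."""
    mode = os.stat(path).st_mode
    # Refuse to use the file if group or other users have any access.
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"{path} is accessible by other users")
    with open(path) as f:
        username, _, password = f.read().strip().partition(":")
    return username, password

# Create the file with owner-only (0600) permissions before writing the secret.
path = "smb_creds.txt"
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("elf:NotOnTheCommandLine!")

# The credentials never appear in the process list, unlike `tool --pass=...`.
print(read_credentials(path))
```

The same idea applies to built-in tool options such as reading a password from a protected file or from standard input, which many command line tools support.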

Next, let’s talk about a web application tool that makes it easy to find lost, forgotten, or hidden artifacts: trufflehog.

Tool 1: Using Trufflehog to Dig Through Git Repositories

KringleCon badge view showing the hints and resources pane
GitHub is a phenomenal resource for team collaboration on, and development of, software applications, but, like any repository, artifacts that should not be public may inadvertently be uploaded. Examples of critical information to be wary of committing to remote repositories include SSH keys, RSA keys, internal file paths, cleartext credentials, and more. While this data can be removed from the current files, the artifacts still remain in the commit history stored in the repository's .git directory.

Cue trufflehog. According to the tool’s developer, trufflehog “searches through git repositories for secrets, digging deep into commit history and branches. This is effective at finding secrets accidentally committed.” In the context of KringleCon, one of the grand challenges involved retrieving the password for an encrypted zip file. Using trufflehog, the credentials were quickly and easily retrieved. A great video from the KringleCon conference demonstrates the value of trufflehog.
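The core of trufflehog's entropy check is simple to sketch. The following is not trufflehog's actual code, just a toy illustration of the heuristic: long, high-entropy tokens in committed text get flagged as possible secrets (the threshold, minimum length, and sample string are all arbitrary choices of mine):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_like_secret(token, min_len=20, threshold=4.0):
    """Flag long, high-entropy tokens, the way trufflehog's entropy scan does."""
    return len(token) >= min_len and shannon_entropy(token) > threshold

line = "password = 'AhR%tMk82!xLp#qZv9@3uW'  # committed by mistake"
flagged = [w.strip("'\"#") for w in line.split() if looks_like_secret(w.strip("'\"#"))]
print(flagged)  # the random-looking credential stands out; prose words do not
```

Real scanners pair an entropy check like this with regexes for known key formats (AWS keys, private key headers, and so on) to cut down on false positives.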

Based on my experience with trufflehog during KringleCon, I plan to include trufflehog in my open source reconnaissance should I find a development team’s GitHub repository during application assessments. Furthermore, tools similar to trufflehog may be useful for development teams to run on their own git repositories to determine if any sensitive information is stashed anywhere in the repository history prior to executing a push or pull request.

The next stop on the KringleCon tour is the concept of dynamic data exchange and how it may be exploited via CSV files.

Concept 2: Exploiting Dynamic Data Exchange via CSV Injection

Being in infosec, we often hear about the ever-ominous Advanced Persistent Threat (APT), usually in reference to nation-state actors. Well, the next concept I learned about from KringleCon is actually a vulnerability that exists in common office applications that APT28 (Sofacy, aka Fancy Bear, aka Strontium, aka the Russian GRU) has exploited in the past: dynamic data exchange.

In true web application fashion, Santa set up a web form which accepts resumes uploaded in CSV format. Unfortunately, Santa was not aware that dynamic data exchange (DDE) allows for CSV injection. Also known as formula injection, the attack abuses a feature of Microsoft Excel and LibreOffice Calc: a cell beginning with a formula character is computed, and its result rendered, when the file is opened. This video from KringleCon does a very good job of explaining and demonstrating the vulnerability. Malicious users can harness these formulas to exploit vulnerabilities within the spreadsheet software, trick the user into ignoring security warnings, or read and exfiltrate other data.

Getting back to the North Pole, thanks to the in-game objectives, I knew Santa's CSV resume upload feature was exploitable. I uploaded a typical-looking CSV resume replete with fake job information and an invisible embedded formula that copied a local file to a public web directory. After playing with the formulas on my local system, I was able to upload the malicious CSV file and pull down the local file from Santa's HR network.

Ultimately, the CSV DDE exploit is a well-known injection vector and works mainly as a payload for a social engineering attack. Critically, the exploit depends on the victim ignoring system warnings and clicking through to open the attachment. At Security PS, we test the robustness of file upload features - but this attack vector goes further, taking advantage of the core interdependencies of office applications.

Finally, KringleCon provided me the opportunity to learn a new tool which helps to visualize resource authorization controls in Active Directory networks: Bloodhound.

Tool 2: Using Bloodhound to Graph Active Directory Trust Relationships

As far as business resource authorization controls go, Active Directory (AD) has become a mainstay of corporate networks and one that attackers seek to abuse. However, the trust relationship structure in AD is difficult to visualize and, as such, unintended trust relationships can form that leave open vectors of attack. This is where Bloodhound comes into play.

More appropriate for network penetration tests than web application assessments, Bloodhound needs three core pieces of information from an AD environment: who is logged onto which computers, what users and groups belong to the different AD groups, and who has admin rights on which computers. The Bloodhound tool then takes all of this information and presents the data using graph visualization tools. KringleCon linked to a great video that shows the tool in action.

The true value of Bloodhound is its ability to map AD relationships and, from these, show the security assessor likely attack paths for privilege escalation. In a similar vein, defenders can utilize Bloodhound to identify the same attack paths in order to neutralize possible exploit vectors. These relationships would otherwise be too obscure to notice, or too time consuming to enumerate by hand.
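Under the hood, "finding an attack path" is essentially a shortest-path search over a graph of trust relationships. Here is a toy sketch of that idea with entirely made-up AD objects; it is not Bloodhound's actual implementation, which builds its graph in a graph database:

```python
from collections import deque

# Toy trust edges: "A -> B" means A has rights over, or a session on, B.
# All names are invented for illustration.
edges = {
    "alice": ["WKSTN01"],
    "WKSTN01": ["HelpDesk"],
    "HelpDesk": ["SRV02"],
    "SRV02": ["Domain Admins"],
    "bob": ["WKSTN02"],
}

def attack_path(start, goal):
    """Breadth-first search for the shortest chain of trust relationships."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: this principal cannot reach the goal

print(attack_path("alice", "Domain Admins"))
```

An attacker reads the result as a to-do list for privilege escalation; a defender reads it as a list of trust relationships to break.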

As relates to KringleCon, I utilized Bloodhound to view a particular AD environment. The objective was to identify a reliable path from a Kerberoastable user to the Domain Admin group. Using Bloodhound's built-in query language along with its standard queries, the attack path and the specific user to target were readily identified.

Sum Up

KringleCon Youtube Virtual Conference Talks

Competitions and virtual conferences such as KringleCon offer a wealth of practice and knowledge that would otherwise take significant time and experience to amass. Not all capture-the-flag competitions are created equal, and, indeed, some end up being unrealistic puzzle boxes with no real-world applicability. Fortunately, KringleCon struck the balance between fun challenges and an educational virtual conference. As a new infosec community member, I learned very valuable lessons from KringleCon which I intend to utilize in my role as a web application security engineer. It also gave me a vehicle to plug into the infosec community and was a very fun experience as well. I highly recommend that newbies and seasoned security practitioners alike take time to experience CTFs like KringleCon.

Protecting Thick-Client Applications From Attack (Or How To Not Have To)

Security PS Application Security
In the previous post, I discussed security testing techniques Security PS used to assess a complex thick-client application. After the assessment was complete, our client asked:
  • Is .NET less secure than other languages since these techniques are possible?
  • How do I stop attackers from manipulating my applications?
This post answers those questions and discusses best practices around securing client-server architectures.

Security PS tested the thick-client application with a variety of techniques including:
  • Reusing the application's DLLs to communicate with the server and decrypt data
  • Using a debugger to interactively modify variables and program flow
  • Disassembling, modifying, and reassembling the thick-client application to integrate it with a custom testing framework
Considering these methods, how does .NET compare to other platforms? Is .NET less secure than another choice?
.NET is not unique. In other assessments, Security PS has used the same techniques to assess Android, Java, and native (C/C++ executable) applications. Based on my quick research, some or all of the techniques work for iOS applications as well. The only differences between these platforms are the level of complexity and the toolset required. .NET is not any more or less secure than any other platform in this regard.

How do you stop attackers from reusing DLLs, interactively debugging applications, or modifying applications?
You shouldn't need to in most cases. For a client-server architecture, the thick-client resides on a user's (or attacker's) computer. That environment cannot and should not be trusted to enforce security controls or protect sensitive data. Client-side security controls can be defeated or bypassed completely, and any data sent to the client can be obtained by an attacker (even if it is encrypted).

Instead, organizations should spend their time architecting and designing applications that enforce security controls on the server-side. If all the security controls are implemented on the server-side, then it does not matter whether the attacker manipulates the thick-client (or writes his or her own client application). This security best practice applies to web applications, web services, and client-server applications.
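As a minimal illustration of the principle (the roles, tokens, and actions are all hypothetical), the server re-checks authorization on every request and never trusts anything the client asserts, so a manipulated thick-client gains the attacker nothing:

```python
# Server-side permission model: which roles may perform which actions.
PERMISSIONS = {"admin": {"read", "write", "delete"}, "user": {"read"}}

# Sessions are issued and stored server-side; the client only holds a token.
SESSIONS = {"token-abc": "user"}

def handle_request(token, action):
    """Every request is re-authorized on the server, regardless of what the
    client claims or how its code has been modified."""
    role = SESSIONS.get(token)
    if role is None:
        return "401 Unauthorized"
    if action not in PERMISSIONS.get(role, set()):
        return "403 Forbidden"  # denied even if the client's UI was patched
    return "200 OK"

print(handle_request("token-abc", "read"))    # permitted for the user role
print(handle_request("token-abc", "delete"))  # refused server-side
```

The client application may hide or show whatever buttons it likes; the decision that matters is made on hardware the attacker does not control.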

If an organization still wishes to protect client-side code from analysis and manipulation, what are the options? If you search on the Internet, you may find these choices:
  • Strong name verification (for .NET)
  • Obfuscation
  • Native compilation (for .NET)
  • Encryption
  • and more...
Each option can slow down an attacker and will make analysis or modification more difficult, but none of them prevents a skilled and determined attacker from eventually reaching their goal. Let's briefly dig into each one.

Strong name verification enables an assembly to identify and reference the correct version of a DLL. Some Internet sources recommend using strong name verification to prevent attackers from modifying DLLs but, according to Microsoft, it should not be used as a security control. Security PS's experience agrees with that assertion: it is trivial to bypass strong name verification, especially with local administrator privileges on a computer.

A non-technical explanation of obfuscation is that a tool jumbles up the variable names, program structure, and/or program flow before the application is distributed to users. Then, when an attacker uses an interactive debugger or reflection to view the code, he or she has difficulty following and understanding the program's logic. There are many free and commercial tools that provide this protection, and it does demotivate casual attackers from performing analysis. However, there are also tools to help deobfuscate applications or track program flow.
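A toy illustration of the name-jumbling part of the idea (nowhere near what a real obfuscator does, and the sample code and identifier list are made up) shows why obfuscated output is harder to follow:

```python
import itertools
import re

def obfuscate_names(source, names):
    """Replace meaningful identifiers with meaningless ones, a toy version of
    what commercial obfuscators do to variable and method names."""
    replacements = dict(zip(names, (f"a{n}" for n in itertools.count())))
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, names)) + r")\b")
    return pattern.sub(lambda m: replacements[m.group(1)], source)

code = "def check_license(license_key):\n    return validate(license_key)"
print(obfuscate_names(code, ["check_license", "license_key", "validate"]))
```

Real obfuscators work on compiled code rather than source text and also rewrite control flow, but the effect on readability is the same: the intent of the code no longer survives in its names.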

Obfuscation tools can also make it difficult to use reverse engineering tools like ILSpy, dnSpy, and ILDasm/ILAsm. The tools can corrupt or mangle portions of the application to crash an attacker's toolsets. Additionally, encryption can be applied to strings, resources, or the code within a method. This makes it difficult to use reflection to see the original code and more complex to modify the IL code. However, eventually the code must be decrypted so it can be run, making it available to attackers.

Security PS's research into two obfuscation tools (ConfuserEx and Dotfuscator Community Edition) showed that most controls can be bypassed by a skilled attacker or worked around using WinDbg and its managed code extensions. Additionally, some of the obfuscation controls carry a significant performance impact.

Native compilation (i.e., Ngen) compiles a .NET DLL into processor-specific machine code. Security PS found that natively compiled .NET applications still allow an attacker to use interactive debuggers to introspect and control the program. Additionally, there is no mention of using it as a security feature in Microsoft's documentation. Therefore, this technique does not provide a significant amount of protection.

There are even more techniques than I've named here. But the important points to remember are:
  • Implement and enforce security controls on the server-side
  • Only send information to the client-side that you want the user (or attacker) to see (even if encrypted)
  • Don't rely upon a thick-client, browser, desktop application, etc. to provide any reliable level of security
  • Only apply protection mechanisms to the executable if you absolutely have to and/or if it is nearly free (money, time, operationally, etc.)

Lessons From Attacking Complex Thick-Client Applications

Security PS performs assessments on a wide variety of software architectures and platforms, some of which cannot be tested effectively using the more standard testing tools and methods. Recently, our team performed an assessment on a more complex application architecture. In this case, a .NET thick-client communicated with a variety of server-side components using either signed SOAP messages or custom TCP messages. These factors meant our consultants couldn't use a proxy tool to directly manipulate traffic for security testing purposes. This post discusses some of the techniques our application security team used to overcome those challenges and successfully complete the assessment.

Security PS used three techniques to manipulate both the signed SOAP requests and the custom TCP messages:
  • Writing custom code and reusing thick-client libraries
  • Attaching a debugger to the running application and manipulating variables
  • Disassembling, modifying, and reassembling the application
Code is often written in a modular way that makes it easy to reuse existing libraries. In this assessment, Security PS wrote GUI applications that reused the thick-client's libraries to decrypt data or send data to the server. This technique involved creating a new Visual Studio Project, adding the DLLs as a reference, and then writing code that calls functions within those thick-client libraries.

Next, Security PS needed to modify a field within a signed SOAP request to test authorization controls. Our team used a debugger and breakpoints to perform this modification. For .NET thick-clients, this attack is possible after disassembling and reassembling the application with debugging enabled.

Finally, we needed a way to quickly and easily manipulate custom TCP messages to identify vulnerabilities. Use of the debugger and breakpoints was too slow. Use of a custom written testing tool meant having to understand and duplicate some complex interactions that the thick-client managed. So, Security PS chose to directly modify the thick-client to allow interactive modification of TCP messages by consultants. For that to be possible, we needed to disassemble the thick-client, modify the intermediate language code, and then reassemble it.

Using these testing techniques, Security PS identified a number of high impact vulnerabilities. After discussing the vulnerabilities with the client, two of the questions they asked were:
  • Is .NET less secure than other languages since these techniques are possible?
  • How do I stop attackers from manipulating my applications?
The next post will consider these questions more, but the primary message we communicated to our client focused on a critical best practice for secure software design: all security controls must be implemented on a trusted component in the application architecture. In this case, security controls must be implemented on the server-side rather than the client-side. The client runs on the attacker's computer, where everything can be analyzed and modified regardless of the security controls used, so the architecture must assume the client environment cannot be trusted. While additional controls can increase the difficulty an attacker faces in manipulating client-side security controls, it is important to recognize that the root of this security weakness is fundamentally a design flaw that must be addressed to fully mitigate the risk.

Stay tuned for a follow-up on the questions brought up above.

OAuth Is Not Meant For Authentication!

As we work with software development teams to help them apply security principles and practices to their applications, we commonly identify misunderstandings or gaps in the team's understanding of the security features, APIs, or frameworks they are using. It's important to identify and correct these misunderstandings as early as possible. When such security elements are misused, systemic security flaws are introduced into the application that are difficult to resolve without significant reworking of the code or architecture.

One such example is the use of OAuth. As useful as OAuth is, it must be used for its intended purpose. If we try to make it do things it wasn't designed or intended to do, we get into trouble. Let's clarify the fundamental purpose and use of OAuth and in doing so, clear up a common misunderstanding with it.

OAuth is not meant for authentication. OAuth is for authorization.

Here are a few points demonstrating why:

OAuth has four Grant Types:
  • Authorization Code
  • Implicit
  • Resource Owner Password Credential
  • Client Credentials
For the "Authorization Code" and "Implicit" grants, the specification doesn't govern the submission of a username or password; that is entirely outside the scope of OAuth. This is a great warning flag that OAuth is not intended to be used directly for authentication.

"Client Credentials" does involve credentials, but they belong to the client application, not a user: a client ID and secret sent as a Basic Authorization header (Base64-encoded "client_id:client_secret"). It is not a grant used by users. Here's what the specification says:
"Client credentials are used as an authorization grant typically when the client is acting on its own behalf (the client is also the resource owner) or is requesting access to protected resources based on an authorization previously arranged with the authorization server."
An example client could be a third-party service that a user has granted an "offline" token; that service may make requests without the user interacting with it.
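For illustration, here is a sketch of how such a client might build its token request. The client ID, secret, and endpoint are hypothetical, and the request is constructed but never sent:

```python
import base64
from urllib.request import Request

# Hypothetical client registered with an authorization server.
client_id, client_secret = "report-service", "s3cr3t"

# Basic auth header: Base64 of "client_id:client_secret".
credentials = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()

req = Request(
    "https://auth.example.com/token",           # illustrative token endpoint
    data=b"grant_type=client_credentials",      # the grant named in the spec
    headers={
        "Authorization": f"Basic {credentials}",
        "Content-Type": "application/x-www-form-urlencoded",
    },
)
print(req.get_header("Authorization"))
```

Note there is no username or password field anywhere: the client authenticates as itself, which is exactly why this grant says nothing about authenticating users.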

Now for "Resource Owner Password Credentials." Yes, you can use it to login with a username and password, but you probably shouldn't. Not because it's insecure, but because it doesn't scale well and isn't flexible. The specification says:
"The resource owner password credentials (i.e., username and password) can be used directly as an authorization grant to obtain an access token.  The credentials should only be used when there is a high degree of trust between the resource owner and the client (e.g., the client is part of the device operating system or a highly privileged application), and when other authorization grant types are not available (such as an authorization code)."
So why shouldn't you use the Resource Owner Password Credentials grant for authentication? Well, let's start by looking at the login request and response:
POST /token HTTP/1.1
Host: server.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=johndoe&password=A3ddj3w

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{"access_token":"2YotnFZFEjr1zCsicMWpAA","token_type":"bearer","expires_in":3600}
You submit a username and password, and you get back an access token. The access token can then be used to call an API. Sounds ok, right? Let's add some complexities. First, OAuth is often used in combination with a stateless REST service. There's no session on the server-side; there's just the access token sent by the client, which is often a Base64-encoded set of claims with a signature (like a JWT). With that in mind, what if you need to do multi-factor authentication? What about security questions? What if there are several different ways a user can log in? How do you integrate all those options with the OAuth Resource Owner Password Credentials grant?
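It helps to remember that a JWT's claims are merely Base64-encoded, not encrypted; anyone holding the token can read them. A small sketch (the sample token is fabricated here, and real code must verify the signature before trusting any claim):

```python
import base64
import json

def decode_jwt_claims(token):
    """Decode (NOT verify) the claims segment of a JWT.
    Anyone can read these claims; only the signature check, done by the
    server, makes them trustworthy."""
    _, claims_b64, _ = token.split(".")
    padded = claims_b64 + "=" * (-len(claims_b64) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a sample header.claims.signature token for illustration only.
header = base64.urlsafe_b64encode(b'{"alg":"HS256","typ":"JWT"}').rstrip(b"=").decode()
claims = base64.urlsafe_b64encode(b'{"sub":"santa","scope":"api"}').rstrip(b"=").decode()
token = f"{header}.{claims}.fake-signature"

print(decode_jwt_claims(token))
```

Because the token itself is the only state, any authentication step that isn't reflected in the signed claims is invisible to the APIs that accept the token.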

One common approach is to make the secondary step an API call and have the mobile or web application force you to complete it. But if the application is stateless and you already have an access token, why not just call any other API method directly with that token and ignore the secondary authentication step? It's trivial to bypass in a client-side application (mobile, thick-client, web page). That means attackers can bypass the very multi-factor system that helps meet compliance and regulatory requirements.

Ok, how about making it part of the login process? Well, that's not really OAuth any more. You have to add fields, add steps, and/or go through more process before issuing an access token. Are you going to write your own custom OAuth client library and server to do it? You might as well write a normal forms-based authentication process instead.


How do others do it then? They use the OAuth Authorization Code or Implicit grants and a separate login server (or identity provider) to handle all the authentication and pass back a user with an access token. In fact, that's exactly what the Authorization Code and Implicit grants are for. The identity server can offer as many options and schemes for authenticating users as it wants. The authentication process is centralized and isolated from the applications that rely upon it, and when the user is authenticated, the identity server passes the user back to the application. With this in mind, you can see that this is exactly what the OAuth specification authors intended, as described in the specification's Introduction section.

This issue seems to come up in assessments more and more often lately. I keep seeing software development teams download a copy of Thinktecture's IdentityServer (a great open source product, by the way) and implement it just for their application using the Resource Owner Password Credentials grant. Then they later bolt on security questions, fingerprint scanners, multi-factor authentication, and "remember me" features. As a result, their stateless application has easily bypassable authentication controls that are very time consuming to fix (or they have to compromise on keeping the API stateless).

If you are considering implementing OAuth, or you already have, reach out to Security PS for help with the design and architecture. Watching conference talks on common OAuth mistakes can also help you avoid them.

Improving User Acceptance of Account Lockout Responses for Login Processes

The purpose of a login process is to identify a particular individual and validate their identity before granting them access to an application. It's critical that the process only allows the owner of an account to login, and it must prevent an attacker from logging in as another user. This post discusses one aspect of protecting authentication processes: using an account lockout response. And, it specifically focuses on decreasing the frustration users experience as a result of that control.

An account lockout response is a security control developers apply to all of the application's authentication processes to limit the number of times an individual can enter the wrong credentials consecutively. For example, if an attacker incorrectly guesses another user's password five times in a row, the application will disable the user's account and notify the user by email. Organizations must choose an appropriate lockout threshold and choose how accounts are unlocked.
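A minimal sketch of such a consecutive-failure counter (the threshold and the in-memory storage are illustrative; a real system persists this state server-side and triggers the email notification where indicated):

```python
LOCKOUT_THRESHOLD = 5  # consecutive failures before the account is disabled

accounts = {"elf": {"password": "correct-horse", "failures": 0, "locked": False}}

def attempt_login(username, password):
    acct = accounts.get(username)
    if acct is None or acct["locked"]:
        return False  # identical response for unknown and locked accounts
    if password == acct["password"]:
        acct["failures"] = 0  # only *consecutive* failures count
        return True
    acct["failures"] += 1
    if acct["failures"] >= LOCKOUT_THRESHOLD:
        acct["locked"] = True  # ...and email the account owner here
    return False

for _ in range(5):
    attempt_login("elf", "wrong-guess")
print(accounts["elf"]["locked"])  # further attempts fail, even with the right password
```

Note that the login response itself never reveals the lockout; as discussed below, the notification has to reach the user through a channel the attacker cannot see.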

When should an organization use an account lockout response? That's difficult to answer unless a company is compelled to implement the control due to a regulation or compliance requirement. The development team, security team, and marketing or user experience groups really need to discuss the pros and cons of such a process. On one hand, the application will have significantly more resistance to password guessing attacks, protecting users' accounts from being compromised. On the other hand, it may frustrate users, raise customer support costs, or even drive customers away from using the application. If an account lockout response is implemented (which Security PS generally encourages), it must be carefully designed to increase user acceptance.

One of the frustrations users experience with account lockout responses is that they may not know their password (or sometimes their username) and lock out their account accidentally. On top of that, the user doesn't know their account is locked out, because the application cannot display a notification on the login page saying so. If it did, the process would inform an attacker that a particular username is valid, resulting in a username harvesting vulnerability. This is one of the key challenges to solve in order to increase user acceptance of the account lockout control.

To address the notification challenge, Security PS recommends several user experience improvements that don't expose the application to additional risk. First, the application can email the user when a failed login attempt occurs. Additionally, if the account is locked out, the application can immediately email the user instructions for unlocking the account. These notifications do not cause username harvesting vulnerabilities, because only the account owner, not the attacker, will receive them.

Email notifications are helpful, but what if the user doesn't check their email while using the application? They can still get frustrated easily. So, developers should consider sending SMS notifications when a user's account is locked out or potentially before the lockout occurs. The message can be short, direct, and can point the user to their emailed instructions for unlocking their account or resetting their password. The hope is that the user receives this notification before getting frustrated that they can't login.

Finally, the messaging in the application itself can remind users that a lockout response is present and that they can check their email if they believe their account is locked out. This messaging can be displayed all the time, or after a specific number of failed attempts per session. The key here is that the count is per session, not per username or account; otherwise, username harvesting vulnerabilities are introduced.

Authentication processes, especially complex, multi-step, multi-credential ones, are difficult to get right. It's easy to introduce vulnerabilities in the user creation/registration step, the forgot username/password step, and the login process itself. If you are designing an authentication process, whether it uses OAuth2, OpenID Connect, or custom forms-based authentication, contact Security PS to have a partner come alongside you and help ensure the design and implementation are secure.

5 Things to Avoid When Implementing the CSF

In my last post, I gave a quick recap of what the Cybersecurity Framework is, how it differs from other standards, and the importance it carries with both regulated and non-regulated organizations.  This week, I wanted to provide some quick lessons learned by many organizations, not only with the CSF itself, but with many of the standards used within the categories of the framework.  Listed below are 5 quick things your organization should consider when implementing any security framework or standard.

  1. Don’t assume the CSF is only for “Critical Infrastructure” or Federally regulated organizations: Although the Executive Order is titled as such, it is meant for all organizations, in both public and private sectors.  The same can be said for NIST 800-53 controls; it’s not just for Federal agencies. 

  2. Don’t try to do it all yourself: The implementation of the CSF requires the input and collaboration of almost every vertical within the organization.  It cannot be done by one person alone.  Often it requires outside help from subject matter experts for implementing various requirements.

  3. Don’t adopt controls, just to adopt controls: This is one of the most common pitfalls.  The informative references in the CSF are not a list of mandated controls which must be adopted for each category.  They are to be considered as examples or possible suggestions.  Each category must be carefully examined and the organization must ultimately decide which controls fit and which ones do not. When gaps exist, a risk assessment should be conducted to determine if the control is even necessary. All successful information security programs are built on risk management, not controls.

  4. Don’t assume there is only one way to implement: Every organization has its own business goals, risk levels, and security requirements.  One size does not fit all, and neither does the implementation of the CSF.  The NIST web site, along with many others, offers unique approaches to implementing the framework.  Security PS recommends that each organization carefully weigh the many options and decide which method, or combination of methods, is right for its environment.

  5. Don’t ever consider it “finished”: Risk management, and information security in general, is an iterative lifecycle, and the CSF is designed to evolve in the same way. Requirements change, new technologies and vulnerabilities emerge, and risk levels shift over time, all of which require continuous improvement of the organization’s program.

What challenges have you faced when implementing the CSF or another framework?  We’d like to hear from you!  Please let us know in the comments below.

Manual Application-Layer Security Testing AND Automated Scanning Tools

There are many automated application security tools available on the market. They are useful for identifying vulnerabilities in your company's applications, but they shouldn't be used alone as a risk identification process. This post discusses the advantages of automated tools and identifies the gaps that need to be filled with manual testing techniques for a more comprehensive view of application risk.

For my purposes here, I'm going to consider automated scanning tools like WebInspect, AppScan, and Acunetix. These are all automated dynamic analysis tools, but there are quite a few other options, such as automated code review, binary analyzers, and even newer technologies that instrument application code and analyze it at runtime. The capabilities of each of these types of tools differ, but many of the pros and cons are similar.

Automated tools require at least a one-time setup step to configure them for your application. Once configured, the tools can run on a schedule or even as part of a continuous integration build process. Automated tools can scan an application and deliver results very quickly, often within hours, and they can scan large numbers of applications. They are great at identifying vulnerabilities that can be found by sending attack input and analyzing the application's output for vulnerability signatures. The tools can detect popular vulnerabilities like SQL injection, cross-site scripting, disclosure of stack traces or error messages, disclosure of sensitive information (like credit card numbers or SSNs), open redirects, and more. They generally perform best at identifying non-complex to moderately complex vulnerabilities. This makes automated tools great for use cases such as:
  • A first time look at the security of a web application
  • Scanning all of an organization's web applications for the first time or on a periodic basis
  • Integration with other automated processes, such as the build step of a continuous integration server (likely on a schedule, e.g., every night)
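To make the "attack input plus vulnerability signatures" idea above concrete, here is a minimal sketch of how a dynamic scanner flags a response. The signature set and function names are illustrative assumptions of mine, not any vendor's actual implementation; real scanners ship far larger, carefully tuned rule bases.

```python
import re

# Hypothetical signature set: patterns whose presence in a response
# suggests a vulnerability class. Real tools use far more sophisticated rules.
SIGNATURES = {
    "reflected-xss": re.compile(r"<script>alert\(1\)</script>"),
    "sql-error": re.compile(r"SQL syntax|ORA-\d{5}|SqlException", re.IGNORECASE),
    "stack-trace": re.compile(r"Traceback \(most recent call last\)"),
}

def scan_response(body: str) -> list[str]:
    """Return the names of all signatures that match a response body."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(body)]

# A scanner would inject a payload such as "<script>alert(1)</script>" into
# each request parameter, then check whether it comes back unencoded:
findings = scan_response("Results for <script>alert(1)</script>: 0 rows")
clean = scan_response("Results for widgets: 12 rows")
```

This is also why such tools excel at reflected issues but struggle with flaws, like broken authorization, that produce perfectly "normal-looking" responses.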
After understanding the value that automated tools can provide, it's also important to understand their limitations. The primary limitation is that they aren't human. They are written to find a concrete, specific set of issues and to identify those issues based on signatures or algorithms. An experienced application security tester's knowledge and expertise will far outshine a tool, allowing the tester to identify tremendously more issues and to interpret complex application behavior to understand whether a vulnerability is present. This typically means manual testing is required to identify vulnerabilities related to:
  • Authentication process steps including login, forgot username/password, and registration
  • Authorization, especially determining if data is accessed in excess of a user's role or entitlements or data that belongs to another tenant
  • Business logic rules
  • Session management
  • Complex injection flaws, especially those that span multiple applications (for example, a customer application accepts and stores a cross-site scripting payload, but the exploit executes in the admin application)
  • Use of cryptography
  • The architecture and design of the application and related components
The issues listed above are extremely important! For example, it's unacceptable for an attacker to be able to read and modify other users' data, but an automated tool isn't going to identify this type of flaw. These tools also tend to perform poorly on web services, REST services, thick clients, mobile applications, and single-page applications. For these reasons, manual testing is absolutely essential for identifying risk in an application.
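As a hedged sketch of why authorization flaws need a human, here is the kind of two-session check a manual tester performs for horizontal privilege escalation (IDOR): authenticate as one user, then request a record ID observed in another user's traffic. The in-memory "endpoint" below is a stand-in I invented for an authenticated HTTP request; the record store and function names are illustrative assumptions, not a real API.

```python
# Simulated data store: each record belongs to exactly one user.
RECORDS = {
    101: {"owner": "alice", "ssn": "xxx-xx-1111"},
    202: {"owner": "bob",   "ssn": "xxx-xx-2222"},
}

def fetch_record(session_user: str, record_id: int, enforce_authz: bool) -> dict:
    """Simulated endpoint: returns a record, optionally checking ownership."""
    record = RECORDS[record_id]
    if enforce_authz and record["owner"] != session_user:
        return {"status": 403}          # ownership enforced: access denied
    return {"status": 200, "record": record}

def idor_check(enforce_authz: bool) -> bool:
    """True if the app leaks another user's record (i.e., it is vulnerable)."""
    # Authenticated as bob, request alice's record ID (101).
    response = fetch_record("bob", 101, enforce_authz)
    return response["status"] == 200

vulnerable = idor_check(enforce_authz=False)  # broken endpoint leaks the data
fixed = idor_check(enforce_authz=True)        # ownership check blocks it
```

The key point: both responses are well-formed, so there is no error signature for a scanner to match; only a tester who understands which user should own which data can judge that the 200 response is a finding.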

If manual testing can identify all the same issues as an automated scanning tool and more, why bother with the automated scanning tool? Well, sometimes you don't need it. But most of the time, it's still very helpful. The key factors are speed and scale: you can scan a lot of web applications very quickly, receive results, and fix them, then follow up with manual testing. The caution is that scanning alone and postponing manual testing may leave critical vulnerabilities undiscovered in the application, so don't wait too long.

If your organization needs assistance choosing and adopting automated scanning tools or would like more information about manual application-layer security testing, please contact Security PS. Security PS does not sell automated tools, but we have advised many of our clients regarding how to choose an appropriate tool, prepare staff for using that tool, and update processes to include its usage.