
Non-Negotiable Elements of a Secure Software Development Process: Part 3 - Validation Criteria

In September, I gave a presentation focused on helping quality assurance professionals understand how they fit into a secure software development process (SSDP) and how they can take an active role in improving software security.  In that presentation, I discussed essential elements that make up a successful SSDP.  These elements are: security requirements (expectations); secure architecture, configuration, and coding patterns (how to satisfy an expectation); and validation criteria (verification that expectations have been met).  These elements allow an organization to be transparent regarding its security goals and performance.  They also facilitate communication with customers, developers, managers, and other project stakeholders.

This article is part 3 in the Non-Negotiable Elements of a Secure Software Development Process series and focuses on defining validation criteria. In part 1 of the series, we discussed how security requirements set clear and reasonable expectations that development teams can plan for and meet to satisfy a specific level of security assurance.  Part 2 discussed how to use secure architecture, configuration, and coding patterns to satisfy security requirements and reduce the ongoing cost of developing secure code.  All three articles are listed below:

Part 1: Security Requirements
http://blog.securityps.com/2013/01/non-negotiable-elements-of-secure.html

Part 2: Secure Architecture, Configuration, and Coding Patterns
http://blog.securityps.com/2013/01/non-negotiable-elements-of-secure_15.html

Part 3: Validation Criteria
http://blog.securityps.com/2013/03/non-negotiable-elements-of-secure.html


Why Validation Criteria?
Many organizations measure the security of their applications and infrastructure using assessments.  They might use application security assessments, penetration tests, design reviews, threat models, or many other fault-finding activities.  These can be good risk indicators, and they are often important activities to include within an application security program, but they fall short of actually telling us whether our application is secure.  They focus on finding problems: they tell us when an asset is insecure and remain silent about everything else.

With the declaration of security requirements and secure architecture, configuration, and coding patterns (secure patterns), we now have a list of positive characteristics about the application that we would like to affirm. If we can validate these elements, then we can get a more comprehensive understanding of how our application is secured.  We can then focus on using assessments to identify missing security requirements and improve our overall process rather than a single application.

What to Validate?
If we try to validate the code in a vacuum, we will fall back into the same old process:
  1. Use a code review tool
  2. Find problems
  3. Fix them
  4. Repeat
Instead, we need to validate that we have satisfied the security requirements defined for a specific application.  This simplifies and narrows our validation scope and approach, because the entire team should be using the predefined secure patterns to satisfy requirements.  Each secure pattern should be linked to specific validation criteria (including test cases, desired results, and the reporting format).  Before the development team ever implements a secure pattern, these criteria must already be defined.
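To make that concrete, here is a rough sketch (in Python, purely for illustration; the field names and the example pattern are my own, not from the presentation) of how a secure pattern might be linked to its validation criteria:

from dataclasses import dataclass
from typing import List

@dataclass
class ValidationCriterion:
    """Links one secure pattern to the evidence required to affirm it."""
    requirement: str       # the security requirement being satisfied
    secure_pattern: str    # the approved pattern developers must use
    test_cases: List[str]  # manual steps or automated test identifiers
    desired_result: str    # what a passing test must show
    reporting_format: str  # how results are recorded for stakeholders

# Hypothetical entry, defined before the pattern is ever implemented
sql_injection_criterion = ValidationCriterion(
    requirement="All database access must resist SQL injection",
    secure_pattern="Use parameterized queries (prepared statements) only",
    test_cases=[
        "Code scan: flag string-concatenated SQL in the data access layer",
        "Behavior test: submit ' OR '1'='1 in search fields, expect no data leakage",
    ],
    desired_result="No concatenated queries; injection payloads return errors or empty results",
    reporting_format="Pass/fail per module, recorded in the release test report",
)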

Validation Criteria Approaches
Validation steps can focus on testing code, application behavior, or both.  The effectiveness of testing code versus application behavior will depend on the security requirement or secure pattern that is being tested. For example, reviewing an application's code and configuration to confirm it uses SSL/TLS would not reveal whether the content delivery network (CDN) in front of it also requires SSL/TLS.  In this case, testing the behavior of each infrastructure element would be more effective than testing the code or configuration.  On the other hand, validating that an application uses only prepared statements is more effective in code. The final series of slides at the end of the presentation demonstrates this further.
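As a hedged illustration of the behavior-testing side (the hostname below is a placeholder and the third-party requests library is assumed to be available), a simple check that plain HTTP is redirected to HTTPS might look like this:

import requests  # third-party HTTP client, assumed available

HOST = "www.example.com"  # placeholder; substitute the deployed hostname

def test_http_is_redirected_to_https():
    """Behavior test: plain-HTTP requests must be redirected to HTTPS."""
    response = requests.get(f"http://{HOST}/", allow_redirects=False, timeout=10)
    assert response.status_code in (301, 302, 307, 308), \
        f"Expected a redirect, got {response.status_code}"
    location = response.headers.get("Location", "")
    assert location.startswith("https://"), f"Redirect target is not HTTPS: {location}"

def test_https_endpoint_serves_content():
    """Behavior test: the HTTPS endpoint must answer with a valid certificate."""
    response = requests.get(f"https://{HOST}/", timeout=10)  # certificate verification is on by default
    assert response.status_code < 400

Because this exercises the running endpoint rather than the source, it would also catch a CDN or load balancer that quietly accepts plain HTTP.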

Additionally, validation criteria can affirm controls are in place (positive testing) or find deviations from the defined secure pattern (negative testing).  Positive testing should be the primary approach used for all validation criteria.  Negative testing should simply be used to provide additional confidence in test results.
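As a sketch of that distinction (the login URL, form fields, and expected responses below are assumptions, not from the presentation), a positive test affirms the secure pattern is present, while a negative test probes for a deviation:

import requests  # assumed available; URL and credentials are placeholders

LOGIN_URL = "https://www.example.com/login"
TEST_CREDENTIALS = {"user": "qa", "password": "qa-password"}

def test_session_cookie_flags_positive():
    """Positive test: affirm the control is in place (Secure and HttpOnly set)."""
    response = requests.post(LOGIN_URL, data=TEST_CREDENTIALS,
                             allow_redirects=False, timeout=10)
    set_cookie = response.headers.get("Set-Cookie", "").lower()
    assert "secure" in set_cookie, "Session cookie missing Secure flag"
    assert "httponly" in set_cookie, "Session cookie missing HttpOnly flag"

def test_login_over_plain_http_negative():
    """Negative test: look for a deviation -- the app should refuse plain HTTP."""
    response = requests.post(LOGIN_URL.replace("https://", "http://"),
                             data=TEST_CREDENTIALS, allow_redirects=False, timeout=10)
    assert response.status_code in (301, 302, 307, 308, 403), \
        "Login over plain HTTP should be redirected or rejected"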

Test Case Techniques
Teams can define test cases within their validation criteria in a number of ways.  A few examples are listed below.  Each is demonstrated in the presentation slides.
  • Manual test cases
    • Step by step instructions
  • Automated test cases
    • Grep or Findstr
    • Selenium
  • Exploratory Testing
  • Security testing tools
    • Free/Open Source
    • Commercial (Not Recommended!)
I recommend starting simply and then adding automation and sophistication as the need builds.  If teams start out with manual test cases, they can gradually acclimate to new requirements and the process as a whole. Eventually a large number of requirements will be defined, and it will be too time-consuming to manage and test all of them.  Automation will be the natural next step.  Teams can start out using a simple grep tool and Selenium to complete most automated test cases. Custom scripting may follow.
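As a minimal sketch of that grep-style starting point (the regular expressions, file extension, and source directory are assumptions that would need tuning for each codebase), a short script can flag likely deviations from a prepared-statements-only pattern:

import re
import sys
from pathlib import Path

# Hypothetical patterns: SQL built by string concatenation or formatting,
# i.e. likely deviations from a "prepared statements only" secure pattern.
SUSPECT_PATTERNS = [
    re.compile(r'"\s*SELECT .*"\s*\+', re.IGNORECASE),  # "SELECT ..." + variable
    re.compile(r'String\.Format\s*\(\s*"\s*(SELECT|INSERT|UPDATE|DELETE)', re.IGNORECASE),
    re.compile(r'ExecuteQuery\s*\(\s*".*"\s*\+', re.IGNORECASE),
]

def scan(source_root: str) -> int:
    """Print each suspect line and return the number of findings."""
    findings = 0
    for path in Path(source_root).rglob("*.cs"):  # adjust extension per language
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SUSPECT_PATTERNS):
                print(f"{path}:{lineno}: possible concatenated SQL -> {line.strip()}")
                findings += 1
    return findings

if __name__ == "__main__":
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)

A script like this can run in a build pipeline, failing the build when a finding appears, and can grow into more precise custom rules later.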

I don’t recommend starting with commercial automated security testing tools like static code analyzers or dynamic application testing platforms.  Many times teams spend a lot of time and money to get them set up and are disappointed with the results.  Before investing in one of these tools, it's important to have clear goals, realistic cost and personnel expectations, and a strategy for how the tool will fit in with a larger application security program. If the team is moving toward a commercial security tool, consider starting with free tools first, like FxCop for .NET applications, and writing your own custom rules.

The key advantage to writing your own manual or automated test cases is that you can customize them for the application and its business context. This isn't something a commercial, general purpose security scanning/code review tool is designed to do.

Once a team has a fairly comprehensive set of security requirements, is effective at implementing its secure patterns, and desires to grow further, exploratory testing can become valuable.  Exploratory testing should be used to identify vulnerabilities that do not yet have a defined security requirement.  This testing approach is used to improve the overall secure software development process, expand the list of requirements, and improve all the organization’s applications - not to focus on the faults of one single application.

How to Win at Software Security
If all of these components are put in place:
  1. Security and Compliance Requirements
  2. Secure Architecture, Configuration, and Coding Patterns that Satisfy the Requirements
  3. Security Test Cases and Validation Criteria that Affirm Patterns are Implemented
AND you have a specific, well-defined business goal you are trying to satisfy with these requirements, you can provide compelling evidence that your application is "secure enough" by the standards or goals you have set.