5 Things to Avoid When Implementing the CSF

In my last post, I gave a quick recap of what the Cybersecurity Framework is, how it differs from other standards, and the importance it carries with both regulated and non-regulated organizations.  This week, I wanted to provide some quick lessons learned by many organizations, not only with the CSF itself, but with many of the standards used within the categories of the framework.  Listed below are 5 quick things your organization should consider when implementing any security framework or standard.

  1. Don’t assume the CSF is only for “Critical Infrastructure” or Federally regulated organizations: Although the Executive Order is titled as such, it is meant for all organizations, in both public and private sectors.  The same can be said for NIST 800-53 controls; it’s not just for Federal agencies. 

  2. Don’t try to do it all yourself: The implementation of the CSF requires the input and collaboration of almost every vertical within the organization.  It cannot be done solely by one person.  Oftentimes it requires outside help from subject matter experts for implementing various requirements.

  3. Don’t adopt controls just to adopt controls: This is one of the most common pitfalls.  The informative references in the CSF are not a list of mandated controls which must be adopted for each category.  They should be considered examples or possible suggestions.  Each category must be carefully examined, and the organization must ultimately decide which controls fit and which ones do not. When gaps exist, a risk assessment should be conducted to determine whether the control is even necessary. All successful information security programs are built on risk management, not controls.

  4. Don’t assume there is only one way of implementation: Every organization has its own business goals, risk levels, and security requirements.  One size does not fit all, and neither does the implementation of the CSF.  The NIST website, along with many others, offers unique approaches to implementing the framework.  Security PS recommends that each organization carefully weigh the many options and decide which method, or combination of methods, is right for its environment.

  5. Don’t ever consider it “Finished”: Risk management, and information security in general, is a lifecycle, an iterative approach; the CSF is designed to evolve in the same way. Requirements change, new technologies and vulnerabilities emerge, and risk levels shift over time, all of which requires constant improvement of the organization’s program.

What challenges have you faced when implementing the CSF or another framework?  We’d like to hear from you!  Please let us know in the comments below.

Manual Application-Layer Security Testing AND Automated Scanning Tools

There are many automated application security tools available on the market. They are useful for identifying vulnerabilities in your company's applications, but they shouldn't be the only part of a risk identification process. This post discusses the advantages of automated tools and identifies gaps that need to be filled with manual testing techniques for a more comprehensive view of application risk.

For my purposes here, I'm going to consider automated scanning tools like WebInspect, AppScan, and Acunetix. These are all automated dynamic analysis tools, but there are quite a few other options, such as automated code review, binary analyzers, and even newer technologies that instrument application code and analyze it during runtime. The capabilities of each of these types of tools differ, but many of the pros and cons are similar.

Automated tools require at least a one-time setup step to configure them for your application. Once configured, the tools can run on a scheduled basis or even as part of a continuous integration build process. Automated tools can scan an application and deliver results very quickly, often in hours. They can scan large numbers of applications too. They are great at identifying vulnerabilities that can be found by sending attack input and analyzing the output of the application for vulnerability signatures. The tools can detect popular vulnerabilities like SQL injection, cross-site scripting, disclosure of stack traces or error messages, disclosure of sensitive information (like credit card numbers or SSNs), open redirects, and more. They generally perform best at identifying non-complex to moderately complex vulnerabilities. This makes automated tools great for use cases such as:
  • A first time look at the security of a web application
  • Scanning all of an organization's web applications for the first time or on a periodic basis
  • Integration with other automated processes, such as the build step of a continuous integration server (probably on a schedule, e.g., every night)
After understanding the value that automated tools can provide, it's also important to understand their limitations. The primary limitation is that they aren't human. They are written to find a concrete, specific set of issues and to identify those issues based on signatures or algorithms. An experienced application security tester's knowledge and expertise will far outshine a tool's, allowing them to identify tremendously more issues and interpret complex application behavior to understand whether a vulnerability is present. This typically means manual testing is required to identify vulnerabilities related to:
  • Authentication process steps including login, forgot username/password, and registration
  • Authorization, especially determining whether data is accessed in excess of a user's role or entitlements, or whether data belonging to another tenant can be accessed
  • Business logic rules
  • Session management
  • Complex injection flaws, especially those that span multiple applications (for example, a customer application accepts and stores a cross-site scripting payload, but the exploit executes in the admin application)
  • Use of cryptography
  • The architecture and design of the application and related components
The issues listed above are extremely important! For example, it's unacceptable for an attacker to be able to read and modify any other user's data. But, an automated tool isn't going to be able to identify this type of flaw. These tools also tend to perform poorly on web services, REST services, thick clients, mobile applications, and single-page applications. For these reasons, manual testing is absolutely essential for identifying risk in an application.

If manual testing can identify all the same issues as an automated scanning tool and more, why bother with the automated scanning tool? Well, sometimes you don't need it. But most of the time, it's still very helpful. The key factors are speed and scale. You can scan a lot of web applications very quickly, receive results, and fix them. THEN, follow up with manual testing. The caution is that scanning alone and waiting to do manual testing may leave critical-risk vulnerabilities undiscovered in the application, so don't wait too long.

If your organization needs assistance choosing and adopting automated scanning tools or would like more information about manual application-layer security testing, please contact Security PS. Security PS does not sell automated tools, but we have advised many of our clients regarding how to choose an appropriate tool, prepare staff for using that tool, and update processes to include its usage.

ASP.NET Core Basic Security Settings Cheatsheet

When starting a new project, looking at a new framework, or fixing vulnerabilities identified by an assessment or tool, it's nice to have one place to refer to for fixes to common security issues. This post provides solutions for some of the more basic issues, especially those around configuration. Most of these answers can be found in Microsoft's documentation or by doing a quick Google search, but hopefully having it all right here will save others some time.

Enabling An Account Lockout Response

To enable the account lockout response for ASP.NET Identity, first modify the Startup.cs file to choose appropriate settings. In the ConfigureServices method, add the following code:
services.Configure<IdentityOptions>(options =>
{
  options.Lockout.AllowedForNewUsers = true;
  // Requires manual unlock
  options.Lockout.DefaultLockoutTimeSpan = TimeSpan.MaxValue;
  // Three failed attempts before lockout
  options.Lockout.MaxFailedAccessAttempts = 3;
});
With the settings configured, lockout still needs to be enabled in the login method of the account controller. In AccountController -> Login(LoginViewModel model, string returnUrl = null), change lockoutOnFailure from false to true as shown below:
var result = await _signInManager.PasswordSignInAsync(model.Email, model.Password, model.RememberMe, lockoutOnFailure: true);

Customizing the Password Policy

ASP.NET Identity comes with a class that validates passwords. It is configurable and allows one to decide whether passwords should require digits, uppercase letters, lowercase letters, and/or symbols. This policy can be further customized by implementing the IPasswordValidator interface or extending Microsoft.AspNetCore.Identity.PasswordValidator. The code below extends the PasswordValidator and ensures the password does not contain the individual's username.
using ASPNETCoreKestrelResearch.Models;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using System.Threading.Tasks;

namespace ASPNETCoreKestrelResearch.Security
{
  public class CustomPasswordValidator<TUser> : PasswordValidator<TUser> where TUser : IdentityUser
  {
    public override async Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password)
    {
      IdentityResult baseResult = await base.ValidateAsync(manager, user, password);

      if (!baseResult.Succeeded)
        return baseResult;

      if (password.ToLower().Contains(user.UserName.ToLower()))
      {
        return IdentityResult.Failed(new IdentityError
        {
          Code = "UsernameInPassword",
          Description = "Your password cannot contain your username"
        });
      }

      return IdentityResult.Success;
    }
  }
}
Next, ASP.NET Identity needs to be told to use that class. In the ConfigureServices method of Startup.cs, find services.AddIdentity and add ".AddPasswordValidator<CustomPasswordValidator<ApplicationUser>>();" as shown below.
services.AddIdentity<ApplicationUser, IdentityRole>()
  .AddEntityFrameworkStores<ApplicationDbContext>()
  .AddDefaultTokenProviders()
  .AddPasswordValidator<CustomPasswordValidator<ApplicationUser>>();

Choosing a Session Timeout Value

Developers can choose how long a session cookie remains valid and whether a sliding expiration should be used by adding the following code to the ConfigureServices method of Startup.cs:
services.Configure<IdentityOptions>(options =>
{
  options.Cookies.ApplicationCookie.ExpireTimeSpan = TimeSpan.FromMinutes(10);
  options.Cookies.ApplicationCookie.SlidingExpiration = true;
});

Enabling the HTTPOnly and Secure Flag for Authentication Cookies

First, if you are using Kestrel, HTTPS (TLS) is not supported. Instead, it is implemented by HAProxy, Nginx, Apache, IIS, or some other web server you place in front of the application. If you are using Kestrel, the Secure flag cannot be enabled properly from application code. However, if you are hosting the application in IIS directly, then it will work. The following code demonstrates enabling both the HTTPOnly and Secure flags for the cookie middleware in ASP.NET Identity through the ConfigureServices method in Startup.cs.
services.Configure<IdentityOptions>(options =>
{
  options.Cookies.ApplicationCookie.CookieHttpOnly = true;
  options.Cookies.ApplicationCookie.CookieSecure = CookieSecurePolicy.Always;
});

Enabling Cache-Control: no-store

When applications contain sensitive information that should not be stored on a user's local hard drive, the Cache-Control: no-store HTTP response header can help provide that guidance to browsers. To enable that feature, add the following code to the ConfigureServices method in Startup.cs.
services.Configure<MvcOptions>(options =>
{
  options.CacheProfiles.Add("DefaultNoCacheProfile", new CacheProfile
  {
    NoStore = true,
    Location = ResponseCacheLocation.None
  });
  options.Filters.Add(new ResponseCacheAttribute
  {
    CacheProfileName = "DefaultNoCacheProfile"
  });
});

Disabling the Browser's Autocomplete Feature for Login Forms

The changes to ASP.NET's Razor views make this super simple. Just add the autocomplete="off" attribute as if it were a normal HTML input field:
<input asp-for="Email" class="form-control" autocomplete="off"/>
<input asp-for="Password" class="form-control" autocomplete="off"/>

Modifying the Iteration Count for the Password Hasher's Key Derivation Function

First, I believe the default right now is 10,000 iterations and the algorithm is PBKDF2. The code below won't change that default iteration count, but it shows how it can be done. In the ConfigureServices method of Startup.cs, add the following code.
services.Configure<PasswordHasherOptions>(options =>
{
  options.IterationCount = 10000;
});

Enforcing HTTPS and Choosing Appropriate TLS Protocols and Cipher Suites

As mentioned above, if you are using Kestrel, you won't be able to use HTTPS directly, so you won't do this in your code; you will need to look up how to do it in HAProxy, Nginx, Apache, IIS, etc. If you are hosting your application using IIS directly, then you can enforce the use of HTTPS using something like https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/RequireHttpsAttribute.cs, but it will only be applied to your MVC controllers/views. It will not be enforced for static content (see https://github.com/aspnet/Home/issues/895). If you want to do this in code, you will need to write some middleware to enforce it across the entire application. Finally, the choice of cipher suites offered cannot be changed in code.

Enabling a Global Error Handler

A custom global error handler is demonstrated by the Visual Studio template. The following relevant code can be found in the Configure method of Startup.cs.
if (env.IsDevelopment())
{
  app.UseDeveloperExceptionPage();
}
else
{
  app.UseExceptionHandler("/Home/Error");
}

Removing the Server HTTP Response Header

All responses from the server are going to return "Server: Kestrel" by default. To remove that value, modify UseKestrel() in Program.cs to include the following settings change:
public static void Main(string[] args)
{
  var host = new WebHostBuilder()
    .UseKestrel(options =>
    {
      options.AddServerHeader = false;
    })
    // ...remaining builder configuration from the template...
    .Build();

  host.Run();
}


X-Frame-Options, Content-Security-Policy, and Strict-Transport-Security HTTP Response Headers

The following post seems to cover most of these headers well: http://andrewlock.net/adding-default-security-headers-in-asp-net-core/. I haven't evaluated its design, but I did verify that I can install it and that the headers are added successfully. Since Kestrel does not support HTTPS, consider whether it's appropriate to implement the Strict-Transport-Security header in code or by configuring the web server placed in front of the application.

I installed this nuget package using "Install-Package NetEscapades.AspNetCore.SecurityHeaders". Then, I made sure to have the following imports in Startup.cs:
using NetEscapades.AspNetCore.SecurityHeaders;
using NetEscapades.AspNetCore.SecurityHeaders.Infrastructure;
I added the package's service registration to the ConfigureServices method of Startup.cs, per its documentation. Last, I added this code to the Configure method of Startup.cs:
app.UseCustomHeadersMiddleware(new HeaderPolicyCollection()
  //.AddCustomHeader("Content-Security-Policy", "somevaluehere")
  //.AddCustomHeader("X-Content-Security-Policy", "somevaluehere")
  //.AddCustomHeader("X-Webkit-CSP", "somevaluehere")
);
Make sure you add this code BEFORE app.UseStaticFiles(); otherwise, the headers will not be applied to your static files.

Why Use the NIST CSF?

You may have heard about a recent framework that has been gaining traction since its inception a few years ago called the Cybersecurity Framework (CSF).  If not, I’ll give you a quick recap.  This framework was drafted by the Commerce Department’s National Institute of Standards and Technology (NIST) in response to a February 2013 Executive Order from the President entitled “Improving Critical Infrastructure Cybersecurity”.  Following almost a year of collaborative discussions with thousands of security professionals across both public and private sectors, a framework was developed comprising guidelines that can help organizations identify, implement, and improve cybersecurity practices, as well as their overall security program.  The framework is architected as a continuous process that grows in sync with the constant changes in cybersecurity threats, processes, and technologies.  It was also designed to be revised periodically to incorporate lessons learned and industry feedback.  At its core, the framework conceives of cybersecurity as a progressive, continuous lifecycle that identifies and responds to threats, vulnerabilities, and solutions. The CSF provides the channels to allow organizations to determine their current cybersecurity state and capabilities, set goals for desired outcomes, and establish a plan for improving and maintaining the overall security program. The framework itself is available on NIST's website.

So, what makes the CSF different from NIST 800-53 or ISO 27001/27002?  By definition, those are detailed documents which provide requirements for adhering to specific control standards. In comparison, the CSF provides a high-level framework for how to assess and prioritize functions within a security program, drawing from these existing standards.  Due to its high-level scope and common structure, the CSF is also much more suitable for those with non-technical backgrounds and C-level executives.  It was created with the realization that many of the required controls and processes for a security program have already been created and duplicated across these standards.  In effect, it provides the mechanisms for a common structure within the industry that allows any organization to drive growth and maturity of cybersecurity practices, and to shift from a reactive to a proactive state of risk management.

For organizations that are Federally regulated, the CSF may be of particular importance.  Many top-level directors have expressed that an industry-driven cybersecurity model is much preferred over prescriptive regulatory approaches from the Federal government.  Even though the CSF is currently voluntary for both public and private sectors, it is important to realize that, with a high degree of probability, this will not be the case in the future.  Discussions have already taken place amongst Federal regulators and Congressional lawmakers suggesting that this voluntary framework should be used as the baseline for best security practices, including assessing legal or regulatory exposure and for insurance purposes. If these suggestions become reality, adopting the CSF now could give organizations much more flexibility and cost savings in how it is implemented.

In addition to staying ahead of possible new laws and federal mandates, the CSF provides any organization, regulated or not, a number of other benefits, all of which support a stronger cybersecurity posture.  Some of these benefits include:
  • A common language and structure across all industries
  • Opportunities for collaboration amongst public and private sectors
  • The ability to demonstrate due diligence and due care by adopting the framework
  • Greater ease in adhering to compliance regulations or industry standards
  • Improved cost efficiency
  • Flexibility in using any existing security standards, such as HITRUST, 800-53, ISO 27002, etc.
Though it is difficult to express all the possible benefits in this short post, Security PS highly recommends that every organization take a good look at the CSF and consider its options for implementation and the future laws that may influence its use.


If you have more questions, please consider contacting us for additional details.  We’ll be glad to assist you and your organization.

ASP.NET Core and Docker Update: Docker Compose!

Previously, in ASP.NET Core, PostgreSQL, Docker, and Continuous Integration with Jenkins, I wrote about my experience getting started with ASP.NET Core and Docker. In that post, I was running a script to stop and remove my previous containers each time I deployed, and I had separate scripts to build and run each Docker container. In a recent conference video, a speaker referenced Docker Compose, and I'm so glad he did. It simplifies my setup greatly! This post describes the changes I made to my project to use Docker Compose.

I reused all my previous DockerFiles, referencing them from a docker-compose.yml file in the root of my project.
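Based on the services described in this post (two Kestrel instances, PostgreSQL, and HAProxy), a docker-compose.yml along these lines would fit; service names and build paths are illustrative:

```yaml
version: '2'

services:
  db:
    build: ./postgres          # DockerFile that sets up PostgreSQL
  web1:
    build: ./web               # DockerFile for the ASP.NET Core app
    depends_on:
      - db
  web2:
    build: ./web
    depends_on:
      - db
  lb:
    build: ./haproxy           # DockerFile for the HAProxy load balancer
    ports:
      - "80:80"
    depends_on:
      - web1
      - web2
```

Each service becomes a hostname on the Compose-created network, which is what the configuration changes below rely on.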

After trying to run this (which I will get to later), I discovered that there were some differences in how linking two containers works. Instead of automatically providing environment variables that reference the linked containers, Docker Compose sets up hostnames that match the service names. So, I needed to make a few changes to my configuration files. First, I updated my appsettings.json file to change the host referenced by the ConnectionString.
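Assuming the PostgreSQL service is named db in the compose file, the updated ConnectionString would point at that service name; the database name and credentials here are placeholders:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Host=db;Port=5432;Database=appdb;Username=appuser;Password=changeme"
  }
}
```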

In the haproxy.cfg configuration file, I referenced the web1 and web2 hosts. I also learned how to correctly configure the proxy to set a cookie and direct users to the same Kestrel instance each time.
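With Compose's service-name hostnames, the relevant HAProxy sections would look roughly like this; the cookie directives are what pin a user to one Kestrel instance (listener and cookie names are illustrative):

```
frontend http-in
    bind *:80
    default_backend webservers

backend webservers
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server web1 web1:5000 cookie web1
    server web2 web2:5000 cookie web2
```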

With those changes, I was ready to build and run all the containers.
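With Docker Compose, building and starting everything collapses to a single command, run from the directory containing docker-compose.yml:

```sh
# Build (or rebuild) all images and start the containers in the background
docker-compose up -d --build
```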

Since I'm deploying a new instance of the database server and an empty database, I also need to run my Entity Framework Migrations. So the next step is to run "docker-compose exec -d web1 dotnet ef database update".

Finally, when I need to stop and remove the containers to make way for a newer build, Docker Compose takes care of that as well.
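Tearing everything down is just as simple:

```sh
# Stop and remove the containers (and default network) created by "up"
docker-compose down
```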

ASP.NET Core, PostgreSQL, Docker, and Continuous Integration with Jenkins

Following the Kansas City Developer Conference and the release of ASP.NET Core 1.0, I decided to try out the new framework, to deploy infrastructure with my application, and to use a continuous integration server. This post summarizes what I did and the result of that effort; however, I want to stress that this is in no way a recommendation of how one should securely build and deploy an application. A lot of these technologies are brand new to me, and my goal was just to get them to work. But, instead of waiting until everything is perfect, I wanted to write about what I had so far in case it helps someone else.

First, a list of the technologies I used and how far I took them:
  • ASP.NET Core with ASP.NET MVC 6 - Deploy and run the default template with Entity Framework and ASP.NET Identity (including ability to register and login) on Linux using Kestrel and NOT IIS or SQL Server
  • PostgreSQL - Used as my database instead of the more traditional choice of SQL Server
  • Docker - Used to host Linux containers for my ASP.NET MVC 6 applications, PostgreSQL database, and HA Proxy load balancer. Also allows me to deploy the infrastructure with the application
  • HA Proxy - Used as a load balancer
  • Jenkins - Used as my continuous integration server to automatically build and deploy the application AND its infrastructure
ASP.NET Core with MVC 6
With Visual Studio 2015 completely up to date, I started with File -> New Project -> .NET Core -> ASP.NET Core Web Application. In the next dialog, I chose "Web Application" and, for authentication, I chose "Individual User Account". I made sure nothing was checked for Azure. Next, I modified the Main method of Program.cs with a .UseUrls(...) call to ensure Kestrel would listen on ALL interfaces instead of just localhost.

In order to use the PostgreSQL database, I uninstalled the Microsoft.EntityFrameworkCore.SqlServer and Microsoft.EntityFrameworkCore.SqlServer.Design packages. Then, I installed Microsoft.EntityFrameworkCore.Tools.Core (use -Pre for Install-Package), Microsoft.EntityFrameworkCore.Tools (use -Pre for Install-Package), and Npgsql.EntityFrameworkCore.PostgreSQL.

Once the project has support for PostgreSQL in Entity Framework, the ConnectionString needs to be updated and Entity Framework needs to be configured to use PostgreSQL. So, I modified the default ConnectionString in the appsettings.json file.
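Assuming the PostgreSQL container publishes port 5432 to the host, the ConnectionString would look something like this; the database name and credentials are placeholders that should match whatever the database init script creates:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Host=localhost;Port=5432;Database=appdb;Username=appuser;Password=changeme"
  }
}
```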

Then, I found the existing services.AddDbContext related code in the ConfigureServices method in Startup.cs and modified it to use PostgreSQL.
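With the Npgsql.EntityFrameworkCore.PostgreSQL package installed, the essential change is swapping UseSqlServer for UseNpgsql; a sketch:

```csharp
// In ConfigureServices in Startup.cs: point Entity Framework at PostgreSQL
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseNpgsql(Configuration.GetConnectionString("DefaultConnection")));
```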

Docker (For Windows)
Yes, I'm using Docker on Windows. That means I can use Visual Studio AND I can deploy to a Linux container. Later, I can run it on a production Linux system without having to change a single thing. So, using Windows works great for my purposes. You could also do this on Mac OS X, your favorite Linux distro, etc., but you would need to use Visual Studio Code as your IDE instead. To set up Docker, I went to their website, downloaded Docker for Windows, and installed it.

PostgreSQL (Using Docker)
For my database instance, I used a Linux Docker container to host PostgreSQL. I used the "official" image found at https://hub.docker.com/_/postgres/. That official image allows for a script to be run at start up to get the database set up the way you want. The script must be named "init-user-db.sh" and must be placed in the /docker-entrypoint-initdb.d/ directory of the Docker container. My script creates a database and a user for accessing that database.
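Following the pattern in the official image's documentation, a script that creates a database and an application user would look something like this; the names and password are placeholders:

```sh
#!/bin/bash
# init-user-db.sh - runs automatically on first container startup
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
    CREATE USER appuser WITH PASSWORD 'changeme';
    CREATE DATABASE appdb;
    GRANT ALL PRIVILEGES ON DATABASE appdb TO appuser;
EOSQL
```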

Next, I set up the DockerFile for that container.
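A DockerFile based on the official postgres image would look roughly like this (the version tag is illustrative):

```dockerfile
FROM postgres:9.5
# Scripts in this directory run automatically when the database initializes
COPY init-user-db.sh /docker-entrypoint-initdb.d/init-user-db.sh
```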

I can now build and run the container.
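The build and run steps would be along these lines (image and container names are illustrative):

```sh
# Build the image from the DockerFile in the current directory
docker build -t custom-postgres .
# Run it, publishing PostgreSQL's port to the host
docker run -d -p 5432:5432 --name postgres custom-postgres
```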

Apply Entity Framework Migration
Now that the database is running, I applied the Entity Framework Migrations so the Login and Register features will work. To do that, I used the command line and changed directory to the root of the application (the place where project.json is located) and ran "dotnet ef database update".

Run the Application
Finally, I ran the application with "dotnet run", and visited the site in a browser at http://localhost:5000/. I can now register a new user and login. All of that data is being stored in the PostgreSQL Docker container's database.

Hosting The ASP.NET Core MVC6 Template in a Docker Container
After verifying I could run the application and connect to the database, I wanted to host the application itself in a Docker container. Microsoft has some ready-made Docker images (https://hub.docker.com/r/microsoft/dotnet/) that I used to accomplish it. My DockerFile follows their instructions for copying the application's root into the container, running "dotnet restore" to download all the required Nuget packages, and "dotnet run" to start the application.
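Following Microsoft's instructions for the microsoft/dotnet image, such a DockerFile would be roughly:

```dockerfile
FROM microsoft/dotnet:latest
# Copy the application source into the image and restore packages at build time
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
EXPOSE 5000
# Run from source; "dotnet run" builds and starts Kestrel
ENTRYPOINT ["dotnet", "run"]
```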

Here's how I built and ran that Docker container.
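The commands would have been something like this, with the container's port 5000 published to the host (the image name is illustrative):

```sh
docker build -t aspnetcoreapp .
docker run -d -p 5000:5000 --name web1 aspnetcoreapp
```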

Finally, I visited http://localhost:5000 to test that the application is running.

HAProxy

This is the first time I've used HAProxy. Basically, I saw that other people had used it and I thought I would try it. To get it to work, I found an example configuration file, made a casual attempt to understand some of the settings in HAProxy's documentation, and then just messed with it until it worked. I don't recommend doing it this way for a real application. I was very happy when it finally forwarded my traffic correctly!

One of the key challenges I had was that I couldn't point HAProxy's configuration at localhost:5000 and localhost:5001 for two exposed instances of my ASP.NET Core MVC application. Since I wasn't familiar with Docker, it took a while to figure out what to do. I eventually learned that you can "link" containers together and that some environment variables are automatically populated to help refer to those instances. Also, HAProxy can use environment variables in its configuration file. Those pieces were the basis of the haproxy.cfg configuration I came up with.
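In sketch form, that configuration relies on HAProxy's environment variable expansion; the WEB1_/WEB2_ variables are the ones Docker's legacy linking injects automatically (names and values here are illustrative):

```
frontend http-in
    bind *:80
    default_backend webservers

backend webservers
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server web1 "${WEB1_PORT_5000_TCP_ADDR}:5000" cookie web1
    server web2 "${WEB2_PORT_5000_TCP_ADDR}:5000" cookie web2
```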

In the configuration, you can see some references to using a cookie to ensure the same client hits the same server each time. I never actually got that part to work.

For my HAProxy DockerFile, I again used an official image (https://hub.docker.com/_/haproxy/) and followed their instructions.
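Following the official image's instructions, the DockerFile would be roughly (the version tag is illustrative):

```dockerfile
FROM haproxy:1.6
# The official image expects the configuration at this path
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
```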

Here are the commands I ran to bring up the web servers and the load balancer and link them together.
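With legacy container links, those commands would have looked something like this (names are illustrative):

```sh
docker run -d --name web1 aspnetcoreapp
docker run -d --name web2 aspnetcoreapp
# Link both web containers into the load balancer so HAProxy can reach them
docker run -d -p 80:80 --link web1:web1 --link web2:web2 --name lb custom-haproxy
```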

Next, I visited http://localhost:80 to see that everything worked.

Jenkins

I used Jenkins throughout this process as a continuous integration server. There's a whole lot I don't know how to do correctly with Jenkins, but I will share the build process I used. I have separate batch files for each step of the build process to ensure each return code is evaluated and the build will fail if any of them returns anything other than 0; the Jenkins job configuration simply runs each batch file as its own build step.

Those build scripts stopped and removed the previous containers, then rebuilt and ran each new one.
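In sketch form, each batch file would have wrapped one of those steps; container and image names are illustrative:

```bat
REM stop-and-remove.bat - clear out the previous deployment
docker stop lb web1 web2 postgres
docker rm lb web1 web2 postgres

REM build-images.bat - build each image from its DockerFile
docker build -t custom-postgres .\postgres
docker build -t aspnetcoreapp .\web
docker build -t custom-haproxy .\haproxy

REM run-containers.bat - start the database, web instances, and load balancer
docker run -d -p 5432:5432 --name postgres custom-postgres
docker run -d --name web1 aspnetcoreapp
docker run -d --name web2 aspnetcoreapp
docker run -d -p 80:80 --link web1:web1 --link web2:web2 --name lb custom-haproxy
```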

I don't think one of those scripts was actually necessary; I believe I was experimenting with publishing, but I kept it just in case.

There's a lot to be improved upon, but for now, I don't have time to work on it further. One thing I really wanted to get working was to "publish" the application instead of simply running the code within the Docker container. It would be nice to deploy the full application without needing to redownload all the Nuget packages every time. I was able to get a basic version of this working, but then I ran into an issue in which my views weren't actually being updated after I modified them, and I wasn't able to troubleshoot that issue further.

Nick's KCDC Summary

I attended the Kansas City Developer Conference in June and was really pleased by the talks. I wanted to share about my experience and what I got out of the conference. I also provide links to videos from other conferences that are either the same presentation or a related presentation.

I went to the conference with the goals of:
  • Learning more about ASP.NET Core and
  • Understanding options for deploying software into production on a continuous basis (every check in, every day, once a week, etc...)
For ASP.NET Core, I attended talks on .NET Core and Entity Framework Core.
Matt's presentation was particularly detailed, but both presenters provided some great information for understanding whether .NET Core and EF Core were ready to be used in a production application. For .NET Core, the presenter related his experience building tools that would run on as wide a variety of platforms and runtimes as possible. There were some interesting challenges he had to solve to get his code to work. He used compiler flags to conditionally include code for different framework versions and had to choose different libraries because some weren't supported by .NET Core. His advice was to do a lot of research before committing to converting an existing .NET application to a cross-platform .NET Core application. Otherwise, it will work great for new development or for upgrading an existing application to ASP.NET 4.6.3.

My takeaway from the EF Core talk was that the framework is missing a few features that are present in EF6, and I think the presenter's advice was to wait. But, you can easily make your own decision by looking at the feature comparison chart here: https://docs.efproject.net/en/latest/efcore-vs-ef6/features.html.

I also saw the following related .NET talks:
  • I'll Get Back to You: Task, Await, and Asynchronous Methods - Jeremy Clark (video from NDC)
  • Token Authentication in ASP.NET - Nate Barbettini
I really liked Jeremy's talk. After watching it, I decided to look into more Task API related presentations and resources, and I found several additional videos and resources helpful.
Finally, I went down some additional rabbit holes.
Continuous Deployment/Delivery
Damian Brady had two great talks at KCDC that introduced me to some new vocabulary and ways of thinking about deployment.
I didn't retain as much as I would have liked from his talks because they were so full of information, but they did get me fired up about doing my own experiments. I wanted to be able to deploy not only code, but infrastructure that automatically builds and deploys using a continuous integration server. I ended up writing an ASP.NET Core application and DockerFiles to run PostgreSQL, HAProxy, and Linux containers to run ASP.NET Core. Then, I had Jenkins build and deploy them. I hope to write another blog post about that in the future.

During that process, I found a few more presentations that I really enjoyed.