Protecting Thick-Client Applications From Attack (Or How To Not Have To)

Security PS Application Security
In the previous post, I discussed security testing techniques Security PS used to assess a complex thick-client application. After the assessment was complete, our client asked:
  • Is .NET less secure than other languages since these techniques are possible?
  • How do I stop attackers from manipulating my applications?
This post answers those questions and discusses best practices around securing client-server architectures.

Security PS tested the thick-client application with a variety of techniques including:
  • Reusing the application's DLLs to communicate with the server and decrypt data
  • Using a debugger to interactively modify variables and program flow
  • Disassembling, modifying, and reassembling the thick-client application to integrate it with a custom testing framework
Considering these methods, how does .NET compare to other platforms? Is .NET less secure than another choice?
.NET is not unique. In other assessments, Security PS has used the same techniques to assess Android, Java, and native (C/C++ executable) applications. Based on my quick research, some or all of the techniques work for iOS applications as well. The only differences between these platforms are the level of complexity and the toolset required. In this respect, .NET is no more or less secure than any other platform.

How do you stop attackers from reusing DLLs, interactively debugging applications, or modifying applications?
You shouldn't need to in most cases. For a client-server architecture, the thick-client resides on a user's (or attacker's) computer. That environment cannot and should not be trusted to enforce security controls or protect sensitive data. Client-side security controls can be defeated or bypassed completely, and any data sent to the client can be obtained by an attacker (even if it is encrypted).

Instead, organizations should spend their time architecting and designing applications that enforce security controls on the server-side. If all the security controls are implemented on the server-side, then it does not matter whether the attacker manipulates the thick-client (or writes his or her own client application). This security best practice applies to web applications, web services, and client-server applications.
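As a trivial illustration of the principle (the names here are hypothetical), a server-side check derives the user's identity from the server-validated session or token and checks ownership against the server's own data store; nothing the client sends changes the outcome:

```csharp
// Hypothetical server-side authorization check. The authenticated user ID comes
// from the server-validated session or token, and the owner ID from the server's
// own data store. A manipulated or rewritten client cannot change this decision.
public static class RecordAccessPolicy
{
    public static bool CanAccess(string authenticatedUserId, string recordOwnerId)
    {
        return !string.IsNullOrEmpty(authenticatedUserId)
            && authenticatedUserId == recordOwnerId;
    }
}
```

Because the check runs entirely on the server, it holds whether the request came from the real thick-client, a debugger-driven session, or an attacker's custom client.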

If an organization still wishes to protect client-side code from analysis and manipulation, what are the options? If you search on the Internet, you may find these choices:
  • Strong name verification (for .NET)
  • Obfuscation
  • Native compilation (for .NET)
  • Encryption
  • and more...
Each option can be used to slow down an attacker and will make analysis or modification more difficult. But none of them prevents a skilled and determined attacker from eventually reaching their goal. Let's briefly dig into each one.

Strong name verification enables an assembly to identify and reference the correct version of a DLL. Some Internet sources recommend using strong name verification to prevent attackers from modifying DLLs, but according to Microsoft, it should not be used as a security control. Security PS's experience agrees with that assertion: it is trivial to bypass strong name verification, especially with local administrator privileges on a computer.
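As a sketch of why the bypass is trivial (assuming local administrator rights and the .NET Framework SDK's sn.exe Strong Name tool; the file name is illustrative), an attacker can simply register a tampered assembly for verification skipping:

```shell
# After modifying a strong-named DLL, register it for verification skipping
# with the Strong Name tool (requires administrator rights):
sn -Vr TamperedAssembly.dll
# Or skip strong-name verification for all assemblies on the machine:
sn -Vr *,*
```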

A non-technical explanation of obfuscation is that a tool jumbles up the variable names, program structure, and/or program flow before the application is distributed to users. Then, when an attacker uses an interactive debugger or reflection to view the code, he or she has difficulty following and understanding the program's logic. There are many free and commercial tools that provide this protection, and it does deter casual attackers from performing analysis. However, there are also tools to help deobfuscate applications or track program flow.

Obfuscation tools can also make it difficult to use reverse engineering tools like ILSpy, dnSpy, and ILDasm/ILAsm by corrupting or mangling portions of the application to crash an attacker's toolset. Additionally, encryption can be applied to strings, resources, or the code within a method, which makes it difficult to use reflection to see the original code and more complex to modify the IL code. However, the code must eventually be decrypted so it can run, making it available to attackers.
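As a toy illustration of string encryption (a simple XOR stand-in, not any real product's scheme), the literal is stored scrambled so reflection tools show gibberish, yet the program must decode it at runtime, which is exactly where a debugger can still read the plaintext:

```csharp
using System.Text;

// Toy example of obfuscator-style string encryption (illustrative XOR only).
// Static analysis sees only the scrambled bytes; the plaintext exists in memory
// once Decode runs, which is why a debugger can still recover it.
public static class StringObfuscationDemo
{
    // "Secret" XOR'd byte-by-byte with the key 0x68
    public static readonly byte[] Scrambled = { 0x3B, 0x0D, 0x0B, 0x1A, 0x0D, 0x1C };

    public static string Decode(byte[] data, byte key = 0x68)
    {
        var sb = new StringBuilder();
        foreach (byte b in data)
            sb.Append((char)(b ^ key));
        return sb.ToString();
    }
}
```

Calling StringObfuscationDemo.Decode(StringObfuscationDemo.Scrambled) yields "Secret" at runtime, illustrating why encrypted strings slow an attacker down without stopping one.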

Security PS's research into two obfuscation tools (ConfuserEx and Dotfuscator Community Edition) showed that most controls can be bypassed by a skilled attacker or worked around using WinDbg and managed-code extensions. Additionally, there's a significant performance impact to using some of the obfuscation controls.

Native compilation (e.g., NGen) compiles a .NET DLL into processor-specific machine code. Security PS found that natively compiled .NET applications still allow an attacker to use interactive debuggers to introspect and control program execution. Additionally, there's no mention of using it as a security feature in Microsoft's documentation. Therefore, this technique does not provide a significant amount of protection.

There are even more techniques than I've named here. But the important points to remember are:
  • Implement and enforce security controls on the server-side
  • Only send information to the client-side that you want the user (or attacker) to see (even if it is encrypted)
  • Don't rely on the thick-client, browser, desktop application, etc., to provide any reliable level of security
  • Only apply protection mechanisms to the executable if you absolutely have to and/or if doing so is nearly free (in money, time, operations, etc.)

Lessons From Attacking Complex Thick-Client Applications

Security PS performs assessments on a wide variety of software architectures and platforms, some of which cannot be tested effectively using the more standard testing tools and methods. Recently, our team performed an assessment on a more complex application architecture: a .NET thick-client communicated with a variety of server-side components using either signed SOAP messages or custom TCP messages. These factors meant our consultants couldn't use a proxy tool to directly manipulate traffic for security testing purposes. This post discusses some of the techniques our application security team used to overcome those challenges and successfully complete the assessment.

Security PS used three techniques to manipulate both the signed SOAP requests and the custom TCP messages:
  • Writing custom code and reusing thick-client libraries
  • Attaching a debugger to the running application and manipulating variables
  • Disassembling, modifying, and reassembling the application
Code is often written in a modular way that makes it easy to reuse existing libraries. In this assessment, Security PS wrote GUI applications that reused the thick-client's libraries to decrypt data or send data to the server. This technique involved creating a new Visual Studio Project, adding the DLLs as a reference, and then writing code that calls functions within those thick-client libraries.

Next, Security PS needed to modify a field within a signed SOAP request to test authorization controls. Our team used a debugger and breakpoints to perform this modification. For .NET thick-clients, this attack is possible after disassembling and reassembling the application with debugging enabled.
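As a sketch, the disassemble/modify/reassemble round trip looks like this (file names are illustrative; ildasm and ilasm ship with the .NET Framework SDK):

```shell
# Disassemble the thick-client to intermediate language (IL) text
ildasm ThickClient.exe /out=ThickClient.il
# ...edit ThickClient.il to change the desired logic...
# Reassemble with /debug so a debugger can be attached to the result
ilasm ThickClient.il /exe /debug /output=ThickClient.patched.exe
```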

Finally, we needed a way to quickly and easily manipulate custom TCP messages to identify vulnerabilities. Use of the debugger and breakpoints was too slow, and a custom-written testing tool meant having to understand and duplicate some complex interactions that the thick-client managed. So, Security PS chose to directly modify the thick-client to allow consultants to interactively modify TCP messages. For that to be possible, we needed to disassemble the thick-client, modify the intermediate language code, and then reassemble it.

Using these testing techniques, Security PS identified a number of high impact vulnerabilities. After discussing the vulnerabilities with the client, two of the questions they asked were:
  • Is .NET less secure than other languages since these techniques are possible?
  • How do I stop attackers from manipulating my applications?
The next post will consider these questions more, but the primary message we communicated to our client focused on a critical best practice for secure software design: all security controls must be implemented on a trusted component in the application architecture. In this case, that means the server-side rather than the client-side. The client operates on the attacker's computer, where everything can be analyzed and modified regardless of the security controls used, so the architecture must assume the client environment cannot be trusted. While additional controls can increase the difficulty an attacker faces in manipulating client-side security controls, the root of this weakness is fundamentally a design flaw that must be addressed to fully mitigate the risks.

Stay tuned for a follow-up on the questions brought up above.

OAuth Is Not Meant For Authentication!

As we work with software development teams to help them apply security principles and practices to their applications, we commonly identify misunderstandings or gaps in the team's understanding of the security features, APIs, or frameworks they are using. It's important to identify and correct these misunderstandings as early as possible. When such security elements are misused, systemic security flaws are produced in the application that are difficult to resolve without significant reworking of the code or architecture.

One such example is the use of OAuth. As useful as OAuth is, it must be used for its intended purpose. If we try to make it do things it wasn't designed or intended to do, we get into trouble. Let's clarify the fundamental purpose and use of OAuth and in doing so, clear up a common misunderstanding with it.

OAuth is not meant for authentication. OAuth is for authorization.

Here are a few points demonstrating why:

OAuth has four Grant Types:
  • Authorization Code
  • Implicit
  • Resource Owner Password Credential
  • Client Credentials
For the "Authorization Code" and "Implicit" grants, the specification doesn't govern the submission of a username or password; that is totally outside the scope of OAuth. This is a great warning flag that OAuth is not intended to be used directly for authentication.

"Client Credentials" does involve credentials: the client's ID and secret, sent as a Basic Authorization header (Base64-encoded "client_id:client_secret"). BUT, it's not a grant used by end users. Here's what the specification says:
"Client credentials are used as an authorization grant typically when the client is acting on its own behalf (the client is also the resource owner) or is requesting access to protected resources based on an authorization previously arranged with the authorization server." (RFC 6749)
An example client could be a third-party service to which a user has granted an "offline" token. That service may make requests without a user interacting with it.

Now for "Resource Owner Password Credentials." Yes, you can use it to log in with a username and password, but you probably shouldn't. Not because it's insecure, but because it doesn't scale well and isn't flexible. The specification says:
"The resource owner password credentials (i.e., username and password) can be used directly as an authorization grant to obtain an access token.  The credentials should only be used when there is a high degree of trust between the resource owner and the client (e.g., the client is part of the device operating system or a highly privileged application), and when other authorization grant types are not available (such as an authorization code)."
So why shouldn't you use the Resource Owner Password Credentials grant for authentication? Well, let's start by looking at the login request and response (this example is adapted from RFC 6749):
POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=johndoe&password=A3ddj3w

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
  "access_token":"2YotnFZFEjr1zCsicMWpAA",
  "token_type":"example",
  "expires_in":3600,
  "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA"
}
You submit a username and password, and you get back an access token. The access token can then be used to call an API. Sounds ok, right? Let's add some complexities. First, OAuth is often used in combination with a stateless REST service. There's no session on the server-side; there's just the access token sent by the client, which is often a Base64-encoded set of claims with a signature (like a JWT). With that in mind, what if you need to do multi-factor authentication? What about security questions? What if there are several different ways a user can log in? How do you integrate all those options with the OAuth Resource Owner Password Credentials grant?
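To see why the token itself can't be trusted to gate extra steps, note that a JWT's claims are merely Base64Url-encoded text; anyone holding the token can read them (only the signature prevents tampering). A small illustrative decoder:

```csharp
using System;
using System.Text;

// Illustrative only: decodes a JWT's payload segment to show that the claims
// are plain Base64Url text. Only the third segment (the signature) is protected.
public static class JwtPeek
{
    public static string DecodePayload(string jwt)
    {
        string payload = jwt.Split('.')[1];                    // second segment holds the claims
        payload = payload.Replace('-', '+').Replace('_', '/'); // Base64Url -> Base64
        switch (payload.Length % 4)                            // restore stripped padding
        {
            case 2: payload += "=="; break;
            case 3: payload += "="; break;
        }
        return Encoding.UTF8.GetString(Convert.FromBase64String(payload));
    }
}
```

For example, JwtPeek.DecodePayload("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhbGljZSJ9.sig") returns {"sub":"alice"}, with no key or server involved.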

One common approach is to make it an API call and have a mobile or web application force you to complete it. But if the application is stateless and you already have an access token, why not just call any other API method directly with that token and ignore the secondary authentication step? It's trivial to bypass in a client-side application (mobile, thick-client, web page). So, that means attackers can bypass the multi-factor system that helps meet compliance and regulatory requirements.

Ok, how about making it part of the login process? Well, that's not really OAuth anymore. You have to add fields, add steps, and/or go through more process before issuing an access token. Are you going to write your own custom OAuth client library and server to do it? You might as well write a normal forms-based authentication process instead.


How do others do it then? They use the OAuth Authorization Code or Implicit grants together with a separate login server (or identity provider) that handles all the authentication and passes back a user with an access token. In fact, that's exactly what the Authorization Code and Implicit grants are for. That identity server can offer as many options and schemes for authenticating users as it wants. The authentication process is centralized and isolated from the applications that rely upon it. When it's done authenticating the user, it passes the user back to the application fully authenticated. With this in mind, you can see that this is exactly what the OAuth specification authors had in mind when you read the Introduction section of RFC 6749.
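For reference, the request that kicks off an Authorization Code flow looks like this (example adapted from RFC 6749); note that no password ever passes through the client application itself:

```
GET /authorize?response_type=code&client_id=s6BhdRkqt3&state=xyz&redirect_uri=https%3A%2F%2Fclient%2Eexample%2Ecom%2Fcb HTTP/1.1
Host: server.example.com
```

The user authenticates at the identity provider (however it chooses: password, MFA, security questions), and the application only ever receives the resulting authorization code to exchange for a token.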

This issue seems to come up in assessments more and more often lately. I keep seeing software development teams download a copy of Thinktecture's IdentityServer (a great open source product by the way), and then implement it just for their application using the resource owner password credentials. Then, they later bolt on security questions, finger print scanners, multi-factor authentication, and "remember me" features. As a result, their stateless application has easily bypassable authentication controls that are very time consuming to fix (or they have to compromise on having the API be stateless).

If you are considering implementing OAuth, or you already have, reach out to Security PS for help with the design and architecture.

Improving User Acceptance of Account Lockout Responses for Login Processes

The purpose of a login process is to identify a particular individual and validate their identity before granting them access to an application. It's critical that the process only allows the owner of an account to login, and it must prevent an attacker from logging in as another user. This post discusses one aspect of protecting authentication processes: using an account lockout response. And, it specifically focuses on decreasing the frustration users experience as a result of that control.

An account lockout response is a security control developers apply to all of the application's authentication processes to limit the number of times an individual can enter the wrong credentials consecutively. For example, if an attacker incorrectly guesses another user's password five times in a row, the application will disable the user's account and notify the user by email. Organizations must choose an appropriate lockout threshold and choose how accounts are unlocked.
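A minimal sketch of the counting logic behind such a response (the real control must be enforced server-side and persisted per account; the threshold of five matches the example above):

```csharp
// Minimal server-side lockout counter sketch. A production version would be
// persisted per account and paired with an unlock mechanism and notifications.
public class AccountLockoutTracker
{
    private readonly int threshold;
    private int consecutiveFailures;

    public AccountLockoutTracker(int threshold = 5)
    {
        this.threshold = threshold;
    }

    public bool IsLockedOut { get; private set; }

    public void RecordFailedLogin()
    {
        if (IsLockedOut) return;
        consecutiveFailures++;
        if (consecutiveFailures >= threshold)
            IsLockedOut = true; // disable the account and email the user here
    }

    public void RecordSuccessfulLogin()
    {
        if (!IsLockedOut)
            consecutiveFailures = 0; // a successful login resets the streak
    }
}
```

The important property is that only *consecutive* failures count, so a legitimate login in between resets the counter.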

When should an organization use an account lockout response? That's difficult to answer unless a company is compelled to implement the control due to a regulation or compliance requirement. The development team, security team, and marketing or user experience groups really need to discuss the pros and cons of such a process. On one hand, the application will have significantly more resistance to password guessing attacks, protecting users' accounts from being compromised. On the other hand, it may frustrate users, raise customer support costs, or even drive customers away from using the application. If an account lockout response is implemented (which Security PS generally encourages), it must be carefully designed to increase user acceptance.

One of the frustrations users experience related to account lockout responses is that they may not know their password (or sometimes their username), and they lock out their account accidentally. On top of that, the user doesn't know their account is locked out, because the application cannot display a notification on the login page that the account is locked. If it did, the process would inform an attacker that a particular username is valid, resulting in a username harvesting vulnerability. This is one of the key challenges to solve in order to increase user acceptance of the account lockout response control.

To address the notification challenge, Security PS recommends several user experience improvements that don't expose the application to additional risk. First, the application can email the user when a failed login attempt occurs. Additionally, if the account is locked out, the application can immediately email the user instructions for unlocking the account. These notifications do not cause username harvesting vulnerabilities, because only the account owner will receive those email notifications, not the attacker.

Email notifications are helpful, but what if the user doesn't check their email while using the application? They can still get frustrated easily. So, developers should consider sending SMS notifications when a user's account is locked out or potentially before the lockout occurs. The message can be short, direct, and can point the user to their emailed instructions for unlocking their account or resetting their password. The hope is that the user receives this notification before getting frustrated that they can't login.

Finally, the messaging in the application itself can remind users that a lockout response is present and that they can check their email if they believe their account is locked out. This messaging can be displayed all the time or after a specific number of failed attempts per session. The key here is that the threshold is a specific number of failed attempts per session, not per username or account; otherwise, username harvesting vulnerabilities are introduced.

Authentication processes, especially complex, multi-step, multi-credential ones, are difficult to get correct. It's easy to introduce vulnerabilities in the user creation/registration step, the forgot username/password step, and the login process itself. If you are designing an authentication process, whether it uses OAuth2, OpenID Connect, or custom forms-based authentication, contact Security PS to have a partner come alongside you and help ensure the design and implementation are secure.

5 Things to Avoid When Implementing the CSF

In my last post, I gave a quick recap of what the Cybersecurity Framework is, how it differs from other standards, and the importance it carries with both regulated and non-regulated organizations. This week, I wanted to provide some quick lessons learned by many organizations, not only with the CSF itself, but with many of the standards used within the categories of the framework. Listed below are 5 quick things your organization should consider when implementing any security framework or standard.

  1. Don’t assume the CSF is only for “Critical Infrastructure” or Federally regulated organizations: Although the Executive Order is titled as such, it is meant for all organizations, in both public and private sectors.  The same can be said for NIST 800-53 controls; it’s not just for Federal agencies. 

  2. Don’t try to do it all yourself: The implementation of the CSF requires the input and collaboration of almost every vertical within the organization. It cannot be done solely by one person, and it often requires outside help from subject matter experts when implementing various requirements.

  3. Don’t adopt controls just to adopt controls: This is one of the most common pitfalls. The informative references in the CSF are not a list of mandated controls which must be adopted for each category; they are examples or possible suggestions. Each category must be carefully examined, and the organization must ultimately decide which controls fit and which ones do not. When gaps exist, a risk assessment should be conducted to determine if the control is even necessary. All successful information security programs are built on risk management, not controls.

  4. Don’t assume there is only one way for implementation: Every organization has its own business goals, risk levels, and security requirements. One size does not fit all, and neither does the implementation of the CSF. The NIST web site, along with many others, offers unique approaches to implementing the framework. Security PS recommends that each organization carefully weigh the many options and decide which method, or combination of methods, is right for its environment.

  5. Don’t ever consider it “Finished”: Risk management, and information security in general, follows a lifecycle or iterative approach; the CSF is designed to evolve in the same way. Requirements change, new technologies and vulnerabilities emerge, and risk levels shift over time, which requires constant improvement of the organization’s program.

What challenges have you faced when implementing the CSF or other framework?  We’d like to hear from you!  Please let us know in the comments below.

Manual Application-Layer Security Testing AND Automated Scanning Tools

There are many automated application security tools available on the market. They are useful for identifying vulnerabilities in your company's applications, but they shouldn't be used alone as part of a risk identification process. This post discusses the advantages of automated tools and identifies gaps that need to be filled with manual testing techniques for a more comprehensive view of application risk.

For my purpose here, I'm going to consider automated scanning tools like WebInspect, AppScan, and Acunetix. These are all automated dynamic analysis tools, but there are quite a few other options such as Automated Code Review, Binary Analyzers, and even newer technologies that instrument application code and analyze it during runtime. The capabilities of each of these types of tools differ, but many of the pros and cons are similar.

Automated tools require at least a one-time setup step to configure them for your application. Once configured, the tools can run on a scheduled basis or even as part of a continuous integration build process. Automated tools can scan an application and deliver results very quickly, often in hours, and they can scan large numbers of applications. They are great at identifying vulnerabilities that can be found by sending attack input and analyzing the application's output for vulnerability signatures. The tools can detect popular vulnerabilities like SQL injection, cross-site scripting, disclosure of stack traces or error messages, disclosure of sensitive information (like credit card numbers or SSNs), open redirects, and more. They generally perform best at identifying non-complex to moderately complex vulnerabilities. This makes automated tools great for use cases such as:
  • A first time look at the security of a web application
  • Scanning all of an organization's web applications for the first time or on a periodic basis
  • Integration with other automated processes, such as the build step of a continuous integration server (likely on a schedule, e.g., every night)
After understanding the value that automated tools can provide, it's also important to understand their limitations. The primary limitation is that they aren't human. They are written to find a concrete, specific set of issues and to identify those issues based on signatures or algorithms. An experienced application security tester's knowledge and expertise will far outshine a tool's, allowing him or her to identify many more issues and to interpret complex application behavior to understand whether a vulnerability is present. This typically means manual testing is required to identify vulnerabilities related to:
  • Authentication process steps including login, forgot username/password, and registration
  • Authorization, especially determining if data is accessed in excess of a user's role or entitlements or data that belongs to another tenant
  • Business logic rules
  • Session management
  • Complex injection flaws, especially those that span multiple applications (for example, a customer application accepts and stores a cross-site scripting payload, but the exploit executes in the admin application)
  • Use of cryptography
  • The architecture and design of the application and related components
The issues listed above are extremely important! For example, it's unacceptable for an attacker to be able to read and modify any other user's data. But an automated tool isn't going to be able to identify this type of flaw. These tools also tend to perform poorly on web services, REST services, thick-clients, mobile applications, and single-page applications. For these reasons, manual testing is absolutely essential for identifying risk in an application.

If manual testing can identify all the same issues as an automated scanning tool and more, why bother with the automated scanning tool? Well, sometimes you don't need it. But most of the time, it's still very helpful. The key factors are speed and scale. You can scan a lot of web applications very quickly, receive results, and fix them. THEN, follow up with manual testing. The caution is that scanning alone and waiting to do manual testing may leave critical-risk vulnerabilities undiscovered in the application, so don't wait too long.

If your organization needs assistance choosing and adopting automated scanning tools or would like more information about manual application-layer security testing, please contact Security PS. Security PS does not sell automated tools, but we have advised many of our clients regarding how to choose an appropriate tool, prepare staff for using that tool, and update processes to include its usage.

ASP.NET Core Basic Security Settings Cheatsheet

When starting a new project, looking at a new framework, or fixing vulnerabilities identified as part of an assessment or tool, it's nice to have one place to refer to for fixes to common security issues. This post provides solutions for some of the more basic issues, especially those around configuration. Most of these answers can be found in Microsoft's documentation or with a quick Google search, but hopefully having it all right here will save others some time.

Enabling An Account Lockout Response

To enable the account lockout response for ASP.NET Identity, first modify the Startup.cs file to choose appropriate settings. In the ConfigureServices method, add the following code:
services.Configure<IdentityOptions>(options =>
{
  options.Lockout.AllowedForNewUsers = true;
  // TimeSpan.MaxValue requires a manual unlock
  options.Lockout.DefaultLockoutTimeSpan = TimeSpan.MaxValue;
  // Three failed attempts before lockout
  options.Lockout.MaxFailedAccessAttempts = 3;
});
With the settings configured, lockout still needs to be enabled in the login method of the account controller. In AccountController -> Login(LoginViewModel model, string returnUrl = null), change lockoutOnFailure from false to true as shown below:
var result = await _signInManager.PasswordSignInAsync(model.Email, model.Password, model.RememberMe, lockoutOnFailure: true);

ASP.NET Identity comes with a class that validates passwords. It is configurable and allows one to decide whether passwords should require digits, uppercase letters, lowercase letters, and/or symbols. This policy can be further customized by implementing the IPasswordValidator interface or extending Microsoft.AspNetCore.Identity.PasswordValidator. The code below extends the PasswordValidator and ensures the password does not contain the individual's username.
using ASPNETCoreKestrelResearch.Models;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using System.Threading.Tasks;

namespace ASPNETCoreKestrelResearch.Security
{
    public class CustomPasswordValidator<TUser> : PasswordValidator<TUser> where TUser : IdentityUser
    {
        public override async Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password)
        {
            IdentityResult baseResult = await base.ValidateAsync(manager, user, password);

            if (!baseResult.Succeeded)
                return baseResult;

            if (password.ToLower().Contains(user.UserName.ToLower()))
            {
                return IdentityResult.Failed(new IdentityError
                {
                    Code = "UsernameInPassword",
                    Description = "Your password cannot contain your username"
                });
            }

            return IdentityResult.Success;
        }
    }
}
Next, ASP.NET Identity needs to be told to use that class. In the ConfigureServices method of Startup.cs, find services.AddIdentity and add ".AddPasswordValidator<CustomPasswordValidator<ApplicationUser>>();" as shown below.
services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddPasswordValidator<CustomPasswordValidator<ApplicationUser>>();

Choosing a Session Timeout Value

Developers can choose how long a session cookie remains valid and whether a sliding expiration should be used by adding the following code to the ConfigureServices method of Startup.cs:
services.Configure<IdentityOptions>(options =>
{
  options.Cookies.ApplicationCookie.ExpireTimeSpan = TimeSpan.FromMinutes(10);
  options.Cookies.ApplicationCookie.SlidingExpiration = true;
});

Enabling the HTTPOnly and Secure Flag for Authentication Cookies

First, if you are using Kestrel, HTTPS (TLS) is not supported. Instead, it is implemented by HAProxy, Nginx, Apache, IIS, or some other web server you place in front of the application. If you are using Kestrel, the Secure flag cannot be enabled properly from the application code. However, if you are hosting the application in IIS directly, then it will work. The following code demonstrates enabling both the HTTPOnly and Secure flags for the cookie middleware in ASP.NET Identity through the ConfigureServices method in Startup.cs.
services.Configure<IdentityOptions>(options =>
{
  options.Cookies.ApplicationCookie.CookieHttpOnly = true;
  options.Cookies.ApplicationCookie.CookieSecure = CookieSecurePolicy.Always;
});

Enabling Cache-Control: no-store

When applications contain sensitive information that should not be stored on a user's local hard drive, the Cache-Control: no-store HTTP response header can help provide that guidance to browsers. To enable that feature, add the following code to the ConfigureServices method in Startup.cs.
services.Configure<MvcOptions>(options =>
{
  options.CacheProfiles.Add("DefaultNoCacheProfile", new CacheProfile
  {
    NoStore = true,
    Location = ResponseCacheLocation.None
  });
  options.Filters.Add(new ResponseCacheAttribute
  {
    CacheProfileName = "DefaultNoCacheProfile"
  });
});

Disabling the Browser's Autocomplete Feature for Login Forms

The changes to ASP.NET's razor views make this super simple. Just add the autocomplete="off" attribute as if it were a normal HTML input field:
<input asp-for="Email" class="form-control" autocomplete="off"/>
<input asp-for="Password" class="form-control" autocomplete="off"/>

Modifying the Iteration Count for the Password Hasher's Key Derivation Function

First, I believe the default right now is 10,000 iterations and the algorithm is PBKDF2. The code below won't change that default iteration count, but it shows how it can be done. In the ConfigureServices method of Startup.cs, add the following code.
services.Configure<PasswordHasherOptions>(options =>
{
  options.IterationCount = 10000;
});

Enforcing HTTPS and Choosing Appropriate TLS Protocols and Cipher Suites

As mentioned above, if you are using Kestrel you won't be able to use HTTPS directly, so you won't enforce it in your code; you will need to look up how to do this in HAProxy, Nginx, Apache, IIS, etc. If you are hosting your application in IIS directly, then you can enforce the use of HTTPS with an MVC filter, but it will only be applied to your MVC controllers/views and will not be enforced for static content. If you want to enforce HTTPS in code across the entire application, you will need to write some middleware. Finally, the choice of cipher suites offered cannot be changed using code.
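A sketch of such middleware, assuming the standard ASP.NET Core pipeline in the Configure method of Startup.cs; registering it before MVC and static files covers the whole application:

```csharp
// Sketch: app-wide HTTPS enforcement middleware. Unlike a controller filter,
// inline middleware registered early also covers static files.
app.Use(async (context, next) =>
{
    if (!context.Request.IsHttps)
    {
        // Redirect any plain-HTTP request to its HTTPS equivalent
        var httpsUrl = "https://" + context.Request.Host.Host
                     + context.Request.Path + context.Request.QueryString;
        context.Response.Redirect(httpsUrl, permanent: true);
        return;
    }
    await next();
});
```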

Enabling a Global Error Handler

A custom global error handler is demonstrated by the Visual Studio template. The following relevant code can be found in the Configure method of Startup.cs.
if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Home/Error");
}

Removing the Server HTTP Response Header

All responses from the server are going to return "Server: Kestrel" by default. To remove that value, modify UseKestrel() in Program.cs to include the following settings change:
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel(options => options.AddServerHeader = false)
        .UseStartup<Startup>()
        .Build();
    host.Run();
}


X-Frame-Options, Content-Security-Policy, and Strict-Transport-Security HTTP Response Headers

The NetEscapades.AspNetCore.SecurityHeaders package seems to cover most of these headers well. I haven't evaluated its design, but I did verify that I can install it and that the headers are added successfully. Since Kestrel does not support HTTPS, consider whether it's appropriate to implement the Strict-Transport-Security header using code or by configuring the web server placed in front of the application.

I installed this nuget package using "Install-Package NetEscapades.AspNetCore.SecurityHeaders". Then, I made sure to have the following imports in Startup.cs:
using NetEscapades.AspNetCore.SecurityHeaders;
using NetEscapades.AspNetCore.SecurityHeaders.Infrastructure;
I added the package's service registration to the ConfigureServices method of Startup.cs. Last, I added this code to the Configure method of Startup.cs:
app.UseCustomHeadersMiddleware(new HeaderPolicyCollection()
  //.AddCustomHeader("Content-Security-Policy", "somevaluehere")
  //.AddCustomHeader("X-Content-Security-Policy", "somevaluehere")
  //.AddCustomHeader("X-Webkit-CSP", "somevaluehere")
);
Make sure you add this code BEFORE app.UseStaticFiles(); otherwise, the headers will not be applied to your static files.