
Protecting Thick-Client Applications From Attack (Or How To Not Have To)


In the previous post, I discussed security testing techniques Security PS used to assess a complex thick-client application. After the assessment was complete, our client asked:
  • Is .NET less secure than other languages since these techniques are possible?
  • How do I stop attackers from manipulating my applications?
This post answers those questions and discusses best practices for securing client-server architectures.

Security PS tested the thick-client application with a variety of techniques including:
  • Reusing the application's DLLs to communicate with the server and decrypt data (see the sketch after this list)
  • Using a debugger to interactively modify variables and program flow
  • Disassembling, modifying, and reassembling the thick-client application to integrate it with a custom testing framework
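To make the first of these techniques concrete, here is a minimal sketch of DLL reuse via reflection. The library path, type name, and method name (AcmeClient.Crypto.dll, CryptoHelper.Decrypt) are made-up assumptions for illustration; in a real assessment they come from inspecting the target application.

using System;
using System.Reflection;

class DllReuseExample
{
    static void Main()
    {
        // Load the thick-client's own assembly from disk (hypothetical path).
        Assembly clientLib = Assembly.LoadFrom(
            @"C:\Program Files\AcmeClient\AcmeClient.Crypto.dll");

        // Locate the type and method the application itself uses to
        // decrypt server responses (hypothetical names).
        Type cryptoType = clientLib.GetType("AcmeClient.Crypto.CryptoHelper");
        MethodInfo decrypt = cryptoType.GetMethod("Decrypt");

        // Feed captured ciphertext (e.g., saved from a proxy) through the
        // application's own decryption logic.
        byte[] captured = System.IO.File.ReadAllBytes("captured_response.bin");
        byte[] plaintext = (byte[])decrypt.Invoke(null, new object[] { captured });

        Console.WriteLine(Convert.ToBase64String(plaintext));
    }
}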
Considering these methods, how does .NET compare to other platforms? Is .NET less secure than the alternatives?
.NET is not unique. In other assessments, Security PS has used the same techniques to assess Android, Java, and native (C/C++ executable) applications. Based on my quick research, some or all of the techniques work for iOS applications as well. The only differences between these platforms are the level of complexity and the toolset required. .NET is not any more or less secure than any other platform in this regard.

How do you stop attackers from reusing DLLs, interactively debugging applications, or modifying applications?
You shouldn't need to in most cases. For a client-server architecture, the thick-client resides on a user's (or attacker's) computer. That environment cannot and should not be trusted to enforce security controls or protect sensitive data. Client-side security controls can be defeated or bypassed completely, and any data sent to the client can be obtained by an attacker (even if it is encrypted).

Instead, organizations should spend their time architecting and designing applications that enforce security controls on the server side. If all of the security controls are implemented on the server side, then it does not matter whether the attacker manipulates the thick-client (or writes his or her own client application). This security best practice applies to web applications, web services, and client-server applications.
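As a minimal sketch of what server-side enforcement can look like, the ASP.NET Core controller below is illustrative only; the AccountsController, IAccountRepository, and Account types are assumptions invented for this example. The important part is that authentication and the ownership check run on the server for every request, no matter which client sent it.

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public record Account(int Id, string OwnerUserName, decimal Balance);

public interface IAccountRepository
{
    Account Find(int id);
}

[ApiController]
[Route("api/accounts")]
[Authorize] // the server, not the thick-client, decides who is authenticated
public class AccountsController : ControllerBase
{
    private readonly IAccountRepository _accounts;

    public AccountsController(IAccountRepository accounts)
    {
        _accounts = accounts;
    }

    [HttpGet("{id}")]
    public IActionResult GetAccount(int id)
    {
        var account = _accounts.Find(id);
        if (account == null)
            return NotFound();

        // Authorization is enforced here, on the server. A modified or
        // custom-written client still cannot read another user's account.
        if (account.OwnerUserName != User.Identity?.Name)
            return Forbid();

        return Ok(account);
    }
}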

If an organization still wishes to protect client-side code from analysis and manipulation, what are the options? If you search on the Internet, you may find these choices:
  • Strong name verification (for .NET)
  • Obfuscation
  • Native compilation (for .NET)
  • Encryption
  • and more...
Each option can be used to slow down an attacker and will make analysis or modification more difficult, but none of them prevents a skilled and determined attacker from eventually reaching their goal. Let's briefly dig into each one.

Strong name verification enables an assembly to identify and reference the correct version of a DLL. Some Internet sources recommend using strong name verification to prevent attackers from modifying DLLs, but, according to Microsoft, it should not be used as a security control. Security PS's experience agrees with that assertion: it is trivial to bypass strong name verification, especially with local administrator privileges on a computer.
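As a small illustration of what strong naming actually provides, the sketch below reads an assembly's public key token; the file path is a made-up example. Even if the application compared this token against an expected value, it would only be checking identity inside code the attacker already controls.

using System;
using System.Linq;
using System.Reflection;

class StrongNameCheck
{
    static void Main()
    {
        // Read the strong-name identity of a (hypothetical) client DLL.
        AssemblyName name = AssemblyName.GetAssemblyName(
            @"C:\Program Files\AcmeClient\AcmeClient.Core.dll");
        byte[] token = name.GetPublicKeyToken();

        Console.WriteLine("Public key token: " +
            string.Concat(token.Select(b => b.ToString("x2"))));

        // An attacker with local admin rights can strip the strong name and
        // re-sign the DLL, patch out any comparison like this one, or register
        // the assembly for verification skipping (sn.exe -Vr), so this is an
        // identity mechanism, not tamper protection.
    }
}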

A non-technical explanation of obfuscation is that a tool jumbles up the variable names, program structure, and/or program flow before the application is distributed to users. Then, when an attacker uses an interactive debugger or reflection to view the code, he or she has difficulty following and understanding the program's logic. There are many free and commercial tools that provide this protection, and it does deter casual attackers from performing analysis. However, there are also tools that help deobfuscate applications or track program flow.
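The before-and-after sketch below is illustrative only and is not the output of any particular obfuscator; it simply shows the kind of renaming an attacker faces in a decompiler. The logic is unchanged, just harder to follow.

// Original, readable source:
public class DiscountCalculator
{
    public decimal ApplyDiscount(decimal price, bool isPreferredCustomer)
    {
        return isPreferredCustomer ? price * 0.9m : price;
    }
}

// Roughly what the same code might look like after renaming obfuscation:
public class a
{
    public decimal b(decimal A_0, bool A_1)
    {
        return A_1 ? A_0 * 0.9m : A_0;
    }
}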

Obfuscation tools can also make it difficult to use reverse engineering tools like ILSpy, dnSpy, and ILDasm/ILAsm. The obfuscator can corrupt or mangle portions of the application so that an attacker's tools crash. Additionally, encryption can be applied to strings, resources, or the code within a method. This makes it difficult to use reflection to see the original code and more complex to modify the IL code. However, the code must eventually be decrypted so it can run, making it available to attackers.
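The sketch below illustrates the string-encryption idea with a deliberately simple XOR scheme; real obfuscators use their own encodings and injected helpers. The structural weakness is the same either way: the decryption routine and its key ship with the application and run on the attacker's machine.

using System;
using System.Linq;
using System.Text;

static class StringProtector
{
    private const byte Key = 0x5A; // shipped inside the binary

    // An obfuscator rewrites string literals into encrypted byte arrays and
    // replaces each use with a call to a helper like this one.
    public static string Decrypt(byte[] data) =>
        Encoding.UTF8.GetString(data.Select(b => (byte)(b ^ Key)).ToArray());
}

class Program
{
    static void Main()
    {
        // What used to be: string endpoint = "https://api.example.com";
        byte[] blob = "https://api.example.com"
            .Select(c => (byte)((byte)c ^ 0x5A)).ToArray();

        // An attacker can call the decryption helper via reflection, or simply
        // read the plaintext from memory once the application decrypts it.
        Console.WriteLine(StringProtector.Decrypt(blob));
    }
}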

Security PS's research into two obfuscation tools (ConfuserEx and Dotfuscator Community Edition) showed that most of the controls can be bypassed by a skilled attacker or worked around using WinDbg and its managed-code extensions. Additionally, there is a significant performance impact from using some of the obfuscation controls.

Native compilation (i.e., Ngen) compiles a .NET DLL into processor-specific machine code. Security PS found that natively compiled .NET applications still allow an attacker to use interactive debuggers to introspect and control the program. Additionally, there is no mention of using it as a security feature in Microsoft's documentation. Therefore, this technique does not provide a significant amount of protection.

There are even more techniques than I've named here. But the important points to remember are:
  • Implement and enforce security controls on the server side
  • Only send information to the client that you are willing for the user (or attacker) to see, even if it is encrypted
  • Don't rely on the thick-client, browser, desktop application, etc. to provide any reliable level of security
  • Only apply protection mechanisms to the executable if you absolutely have to and/or if doing so is nearly free (in money, time, operational effort, etc.)
