Security PS Blog: Insights from the Security PS consulting team about application security, network security, and enterprise security topics. Learn more by visiting http://www.securityps.com. Author: Kris Drent.

If you want to find complex bugs, sharpen your axe. (2021-04-22)

<div class="separator" style="clear: both; text-align: center;"><br /></div>Abraham Lincoln was once quoted as having said “If I had eight hours to chop down a tree, I’d spend the first six of them sharpening my axe.” For me, this quote underscores the importance of being prepared. And having the right tools. And also how difficult chopping down a tree is. But, that's probably best saved for another post ...<br /><br /><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgooj2T-SiyT4Dr1828-cmeJKVbads2hDsnptj9qMexVk6zxmMpI_0QKeNFY-X-ZcoMOExP7nOzxykAjj5h3CrQeo5GBUYH5cC-ms97YiSOAHZQkdhX9x-y6E8KJ9FbM26QRlMH0hJfZw05Zv7-V34_84nqXMxERgxijvwBTgMbcZD-_VIyzFY/s450/sps-axe.jpeg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="262" data-original-width="450" height="186" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgooj2T-SiyT4Dr1828-cmeJKVbads2hDsnptj9qMexVk6zxmMpI_0QKeNFY-X-ZcoMOExP7nOzxykAjj5h3CrQeo5GBUYH5cC-ms97YiSOAHZQkdhX9x-y6E8KJ9FbM26QRlMH0hJfZw05Zv7-V34_84nqXMxERgxijvwBTgMbcZD-_VIyzFY/s320/sps-axe.jpeg" width="320" /></a></div><br />I'm fortunate to be a member of many communities online. And, the cybersecurity industry is one in which the more seasoned veterans are willing and able to help newcomers. It's a wondrous and encouraging thing, and I've seen it time and time again. 
While participating in these communities, I often observe newcomers (particularly those new to application security) take a vulnerability-first approach. <br /><br />They'll ask questions like: <br /><div><div><ul style="text-align: left;"><li>"What is XSS and how do I find it?" </li><li>"What is SQLi?" </li><li>"How do I bypass MFA?" </li><li>and more. </li></ul></div><div>At first glance, there's nothing wrong with these questions. These are important vulnerabilities and it's important to know what they are and how to find them. In fact, one might argue that this is foundational, the first steps on the journey, and many practice labs/arenas/challenges focus on a vulnerability-first approach. Yet, due to the ingenuity of the human imagination and the time constraints of challenge builders, the learning materials often only present specific variations of vulnerabilities intended to be found. They cannot reflect the infinite number of variations seen in real systems. </div><div><br /></div><div>Inevitably, newcomers often focus on specific vulnerabilities or classes of vulnerabilities but lack the tools to go deeper into more complex territory. And, because of this, vulnerability-first approaches kind of miss the mark. It's equivalent to starting at the tip of the iceberg, missing the mass lurking below the surface. </div><h2 style="text-align: left;"><b>A better way</b> </h2><div>Just as Honest Abe noted about chopping down a tree, if you want to find more complex and higher-risk bugs, you need to be prepared with the right tools. Within cybersecurity, we like our tools, but there's no greater tool than your understanding of the system/application you're assessing. Taking the time to first understand the application/system as it was built - not as it should be, but as it is - gives you a foundation to begin looking for discrepancies that may be exploited. 
</div><div><br /></div><div>I've told this story before, but, when I was very new to application security, I had never assessed a .NET multi-factor implementation. I was familiar with types of vulnerabilities in authentication processes, and had even done the labs where you could use SQLi to bypass authentication controls. I knew there was a distinct possibility that, if I was handed an MFA process, I might be able to find some things wrong but there could be something big I'd miss. </div><div><br /></div><div>So, my team had me go build a basic .NET application, shred it to pieces, and then report back on my findings. This app had no multi-factor; my goal was simply to understand how .NET authentication works "out-of-the-box". I looked at session management, password policy criteria, how users register, etc. </div><div><br /></div><div>In the next assessment I performed, I was given a .NET app that had - you guessed it - MFA added to it. Now, because I had an understanding of .NET "out-of-the-box" authentication, I was better prepared to note where things just didn't feel right. There were deviations from the baseline, and my spidey-sense was attuned to pick up on those deviations. By the time the assessment was complete, I had found multiple distinct avenues of compromising the MFA process. All because I started with trying to understand the framework and application first, and then worried about vulnerabilities after that. </div><h2 style="text-align: left;">Application-first </h2><div>What do application-first approaches look like? Considering I'm an app pentester, the following examples take an application slant, but the principles apply to networks as well. Let's take authentication. If you want to find authentication bypass, instead of asking "How can I bypass MFA?" or immediately inserting '-- into the username field, first take the time to really understand how the authentication process works. 
Application-first questions may be: </div><div><ul style="text-align: left;"><li>How does the application determine who you are? (How does it know you from Bob?) </li><li>How does a user move from unauthenticated to authenticated? </li><li>If self-registration is present within an application, is there a difference between a first time login after self-registration versus any other login attempt? </li></ul></div><div>Or, alternatively, let's say you're taking a look at output encoding vulnerabilities. Instead of asking "Where is XSS in this application?" or dropping a script tag in every field, try asking application-first questions. </div><div><ul style="text-align: left;"><li>How does the application perform output encoding? </li><li>What are the contexts at work within any given page? </li><li>What client-side functions are in use and are any of them using vulnerable sinks? </li></ul></div><div>One more. Let's say you want to find privilege escalation. This is a high-risk issue in any network or application. </div><div><ul style="text-align: left;"><li>Have you investigated how the application or server makes its authorization determinations? </li><li>What permissions does a high-level user have that a low-level user doesn't? </li><li>Are page-level, feature-level, and role-based controls enforced consistently and comprehensively? </li></ul></div><div>These questions and more take an application-first approach. They approach a system seeking to first understand how it is operating, the controls/processes/flows in use - before looking for a vulnerability. Once you know how the application works, you are better prepared to look for, find, and exploit more complex vulnerabilities because they start to stick out like a sore thumb. </div><h2 style="text-align: left;">Sharpening your axe </h2><div>So how do you sharpen your axe? Particularly in appsec, the best way is to build web apps. 
Building your own app gives you a great understanding of how apps are put together. Going back to the above example, let's say you want to know how to better assess .NET applications. </div><div><ol style="text-align: left;"><li>Go to <a href="https://dotnet.microsoft.com/learn/aspnet">https://dotnet.microsoft.com/learn/aspnet</a>, pick a tutorial and follow it through. This <a href="https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app/start-mvc?view=aspnetcore-5.0&tabs=visual-studio">MVC movie app</a> is pretty good. </li><li>With the application built, connect a proxy to it and examine the calls going back and forth. I particularly like Burp Suite, but you can use OWASP ZAP, Fiddler, or something else. At this point, you're not hacking, just observing. </li><li>Start asking questions about the authentication, the authorization, session management, encoding, etc. The hows and whys become critical here. </li></ol></div><div><ul style="text-align: left;"><li>How does a user authenticate? </li><li>Why is the flow designed this way? </li><li>How does the app know you're a low-level user and not an admin? Or vice versa. </li><li>When you make a selection on a page, how does the server know you made the edit? </li><li>Etc. </li></ul></div><div>Once you feel like you know the answers to these questions, you've now covered your "use" cases. Put on your hacker hat and start exercising the "abuse" cases. What happens if you... </div><div><ul style="text-align: left;"><li>...skip steps of the authentication process? </li><li>...revisit steps out of order? </li><li>...force cookies to be a certain value? </li><li>...change that POST parameter to be something other than what you selected or outside the options presented to you? </li></ul></div><div>And this is the general method. Want to get better at looking for and exploiting MFA bypasses in .NET? Add MFA to the login flow of your .NET app and take a crack at it. Privilege escalation? 
Add a restricted section that only an admin should be able to get to and analyze it with a low-level user. This same method/thought process works for Angular, React, or application technologies such as OAuth, SSO, blockchain, etc. </div><div><br /></div><div>It can also be applied on the network side. Want to get better at pivoting? Deploy multiple VMs on your host, establish a subnet between them and try your hand at using proxychains. Active Directory got you down? Deploy a virtualized Windows Server with AD, add some users, and start exploring how this technology is supposed to be used. The better you are at understanding a technology, its intended use cases, and the way it was designed to be deployed, the better you will be at identifying when deployments have deviated from the intended path. </div><div><br /></div><div>Ultimately, this comes down to trying to truly understand the application/technology/system you're assessing. If you take the time to sharpen your understanding, you'll be on your way to finding deep and complex bugs in no time.</div></div></div>

Security PS Internship and Apprenticeship Information Session - November 4th (2020-10-20)<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw5PkMuKSxxEP9pQwjZuMO1Bmmm_pb60ydy37gi_GjxHyNVRDYNOpxg4WpFAWSwezobw8RWuEJv7Sgb4b3zXQqIM-4pGrSl1_WdKCzQkJUU66pc1NT1aWn14ufYhzEQeTgpLXvuQ/s1600/AppSecApprentice.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="335" data-original-width="600" height="111" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw5PkMuKSxxEP9pQwjZuMO1Bmmm_pb60ydy37gi_GjxHyNVRDYNOpxg4WpFAWSwezobw8RWuEJv7Sgb4b3zXQqIM-4pGrSl1_WdKCzQkJUU66pc1NT1aWn14ufYhzEQeTgpLXvuQ/s200/AppSecApprentice.png" width="200" /></a></div>
<p>Are you a professional, college student, or high school student trying to get started in Cyber Security? Do you want to learn how to be a penetration tester and hack web applications? If so, join us for <a href="https://www.securityps.com/cyber_apprentice.html" target="_blank">Security PS's Cyber Apprentice Program</a> information session. Come learn about opportunities to begin a career in Cyber Security as an application security intern or apprentice. This session will cover how the Cyber Apprentice program works and the practical hands-on skills you will learn through the program, and will include a live hacking demo.</p>
<p>The virtual information session will be held via Google Meet on <b>Wednesday, November 4th at 3:30 PM</b>. To receive an invitation to join, please fill out the following <a href="https://docs.google.com/forms/d/12YipFiuD9NgJKs7KmVIme-Vp9LGvysGNTwpmZl1ATsk" target="_blank">form</a>.</p>

Upcoming Event: How to Get Started as an Application Security Engineer (2020-10-09)<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPjwxdAsSS6xp-ShT1iosSV9gVQh1VEFmcGrHmDHoHBTfFQuX9UrPwBlfXirAe6HICFvxClIIhmFTyUjcDOR0gvPoWYDVZG2s7mQNvxLjBGQzOlKD4Gtno0UeTRJOlWPF1eYrD7g/s1808/SPS-CareerTalk-Christian.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="600" data-original-width="1808" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPjwxdAsSS6xp-ShT1iosSV9gVQh1VEFmcGrHmDHoHBTfFQuX9UrPwBlfXirAe6HICFvxClIIhmFTyUjcDOR0gvPoWYDVZG2s7mQNvxLjBGQzOlKD4Gtno0UeTRJOlWPF1eYrD7g/s320/SPS-CareerTalk-Christian.png" width="320" /></a></div><p></p><p>If you'd like to learn about getting started in a Cyber Security career, come join us for a virtual event October 14th.</p><p>Johnson County Community College is hosting Security PS for an online presentation about getting started in your Cyber Security career. Christian Elston will be discussing his experience pursuing education, certifications, applying for jobs, and eventually being hired by Security PS. He talks about the pros and cons of his background and what he wishes he had known or done differently leading up to being hired. High school students, college students, and young professionals will take away valuable knowledge about the practical steps they can take now to help them land a Cyber Security job in the future. 
</p><p>Join us <b>Wednesday, October 14th at 3:15 PM</b>. To register for the event or get more information, visit: <a href="https://www.jccc.edu/events/2020/1014-security-app-engineer.html">https://www.jccc.edu/events/2020/1014-security-app-engineer.html</a></p>

Cyber Apprentice 2020 Intern Term Success! (2020-08-13)<div class="separator" style="clear: both; text-align: center;"><br /></div>
<div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdg-zwCW6DY0sQMafLQgIPsxtVFrmB4Dh6k1F5oxK0OdEXkvai8udZx7KMG43vt-lIRUByigvttT3uTsw3NBCjfki-BjG-_YrtlwdgNqdPpskeWk1in-q7oxwwR7GsLOWJWvfaNsCi4r0z9OwGUIMnqop3yRF3qGliEZhWrHEZ9pf9blomp8U/s1200/AppSecApprentice.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="711" data-original-width="1200" height="190" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdg-zwCW6DY0sQMafLQgIPsxtVFrmB4Dh6k1F5oxK0OdEXkvai8udZx7KMG43vt-lIRUByigvttT3uTsw3NBCjfki-BjG-_YrtlwdgNqdPpskeWk1in-q7oxwwR7GsLOWJWvfaNsCi4r0z9OwGUIMnqop3yRF3qGliEZhWrHEZ9pf9blomp8U/w320-h190/AppSecApprentice.png" width="320" /></a></div>
Globally, the information security industry is facing a shortage of talent. According to CyberSeek, the U.S. alone has nearly 500,000 unfilled information security positions. And, while Kansas and Missouri fare better than the national average for supply vs. demand of cybersecurity professionals, there are still an estimated 10,000 information security jobs left unfilled in our region.<br />
</div><br />
<div>Security PS created the Cyber Apprentice Program to help address this gap and to invest in the next generation of cybersecurity professionals in the Kansas City metro area. The program begins with an Internship designed to train and mentor students and young professionals who want to build a career in cybersecurity but lack the hands-on experience often required for entry-level positions in the business world. By providing strong coursework, personal mentorship, and practical hands-on projects, the program gives high-school students, college students, and early career professionals an incredible opportunity to gain practical experience and accelerate their growth and career development in cybersecurity. </div><div><br /></div>
<div>In April, Security PS reviewed over 50 applications and took on 7 Interns into the program. With diverse backgrounds, walks of life, and interests, all of our Interns shared a strong drive to learn and grow in cybersecurity knowledge and skills. None had experience in application-layer security testing or analysis, which was the focus of this Internship term. Last week, 6 Interns completed the program, having stretched themselves to keep an accelerated pace of learning, collaboration, mentorship, and practical hands-on projects that demonstrated the growth and experience they achieved.</div>
<blockquote>“I really appreciated that this was an actual real-world experience. Many internships have you watch people do their jobs. It was really beneficial to do the programming and analysis myself, to have the deadlines and meetings myself - all of it was beneficial to learn exactly what it means to do this type of work.” -- Shelby J.</blockquote>
<div>Through the 13-week term, the Interns received training, direction, and mentoring from our Security PS Application Security team. Each Intern pushed themselves to learn new software development platforms and technologies and develop their own applications demonstrating those technologies. They then learned how to analyze their applications to understand how they work internally so they could have thoroughly informed conversations about the security implications of their application. Gaining experience with these technologies, industry-favored tools, and a rigorous analysis and documentation process, each of these Interns arrived at security conclusions beyond what automated testing tools can identify. </div><div><br /></div>
<div>The difficulty and criteria required to complete these tasks set a high bar. The program's training and mentoring combined powerfully with this group's relentless tenacity to learn, propelling each Intern to complete the term with three application security projects to their name. Ultimately, we set high expectations for the Interns to demonstrate professional levels of communication, documentation, and teamwork. To their credit, they delivered.</div><div><br /></div>
<div>The internship culminated in a final project where each member of the Cyber Apprentice Intern team was given a real, production application with wide-ranging, diverse, and complex technologies, some of which they had never seen before. Their objective: research, analyze, and assess the application to determine its inner workings and clearly communicate how the application worked to a seasoned Security PS Application Security team member.</div>
<blockquote>“From the beginning of the internship to now, I definitely wouldn’t have been able to do this level of analysis without going through the first two projects … I had never assessed this type of application before … it took me two days to research and figure it out … but once I got through that point it became really fun … It was a lot of fun to figure out what was going on once I got my feet on the ground.” -- Grant S.</blockquote>
<div>The program put an emphasis on learning to analyze how an application works, which is the critical foundation of security analysis. Using the analysis processes taught by their mentors, the Interns naturally began identifying security problems in their applications including session fixation, weak and improper OAuth grant types, reflected XSS, credential harvesting, and bypass of authentication process steps. ...Not your average vulnerabilities for newcomers! So proud.</div>
<blockquote>“It was really exciting to analyze a real application. ... I really enjoyed doing the analysis because this is something that is real, it isn’t fake or theoretical, and I really learned it. I have never done analysis like this before this internship... I have the confidence now that, yes, I can do this. If anyone asks me how this application works, I can be confident in what I’m saying, because I’ve worked it out and documented it.” -- Quentin K.</blockquote>
<div>These Interns made a strong stride down the path of learning the specialized area of cybersecurity analysis and pentesting called Application Security. They've gained firsthand practical skills and enough experience to be able to consider whether they would like to continue pursuing this fascinating specialization or to explore other areas of cybersecurity. Those who have the time and desire to push further in and aim for a career in Application Security can apply for the second term of our Cyber Apprentice Program, which builds further understanding of application security vulnerabilities and how to find them.</div><div><br /></div>
<div>Security PS started this program to address a growing skills and candidates gap in the information security industry. The goal was to identify capable, bright young people and train and mentor them to perform basic web application analysis. The inaugural class of interns performed very well and the Security PS professional team thoroughly enjoyed providing mentorship and training to this group of individuals. In the future, we hope to increase the number of interns we can take on as we work to find, train, and equip the next generation of information security talent.
</div>

My WCF Experience Part 2: wsHttpBinding (2020-06-26)<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-8VVIAl3lkYQ/XuppkFn1LBI/AAAAAAAABNM/aJU3UFDe3bEDjPExcorFlnrBiA-yIk5oACLcBGAsYHQ/s1600/msWCF.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="384" data-original-width="768" height="100" src="https://1.bp.blogspot.com/-8VVIAl3lkYQ/XuppkFn1LBI/AAAAAAAABNM/aJU3UFDe3bEDjPExcorFlnrBiA-yIk5oACLcBGAsYHQ/s200/msWCF.jpg" width="200" /></a></div>
<h2>
Introduction</h2>
This is part two in a multi-part blog series detailing my experience building and testing WCF applications. If you haven’t read it yet, I recommend <a href="https://blog.securityps.com/2020/02/my-wcf-experience-part-1.html">checking out part one of the series</a>, as it provides context for the rest of the posts.<br />
<h2>
WsHttpBinding</h2>
In “<a href="https://blog.securityps.com/2020/02/my-wcf-experience-part-1.html">My WCF Experience Part 1</a>”, I built an out-of-the-box basicHttpBinding WCF service, which exposed me to the concept of “service oriented architecture” (SOA). Yet, basicHttpBinding is not capable of using features like WS-Security, and so it is likely most assessments I’ll do won’t use basicHttpBinding. To progress my understanding of WCF further, I wanted to learn WCF services the way Security PS’s clients were implementing them. At the direction of my mentor I decided my next step was to try my hand at defining a service that employed wsHttpBinding.<br />
<br />
WsHttpBinding implements a binding over HTTP that supports Web Services (WS-*) protocols and standards, including many of the security mechanisms built into those standards. The default security mode for wsHttpBinding is message-level security: the transmitted messages themselves are encrypted, so even in an environment where the underlying HTTP protocol is unencrypted and possibly susceptible to interception, the messages are not readable without the proper decryption mechanism. This provides protection even over non-TLS, plain-HTTP connections.<br />
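For contrast, wsHttpBinding can instead delegate protection to the channel itself with transport security when the service is exposed over HTTPS. Below is a minimal sketch of that alternative configuration; it is an illustration for comparison, not the setup used in this post, and the binding name is hypothetical:

```xml
<!-- Hypothetical alternative: rely on TLS at the transport layer
     instead of message-level encryption -->
<bindings>
  <wsHttpBinding>
    <binding name="WSHttpBinding_Transport">
      <security mode="Transport">
        <transport clientCredentialType="None" />
      </security>
    </binding>
  </wsHttpBinding>
</bindings>
```

With transport security, the SOAP messages travel in plaintext inside an encrypted channel; with message security, the messages themselves are encrypted regardless of the channel.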
<br />
Here is the binding contract I used to implement wsHttpBinding. We’ll discuss defining binding contracts later on, so don’t worry if it looks like foreign XML. For now, I’m providing it as the “solution up front” and we’ll dive more into the highlighted elements and why it’s in an XML format down below.<br />
<div class="codeblock">
…<br />
<span class="highlightgood">
<bindings><br />
<wsHttpBinding ><br />
<binding name="WSHttpBinding_IService1"><br />
<security mode="Message" ><br />
<message negotiateServiceCredential="true" /><br />
</security><br />
</binding><br />
</wsHttpBinding><br />
</bindings><br />
</span>
<services><br />
<service name="WcfServiceLibrary1.Service1"><br />
<host><br />
<baseAddresses><br />
<add baseAddress="http://mywcfservice.local:8733/Design_Time_Addresses/WcfServiceLibrary1/Service1/" /><br />
</baseAddresses><br />
</host><br />
<!-- Service Endpoints --><br />
<!-- Unless fully qualified, address is relative to base address supplied above --><br />
<span class="highlightgood">
<endpoint address="" binding="wsHttpBinding" contract="WcfServiceLibrary1.IService1" bindingConfiguration="WSHttpBinding_IService1" ></span><br />
<!-- Upon deployment, the following identity element should be removed or replaced to reflect the identity under which the deployed service runs. If removed, WCF will infer an appropriate identity automatically.<br />
--><br />
<span class="highlightbad">
<!--<identity><br />
<dns value="mywcfservice.local"/><br />
</identity>--><br />
</span>
</endpoint><br />
…</div>
<br />
Key elements for this binding are highlighted in green, and I will discuss why the <code>identity</code> property (in red, above) has been commented out. In this blog post, I want to discuss two lessons I learned in writing this contract: defining the <code>security</code> attributes via the <code>App.config</code> file and removing the <code>identity</code> element to prevent token authentication errors.<br />
<h2>
Defining Security Attributes Administratively vs. Programmatically</h2>
Establishing wsHttpBinding versus the basicHttpBinding was a lot trickier than I anticipated, because it was at this point that the documentation available online started to break down. Rather than a singular tutorial designed as a “one size fits all” approach, I found everything from general information on wsHttpBinding to extremely environment-specific issues that did not fully address the setup issues I encountered. It was supremely frustrating as I simply wanted to get the service implemented at the most basic level.<br />
<br />
wsHttpBinding has many of the standard WS-* security features already built into its definition; all developers need to do is select which attributes to implement based on their specific deployment. In order to achieve the message-level encryption mentioned above, within your binding configuration, you must set your security mode like the following:<br />
<div class="codeblock">
<security mode="Message" ></div>
<br />
Curiously, once I started attempting to implement the <code>security</code> properties to define message-level encryption, I kept receiving errors like:<br />
<ul>
<li>Visual Studio “token generation” errors</li>
<li>Client “improper authentication” errors</li>
<li>Mismatched configuration files between the client and service setups</li>
</ul>
<br />
Eventually, I discovered my implementation differed from many others in that I opted to set my binding configuration administratively via the <code>App.config</code> file, whereas they implemented their settings programmatically within the application itself. This is one of the great things about Visual Studio: users can choose to set their configuration settings either in a structured XML .config file or via programmatic statements in their code. It is an entirely personal choice depending on the developer's preference.<br />
<br />
<b>Lesson learned #1:</b> It is possible to define your settings both programmatically as well as administratively through a configuration file. Your best implementation may change depending on your project. Administrative configuration made more sense to me while learning this process. Be prepared to do your research no matter which route you opt to employ. More resources available <a href="https://docs.microsoft.com/en-us/dotnet/framework/wcf/configuring-wcf-services-in-code">here</a> and <a href="https://www.oreilly.com/library/view/programming-wcf-services/9781449382476/ch01s07.html">here</a>.<br />
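For illustration, here is roughly what the same binding might look like when configured programmatically instead of through <code>App.config</code>. This is a sketch that reuses the example service and contract names from this post (Service1, IService1) and the same base address; it is not the code I actually used:

```csharp
using System;
using System.ServiceModel;

class ProgrammaticHost
{
    static void Main()
    {
        // Mirror of the App.config settings:
        // <security mode="Message"><message negotiateServiceCredential="true" />
        var binding = new WSHttpBinding();
        binding.Security.Mode = SecurityMode.Message;
        binding.Security.Message.NegotiateServiceCredential = true;

        var baseAddress = new Uri(
            "http://mywcfservice.local:8733/Design_Time_Addresses/WcfServiceLibrary1/Service1/");

        using (var host = new ServiceHost(typeof(Service1), baseAddress))
        {
            // Equivalent of the <endpoint> element in the configuration file
            host.AddServiceEndpoint(typeof(IService1), binding, "");
            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

Either approach produces the same runtime behavior; the configuration file simply keeps these decisions out of the compiled code.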
<h2>
Authenticating WCF Services</h2>
While this was a good lesson learned, I still had authentication errors to contend with. Enter the role of authenticating services.<br />
<br />
WCF clients need to know and authenticate the service to which they are connecting. It is not enough for a service to claim they are an authentic, non-malicious source; clients must be able to trust the service is who it claims to be. If not, clients may be redirected to malicious services and so become victims of spoofing attacks. In order to achieve this security control and prevent such spoofing attacks, WCF provides a way for services to authenticate themselves.<br />
<br />
WCF provides the Identity property of the EndpointAddress class to achieve this functionality. This property represents the identity of the service to which the client is attempting to connect. The WCF infrastructure will automatically authenticate the service using this property prior to any client code being executed. It authenticates the service using one of six different types of identities (as derived from <a href="https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/service-identity-and-authentication">this .NET developers document</a>):<br />
<ul>
<li>DNS: X.509 certificate or Windows accounts</li>
<li>Certificate: B64-encoded X.509 certificates</li>
<li>Certificate Reference: Same as certificate, but you can store the certificate in different locations and only have to update the reference to it.</li>
<li>RSA: RSA key value</li>
<li>User Principal Name: Specifies that the service is running under a specific Windows user account (using Kerberos security if in an AD environment)</li>
<li>Service Principal Name (SPN): Ensures the SPN and the specific Windows account associated with this SPN both identify the service</li>
</ul>
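On the client side, the expected identity is attached to the endpoint address. Here is a hedged sketch using the example DNS name and address from this post; the proxy class name mentioned in the comment is hypothetical:

```csharp
using System;
using System.ServiceModel;

// Declare the identity the client expects the service to prove.
var identity = new DnsEndpointIdentity("mywcfservice.local");
var address = new EndpointAddress(
    new Uri("http://mywcfservice.local:8733/Design_Time_Addresses/WcfServiceLibrary1/Service1/"),
    identity);

// A generated proxy (e.g., a hypothetical Service1Client) would be constructed
// with this address; WCF then authenticates the service against the DNS
// identity before any client code exchanges data with it.
```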
<br />
After much Googling, troubleshooting, tearing down and building back up again, I stumbled across a note hidden in an MSDN document which solved all of the token generation and improper authentication problems I experienced with wsHttpBinding. Because the <code>identity</code> property (which is included by default when adding a wsHttpBinding to a service) forces the underlying WCF infrastructure to validate the service’s identity before the client attempts to authenticate to it, my service had to be able to supply sufficient credentials matching the authentication type in order to work properly. In my development environment, I was creating a very simple client and service using a local instance of IIS Express, and so I did not have the infrastructure (or desire) to build in sufficient authentication mechanisms for my service. Thankfully, removing the <code>identity</code> property altogether in the configuration file fixed the token authentication errors and allowed my binding to establish an HTTP connection using WS-* standards.<br />
<br />
As may be seen in the screenshot below, my new WCF service now encrypts the message body in both the request and response.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img height="409" src="https://lh4.googleusercontent.com/ri7gwKLoPkfzDQ5LOcb9VBw9RBbz-vaC9DzeKyEP2yMPh4qmhkITeWqNk5yw9bWyT6rV8KihwA1CPKKtf2V7OKk2LISd0OYtpLOysvfJAMBuL96zacGkagVVuJz2_uqKPLlM88se" style="margin-left: auto; margin-right: auto; margin-top: 0px;" width="624" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Raw Encrypted Request</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img height="656" src="https://lh4.googleusercontent.com/XCTPUY1F0zHNqwTGBVpWFpOvjeLTPRdnRgnpAOH5Rfevayc-iNf5xLbjLBak6H4yIGurrObh15viSAiFVTl5wuLz7Y53_my9b86TRDOA75QfrhKAVgqr1S8Xiv5PW9AFni14M-BS" style="margin-left: auto; margin-right: auto; margin-top: 0px;" width="624" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Raw Encrypted Response</td></tr>
</tbody></table>
<b>Lesson Learned #2:</b> You can remove the <code>identity</code> property in development and allow your client to connect to your service. This is a development hack that worked great for me to get this proof of concept working, but this is not suitable (nor is it recommended) for production. If a WCF service is deployed into production without the <code>identity</code> property enabled, clients may become susceptible to phishing attacks due to redirection to malicious services.<br />
<h2>
Summary</h2>
I found getting wsHttpBinding configured and working properly to be a challenge. Researching the technology and deconstructing other users’ specific issues required me to be creative and methodical in my troubleshooting. Persistence, patience, creativity, and good old-fashioned luck were in high demand during this exercise. By the end, however, I had a working WCF prototype that modeled the protocol configuration of some real-world intranet applications Security PS has assessed.<br />
<div>
Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-58872411160817309592020-05-27T12:13:00.000-05:002020-05-28T11:31:14.968-05:00Websockets: The Importance of a Firm Handshake<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwwsCiLBJSkjzNnnoBLPpPqwG4b3n0vP5zsVhOumkG9AoE1dj5jVRCHGuvWCYmJUtJ3C40pj8Te7M1O29K1_GgQ2guYqkUs3SM1fUi8XK-XJa66Zekd17kuPqkjPi1x1sDJVfnNg/s1600/WebSocketsSecurity.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="600" data-original-width="1071" height="111" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwwsCiLBJSkjzNnnoBLPpPqwG4b3n0vP5zsVhOumkG9AoE1dj5jVRCHGuvWCYmJUtJ3C40pj8Te7M1O29K1_GgQ2guYqkUs3SM1fUi8XK-XJa66Zekd17kuPqkjPi1x1sDJVfnNg/s200/WebSocketsSecurity.png" width="200" /></a></div>
In the <a href="https://blog.securityps.com/2020/04/websockets-trust-no-one-not-even-your.html" target="_blank">previous post on websockets security</a>, we discussed several attacks that could occur in websocket implementations when security concerns were overlooked. All of these attacks required a websocket connection to be established between a client (browser) and server. In this post, we will focus on several attacks that can occur during the initial negotiation that establishes the websocket connection, and I'll also point out the security practices that strengthen the negotiation to defend from such attacks.<br />
<h2 style="text-align: left;">
Handshakes</h2>
Before discussing any specific attacks, we need to briefly discuss how the initial negotiation, or handshake, process for websockets works. To establish a websockets connection, the client sends an HTTP request to the server with several unique headers. These headers identify the request as a websockets handshake request. One possible example of this request is shown below, with the websocket-specific headers highlighted in green:<br />
<div class="codeblock">
GET /chat HTTP/1.1<br />
Host: example.com<br />
<span class="highlight1">Connection: Upgrade<br />
Upgrade: websocket<br />
Sec-WebSocket-Version: 13<br />
Sec-WebSocket-Key: 1RZ1202NWUUIxSAVA/XpFA==</span><br />
Origin: example.com</div>
<br />
If the server supports the websocket protocol, it will respond to the request above to indicate that it is switching the communication protocol. An example of this response is shown below:<br />
<div class="codeblock">
<span class="highlight1">HTTP/1.1 101 Switching Protocols</span><br />
Connection: Upgrade<br />
Upgrade: websocket<br />
Sec-WebSocket-Accept: Hu/nwPq2Q50L0v2bgQwb6Sm7BjY=<br />
Content-Length: 0</div>
<br />
Once this response is received by the client, all future traffic will be sent between the client and server through websockets. Now that we have covered the handshake process, we can explore some attacks that target it.<br />
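The Sec-WebSocket-Accept value above is not arbitrary: per RFC 6455, the server appends a fixed GUID to the client's Sec-WebSocket-Key, SHA-1 hashes the result, and Base64-encodes the digest. A minimal Python sketch of the derivation:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the websocket handshake
WS_MAGIC = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Derive the Sec-WebSocket-Accept header value from the client's key."""
    digest = hashlib.sha1((sec_websocket_key + WS_MAGIC).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example key/accept pair from RFC 6455, section 1.3
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

This derivation only proves the server speaks the websocket protocol; it is not an authentication or authorization mechanism.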
<h2 style="text-align: left;">
CSRF</h2>
In the example above, there is no way to identify the user initiating the request. To identify users, most applications will use some sort of session cookie. This cookie is then sent by the client when making a request, which allows the server receiving the request to uniquely identify the user. For example, using a cookie to identify the user initiating a websockets handshake, a client’s request might look something like:<br />
<div class="codeblock">
GET /chat HTTP/1.1<br />
Host: example.com<br />
Connection: Upgrade<br />
Upgrade: websocket<br />
Sec-WebSocket-Version: 13<br />
Sec-WebSocket-Key: 1RZ1202NWUUIxSAVA/XpFA==<br />
Origin: example.com<br />
<span class="highlight1">Cookie: session=9cj0asd9012n3JC89014NJASD</span></div>
<br />
When cookies are involved in any process, developers need to consider how to defend against possible Cross-Site Request Forgery (CSRF) attacks. A CSRF attack takes advantage of the fact that cookies are sent automatically with any request that a client makes. In general, an attacker injects HTML or JavaScript into a web page that causes a visitor's browser to make a request to a different site. The request is aimed at functionality that performs a sensitive action (such as changing a password or transferring money). Then, if the attacker can trick a victim into visiting this web page, the target application will receive a request that appears to be from the victim.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div style="text-align: right;">
</div>
Because websocket handshake requests are sent over HTTP, they are also vulnerable to CSRF attacks (this variant is often called cross-site websocket hijacking). However, unlike a traditional CSRF attack, in which an attacker can only make requests as the victim, a CSRF attack against a websockets implementation allows the attacker to both make requests and receive responses as the victim. This behavior can be used to retrieve sensitive user information by making websocket requests to endpoints that return things like user database identifiers or social security numbers. In addition, because websocket connections are not restricted by the same-origin policy, these attacks bypass the CORS protections that apply to the application's ordinary HTTP traffic.<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhww6ZUuiSJq_nbzxxrHOSAbDcGlLq8I9PmkiYYJ6nWR7DewwqUyFbut7FEIdysmC6-b9jfvCrMkciv09nudWOq8pXpFtnxQAoY-E3c6Kt3cgzBJDTiMkSXhXp5aqpVUgyTEQLWyg/s1600/CSRF+Websockets.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="CSRF WebSocket Handshake" border="0" data-original-height="424" data-original-width="883" height="307" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhww6ZUuiSJq_nbzxxrHOSAbDcGlLq8I9PmkiYYJ6nWR7DewwqUyFbut7FEIdysmC6-b9jfvCrMkciv09nudWOq8pXpFtnxQAoY-E3c6Kt3cgzBJDTiMkSXhXp5aqpVUgyTEQLWyg/s640/CSRF+Websockets.png" title="CSRF WebSocket Handshake" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: 12.8px;">CSRF WebSocket Handshake</span></td></tr>
</tbody></table>
<br />
There are many solutions to CSRF, the most common being the use of anti-CSRF tokens. It is important that these same protections are applied to any websocket handshake requests.<br />
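As defense in depth, the server can also reject handshakes whose Origin header is not on an allowlist; browsers attach the Origin header to websocket handshakes, but it is up to the server to enforce it. A simplified Python sketch (the allowlist entries are hypothetical):

```python
# Hypothetical allowlist of origins permitted to open websocket connections
ALLOWED_ORIGINS = {"https://example.com", "https://app.example.com"}

def is_handshake_allowed(headers: dict) -> bool:
    """Reject websocket handshake requests from unexpected origins."""
    return headers.get("Origin", "") in ALLOWED_ORIGINS

print(is_handshake_allowed({"Origin": "https://example.com"}))   # True
print(is_handshake_allowed({"Origin": "https://attacker.net"}))  # False
```

Note that non-browser clients can forge the Origin header, so this check complements, rather than replaces, anti-CSRF tokens.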
<h2>
Fallbacks</h2>
In contrast to the two example handshake requests above, what happens if a request is made to initiate a websocket connection and the client or server does not support the websocket protocol? To handle this case, a majority of websocket libraries will “fall back” to a model known as HTTP long-polling. In HTTP long-polling, the client repeatedly sends requests with long timeouts to the server. The server then either responds with new information or lets the request time out if there is nothing to retrieve.<br />
<br />
While HTTP long-polling is not vulnerable by design, security vulnerabilities can arise if developers do not account for communication falling back into this mode of operation. For example, one application we examined supported websockets and would verify the original session cookie used to initiate the handshake to handle authorization. However, when the application was forced to fall back into HTTP long-polling mode, HTTP requests would instead include a JWT in their body to handle authorization. Since this behavior was not expected, many of the endpoints in the application did not properly check this JWT. This allowed users who forced a fallback to HTTP long-polling to gain access to unauthorized functionality, including administrator-only operations.<br />
<br />
In applications that support websockets, it’s critical to test the fallback workflow to ensure that authorization controls are applied consistently. This can be done by modifying the handshake’s response in a man-in-the-middle proxy, such as Burp Suite or OWASP ZAP, to force the connection to fall back to HTTP long-polling. Instead of the “HTTP/1.1 101 Switching Protocols” response, the response should be modified to look like:<br />
<div class="codeblock">
HTTP/1.1 200 OK<br />
Content-Length: 0</div>
<br />
This response will force most websocket implementations to fall back into HTTP long-polling mode. After doing this, the application can be navigated normally, and authorization controls can be verified to work correctly in this mode.<br />
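One way to avoid fallback-specific authorization gaps is to funnel both transports through a single credential-resolution routine, whether the credential arrives as a session cookie (websocket handshake) or a JWT (long-polling body). A simplified, hypothetical Python sketch:

```python
# Hypothetical credential stores; a real application would consult a session
# cache and cryptographically verify JWT signatures rather than use
# in-memory dictionaries. The session value mirrors the cookie example above.
VALID_SESSIONS = {"9cj0asd9012n3JC89014NJASD": "user1"}
VALID_JWTS = {}  # intentionally empty: unverified long-polling tokens resolve to no user

def resolve_user(cookie_session=None, body_jwt=None):
    """Single authorization path shared by websocket and long-polling traffic."""
    if cookie_session is not None:
        return VALID_SESSIONS.get(cookie_session)
    if body_jwt is not None:
        return VALID_JWTS.get(body_jwt)
    return None  # unauthenticated: the caller must reject the request
```

Because every endpoint asks the same routine "who is this?", an attacker cannot sidestep checks simply by switching transport modes.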
<h2>
Publishing and Subscriptions</h2>
On top of their websocket implementations, many applications will use a publishing and subscription (pub/sub) model to send identical messages to multiple users. For example, in a shared document editor, any changes to the document will be sent to all the users viewing the document. To accomplish this, the client would subscribe to the “document_updated” event. When the document is updated, the server would go through all subscribers and publish this event to each of them.<br />
<br />
Clients often subscribe to events immediately following the initial websocket connection. If authorization controls are not enforced on these subscriptions, attackers can subscribe to unauthorized publishers.<br />
<br />
In one application we examined, users could subscribe to events regarding document updates. For example, to subscribe to content updates, the (simplified) websocket request looked like:<br />
<div class="codeblock">
{"document_id":"1234-5678-9012-3456", "event": "update"}</div>
<br />
The application would then send responses over websockets containing information regarding the event:<br />
<div class="codeblock">
{"document_id":"1234-5678-9012-3456", "event": "update", "content": "new text"}</div>
<br />
However, in this application, there were no authorization controls enforced on the “document_id” field. By substituting in different document_id values, attackers could subscribe to events on other users’ documents. In addition, like many pub/sub models, this application supported the “*” wildcard, which pub/sub models commonly use to mean “all”. In this application, an attacker could subscribe to all document updates from all users by issuing the following request:<br />
<div class="codeblock">
{"document_id":"*", "event": "*"}</div>
<br />
When pub/sub models are implemented on top of websockets, it’s critical that developers ensure that users have permission to subscribe to a resource before granting it.<br />
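A subscription handler along these lines can enforce that check by refusing wildcards outright and consulting an access-control list before registering the subscriber. A simplified Python sketch (the ACL data and names are hypothetical):

```python
# Hypothetical ownership map: document_id -> set of user ids allowed to subscribe
DOCUMENT_ACL = {"1234-5678-9012-3456": {"alice", "bob"}}

def subscribe(user_id: str, document_id: str) -> bool:
    """Register a subscription only if the user may read this document."""
    if "*" in document_id:  # never honor client-supplied wildcards
        return False
    return user_id in DOCUMENT_ACL.get(document_id, set())

print(subscribe("alice", "1234-5678-9012-3456"))  # True
print(subscribe("alice", "*"))                     # False
```

The key design choice is that the wildcard check happens before any lookup, so "subscribe to everything" is rejected even for users with broad permissions.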
<h2>
Conclusion</h2>
So, while websocket handshakes have some unique considerations regarding design and security, they require the same security scrutiny that must be applied to all traditional traffic. In particular, developers should take the following steps to secure their websocket handshakes:<br />
<ol>
<li>Ensure that websocket handshakes have some sort of CSRF protection.</li>
<li>Understand your application’s fallback mechanism and review whether the fallback mode uses a different authorization model.</li>
<li>Enforce authorization on subscriptions, even when resource identifiers are hard-to-guess GUIDs.</li>
</ol>
<br />
<div>
<br /></div>
Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-45265421151399171912020-05-01T13:48:00.001-05:002020-05-01T13:48:07.895-05:00Security PS Brings On Seven New Cyber Apprenticeship Interns!<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjK4kqqSo3R8p6QR4D73Imupjb3TeB6exHpmQOkzI6kLUo-H3zzlB-VfX6juZM0PAWOwx4KrJQNLTtTGoSoTFcSHp3EeO_zd-1Ou-yyxbwJLxC9uW4FMnNvvrX530QlqV-8CNuCug/s1600/AppSecApprentice.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="711" data-original-width="1200" height="118" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjK4kqqSo3R8p6QR4D73Imupjb3TeB6exHpmQOkzI6kLUo-H3zzlB-VfX6juZM0PAWOwx4KrJQNLTtTGoSoTFcSHp3EeO_zd-1Ou-yyxbwJLxC9uW4FMnNvvrX530QlqV-8CNuCug/s200/AppSecApprentice.png" width="200" /></a></div>
Security PS loves Kansas City. We have consultants that live all over the metro area. Many of our clients also call Kansas City their home. That’s why we are investing in students, schools, and young professionals right where we live by creating the <a href="http://blog.securityps.com/2020/03/security-ps-internships-and.html" target="_blank">Security PS Cyber Apprentice Program</a>. We are particularly proud of this program because it breaks down barriers that would ordinarily make it difficult for passionate students and newcomers to the field to learn, gain experience, and begin their cyber security career.<br />
<br />
As a part of our Cyber Apprentice Program, we have partnered with several area schools and teachers to teach high school and college students about cyber security. We’ve provided hands-on labs about hacking web applications and talked about how to start a career in this high demand and exciting professional field. This year, we've engaged students and teachers from area schools including:<br />
<ul>
<li>Johnson County Community College</li>
<li>Metropolitan Community College</li>
<li>Summit Technology Academy</li>
<li>North Kansas City High School</li>
<li>Blue Springs High School</li>
<li>The Barstow School</li>
<li>The Islamic School of Greater Kansas City</li>
<li>Shawnee Mission Center for Academic Achievement</li>
<li>Shawnee Mission South High School</li>
<li>Shawnee Mission Northwest High School</li>
<li>Olathe South High School</li>
<li>Spring Hill High School</li>
<li>Teachers at a variety of schools through KC Stem Alliance’s Computer Science Teacher Mentor Day</li>
</ul>
<br />
On Tuesday, April 28th, 2020, seven new interns began the SPS Cyber Apprenticeship program and joined our team for the next three months. Our diverse cohort of interns includes exceptional high school students, college students, and professionals in the Kansas City area. We are looking forward to getting to know these students better and helping them realize their potential in this rewarding industry!<br />
<div>
<br /></div>
Kris Drenthttp://www.blogger.com/profile/10182751344265769843noreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-26183001246311967942020-04-24T09:37:00.001-05:002020-05-27T12:50:32.316-05:00Websockets: Trust No One, Not Even Your Own Application<img border="0" data-original-height="600" data-original-width="1071" height="111" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikY-GRCemjuhB2NtMXbPYI5Icu89nbBUHe5CNStI_SZRM2D4BXrUybTKqrwEmn1HuY_ZOo6PPqH0UEqJaKCcXYBn3BGc0HXhkHSFMMCOU0sme2UUiz7YeDrXBF90wZUqcmOm_NqA/s200/WebSocketsSecurity.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" title="Web Sockets Security" width="200" />Websockets were designed to solve the need for fast and bi-directional communication in client/server applications. They are an ideal choice for chat programs, collaborative document editors, and browser-based games. Overall, they offer many advantages over traditional HTTP in terms of speed and ease of use. However, like any other technology, there are fundamental security issues that need to be considered when using websockets. In this article, we will review several security concerns that are often overlooked when implementing websockets.
<br />
<h2>
Input</h2>
To ease into the topic, let’s begin with a well-known security concern and see how it applies to websockets. As many developers know, applications should always treat user input as potentially malicious. One example of an attack that can occur if this assumption is not enforced is a cross-site scripting (XSS) attack. In an XSS attack, an attacker places malicious JavaScript code within a parameter. This malicious code is eventually viewed and executed by a victim’s browser. Many developers are aware of XSS attacks against typical form elements (such as a name or address field), and most web frameworks include libraries to protect against these attacks automatically. However, these libraries often do not automatically protect against malicious code in websocket messages. If this avenue is not secured, attackers can use a man-in-the-middle proxy, such as Burp Suite or OWASP ZAP, to modify websocket traffic and conduct XSS attacks.<br />
<br />
For example, we recently analyzed a chat application that used websockets to send and receive chat messages. When a user sent a chat message, the following websocket message would be sent from the client (browser) to the server:
<br />
<div class="codeblock">
{"user":"user1", "text":"hello", "type":"message", "id":1562869260905}</div>
<br />
By intercepting their own requests to the server, an attacker could modify the message sent to the server to include JavaScript code. This would allow them to bypass client-side sanitization controls. This attack would then execute identically to a traditional XSS attack. An example of this attack is shown below:<br />
<div class="codeblock">
{"user":"user1", "text":"Nothing malicious here&lt;script&gt;alert(1)&lt;/script&gt;","type":"message","id":1562869260905}</div>
<br />
When viewed by another user, the JavaScript code executed on their machine:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<span style="border: none; display: inline-block; height: 139px; margin-left: 1em; margin-right: 1em; overflow: hidden; width: 624px;"><img src="https://lh6.googleusercontent.com/K_aimPcg_Yt6dDebk0qibex_mFiXDnBcnjScM2LqNTZBOQq_nHCbB3OdQ-sq2oHOPPFl9xmGNMHWo1OWjnXPuLIo1jMs2l30MWkzQKSoT50LM29b8_doums6qXlXNh0lABY61BEj" style="border: 1px solid rgb(221, 221, 221); margin-left: 0px; margin-top: 0px;" width="500" /></span></div>
While this particular application was vulnerable to XSS attacks, the same root cause, trusting websocket input, can be exploited in other areas, such as authorization controls. Like regular input, every application should assume all websocket traffic from users is potentially malicious.<br />
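On the server side, the same output-encoding discipline used for form fields applies to websocket messages. A simplified Python sketch that HTML-encodes the user-controlled text field (message shape borrowed from the chat example above) before the message is broadcast:

```python
import html
import json

def sanitize_chat_message(raw: str) -> str:
    """HTML-encode the user-controlled 'text' field before broadcasting it."""
    msg = json.loads(raw)
    msg["text"] = html.escape(msg["text"])
    return json.dumps(msg)

# The injected tag comes back HTML-encoded and inert
print(sanitize_chat_message('{"user": "user1", "text": "hi<img src=x onerror=alert(1)>"}'))
```

In a real application this encoding belongs at the point of output for the context in question (HTML body, attribute, JavaScript, etc.); this sketch just illustrates that websocket payloads must pass through it like any other input.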
<h2>
Server Output</h2>
In contrast to dealing with user input, many applications are developed to intrinsically trust information that originates from server responses. However, developers should be careful when it comes to trusting websocket responses from the server, as it is possible for attackers to modify these responses before the client receives them. In addition, these modified responses will appear legitimate to the client. If the application trusts these responses, it can lead to serious vulnerabilities.<br />
<br />
One browser-based game we analyzed controlled player movement through a combination of websocket requests and responses. When moving in the game, players would send a websocket message to the server that contained their new position:
<br />
<div class="codeblock">
{"player":140234, "newposition_x": 155, "newposition_y": 20}</div>
<br />
The server would respond back to the player with a confirmation message that contained their new position. This response would look identical to the request above.<br />
<br />
Attempting to send a request from the client with modified newposition_x or newposition_y values would cause the entire request to be ignored by the server. However, by intercepting the response from the server and modifying these positions before they reached the browser, the game would assume that the player had desynced from the server. The game would then invoke a special function to fix this apparent desync, placing the player at whatever position was specified in the modified server response. Since the traffic from the client and server appeared legitimate, this vulnerability allowed attackers to teleport around the game world.
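One mitigation, sketched below with a hypothetical movement limit, is for the client (and, more importantly, the authoritative server) to reject any position update that implies impossible movement rather than trusting it outright:

```python
MAX_STEP = 5  # hypothetical per-update movement limit for this game

def accept_position_update(current: dict, proposed: dict) -> bool:
    """Reject 'desync corrections' that imply impossible movement."""
    dx = abs(proposed["newposition_x"] - current["newposition_x"])
    dy = abs(proposed["newposition_y"] - current["newposition_y"])
    return dx <= MAX_STEP and dy <= MAX_STEP
```

With a check like this on both ends, a tampered response that teleports the player across the map fails the plausibility test and can trigger a full state resynchronization instead.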
<br />
<h2>
Message Forgery</h2>
Both the vulnerabilities discussed above occurred due to attackers modifying messages. However, attackers are not limited to only modifying messages - they can also create them. This can cause two potential issues for applications:
<br />
<ol>
<li>Denial of Service (DoS) attacks</li>
<li>Authorization issues</li>
</ol>
Applications will often have certain operations that take a long time to finish, such as parsing a large file for specific information or querying a database. If these expensive operations are not properly restricted and queued, it may be possible for an attacker to create thousands of requests and overload the application. Similarly, if the application allows file uploads, an attacker could upload enormous amounts of data. Neither of these vulnerabilities is novel, and applications will often have mechanisms in place to handle them. However, many developers treat websocket traffic differently than regular HTTP requests and do not apply these security mechanisms consistently to websocket traffic. When using websockets, it’s important to remember that attackers can easily create a large volume of requests to any endpoint and that some sort of protection needs to be applied to prevent denial of service attacks.<br />
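The same per-client throttling commonly applied to HTTP endpoints works for websocket messages too. A minimal token-bucket sketch in Python (the rate and capacity values are hypothetical):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` messages, refilling `rate` tokens per second."""

    def __init__(self, rate: float = 10.0, capacity: float = 20.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, then spend one token if available
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Each websocket connection would get its own bucket; a message arriving when no tokens remain can be dropped, or the connection closed, before any expensive server-side work is performed.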
<br />
In a similar manner, if applications are designed to assume that websocket traffic cannot be created by attackers, they may allow access to unauthorized features. For example, if an application assumes that only administrators can create certain messages, it may not properly verify the requesting user. This would allow an attacker to perform unauthorized actions by only sending websocket requests.<br />
<h2>
Conclusion</h2>
So, while websockets offer many advantages over traditional HTTP, they require the same security considerations that must be applied to all traditional traffic. However, because many security libraries do not account for them automatically, developers must ensure that common controls are also applied to any websocket functionality. In particular, developers should take the following steps to secure their websocket implementations:<br />
<ol>
<li>Always assume user input is malicious and protect against it, even in websocket messages to the server</li>
<li>Clients that use websockets shouldn’t trust server responses to execute sensitive functionality and servers receiving websocket messages should enforce controls</li>
<li>Always verify that websocket traffic is validated on the server-side for authorization controls, rate limiting, business logic, and other traditional server-side responsibilities</li>
</ol>
In the next post in this series, we will cover some additional unique vulnerabilities found specifically within websocket implementations.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-59404392401625729742020-03-18T14:21:00.000-05:002020-04-29T18:20:58.639-05:00Security PS Internships and Apprenticeships for High School Seniors and Current College Students<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw5PkMuKSxxEP9pQwjZuMO1Bmmm_pb60ydy37gi_GjxHyNVRDYNOpxg4WpFAWSwezobw8RWuEJv7Sgb4b3zXQqIM-4pGrSl1_WdKCzQkJUU66pc1NT1aWn14ufYhzEQeTgpLXvuQ/s1600/AppSecApprentice.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="335" data-original-width="600" height="111" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw5PkMuKSxxEP9pQwjZuMO1Bmmm_pb60ydy37gi_GjxHyNVRDYNOpxg4WpFAWSwezobw8RWuEJv7Sgb4b3zXQqIM-4pGrSl1_WdKCzQkJUU66pc1NT1aWn14ufYhzEQeTgpLXvuQ/s200/AppSecApprentice.png" width="200" /></a></div>
Right now, high school and college age students have an opportunity to start internships and apprenticeship positions on the Security PS Application Security team learning the fundamental concepts and skills necessary to pursue a professional career in this high demand field. Our team of cybersecurity experts has a passion for investing into the upcoming workforce and we've created these programs to give hard working students the opportunity, mentoring, and resources to accelerate their growth toward a professional career.<br />
<h2>
Who is this for?</h2>
Security PS is hiring high school seniors and current college students as interns and apprentices NOW! You don’t need a college degree to get started, and you don’t have to wait until May. Students can work part-time while finishing (and prioritizing) school. Security PS is holding an information session on Wednesday, March 25th at 3:00 PM over Google Hangouts for students, parents, and teachers interested in learning more. Please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSeT_56VLuDaoISIFFJ_pYw-AQU4BmBxgBCIiZ_zTV0THC33ng/viewform">this form</a> to be invited to that information session.<br />
<h2>
What can you expect?</h2>
During the internship and apprenticeship programs, students will strengthen their existing software development skills to build a solid foundation towards application penetration testing. Then, Security PS will provide training and mentorship to equip them with the technical and soft skills necessary to find and exploit application security vulnerabilities and report them to customers. As apprentices and interns become proficient at working on projects and identifying vulnerabilities, they will be promoted to an associate application security engineer with the option to begin working full-time.<br />
<h2>
About Security PS</h2>
Security PS is an 18-year-old Kansas City company that has earned a strong reputation in the industry for delivering quality work and for its excellent team. Internally, we have developed a culture that enjoys pursuing knowledge through self-study and then teaching those skills to the rest of the team.<br />
Security PS provides a supportive team environment that gives employees opportunity for growth and ongoing professional development in a range of areas. Designed to fit the work style of our team, we've moved away from the traditional office and have adopted a virtual office but with a local team presence. This allows us to work flexibly from home, collaborate virtually, and also have the opportunity to meet and collaborate face to face as well. Regular team events and hangouts also add to the collaborative team culture. Security PS values employees as people. Our company has a 40-45 hour work week and managers have a genuine interest in each person’s well being.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-73202533677202801532020-02-26T15:02:00.000-06:002020-04-29T17:47:11.266-05:00My WCF Experience (Part 1)<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsGzViamNeKhyphenhyphen0oqboXcmJJEUQwgAXwBvlfQZgCht9F1NrYtKEdT-viZ5j9f_ggnU_vYYHKlPzHTAnC1nM9761fxv6zeeMokIZwoGnx023u7bBRHkyfYNLj9V36let7q6CmLOaFQ/s1600/msWCF.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="384" data-original-width="768" height="100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsGzViamNeKhyphenhyphen0oqboXcmJJEUQwgAXwBvlfQZgCht9F1NrYtKEdT-viZ5j9f_ggnU_vYYHKlPzHTAnC1nM9761fxv6zeeMokIZwoGnx023u7bBRHkyfYNLj9V36let7q6CmLOaFQ/s200/msWCF.jpg" width="200" /></a></div>
<h2>
Hello World</h2>
One of the core aspects of Security PS is the leadership’s emphasis and priority on continuing professional development. I have been afforded a great deal of research and development time to grow my security and application development knowledge. So, I've decided to blog a bit of what I'm learning in hopes that my "aha" moments may help and spur on others on a similar journey.<br />
<div>
<h2>
A New Target: WCF</h2>
In our team's application security testing projects, there are times when we encounter applications that use "Windows Communication Foundation" (WCF) to communicate between application components. I was new to WCF, so one of my teammates and mentors, Nick Coblentz, tossed me into the deep end of WCF communication so I could get a good handle on how to test such applications for security concerns. Nick gave me a crash course and some exercises to understand WCF services, their security implications, and how to test them. Essentially, I first had to build a service myself and implement different service configurations. Then I needed to figure out how to listen to the communication and identify the circumstances and methods that would enable me to manipulate it. Over the next few posts, I'm going to walk through how I figured out this process and share a few things I learned along the way.<br />
<h2>
WCF - What is it good for? (and what even is it?)</h2>
Windows Communication Foundation (WCF) originally was code-named Indigo and was released with .NET 3.0 in 2006. According to Microsoft, WCF “is a framework for building service-oriented applications.” WCF was built to support a whole suite of features, including interoperability, data contracts, security, multiple transports and encodings, and reliable and queued messages (Source: <a href="https://docs.microsoft.com/en-us/dotnet/framework/wcf/whats-wcf">https://docs.microsoft.com/en-us/dotnet/framework/wcf/whats-wcf</a>). <br />
<br />
Today, the general consensus on the Internet appears to be that more and more enterprises are moving away from WCF toward WebAPI for its ease in supporting not only websites but also mobile devices and tablets. However, WCF has been a core part of service-oriented infrastructure on the Internet for over a decade, so it still bears relevance today. Beyond its pervasiveness, WCF is likely to remain a core part of service-oriented architecture for the time being because WCF and WebAPI are complementary, not mutually exclusive. WCF by design supports a service-oriented architecture and implements Web Services (WS) protocols based on SOAP specifications for stateless applications, whereas WebAPI supports a resource-oriented architecture and pairs much better with RESTful frameworks where stateful applications are concerned.<br />
<br />
Now that we have an understanding of what WCF is, I'll share how I implemented my first WCF service.<br />
<h2>
The Setup</h2>
First, I set out to build a basic WCF service. No fancy bells, no fancy whistles. Just a WCF service out of the box. Microsoft has done a good job of providing documentation on their website to assist with setting up an initial WCF service. As I mentioned previously, this is not a tutorial blog post, but I followed <a href="https://docs.microsoft.com/en-us/visualstudio/data-tools/walkthrough-creating-a-simple-wcf-service-in-windows-forms?view=vs-2017">this</a> walkthrough to help me get a basic service up and running along with a client. My application was a simple Windows Form that: <br />
<ul>
<li>Asked for a username and password</li>
<li>Checked the combination against an Excel “database”</li>
<li>Returned whether the credential pair was valid or not</li>
</ul>
<br />
This simple application was designed to mimic an intranet login application, which you can see in the two screenshots below. <br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img src="https://lh5.googleusercontent.com/Y9bF_EoMrunaaKGmfAgbuPRlTgQeR_P72o-lbY4WK5vKocB6upTCg8pj9QKUtNdHNSya49LIIeZilLLagyvewAv2lpOjYAo4SBTmuWojZvBB0FEuX_uXzWH2E28i4XG8ye7j_BY6" style="border: 1px solid rgb(224, 224, 224); margin-left: auto; margin-right: auto;" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Login Form with Valid Response Message </td></tr>
</tbody></table>
<div style="text-align: center;">
</div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img src="https://lh4.googleusercontent.com/k8Uaype2tx_4ce_tST8JCxdgCeUHq453e2YL0gUmWk_lybBzGwdOd0dWG4XWT826dtLr5X_pGXKIG0zber3ZFzpDe7CXgXEccuo7pRPARmZppqpbkk9IdOoRosrEYJRq9-ebXxAN" style="border: 1px solid rgb(224, 224, 224); margin-left: auto; margin-right: auto;" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Login Form with Invalid Response Message</td></tr>
</tbody></table>
<div style="text-align: center;">
</div>
Now, the default WCF service generated by Visual Studio 2017 is a “basicHttpBinding” service. This is a very simple service with no default security mechanisms implemented - no TLS, no encryption, no message-level security. The returned message (either “Valid” or “Invalid”) told me the service and client were working properly, but I wanted to see how the mechanism underneath worked. Proxying the service through BurpSuite allowed me to inspect the transport mechanism.<br />
<h2>
WCF basicHttpBinding Observations</h2>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img src="https://lh4.googleusercontent.com/BI4a5ABD0IlloZCjZssUqm1xmLZMMlUN5AiPFQBHnDmb3ioFdfkpJrmzxtSZ0o8FwijNWsOaexawnQsimmadX9Os1cUh1eGgu5TcugKQZE4jQB6WbPg14gQT-P-RE7Pld-T455Y5" style="border: 1px solid rgb(224, 224, 224); margin-left: auto; margin-right: auto;" width="550" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Intercepting POST Request with BurpSuite</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img src="https://lh3.googleusercontent.com/W0TdgZfKls9r3Uw4E2wfyE6BvImEtJHOvqgWIB3j_PHOCrB9gGoWGYwJvQfgh40VstbAjkXDF_8y3me_mhiGRkNADn54gQW0IEakyCdjk31JEEoSxdT77wCgtTGXoqcvCT4LgoUg" style="border: 1px solid rgb(224, 224, 224); margin-left: auto; margin-right: auto;" width="550" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Viewing raw response with BurpSuite</td></tr>
</tbody></table>
<div style="text-align: left;">
The first observation one may make about the above screenshots is the presence of the SOAP envelope and body. As previously mentioned, WCF implements the SOAP framework. This point is proved by the request and response shown in the screenshots. </div>
<br />
The second observation one may make is that basicHttpBinding transmits everything in clear text, with no predefined security mechanisms whatsoever. It does not support the Web Services (WS) security protocols or standards. By default, basicHttpBinding does not allow cookies, the encoding type is “Text”, and security is turned off.<br />
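To make those observations concrete, here is a small Python sketch of the kind of cleartext SOAP envelope basicHttpBinding puts on the wire. The service namespace and operation names below are hypothetical stand-ins for whatever a real service's WSDL defines:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical service namespace and operation; real values come from the WSDL.
SVC_NS = "http://tempuri.org/"

def build_login_envelope(username, password):
    """Build the kind of cleartext SOAP body basicHttpBinding sends over HTTP."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}ValidateUser")
    ET.SubElement(op, f"{{{SVC_NS}}}username").text = username
    ET.SubElement(op, f"{{{SVC_NS}}}password").text = password
    return ET.tostring(envelope, encoding="unicode")

msg = build_login_envelope("elf", "hunter2")
# With no transport or message security, the password is plainly visible
# to anyone who can observe the HTTP request (e.g., an intercepting proxy).
print("hunter2" in msg)  # True
```

This is only a sketch of what ends up in the POST body; the actual envelope WCF generates includes additional headers and namespaces, but the credentials are just as readable.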
<h2>
Quick Conclusions</h2>
This was my introduction to WCF, playing with the out-of-the-box basic configuration. Based on my initial observations, basicHttpBinding is quick and simple to set up, but it is not suitable for corporate solutions. However, most clients probably are not using basicHttpBinding. So, look for “My WCF Experience Part 2,” where I set out to create a wsHttpBinding.<br />
<br /></div>
Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-62386736704983829252019-02-25T08:30:00.002-06:002020-04-29T17:13:51.557-05:00My KringleCon 2018 Experience!<h3>
Hello World</h3>
Hi, my name is Christian, and I joined the Security PS team back in July as an Associate Application Security Engineer. As an associate, I get to spend a significant amount of time training to build out my application security knowledge and experience so I can grow in my technical testing skills and consulting ability. I know there are many others out there who are working to grow their security knowledge as well, so I've decided to blog a bit of what I'm learning in hopes that my "aha" moments may help and spur on others on a similar journey.<br />
<br />
In December, I participated in the SANS Holiday Hack Challenge. In this post, I want to share the top four things I learned from the event, broken down into the two concepts and two tools I found most exciting to learn about. These concepts and tools apply to web app security, network security, and general best practices. The first concept I would like to share is a network best practice: keep credentials out of system commands.<br />
<h3>
The Event</h3>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/--icKpIs6xUE/XGwct737egI/AAAAAAAAAJk/enETRYlaeHUdmninDqwjlyYaS43RXBbdgCLcBGAs/s1600/At%2Bthe%2BGates.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="331" data-original-width="512" height="206" src="https://3.bp.blogspot.com/--icKpIs6xUE/XGwct737egI/AAAAAAAAAJk/enETRYlaeHUdmninDqwjlyYaS43RXBbdgCLcBGAs/s320/At%2Bthe%2BGates.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">At the gates of Santa’s Castle</td></tr>
</tbody></table>
For each of the last eleven years, the SANS Institute has hosted a holiday-themed hacking challenge. For 2018, SANS asked the hacking-challenge attendees to help Santa save the North Pole from a malicious cyber takeover. To support attendees, SANS put together a free virtual conference that went hand-in-hand with the hacking challenge. This virtual conference was known as KringleCon, and all of the conference videos were hosted on <a href="https://www.youtube.com/channel/UCNiR-C_VXv_TCFgww5Vczag/featured">Youtube</a>. In addition to the in-game hints and resources, the conference presenters delivered talks on relevant topics, concepts, and tools, with several of the videos containing hints for solving the objectives in the challenge. Beyond the fun of the hacking challenges, KringleCon provided an extremely effective way to teach new tools and security concepts. Here are a few of the concepts I encountered.<br />
<h3>
Concept 1: Keeping Credentials Out of System Commands</h3>
As I was helping to secure the North Pole’s networks, I approached one of Santa’s elves, who needed to access a networked SMB share to upload a job report. Simple enough, except the elf forgot his password and needed assistance in recovering his forgotten credentials. Thankfully, the elf in question provided <a href="https://blog.rackspace.com/passwords-on-the-command-line-visible-to-ps">some resources</a> to assist in triaging the network. With a little elbow grease and some nifty output formatting, I was able to help the elf retrieve his credentials.<br />
What I learned is that, when running instructions via the command line, it is important not to enter plaintext credentials on the command line, because those commands are visible to anyone who can list the running processes. This is a vulnerable practice: malicious actors with access to the system may retrieve credentials for restricted resources simply by viewing the process list.<br />
<br />
The most significant lesson I took from this exercise is not to enter credentials on the command line. However, this isn’t always possible. If command line system credentials are required in a corporate environment, the best solution is to find a way to use the tools differently so the credentials never appear on the command line. Alternatively, one workaround is to store the credentials in a file and have the tool read them from that file. Of course, this makes it imperative to apply appropriate protections, such as strict read/write permissions on the file or an encrypted storage method.<br />
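As a rough illustration of the file-based workaround, here is a Python sketch. The `--password-file` tool interface is hypothetical, though many real tools offer an equivalent option (for example, `smbclient -A` or `curl --netrc-file`):

```python
import os
import stat
import tempfile

def write_credential_file(password):
    """Write the secret to a file only the current user can read."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600: owner read/write only
    with open(path, "w") as f:
        f.write(password)
    return path

path = write_credential_file("S3cret!")
# A process listing would show only the file path, never the secret itself.
# "some_tool --password-file" is a made-up interface for illustration.
argv = ["some_tool", "--password-file", path]
print("S3cret!" not in " ".join(argv))  # True
os.remove(path)
```

The key point is that `ps` (or Task Manager) exposes `argv`, so the secret must never be part of it.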
<br />
Next, let’s talk about a web application tool that makes it easy to find lost, forgotten, or hidden artifacts: trufflehog.<br />
<h3>
Tool 1: Using Trufflehog to Dig Through Git Repositories</h3>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-bklU90V4zBU/XGwcvxpRjBI/AAAAAAAAAJs/QvUmbt1_ZpMCuXFJz74c3gv0ppylAeQRACLcBGAs/s1600/Badge%2Bitems.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="395" data-original-width="512" height="246" src="https://1.bp.blogspot.com/-bklU90V4zBU/XGwcvxpRjBI/AAAAAAAAAJs/QvUmbt1_ZpMCuXFJz74c3gv0ppylAeQRACLcBGAs/s320/Badge%2Bitems.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">KringleCon badge view showing the hints and resources pane</td></tr>
</tbody></table>
GitHub is a phenomenal resource for team collaboration on, and development of, software applications, but, like any repository, artifacts that should not be made public for security reasons may inadvertently be uploaded. Examples of sensitive information to be wary of pushing to remote repositories include SSH keys, RSA keys, artifact pathways, clear-text credentials, and more. While this data can be removed from the latest commit, the artifacts still remain in the repository's <b>.git</b> history.<br />
<br />
Cue <a href="https://github.com/dxa4481/truffleHog">trufflehog</a>. According to the tool’s developer, trufflehog “searches through git repositories for secrets, digging deep into commit history and branches. This is effective at finding secrets accidentally committed.” In the context of KringleCon, one of the grand challenges involved retrieving the password for an encrypted zip file. Using trufflehog, the credentials were quickly and easily retrieved. <a href="https://youtu.be/myKrWVaq3Cw">A great video</a> from the KringleCon conference demonstrates the value of trufflehog.<br />
<br />
Based on my experience with trufflehog during KringleCon, I plan to include trufflehog in my open source reconnaissance should I find a development team’s GitHub repository during application assessments. Furthermore, tools similar to trufflehog may be useful for development teams to run on their own git repositories to determine if any sensitive information is stashed anywhere in the repository history prior to executing a push or pull request.<br />
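To get a feel for how trufflehog's entropy heuristic works, here is a simplified Python sketch. The threshold and minimum token length below are illustrative values I picked for the example, not trufflehog's actual settings, and the real tool also walks every commit and branch of the git history:

```python
import math

def shannon_entropy(s):
    """Bits of entropy per character; high values suggest random-looking data."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def looks_like_secret(token, threshold=4.0):
    # Simplified version of trufflehog's entropy check: long strings with
    # near-random character distributions are flagged as likely secrets.
    return len(token) >= 20 and shannon_entropy(token) > threshold

print(looks_like_secret("update readme and fix typo"))        # False
print(looks_like_secret("AKxT9vQ2mZr8LpWc3JdYhF6uBn1SgE4k"))  # True
```

English prose sits well below 4 bits of entropy per character, while API keys and random tokens sit close to the maximum for their character set, which is why this simple heuristic works as well as it does.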
<br />
The next stop on the KringleCon tour is the concept of dynamic data exchange and how it may be exploited via CSV files.<br />
<h3>
Concept 2: Exploiting Dynamic Data Exchange via CSV Injection</h3>
Being in infosec, we often hear about the ever-ominous Advanced Persistent Threat (APT), usually in reference to nation-state actors. Well, the next concept I learned about from KringleCon is a vulnerability that exists in common office applications, one that APT28 (Sofacy, aka Fancy Bear, aka Strontium, aka the Russian GRU) has exploited in the past: dynamic data exchange.<br />
<br />
In true web application fashion, Santa set up a web form that accepts resumes uploaded in CSV format. Unfortunately, Santa was not aware of dynamic data exchange (DDE), which allows for <a href="https://www.owasp.org/index.php/CSV_Injection">CSV injection</a>. Also known as formula injection, this attack abuses a feature inherent in Microsoft Excel and LibreOffice Calc: a cell can define a formula, which the application then computes and renders. <a href="https://youtu.be/Z3qpcKVv2Bg">This video from KringleCon</a> does a very good job of explaining and demonstrating the vulnerability. Malicious users can harness these formulas to exploit vulnerabilities within the spreadsheet software, trick the user into ignoring security warnings, or read and exfiltrate other data.<br />
<br />
Getting back to the North Pole, thanks to the in-game objectives, I knew Santa’s CSV resume upload feature was exploitable. I uploaded a typical Microsoft Excel document replete with fake job information and an invisible embedded formula that copied a local file to the public internet directory. After playing with the formulas on my local system, I was able to upload the malicious CSV file and pull down the local file from Santa’s HR network.<br />
<br />
Ultimately, the CSV DDE exploit is a well-known injection vector and works mainly as a payload for social engineering attacks. Critically, this exploit depends on the victim ignoring system warnings and clicking through to open the attachment. At Security PS, we test the robustness of file upload features - but this attack vector goes further, taking advantage of how office applications interoperate at their core.<br />
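To make the payload and its common defense concrete, here is a hedged Python sketch. The command inside the formula is illustrative, and the defense shown (prefixing risky leading characters with an apostrophe so the cell stays text) is one commonly recommended mitigation:

```python
import csv
import io

# A resume row with a DDE-style payload in one cell: '=' tells the
# spreadsheet to evaluate the cell, and cmd|... abuses DDE to run a command.
payload = "=cmd|'/C copy C:\\secret.txt C:\\inetpub\\wwwroot'!A1"
row = ["Jolly Elf", "Toy Builder", payload]

def neutralize(cell):
    """Prefix risky leading characters so the cell is treated as plain text."""
    if cell and cell[0] in ("=", "+", "-", "@", "\t", "\r"):
        return "'" + cell
    return cell

buf = io.StringIO()
csv.writer(buf).writerow([neutralize(c) for c in row])
safe_csv = buf.getvalue()
print(safe_csv.startswith("Jolly Elf"))  # True
print("'=cmd" in safe_csv)               # True: formula is inert text now
```

Applications that accept CSV uploads and later re-serve them to users can apply this kind of neutralization on output, in addition to warning users about opening untrusted spreadsheets.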
<br />
Finally, KringleCon provided me the opportunity to learn a new tool which helps to visualize resource authorization controls in Active Directory networks: Bloodhound.<br />
<h3>
Tool 2: Using Bloodhound to Graph Active Directory Trust Relationships</h3>
As far as business resource authorization controls go, Active Directory (AD) has become a mainstay of corporate networks, and one attackers seek to abuse. However, the trust relationship structure in AD is difficult to visualize and, as such, unintended trust relationships can form that leave attack vectors open. This is where Bloodhound comes into play.<br />
<br />
More appropriate for network penetration tests rather than web application assessments, <a href="https://github.com/BloodHoundAD/BloodHound">Bloodhound is a tool</a> that needs three core pieces of information from an AD environment: who is logged onto which computers, what users and groups belong to the different AD groups, and who has admin rights on which computers. The Bloodhound tool then takes all of this information and presents the data using graph visualization tools. KringleCon linked to a great video that shows the tool in action <a href="https://youtu.be/gOpsLiJFI1o">here</a>.<br />
<br />
The true value of Bloodhound is its ability to map the AD relationships and, from these, show the security assessor likely attack paths for privilege escalation. In a similar vein, defenders can use Bloodhound to identify the same attack paths in order to neutralize possible exploit vectors. These relationships would otherwise be too obscure to notice, or too time-consuming to map by hand.<br />
<br />
As it relates to KringleCon, I used Bloodhound to examine a particular AD environment. The objective was to identify a reliable path from a Kerberoastable user to the Domain Admins group. Using Bloodhound’s built-in query language along with its standard queries, the attack path and the specific user to target were made readily apparent.<br />
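At its core, Bloodhound's path-finding boils down to graph search over collected AD relationships. Here is a toy Python sketch of that idea; the node names and edges are entirely made up, and a breadth-first search stands in for Bloodhound's shortest-path queries:

```python
from collections import deque

# Toy AD relationship graph: an edge means "has rights over" (a session,
# a group membership, or admin rights). All names are hypothetical.
edges = {
    "kerberoastable_svc": ["SERVER01"],
    "SERVER01": ["helpdesk_group"],
    "helpdesk_group": ["WORKSTATION07", "DOMAIN_ADMINS"],
    "WORKSTATION07": [],
    "DOMAIN_ADMINS": [],
}

def shortest_attack_path(graph, start, goal):
    """Breadth-first search: the same idea behind shortest-path AD queries."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the trust chain is broken

print(shortest_attack_path(edges, "kerberoastable_svc", "DOMAIN_ADMINS"))
# ['kerberoastable_svc', 'SERVER01', 'helpdesk_group', 'DOMAIN_ADMINS']
```

The real tool stores these relationships in a graph database and exposes far richer queries, but the escalation chains it surfaces are conceptually paths like the one above.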
<h3>
Sum Up</h3>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-1oqAcpVi0Zo/XGwcv94BmeI/AAAAAAAAAJo/b6u7yymhtkAPf2l--gHBtjA5iUdAb8ALwCLcBGAs/s1600/KCon%2BTalks.png" imageanchor="1" style="margin-left: auto; margin-right: auto; text-align: center;"><img border="0" data-original-height="426" data-original-width="512" height="265" src="https://2.bp.blogspot.com/-1oqAcpVi0Zo/XGwcv94BmeI/AAAAAAAAAJo/b6u7yymhtkAPf2l--gHBtjA5iUdAb8ALwCLcBGAs/s320/KCon%2BTalks.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">KringleCon Youtube Virtual Conference Talks</td></tr>
</tbody></table>
<br />
Competitions and virtual conferences such as KringleCon offer a wealth of practice and knowledge that would otherwise take significant time and experience to amass. Not all capture-the-flag competitions are created equal, and, indeed, some end up being unrealistic puzzle boxes with no real-world applicability. Fortunately, KringleCon struck a balance between fun challenges and an educational virtual conference. As a new infosec community member, I learned very valuable lessons from KringleCon that I intend to apply in my role as a web application security engineer. It also gave me a vehicle to plug in to the infosec community and was a very fun experience as well. I highly recommend newbies and seasoned security practitioners alike take time to experience CTFs like KringleCon.<br />
<div>
<br /></div>
Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-89750988728906096902018-06-01T15:27:00.000-05:002020-04-30T10:40:14.735-05:00Protecting Thick-Client Applications From Attack (Or How To Not Have To)<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
In the previous post, I discussed security testing techniques Security PS used to assess a complex thick-client application. After the assessment was complete, our client asked:
<br />
<ul>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgilqPKEDo_yxkdH-byLvGXbEN6n9T5sWjIpRWJd8OcSjxHoLeK3N0aI1gHpoc0Usu-RorvROpJgtDKWgnYdL-JhKRQ-qDu_N2c08aeM9cUgG-JvL9hMH3w6SE95YgdEvyQP4eSA/s1600/AppSecLessons.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="335" data-original-width="600" height="111" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgilqPKEDo_yxkdH-byLvGXbEN6n9T5sWjIpRWJd8OcSjxHoLeK3N0aI1gHpoc0Usu-RorvROpJgtDKWgnYdL-JhKRQ-qDu_N2c08aeM9cUgG-JvL9hMH3w6SE95YgdEvyQP4eSA/s200/AppSecLessons.png" width="200" /></a>
<li>Is .NET less secure than other languages since these techniques are possible?</li>
<li>How do I stop attackers from manipulating my applications?</li>
</ul>
This post answers those questions and discusses best practices around securing client-server architectures.<br />
<br />
Security PS tested the thick-client application with a variety of techniques including:<br />
<ul>
<li>Reusing the application's DLLs to communicate with the server and decrypt data</li>
<li>Using a debugger to interactively modify variables and program flow</li>
<li>Disassembling, modifying, and reassembling the thick-client application to integrate it with a custom testing framework</li>
</ul>
<b>Considering these methods, how does .NET compare to other platforms? Is .NET less secure than another choice? </b><br />
.NET is not unique. In other assessments, Security PS has used the same techniques to assess Android, Java, and native (C/C++ executable) applications. Based on my quick research, some or all of the techniques work for iOS applications as well. The only differences between these platforms are the level of complexity and the toolset required. .NET is not any more or less secure than any other platform in this regard.<br />
<br />
<b>How do you stop attackers from reusing DLLs, interactively debugging applications, or modifying applications?</b><br />
You shouldn't need to in most cases. For a client-server architecture, the thick-client resides on a user's (or attacker's) computer. That environment cannot and should not be trusted to enforce security controls or protect sensitive data. Client-side security controls can be defeated or bypassed completely, and any data sent to the client can be obtained by an attacker (even if it is encrypted). <br />
<br />
Instead, organizations should spend their time architecting and designing applications that enforce security controls on the server-side. If all the security controls are implemented on the server-side, then it does not matter whether the attacker manipulates the thick-client (or writes his or her own client application). This security best practice applies to web applications, web services, and client-server applications.<br />
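As a minimal sketch of that principle, consider a server-side handler that re-checks authorization on every request instead of trusting anything the client asserts. The role names and actions below are hypothetical:

```python
# Server-side authorization check: the role comes from server-side session
# state, never from the request body, so a modified thick-client (or a
# custom-written client) cannot forge it.
PERMISSIONS = {
    "clerk": {"view_report"},
    "manager": {"view_report", "approve_payment"},
}

def handle_request(session_role, action):
    """Return an HTTP-style (status, message) pair after checking permissions."""
    if action not in PERMISSIONS.get(session_role, set()):
        return 403, "Forbidden"
    return 200, f"{action} performed"

print(handle_request("clerk", "approve_payment"))    # (403, 'Forbidden')
print(handle_request("manager", "approve_payment"))  # (200, 'approve_payment performed')
```

With this structure, it does not matter whether the client hides the "approve payment" button or an attacker re-enables it: the server makes the final decision.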
<br />
If an organization still wishes to protect client-side code from analysis and manipulation, what are the options? If you search on the Internet, you may find these choices:<br />
<ul>
<li>Strong name verification (for .NET)</li>
<li>Obfuscation</li>
<li>Native compilation (for .NET)</li>
<li>Encryption</li>
<li>and more...</li>
</ul>
Each option can be used to slow down an attacker and will make analysis or modification more difficult. But, none of them prevent a skilled and determined attacker from eventually reaching their goal. Let's briefly dig in to each one.<br />
<br />
<a href="https://docs.microsoft.com/en-us/dotnet/framework/app-domains/strong-named-assemblies">Strong name</a> verification enables an assembly to identify and reference the correct version of a DLL. Some Internet sources recommend using strong name verification to prevent attackers from modifying DLLs, but, according to Microsoft, it should not be used as a security control. Security PS's experience agrees with that assertion: it is trivial to bypass strong name verification, especially with local administrator privileges on a computer.<br />
<br />
A non-technical explanation of obfuscation is that a tool jumbles up the variable names, program structure, and/or program flow before the application is distributed to users. Then, when an attacker uses an interactive debugger or reflection to view the code, he or she has difficulty following and understanding the program's logic. Many free and commercial tools provide this protection, and it does demotivate casual attackers from performing analysis. However, there are also tools to help deobfuscate applications or track program flow. <br />
<br />
Obfuscation tools can also make it difficult to use reverse engineering tools like ILSpy, dnSpy, and ILDasm/ILAsm. The tools can corrupt or mangle portions of the application to crash an attacker's toolsets. Additionally, encryption can be applied to strings, resources, or the code within a method. This makes it difficult to use reflection to see the original code and more complex to modify the IL code. However, eventually the code must be decrypted so it can be run, making it available to attackers.<br />
<br />
Security PS's research into two obfuscation tools (ConfuserEx and Dotfuscator Community Edition) showed that most controls can be bypassed by a skilled attacker or worked around using WinDbg and managed code extensions. Additionally, there's a significant performance impact to using some of the obfuscation controls.<br />
<br />
Native compilation (i.e. <a href="https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen-exe-native-image-generator">Ngen</a>) compiles a .NET DLL into processor-specific machine code. Security PS found that natively compiled .NET applications still allow an attacker to use interactive debuggers to introspect and control program flow. Additionally, there's no mention of using it as a security feature in Microsoft's documentation. Therefore, this technique does not provide a significant amount of protection.<br />
<br />
There are even more techniques than I've named here. But, the important points to remember are:<br />
<ul>
<li>Implement and enforce security controls on the server-side</li>
<li>Only send information to the client-side that you want the user (or attacker) to see (even encrypted)</li>
<li>Don't rely upon the thick-client, browser, desktop application, etc., to provide any reliable level of security</li>
<li>Only apply protection mechanisms to the executable if you absolutely have to and/or if it is nearly free (money, time, operationally, etc.)</li>
</ul>
<br />Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-44414855340594159882018-03-21T12:46:00.000-05:002020-04-30T10:39:54.574-05:00Lessons From Attacking Complex Thick-Client ApplicationsSecurity PS performs assessments on a wide variety of software architectures and platforms, some of which cannot be tested effectively using the more standard testing tools and methods. Recently, our team performed an assessment on a more complex application architecture. In this case, a .NET thick-client communicated with a variety of server-side components using either signed SOAP messages or with custom TCP messages. These factors meant our consultants couldn't use a proxy tool to directly manipulate traffic for security testing purposes. This post discusses some of the techniques our application security team used to overcome those challenges and successfully complete the assessment.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQFfnRu-A298BlYeUd77Wo4r5_B_v0V9APT_h7N5Fak90PM6xgOP0IPyHnyuG0tNUn0xaK7xIw4I9aBY_vCIHcc6nXpI7CrLW1ovexDQopay65B3dicwuQ-9dZuvZ4Q495yfJ8nA/s1600/AppSecLessons.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="335" data-original-width="600" height="111" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQFfnRu-A298BlYeUd77Wo4r5_B_v0V9APT_h7N5Fak90PM6xgOP0IPyHnyuG0tNUn0xaK7xIw4I9aBY_vCIHcc6nXpI7CrLW1ovexDQopay65B3dicwuQ-9dZuvZ4Q495yfJ8nA/s200/AppSecLessons.png" width="200" /></a></div>
<br />
Security PS used three techniques to manipulate both the signed SOAP requests and the custom TCP messages:<br />
<ul>
<li>Writing custom code and reusing thick-client libraries</li>
<li>Attaching a debugger to the running application and manipulating variables</li>
<li>Disassembling, modifying, and reassembling the application</li>
</ul>
Code is often written in a modular way that makes it easy to reuse existing libraries. In this assessment, Security PS wrote GUI applications that reused the thick-client's libraries to decrypt data or send data to the server. This technique involved creating a new Visual Studio Project, adding the DLLs as a reference, and then writing code that calls functions within those thick-client libraries.<br />
<br />
Next, Security PS needed to modify a field within a signed SOAP request to test authorization controls. Our team used a debugger and breakpoints to perform this modification. For .NET thick-clients, this attack is possible after disassembling and reassembling the application with debugging enabled.<br />
<br />
Finally, we needed a way to quickly and easily manipulate custom TCP messages to identify vulnerabilities. Use of the debugger and breakpoints was too slow. Use of a custom written testing tool meant having to understand and duplicate some complex interactions that the thick-client managed. So, Security PS chose to directly modify the thick-client to allow interactive modification of TCP messages by consultants. For that to be possible, we needed to disassemble the thick-client, modify the intermediate language code, and then reassemble it.<br />
<br />
Using these testing techniques, Security PS identified a number of high impact vulnerabilities. After discussing the vulnerabilities with the client, two of the questions they asked were:<br />
<ul>
<li>Is .NET less secure than other languages since these techniques are possible?</li>
<li>How do I stop attackers from manipulating my applications?</li>
</ul>
The next post will consider these questions more, but the primary message we communicated to our client focused on a critical best practice for secure software design: all security controls must be implemented on a trusted component in the application architecture. In this case, security controls must be implemented on the server-side rather than on the client-side. The client operates on the attacker's computer where everything can be analyzed and modified regardless of the security controls used. The architecture of the application must assume the client environment cannot be trusted. While additional controls can be applied to increase the difficulty an attacker would have in attempting to manipulate client-side security controls, it is important to recognize that the root of this security weakness is fundamentally a design flaw that would need to be addressed to fully mitigate the risks.<br />
<br />
Stay tuned for a follow-up on the questions brought up above.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-29443915552073803832017-05-04T13:34:00.000-05:002020-04-29T17:14:35.606-05:00OAuth Is Not Meant For Authentication!As we work with software development teams to help them apply security principles and practices to their applications, we commonly identify misunderstandings or gaps in the team's understanding regarding security features, APIs, or frameworks they are using. It's important to identify and correct these misunderstandings as early on as possible. When such security elements are misused, systemic security flaws are produced in the application that are difficult to resolve without significant reworking of the code or architecture.<br />
<br />
<img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhT0VivQ9Ac1xrrxFYKBSMLzgHfntZdRTuLrFfcyNPy03n_IzXzI7oeQXKvTGMLISatUms-GEdDaLaoVU9ZpDwzUz1ftbzmCe2Sv-oqqv0ykeGVaqBDPGjSUAMeLw6cIIgi0oarGg/s200/3343062926_77bc534b31_m.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" width="200" /><br />
One such example is the use of OAuth. As useful as OAuth is, it must be used for its intended purpose. If we try to make it do things it wasn't designed or intended to do, we get into trouble. Let's clarify the fundamental purpose and use of OAuth and in doing so, clear up a common misunderstanding with it.<br />
<h2>
OAuth is not meant for authentication. OAuth is for authorization.</h2>
Here are a few points demonstrating why:<br />
<br />
OAuth has four Grant Types:<br />
<ul>
<li>Authorization Code</li>
<li>Implicit</li>
<li>Resource Owner Password Credential</li>
<li>Client Credentials</li>
</ul>
For "<b>Authorization Code</b>" and "<b>Implicit</b>" grants the specification doesn't govern the submission of a username or password. It's something totally outside of the scope of OAuth. This is a great warning flag that OAuth is not intended to be used directly for authentication.<br />
<br />
"<b>Client Credentials</b>" does have a username and password. It is sent as a Basic Authorization Header (Base64 encoded "username:password"). BUT, it's not a grant used by users. Here's what the specification says:<br />
<blockquote class="tr_bq">
"Client credentials are used as an authorization grant typically when the client is acting on its own behalf (the client is also the resource owner) or is requesting access to protected resources based on an authorization previously arranged with the authorization server." <i>- <a href="https://tools.ietf.org/html/rfc6749#section-1.3.4">https://tools.ietf.org/html/rfc6749#section-1.3.4</a></i></blockquote>
An example client could be a third-party service to which a user has granted an "offline" token. That service may then make requests without the user interacting with it.<br />
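As a minimal sketch of those mechanics (the client ID, secret, and request body below are hypothetical), the Basic Authorization header for a Client Credentials request is just the Base64 encoding of the two values joined by a colon:

```python
import base64

def basic_auth_header(client_id, client_secret):
    """Base64-encode "client_id:client_secret" for the HTTP Basic
    Authorization header used by the Client Credentials grant."""
    raw = f"{client_id}:{client_secret}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")

# Hypothetical client credentials for illustration; these identify the
# client application itself, not any end user.
headers = {
    "Authorization": basic_auth_header("report-service", "s3cret"),
    "Content-Type": "application/x-www-form-urlencoded",
}
body = "grant_type=client_credentials"
```

Nothing here identifies a person, which is the point: this grant authenticates software acting on its own behalf.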
<br />
Now for "<b>Resource Owner Password Credentials</b>." Yes, you can use it to login with a username and password, but you probably shouldn't. Not because it's insecure, but because it doesn't scale well and isn't flexible. The specification says:<br />
<blockquote class="tr_bq">
"The resource owner password credentials (i.e., username and password) can be used directly as an authorization grant to obtain an access token. The credentials should only be used when there is a high degree of trust between the resource owner and the client (e.g., the client is part of the device operating system or a highly privileged application), and when other authorization grant types are not available (such as an authorization code)."</blockquote>
So why shouldn't you use the Resource Owner Password Credentials grant for authentication? Well, let's start by looking at the login request and response:<br />
<blockquote class="tr_bq">
POST /token HTTP/1.1<br />
Host: server.example.com<br />
Content-Type: application/x-www-form-urlencoded<br />
<br />
grant_type=password&username=johndoe&password=A3ddj3w</blockquote>
<blockquote class="tr_bq">
HTTP/1.1 200 OK<br />
Content-Type: application/json;charset=UTF-8<br />
Cache-Control: no-store<br />
Pragma: no-cache </blockquote>
<blockquote class="tr_bq">
{<br />
"access_token":"2YotnFZFEjr1zCsicMWpAA","token_type":"example",<br />
"expires_in":3600,<br />
"refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA",<br />
"example_parameter":"example_value"<br />
}</blockquote>
You submit a username and password and you get back an access token. The access token can then be used to call an API. Sounds OK, right? Let's add some complexity. First, OAuth is often used with a stateless REST service: there's no session on the server side, just the access token sent by the client, which is often a Base64-encoded set of claims with a signature (like a JWT). With that in mind, what if you need to do multi-factor authentication? What about security questions? What if there are several different ways a user can log in? How do you integrate all those options with the OAuth Resource Owner Password Credentials grant?<br />
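To see why a stateless token carries everything the API needs, consider that a JWT is just three Base64url-encoded segments; the claims are readable by anyone holding the token. This sketch builds a throwaway token (the claims and placeholder signature are invented for illustration) and decodes its payload; real code must also verify the signature:

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the claims segment of a JWT without verifying it.
    For inspection only; production code must validate the signature."""
    payload_seg = token.split(".")[1]
    padded = payload_seg + "=" * (-len(payload_seg) % 4)  # restore Base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a throwaway token to demonstrate (the signature is a placeholder)
claims = {"sub": "johndoe", "scope": "api.read", "exp": 1700000000}
seg = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = "eyJhbGciOiJIUzI1NiJ9." + seg + ".fake-signature"

print(decode_jwt_payload(token)["sub"])  # prints: johndoe
```

Since the server keeps no session, every decision about what this caller may do rests on those claims alone, which is exactly why bolting extra authentication steps on afterward is so fragile.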
<br />
One common approach is to make it an API call and have the mobile or web application force the user to complete it. But if the application is stateless and you already have an access token, why not just call any other API method directly with that token and ignore the secondary authentication step? It's trivial to bypass in a client-side application (mobile, thick client, web page). That means attackers can bypass the multi-factor system that helps meet compliance and regulatory requirements.<br />
<br />
OK, what if it's made part of the login process instead? Then it's not really OAuth any more: you have to add fields, add steps, and/or go through more of the process before issuing an access token. Are you going to write your own custom OAuth client library and server to do it? You might as well write a normal forms-based authentication process instead.<br />
<h2>
<a href="https://oauth.net/articles/authentication/">OAuth IS NOT FOR AUTHENTICATION!</a></h2>
How do others do it, then? They use the OAuth Authorization Code or Implicit grants and a separate login server (or identity provider) to handle all the authentication and pass back a user with an access token. In fact, that's exactly what the Authorization Code and Implicit grants are for. That identity server can offer as many options and schemes for authenticating users as it wants. The authentication process is centralized and isolated from the applications that rely upon it. When it finishes authenticating the user, it passes the user back to the application fully authenticated. With this in mind, you can see that this is exactly what the OAuth specification authors had in mind when you read the Introduction section here: <a href="https://tools.ietf.org/html/rfc6749#section-1">https://tools.ietf.org/html/rfc6749#section-1</a><br />
<br />
This issue comes up in assessments more and more often lately. I keep seeing software development teams download a copy of Thinktecture's IdentityServer (a great open-source product, by the way) and implement it just for their application using the Resource Owner Password Credentials grant. Then they later bolt on security questions, fingerprint scanners, multi-factor authentication, and "remember me" features. As a result, their stateless application has easily bypassable authentication controls that are very time-consuming to fix (or they have to compromise on keeping the API stateless).<br />
<br />
If you are considering implementing OAuth, or you already have, reach out to Security PS for help with the design and architecture. You could also watch some of these videos to help avoid common mistakes:<br />
<ul>
<li>Unifying Authentication & Delegated API Access for Mobile, Web and the Desktop with OpenID Connect and OAuth2 by Dominick Baier<br /><a href="https://vimeo.com/113604459">https://vimeo.com/113604459</a></li>
<li>Dominick Baier - Finally! - True Cross-Platform Federation & Single Sign-On with OpenID Connect<br /><a href="https://vimeo.com/97344501">https://vimeo.com/97344501</a></li>
<li>Authentication & secure API access for native & mobile Applications - Dominick Baier<br /><a href="https://vimeo.com/171942749">https://vimeo.com/171942749</a></li>
<li>Dominick Baier: OAuth2 – The good, the bad and the ugly<br /><a href="https://vimeo.com/68331687">https://vimeo.com/68331687</a></li>
<li>Dominick Baier - Web API Authorization & Access Control – done right!<br /><a href="https://vimeo.com/97337305">https://vimeo.com/97337305</a></li>
<li>Building JavaScript and mobile/native Clients for Token-based Architectures - Brock Allen and Dominick Baier<br /><a href="https://vimeo.com/205451987">https://vimeo.com/205451987</a></li>
<li>Authentication and authorization in modern JavaScript web applications – how hard can it be? - Brock Allen<br /><a href="https://vimeo.com/131636653">https://vimeo.com/131636653</a></li>
<li>Modern applications need modern security - Dominick Baier<br /><a href="https://vimeo.com/163899987">https://vimeo.com/163899987</a></li>
<li>Implementing OpenID Connect & OAuth 2.0 with IdentityServer - Dominick Baier<br /><a href="https://vimeo.com/163920479">https://vimeo.com/163920479</a></li>
</ul>
Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-30723343188181200022017-01-23T17:26:00.000-06:002020-04-29T17:14:57.226-05:00Improving User Acceptance of Account Lockout Responses for Login ProcessesThe purpose of a login process is to identify a particular individual and validate their identity before granting them access to an application. It's critical that the process only allows the owner of an account to log in, and it must prevent an attacker from logging in as another user. <a href="http://www.securityps.com/application_security.html" imageanchor="1" style="clear: right; float: right; margin: 1em;"><img border="0" height="127" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwEAvF0kQWcnXSWuNpN-r1R5KketIIuQx_sPhyphenhyphenCJSnODKocQF_LMKRaQezx04O0Nag9-zwHkVx0nkh4CxhMelnsJSXnTuZhxh1jdhDz3TMbK4OBDnxvc71XCytONrnVFzX5j7bnA/" width="253" /></a>This post discusses one aspect of protecting authentication processes: using an account lockout response. It specifically focuses on decreasing the frustration users experience as a result of that control.<br />
<br />
An account lockout response is a security control developers apply to all of the application's authentication processes to limit the number of times an individual can enter the wrong credentials consecutively. For example, if an attacker incorrectly guesses another user's password five times in a row, the application will disable the user's account and notify the user by email. Organizations must choose an appropriate lockout threshold and choose how accounts are unlocked.<br />
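At its core, the control is just a consecutive-failure counter per account. A minimal sketch of the logic described above (the threshold, plaintext password comparison, and notification hook are simplifications for illustration):

```python
LOCKOUT_THRESHOLD = 5  # consecutive failures before the account is disabled

class Account:
    def __init__(self, username, password):
        self.username = username
        self.password = password  # a real system stores a salted hash, not this
        self.failed_attempts = 0
        self.locked = False

def attempt_login(account, password):
    """Return True on success; lock the account after too many
    consecutive failures. A real system would also email the owner."""
    if account.locked:
        return False
    if password == account.password:
        account.failed_attempts = 0  # a success resets the counter
        return True
    account.failed_attempts += 1
    if account.failed_attempts >= LOCKOUT_THRESHOLD:
        account.locked = True  # placeholder for: notify the owner by email
    return False
```

Note that only consecutive failures count: a single successful login resets the counter, so legitimate users who occasionally mistype are not penalized.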
<br />
When should an organization use an account lockout response? That's difficult to answer unless a company is compelled to implement the control due to a regulation or compliance requirement. The development team, security team, and marketing or user experience groups really need to discuss the pros and cons of such a process. On one hand, the application will have significantly more resistance to password guessing attacks, protecting users' accounts from being compromised. On the other hand, it may frustrate users, raise customer support costs, or even drive customers away from using the application. If an account lockout response is implemented (which Security PS generally encourages), it must be carefully designed to increase user acceptance.<br />
<br />
One frustration users experience with account lockout responses is that they may not know their password (or sometimes their username) and lock out their account accidentally. Worse, the user doesn't know the account is locked, because the application cannot display a lockout notification on the login page: doing so would confirm to an attacker that a particular username is valid, creating a username harvesting vulnerability. This is one of the key challenges to solve in order to increase user acceptance of the account lockout control.<br />
<br />
To address the notification challenge, Security PS recommends several user experience improvements that don't expose the application to additional risk. First, the application can email the user when a failed login attempt occurs. Additionally, if the account is locked out, the application can immediately email the user instructions for unlocking the account. These notifications do not cause username harvesting vulnerabilities, because only the account owner will receive those email notifications, not the attacker.<br />
<br />
Email notifications are helpful, but what if the user doesn't check their email while using the application? They can still get frustrated easily. So, developers should consider sending SMS notifications when a user's account is locked out, or potentially before the lockout occurs. The message can be short and direct, and can point the user to their emailed instructions for unlocking their account or resetting their password. The hope is that the user receives this notification before getting frustrated that they can't log in.<br />
<br />
Finally, messaging in the application itself can remind users that a lockout response is present and that they should check their email if they believe their account is locked out. This messaging can be displayed all the time or after a specific number of failed attempts per session. The key here is that the threshold is a number of failed attempts per session, not per username or account; otherwise, username harvesting vulnerabilities are introduced.<br />
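Counting failures per session rather than per username is what keeps the reminder from leaking whether a given account exists. A sketch of that distinction (the session store, threshold, and message text are hypothetical):

```python
SESSION_REMINDER_THRESHOLD = 2  # failed attempts in THIS session before reminding

def on_failed_login(session):
    """Increment the failure count stored in the visitor's own session.
    Because the counter follows the session, not the username, an attacker
    probing many usernames learns nothing about which accounts exist."""
    session["failed_logins"] = session.get("failed_logins", 0) + 1
    if session["failed_logins"] >= SESSION_REMINDER_THRESHOLD:
        return ("Accounts lock after repeated failures. If you think your "
                "account is locked, check your email for unlock instructions.")
    return None

session = {}                        # stand-in for a real server-side session store
first = on_failed_login(session)    # below the threshold: no message yet
second = on_failed_login(session)   # threshold reached: generic reminder shown
```

The message is identical no matter which usernames were tried, so it reveals nothing an attacker can use.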
<br />
Authentication processes, especially complex, multi-step, multi-credential authentication processes, are difficult to get right. It's easy to introduce vulnerabilities in the user creation/registration step, the forgot username/password step, and the login process itself. If you are designing an authentication process, whether it uses OAuth2, OpenID Connect, or custom forms-based authentication, contact Security PS to have a partner come alongside you and help ensure the design and implementation are secure.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-7781005623118373322016-09-29T17:28:00.000-05:002020-04-29T17:15:03.814-05:005 Things to Avoid When Implementing the CSFIn my <a href="http://blog.securityps.com/2016/08/why-use-nist-csf.html" target="_new">last post</a>, I gave a quick recap of what the Cybersecurity Framework is, how it differs from other standards, and the importance it carries with both regulated and non-regulated organizations. <a href="http://www.securityps.com/enterprise_security.html" imageanchor="1" style="clear: right; float: right; margin: 1em;"><img border="0" height="127" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7kIFGdIjtTkmeO8uxlNQSXE0lL4uQVxKVBhCnNryHDu93fHAlz_ZHSqquEUCW5IK7TXVduVuYxqYa2b2nTzMf0wyDgMkai1oJBaxDyjLdC_e4OzYsFL9-sIdw6moU45bbaVngZg/s1600/SPS-NIST_CSF.png" width="253" /></a> This week, I wanted to provide some quick lessons learned by many organizations, not only with the CSF itself, but with many of the standards used within the categories of the framework. Listed below are 5 quick things your organization should consider when implementing any security framework or standard.<br />
<br />
<ol>
<li>Don’t assume the CSF is only for “Critical Infrastructure” or Federally regulated organizations: Although the Executive Order is titled as such, it is meant for all organizations, in both the public and private sectors. The same can be said for the NIST 800-53 controls; they're not just for Federal agencies. </li>
<br />
<li>Don’t try to do it all yourself: Implementing the CSF requires the input and collaboration of almost every vertical within the organization. It cannot be done by one person alone, and it often requires outside help from subject matter experts when implementing various requirements.</li>
<br />
<li>Don’t adopt controls just to adopt controls: This is one of the most common pitfalls. The informative references in the CSF are not a list of mandated controls that must be adopted for each category; they should be treated as examples or suggestions. Each category must be carefully examined, and the organization must ultimately decide which controls fit and which do not. When gaps exist, a risk assessment should be conducted to determine whether the control is even necessary. All successful information security programs are built on risk management, not controls.</li>
<br />
<li>Don’t assume there is only one way to implement it: Every organization has its own business goals, risk levels, and security requirements. One size does not fit all, and neither does the implementation of the CSF. The NIST web site, along with many others, offers unique approaches to implementing the framework. Security PS recommends that each organization carefully weigh the many options and decide which method, or combination of methods, is right for its environment.</li>
<br />
<li>Don’t ever consider it “Finished”: Risk management, and information security in general, follows a lifecycle or iterative approach; the CSF is designed to evolve in the same way. Requirements change, new technologies and vulnerabilities emerge, and risk levels shift over time, which requires continuous improvement of the organization’s program.</li>
</ol>
<br />
What challenges have you faced when implementing the CSF or other framework? We’d like to hear from you! Please let us know in the comments below.<br />
<div>
<br /></div>
Anonymousnoreply@blogger.com1tag:blogger.com,1999:blog-23520533.post-64107719324011469862016-08-29T13:54:00.000-05:002020-04-29T17:15:24.559-05:00Manual Application-Layer Security Testing AND Automated Scanning Tools<br />
<a href="http://www.securityps.com/application_security.html" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="120" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEht-aUO96Ys4KUlSE_RgadwdyRG_I-0StuskGABtEskzHU6vU6bC5QZ2PPb4NfbeiLEKWN9iHyJ7aB_5uBb3wbFe0uUpslOSkTLcj-5qdsp4ZTv62iOU5wWVIPfATaJUaZktVOnhg/s200/blog-ToolsManual1.png" width="240" /></a>There are many automated application security tools available on the market. They are useful tools for identifying vulnerabilities in your company's applications, but, they shouldn't be used alone as part of a risk identification process. This post discusses the advantages of automated tools and identifies gaps that need to be filled with manual testing techniques for a more comprehensive view of application risk.<br />
<br />
For my purposes here, I'm going to consider automated scanning tools like WebInspect, AppScan, and Acunetix. These are all automated dynamic analysis tools, but there are quite a few other options, such as automated code review tools, binary analyzers, and even newer technologies that instrument application code and analyze it at runtime. The capabilities of each of these types of tools differ, but many of the pros and cons are similar.<br />
<br />
Automated tools require at least a one-time setup step to configure them for your application. Once configured, the tools can run on a scheduled basis or even as part of a continuous integration build process. Automated tools can scan an application and deliver results very quickly, often in hours, and they can scan large numbers of applications. They are great at identifying vulnerabilities that can be detected by sending attack input and analyzing the application's output for vulnerability signatures. The tools can detect popular vulnerabilities like SQL injection, cross-site scripting, disclosure of stack traces or error messages, disclosure of sensitive information (like credit card numbers or SSNs), open redirects, and more. They generally perform best identifying non-complex to moderately complex vulnerabilities. This makes automated tools great for use cases such as:<br />
<ul>
<li>A first time look at the security of a web application</li>
<li>Scanning all of an organization's web applications for the first time or on a periodic basis</li>
<li>Integration with other automated processes, such as the build step of a continuous integration server (probably on a schedule, e.g., every night)</li>
</ul>
<div>
After understanding the value that automated tools can provide, it's also important to understand their limitations. The primary limitation is that they aren't human. They are written to find a concrete, specific set of issues and to identify those issues based on signatures or algorithms. An experienced application security tester's knowledge and expertise will far outshine a tool, allowing them to identify far more issues and interpret complex application behavior to determine whether a vulnerability is present. This typically means manual testing is required to identify vulnerabilities related to:</div>
<div>
<ul>
<li>Authentication process steps including login, forgot username/password, and registration</li>
<li>Authorization, especially determining if data is accessed in excess of a user's role or entitlements or data that belongs to another tenant</li>
<li>Business logic rules</li>
<li>Session management</li>
<li>Complex injection flaws, especially those that span multiple applications (for example a customer application accepts and stores a cross-site scripting vulnerability, but the exploit executes in the admin application)</li>
<li>Use of cryptography</li>
<li>The architecture and design of the application and related components</li>
</ul>
</div>
The issues listed above are extremely important! For example, it's unacceptable for an attacker to be able to read and modify another user's data, but an automated tool isn't going to identify this type of flaw. These tools also tend to perform poorly on web services, REST services, thick clients, mobile applications, and single-page applications. For these reasons, manual testing is absolutely essential for identifying risk in an application.<br />
<br />
If manual testing can identify all the same issues as an automated scanning tool and more, why bother with the scanning tool? Sometimes you don't need it, but most of the time it's still very helpful. The key factors are speed and scale: you can scan a lot of web applications very quickly, receive results, and fix them, THEN follow up with manual testing. The caution is that scanning alone may leave critical-risk vulnerabilities undiscovered in the application, so don't wait too long before the manual follow-up.<br />
<br />
If your organization needs assistance choosing and adopting automated scanning tools or would like more information about manual application-layer security testing, please contact <a href="http://www.securityps.com/">Security PS</a>. Security PS does not sell automated tools, but we have advised many of our clients regarding how to choose an appropriate tool, prepare staff for using that tool, and update processes to include its usage.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-71978509524839511822016-08-11T14:25:00.000-05:002020-04-29T17:15:53.296-05:00ASP.NET Core Basic Security Settings Cheatsheet<a href="http://www.securityps.com/application_security.html" imageanchor="1" style="clear: right; float: right; margin: 1em;"><img border="0" height="127" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUJ6jbt50cemYvkaY2fUgk40kwsxmNLuBu5QgQBeKqBanlTUuAi3XHg7kTt2At0WnhSWFhbggq41-w2fBVGl2n1NIxPG6gX7GZpi3RLDfvABnThGeBFm1xIT2QutSwfU4N9qy8-g/s1600/ASPNETCoreSecurity.png" width="253" /></a>When starting a new project, looking at a new framework, or fixing vulnerabilities identified by an assessment or a tool, it's nice to have one place to refer to for the fixes for common security issues. This post provides solutions for some of the more basic issues, especially those around configuration. Most of these answers can be found in Microsoft's documentation or by doing a quick Google search. But hopefully having it all right here will save others some time.<br />
<h3>
Enabling An Account Lockout Response</h3>
<div>
To enable the account lockout response for ASP.NET Identity, first modify the Startup.cs file to choose appropriate settings. In the ConfigureServices method, add the following code:</div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">services.Configure<IdentityOptions>(options =></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">{</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> //optional</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> options.Lockout.AllowedForNewUsers = true;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> //requires manual unlock</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> options.Lockout.DefaultLockoutTimeSpan = TimeSpan.MaxValue;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> //three failed attempts before lockout</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> options.Lockout.MaxFailedAccessAttempts = 3; </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">});</span></div>
</div>
<div>
With the settings configured, lockout still needs to be enabled in the login method of the account controller. In AccountController -> Login(LoginViewModel model, string returnUrl = null), change lockoutOnFailure from false to true as shown below:</div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">var result = await _signInManager.PasswordSignInAsync(model.Email, model.Password, model.RememberMe, <b>lockoutOnFailure: true</b>);</span></div>
<div>
<br /></div>
<div>
References:</div>
<div>
<ul>
<li><a href="https://docs.asp.net/en/latest/security/authentication/2fa.html#account-lockout-for-protecting-against-brute-force-attacks">https://docs.asp.net/en/latest/security/authentication/2fa.html#account-lockout-for-protecting-against-brute-force-attacks</a></li>
<li><a href="https://docs.asp.net/en/latest/security/authentication/identity.html">https://docs.asp.net/en/latest/security/authentication/identity.html</a></li>
</ul>
<h3>
Defining and Enforcing an Application Specific Password Policy</h3>
</div>
<div>
ASP.NET Identity comes with a class that validates passwords. It is configurable and lets you decide whether passwords should require digits, uppercase letters, lowercase letters, and/or symbols. This policy can be further customized by implementing the IPasswordValidator interface or extending Microsoft.AspNetCore.Identity.PasswordValidator. The code below extends the PasswordValidator and ensures the password does not contain the individual's username.</div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">using ASPNETCoreKestrelResearch.Models;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">using Microsoft.AspNetCore.Identity;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">using Microsoft.AspNetCore.Identity.EntityFrameworkCore;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">using System.Threading.Tasks;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">namespace ASPNETCoreKestrelResearch.Security</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">{</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> public class CustomPasswordValidator<TUser> : PasswordValidator<TUser> where TUser : IdentityUser</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> public override async Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> { </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> IdentityResult baseResult = await base.ValidateAsync(manager, user, password);</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> if (!baseResult.Succeeded)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return baseResult;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> else</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> if (password.ToLower().Contains(user.UserName.ToLower()))</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> { </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return IdentityResult.Failed(new IdentityError</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> Code = "UsernameInPassword",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> Description = "Your password cannot contain your username"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> });</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> }</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> else</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return IdentityResult.Success;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> }</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> }</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> }</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">}</span></div>
</div>
<div>
Next, ASP.NET Identity needs to be told to use that class. In the ConfigureServices method of Startup.cs, find services.AddIdentity and add ".AddPasswordValidator<CustomPasswordValidator<ApplicationUser>>();" as shown below.<br />
<span style="font-family: "courier new" , "courier" , monospace;">services.AddIdentity<ApplicationUser, IdentityRole>()</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> .AddEntityFrameworkStores<ApplicationDbContext>()</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> .AddDefaultTokenProviders()</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> .AddPasswordValidator<CustomPasswordValidator<ApplicationUser>>();</span><br />
<h3>
Choosing a Session Timeout Value</h3>
<div>
Developers can choose how long a session cookie remains valid and whether a sliding expiration should be used by adding the following code to the ConfigureServices method of Startup.cs:</div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">services.Configure<IdentityOptions>(options =></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">{</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> options.Cookies.ApplicationCookie.ExpireTimeSpan = TimeSpan.FromMinutes(10);</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> options.Cookies.ApplicationCookie.SlidingExpiration = true;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">});</span></div>
</div>
<h3>
Enabling the HTTPOnly and Secure Flag for Authentication Cookies</h3>
<div>
First, if you are using Kestrel, HTTPS (TLS) is not supported. Instead, it is implemented by HAProxy, Nginx, Apache, IIS, or some other web server you place in front of the application. If you are using Kestrel, the Secure flag cannot be enabled properly from the application code. However, if you are hosting the application in IIS directly, then it will work. The following code demonstrates enabling both the HTTPOnly and Secure flags for cookie middleware in ASP.NET Identity through the ConfigureServices method in Startup.cs.</div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">services.Configure<IdentityOptions>(options =></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">{</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> options.Cookies.ApplicationCookie.CookieHttpOnly = true;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> options.Cookies.ApplicationCookie.CookieSecure = CookieSecurePolicy.Always;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">});</span></div>
</div>
<h3>
Enabling Cache-Control: no-store</h3>
<div>
When applications contain sensitive information that should not be stored on a user's local hard drive, the Cache-Control: no-store HTTP response header provides that guidance to browsers. To enable it application-wide, add the following code to the ConfigureServices method in Startup.cs.</div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">services.Configure<MvcOptions>(options =></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">{</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> options.CacheProfiles.Add("DefaultNoCacheProfile", new CacheProfile</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> NoStore = true,</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> Location = ResponseCacheLocation.None</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> });</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> options.Filters.Add(new ResponseCacheAttribute</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> CacheProfileName = "DefaultNoCacheProfile" </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> });</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">});</span></div>
</div>
<h3>
Disabling the Browser's Autocomplete Feature for Login Forms</h3>
</div>
<div>
ASP.NET Core's Razor tag helpers make this simple. Just add the autocomplete="off" attribute as you would on a normal HTML input field:</div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><input asp-for="Email" class="form-control" autocomplete="off"/></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><input asp-for="Password" class="form-control" autocomplete="off"/></span></div>
<h3>
Modify The Iterations Count for the Password Hasher's Key Derivation Function</h3>
<div>
First, I believe the current default is 10,000 iterations and the algorithm is PBKDF2. The code below keeps that default iteration count, but it shows where the value can be changed. In the ConfigureServices method of Startup.cs, add the following code.</div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">services.Configure<PasswordHasherOptions>(options =></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">{ </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> options.IterationCount = 10000;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">});</span></div>
</div>
<h3>
Enforcing HTTPS and Choosing Appropriate TLS Protocols and Cipher Suites</h3>
<div>
As mentioned above, if you are using Kestrel you won't be able to use HTTPS directly, so you won't enforce it in code; instead, look up how to do this in HAProxy, Nginx, Apache, IIS, etc. If you are hosting your application in IIS directly, you can enforce HTTPS using something like <a href="https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/RequireHttpsAttribute.cs">RequireHttpsAttribute</a>, but it will only be applied to your MVC controllers/views, not to static content (see <a href="https://github.com/aspnet/Home/issues/895">https://github.com/aspnet/Home/issues/895</a>). To enforce HTTPS in code across the entire application, you will need to write your own middleware. Finally, the cipher suites offered cannot be changed from code.</div>
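If you do want to enforce HTTPS in code, a minimal middleware sketch might look like the following. This is an illustration only: it assumes TLS terminates at IIS or at a front-end proxy that sets the X-Forwarded-Proto header, and the header name is an assumption about your proxy's configuration.

```csharp
// Sketch: redirect any non-HTTPS request to its HTTPS equivalent.
// Assumes the front-end proxy (HAProxy, Nginx, etc.) sets X-Forwarded-Proto.
app.Use(async (context, next) =>
{
    string forwardedProto = context.Request.Headers["X-Forwarded-Proto"];
    bool isHttps = context.Request.IsHttps ||
        string.Equals(forwardedProto, "https", StringComparison.OrdinalIgnoreCase);

    if (!isHttps)
    {
        // Rebuild the URL with the https scheme and issue a permanent redirect.
        string httpsUrl = "https://" + context.Request.Host.Host +
            context.Request.Path + context.Request.QueryString;
        context.Response.Redirect(httpsUrl, permanent: true);
        return;
    }

    await next();
});
```

Register this early in the Configure method, before app.UseStaticFiles(), so the redirect applies to static content as well.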
<h3>
Enabling a Global Error Handler</h3>
<div>
A custom global error handler is demonstrated by the Visual Studio template. The following relevant code can be found in the Configure method of Startup.cs.</div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">if (env.IsDevelopment())</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">{</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> app.UseDeveloperExceptionPage();</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> app.UseDatabaseErrorPage();</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> app.UseBrowserLink();</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">}</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">else</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">{</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> app.UseExceptionHandler("/Home/Error");</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">}</span></div>
</div>
<h3>
Removing the Server HTTP Response Header</h3>
<div>
By default, every response from the server includes a "Server: Kestrel" HTTP header. To remove that value, modify UseKestrel() in Program.cs to include the following settings change:</div>
<div>
<div>
public static void Main(string[] args)</div>
<div>
{</div>
<div>
var host = new WebHostBuilder()</div>
<div>
.UseKestrel(<b>options =></b></div>
<div>
<b> {</b></div>
<div>
<b> options.AddServerHeader = false;</b></div>
<div>
<b> }</b>)</div>
<div>
.UseContentRoot(Directory.GetCurrentDirectory())</div>
<div>
.UseIISIntegration()</div>
<div>
.UseStartup<Startup>()</div>
<div>
.UseUrls("http://0.0.0.0:5000")</div>
<div>
.Build();</div>
<div>
<br /></div>
<div>
host.Run();</div>
<div>
}</div>
</div>
<h3>
X-Frame-Options, Content-Security-Policy, and Strict-Transport-Security HTTP Response Headers</h3>
<div>
The following post seems to cover most of these headers well: <a href="http://andrewlock.net/adding-default-security-headers-in-asp-net-core/">http://andrewlock.net/adding-default-security-headers-in-asp-net-core/</a>. I haven't evaluated its design, but I did verify that I can install it and that the headers are added successfully. Since Kestrel does not support HTTPS, consider whether it's appropriate to implement the Strict-Transport-Security header in code or by configuring the web server placed in front of the application.</div>
<div>
<br /></div>
<div>
I installed the NuGet package using "Install-Package NetEscapades.AspNetCore.SecurityHeaders". Then, I made sure to have the following imports in Startup.cs:</div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">using NetEscapades.AspNetCore.SecurityHeaders;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">using NetEscapades.AspNetCore.SecurityHeaders.Infrastructure;</span></div>
</div>
<div>
I added the following code to the ConfigureServices method of Startup.cs:</div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">services.AddCustomHeaders();</span></div>
<div>
Last, I added this code to the Configure method of Startup.cs:</div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">app.UseCustomHeadersMiddleware(new HeaderPolicyCollection()</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> .AddContentTypeOptionsNoSniff()</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> .AddFrameOptionsDeny()</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> //.AddStrictTransportSecurityMaxAge()</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> .AddXssProtectionBlock()</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> //.AddCustomHeader("Content-Security-Policy", "somevaluehere")</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> //.AddCustomHeader("X-Content-Security-Policy", "somevaluehere")</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> //.AddCustomHeader("X-Webkit-CSP", "somevaluehere")</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">);</span></div>
</div>
<div>
Make sure you add this code BEFORE app.UseStaticFiles(); otherwise, the headers will not be applied to your static files.</div>
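To illustrate that ordering, the Configure method might look roughly like this sketch (the UseIdentity and UseMvc calls are the template's defaults; the key point is that the header middleware comes first):

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Security headers registered first, so the policy also covers static files.
    app.UseCustomHeadersMiddleware(new HeaderPolicyCollection()
        .AddContentTypeOptionsNoSniff()
        .AddFrameOptionsDeny()
        .AddXssProtectionBlock());

    app.UseStaticFiles();

    app.UseIdentity();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}
```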
<div>
<br /></div>
Anonymousnoreply@blogger.com1tag:blogger.com,1999:blog-23520533.post-63232125379984298972016-08-01T11:22:00.000-05:002016-10-06T19:23:14.889-05:00Why Use the NIST CSF?You may have heard of a framework that has been gaining traction since its inception a few years ago: the Cybersecurity Framework (CSF). If not, I’ll give you a quick recap. This framework was drafted by <a href="http://www.securityps.com/enterprise_security.html" imageanchor="1" style="clear: right; float: right; margin: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7kIFGdIjtTkmeO8uxlNQSXE0lL4uQVxKVBhCnNryHDu93fHAlz_ZHSqquEUCW5IK7TXVduVuYxqYa2b2nTzMf0wyDgMkai1oJBaxDyjLdC_e4OzYsFL9-sIdw6moU45bbaVngZg/s1600/SPS-NIST_CSF.png" width="253" height="127" /></a> the Commerce Department’s National Institute of Standards and Technology (NIST) beginning in February of 2013, in response to a Presidential Executive Order entitled “Improving Critical Infrastructure Cybersecurity”. Following almost a year of collaborative discussions with thousands of security professionals across both public and private sectors, a framework emerged that comprises guidelines to help organizations identify, implement, and improve cybersecurity practices and strengthen their security programs as a whole. The framework is architected as a continuous process that grows in sync with the constant changes in cybersecurity threats, processes, and technologies. It was also designed to be revised periodically to incorporate lessons learned and industry feedback. At its core, the framework conceives of cybersecurity as a progressive, continuous lifecycle that identifies and responds to threats, vulnerabilities, and solutions. The CSF gives organizations the means to determine their current cybersecurity state and capabilities, set goals for desired outcomes, and establish a plan for improving and maintaining the overall security program. 
The framework itself is available <a href="http://www.nist.gov/cyberframework/upload/cybersecurity-framework-021214.pdf" target="_blank">here</a>.<br />
<br />
So, what makes the CSF different from NIST 800-53 or ISO 27001/27002? Those are detailed control documents that provide requirements for adhering to specific standards. In comparison, the CSF provides a high-level framework for assessing and prioritizing functions within a security program, drawing on those existing standards. Due to its high-level scope and common structure, the CSF is also much more approachable for those with non-technical backgrounds and for C-level executives. It was created with the realization that many of the required controls and processes for a security program have already been defined, and duplicated, across these standards. In effect, it provides a common structure for the industry that allows any organization to drive growth and maturity in its cybersecurity practices and to shift from a reactive to a proactive state of risk management. <br />
<br />
For organizations that are Federally regulated, the CSF may be of particular importance. Many top-level directors have expressed that an industry-driven cybersecurity model is much preferred over prescriptive regulatory approaches from the Federal government. Even though the CSF is currently voluntary for both public and private sectors, it is important to realize that, with a high degree of probability, this will not be the case in the future. Discussions have already taken place among Federal regulators and Congressional lawmakers suggesting this voluntary framework should be used as the baseline for best security practices, including for assessing legal or regulatory exposure and for insurance purposes. If these suggestions become reality, adopting the CSF now could give organizations much more flexibility and cost savings in how it is implemented.<br />
<br />
In addition to staying ahead of possible new laws and federal mandates, the CSF provides any organization, regulated or not, a number of other benefits, all of which support a stronger cybersecurity posture. Some of these benefits include:<br />
<ul>
<li>A common language and structure across all industries</li>
<li>Opportunities for collaboration amongst public and private sectors</li>
<li>The ability to demonstrate due-diligence and due-care by adopting the framework</li>
<li>Greater ease in adhering to compliance regulations or industry standards</li>
<li>Improved cost efficiency</li>
<li>Flexibility in using any existing security standards, such as HITRUST, NIST 800-53, ISO 27002, etc.</li>
</ul>
Though it is difficult to express all the possible benefits in this short post, Security PS highly recommends that any organization take a close look at the CSF and consider both its options for implementation and the future laws that may influence its use.<br />
<h4>
Questions?</h4>
If you have more questions, please consider contacting us for additional details. We’ll be glad to assist you and your organization.<br />
<div>
<br /></div>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-72113550373366493992016-07-29T12:45:00.000-05:002020-04-29T17:16:06.857-05:00ASP.NET Core and Docker Update: Docker Compose!Previously in <a href="http://blog.securityps.com/2016/07/aspnet-core-postgresql-docker-and.html">ASP.NET Core, PostgreSQL, Docker, and Continuous Integration with Jenkins</a>, I wrote about my experience getting started with ASP.NET Core and Docker. In that post, I was running a script to stop and remove my previous containers each time I deployed, and I had separate scripts to build and run each Docker container. In a recent conference video, a speaker referenced Docker Compose, and I'm so glad he did. It simplifies my set up greatly! This post describes the changes I made to my project to use Docker Compose.<br />
<br />
I reused all my previous DockerFiles and I needed to reference them in a docker-compose.yml file in the root of my project. Here's that file:<br />
<a href="http://3.bp.blogspot.com/-s5PqQCRTAhs/V6OiwSs8jRI/AAAAAAAAAPI/RuAl6sg15QA7AMgwx8wluec9Qq-zyWapQCK4B/s1600/docker-compose-yml.PNG" imageanchor="1"><img border="0" height="312" src="https://3.bp.blogspot.com/-s5PqQCRTAhs/V6OiwSs8jRI/AAAAAAAAAPI/RuAl6sg15QA7AMgwx8wluec9Qq-zyWapQCK4B/s320/docker-compose-yml.PNG" width="320" /></a><br />
<br />
After trying to run this (which I will get to later), I discovered that there were some differences in how linking two containers works. Instead of automatically providing environment variables that reference the linked containers, it sets up hostnames that match the services above. So, I needed to make a few changes to my configuration files. First, I updated my appsettings.json file to change the host referenced by the ConnectionString.<br />
<a href="http://4.bp.blogspot.com/-UalkwL_a024/V6Ojau1gfdI/AAAAAAAAAPU/yPOvZGAgDW4b7eG27-sOVVNgNXgdgTucwCK4B/s1600/2.png" imageanchor="1"><img border="0" height="66" src="https://4.bp.blogspot.com/-UalkwL_a024/V6Ojau1gfdI/AAAAAAAAAPU/yPOvZGAgDW4b7eG27-sOVVNgNXgdgTucwCK4B/s320/2.png" width="320" /></a><br />
<br />
In the haproxy.cfg configuration file, I referenced the web1 and web2 hosts. I also learned how to correctly configure the proxy to set a cookie and direct users to the same Kestrel instance each time.<br />
<a href="http://2.bp.blogspot.com/-81TtMjZsDJQ/V6Oj3_IF8wI/AAAAAAAAAPc/eX_DyUdDEvMhnsMNNNaF4gUX8itwG-GYwCK4B/s1600/3.PNG" imageanchor="1"><img border="0" height="278" src="https://2.bp.blogspot.com/-81TtMjZsDJQ/V6Oj3_IF8wI/AAAAAAAAAPc/eX_DyUdDEvMhnsMNNNaF4gUX8itwG-GYwCK4B/s320/3.PNG" width="320" /></a><br />
<br />
With those changes, I was ready to build and run all the containers.<br />
<a href="http://3.bp.blogspot.com/-EMtQQSa_fy0/V6Olg6vC6GI/AAAAAAAAAPo/tTzYYLhcPnICKmr59E4R56a_fFXiku8kwCK4B/s1600/4.png" imageanchor="1"><img border="0" height="320" src="https://3.bp.blogspot.com/-EMtQQSa_fy0/V6Olg6vC6GI/AAAAAAAAAPo/tTzYYLhcPnICKmr59E4R56a_fFXiku8kwCK4B/s320/4.png" width="173" /></a><br />
<br />
<a href="http://4.bp.blogspot.com/-4TCy7LbrN2Y/V6OljISaOKI/AAAAAAAAAPw/O6rqA3blX2AM-elWFqDskVBv_8cKtKpgACK4B/s1600/5.PNG" imageanchor="1"><img border="0" height="96" src="https://4.bp.blogspot.com/-4TCy7LbrN2Y/V6OljISaOKI/AAAAAAAAAPw/O6rqA3blX2AM-elWFqDskVBv_8cKtKpgACK4B/s320/5.PNG" width="320" /></a><br />
<br />
Since I'm deploying a new instance of the database server and an empty database, I also need to run my Entity Framework Migrations. So the next step is to run "docker-compose exec -d web1 dotnet ef database update".<br />
<br />
Finally, when I need to stop and remove the containers to make way for a newer build, Docker Compose takes care of that as well.<br />
<a href="http://4.bp.blogspot.com/-LpxzxOqTKx0/V6Ol6AG2QFI/AAAAAAAAAP4/7IjN-PU_qpsuifyiZ_62zkBniuimOmx-QCK4B/s1600/6.PNG" imageanchor="1"><img border="0" height="117" src="https://4.bp.blogspot.com/-LpxzxOqTKx0/V6Ol6AG2QFI/AAAAAAAAAP4/7IjN-PU_qpsuifyiZ_62zkBniuimOmx-QCK4B/s320/6.PNG" width="320" /></a><br />
<div class="separator" style="clear: both; text-align: center;">
</div>
Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-22467721972039069452016-07-22T15:11:00.000-05:002020-04-29T17:16:19.622-05:00ASP.NET Core, PostgreSQL, Docker, and Continuous Integration with JenkinsFollowing the Kansas City Developer Conference and the release of ASP.NET Core 1.0, I decided to try out the new framework, to deploy infrastructure with my application, and to use a continuous integration server. This post summarizes what I did and the result of that effort; however, I want to stress that this is in no way a recommendation of how one should securely build and deploy an application. A lot of these technologies are brand new to me, and my goal was just to get them to work. But, instead of waiting until everything is perfect, I wanted to write about what I had so far in case it helps someone else.<br />
<br />
First, a list of the technologies I used and how far I took them:<br />
<ul>
<li>ASP.NET Core with ASP.NET MVC 6 - Deploy and run the default template with Entity Framework and ASP.NET Identity (including ability to register and login) on Linux using Kestrel and NOT IIS or SQL Server</li>
<li>PostgreSQL - Used as my database instead of the more traditional choice of SQL Server</li>
<li>Docker - Used to host Linux containers for my ASP.NET MVC 6 applications, PostgreSQL database, and HA Proxy load balancer. Also allows me to deploy the infrastructure with the application</li>
<li>HA Proxy - Used as a load balancer</li>
<li>Jenkins - Used as my continuous integration server to automatically build and deploy the application AND its infrastructure</li>
</ul>
<b>ASP.NET Core with MVC 6</b> <br />
With Visual Studio 2015 completely up to date, I started with File -> New Project -> .NET Core -> ASP.NET Core Web Application. In the next dialog, I chose "Web Application" and for authentication, I chose "Individual User Account". I made sure nothing was checked for Azure. Next, I modified the Main method of Program.cs to ensure Kestrel would listen on ALL interfaces instead of just localhost. To do that, I added .UseUrls("http://0.0.0.0:5000") as shown below.<a href="http://2.bp.blogspot.com/-WCJZdt7AbBQ/V6ORyuh6eKI/AAAAAAAAAJc/nOzupTRpIG0zPXXVLkBtEPDCkLtL7IKhgCK4B/s1600/1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" height="257" src="https://2.bp.blogspot.com/-WCJZdt7AbBQ/V6ORyuh6eKI/AAAAAAAAAJc/nOzupTRpIG0zPXXVLkBtEPDCkLtL7IKhgCK4B/s400/1.png" width="400" /></a><br />
<br />
In order to use the PostgreSQL database, I uninstalled the Microsoft.EntityFrameworkCore.SqlServer and Microsoft.EntityFrameworkCore.SqlServer.Design packages. Then, I installed Microsoft.EntityFrameworkCore.Tools.Core (use -Pre for Install-Package), Microsoft.EntityFrameworkCore.Tools (use -Pre for Install-Package), and Npgsql.EntityFrameworkCore.PostgreSQL. Here are my packages afterward:<br />
<br />
<a href="http://2.bp.blogspot.com/-VJ2prQVnfbo/V6OSB9KRv_I/AAAAAAAAAJk/vvOPutjouJg6gSg2Ejp3TjdUzszVvXJPACK4B/s1600/2.png" imageanchor="1"><img border="0" height="320" src="https://2.bp.blogspot.com/-VJ2prQVnfbo/V6OSB9KRv_I/AAAAAAAAAJk/vvOPutjouJg6gSg2Ejp3TjdUzszVvXJPACK4B/s320/2.png" width="233" /></a><br />
Once the project has support for PostgreSQL in Entity Framework, the ConnectionString needs to be updated and EntityFramework needs to be configured to use PostgreSQL. So, I modified the default ConnectionString in the appsettings.json file as shown below.<br />
<br />
<a href="http://3.bp.blogspot.com/-B7k7EXNR6BU/V6OSEstXNrI/AAAAAAAAAJs/acv-L5rFna0q0NGUJu6k_I4-sg8YsiVAgCK4B/s1600/3.png" imageanchor="1"><img border="0" height="88" src="https://3.bp.blogspot.com/-B7k7EXNR6BU/V6OSEstXNrI/AAAAAAAAAJs/acv-L5rFna0q0NGUJu6k_I4-sg8YsiVAgCK4B/s320/3.png" width="320" /></a><br />
Then, I found the existing services.AddDbContext related code in the ConfigureServices method in Startup.cs, and I modified it to use PostgreSQL. The code is shown below.<br />
<br />
<a href="http://3.bp.blogspot.com/-3wUVjAl47Zw/V6OSHdhY6mI/AAAAAAAAAJ0/GikAgrXqbhIuXXECvvAMqxHDjIAmDT7xgCK4B/s1600/4.png" imageanchor="1"><img border="0" height="154" src="https://3.bp.blogspot.com/-3wUVjAl47Zw/V6OSHdhY6mI/AAAAAAAAAJ0/GikAgrXqbhIuXXECvvAMqxHDjIAmDT7xgCK4B/s320/4.png" width="320" /></a><br />
<br />
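In text form, the change amounts to something like the following sketch (assuming the template's default "DefaultConnection" key in appsettings.json; your context and key names may differ):

```csharp
// In ConfigureServices (Startup.cs): replace options.UseSqlServer(...)
// with the Npgsql EF Core provider so Entity Framework talks to PostgreSQL.
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseNpgsql(Configuration.GetConnectionString("DefaultConnection")));
```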
<b>Docker (For Windows)</b><br />
Yes, I'm using Docker on Windows. That means I can use Visual Studio AND I can deploy to a Linux container. Later, I can run it on a production Linux system without having to change a single thing. So, using Windows works great for my purposes. You could also do this on Mac OS X, your favorite Linux distro, etc., but you would need to use Visual Studio Code as your IDE instead. To set up Docker, I went to their website, downloaded Docker for Windows, and installed it.<br />
<b><br /></b>
<b>PostgreSQL (Using Docker)</b><br />
For my database instance, I used a Linux Docker container to host PostgreSQL. I used the "official" image found at <a href="https://hub.docker.com/_/postgres/">https://hub.docker.com/_/postgres/</a>. That official image allows for a script to be run at start up to get the database set up the way you want. The script must be named "init-user-db.sh" and must be placed in the /docker-entrypoint-initdb.d/ directory of the Docker container. My script creates a database and a user for accessing that database.<br />
<a href="http://3.bp.blogspot.com/-I1sjl4MhoNs/V6OSNDphE8I/AAAAAAAAAJ8/1CTymWzpWqUokRkv1dcI2Kwgj__9Aqi8QCK4B/s1600/5.png" imageanchor="1"><img border="0" height="92" src="https://3.bp.blogspot.com/-I1sjl4MhoNs/V6OSNDphE8I/AAAAAAAAAJ8/1CTymWzpWqUokRkv1dcI2Kwgj__9Aqi8QCK4B/s320/5.png" width="320" /></a><br />
Next, I set up the DockerFile for that container. It looks like this:<br />
<a href="http://4.bp.blogspot.com/--Vs9UsDqPC0/V6OSPgLENdI/AAAAAAAAAKE/kBFsc7ikMAMvscD4iyjjmWDjIXFTP6RJQCK4B/s1600/6.png" imageanchor="1"><img border="0" height="79" src="https://4.bp.blogspot.com/--Vs9UsDqPC0/V6OSPgLENdI/AAAAAAAAAKE/kBFsc7ikMAMvscD4iyjjmWDjIXFTP6RJQCK4B/s320/6.png" width="320" /></a><br />
I can now build and run the container:<br />
<a href="http://1.bp.blogspot.com/-tJV__gVBJto/V6OSRyaEPAI/AAAAAAAAAKM/ftdNYcPCwAg3-9Ku1zebFCp4ivxOyYbmQCK4B/s1600/7.png" imageanchor="1"><img border="0" height="129" src="https://1.bp.blogspot.com/-tJV__gVBJto/V6OSRyaEPAI/AAAAAAAAAKM/ftdNYcPCwAg3-9Ku1zebFCp4ivxOyYbmQCK4B/s320/7.png" width="320" /></a><br />
<br />
<b>Apply Entity Framework Migration</b><br />
Now that the database is running, I applied the Entity Framework Migrations so the Login and Register features will work. To do that, I used the command line, changed directory to the root of the application (the place where project.json is located), and ran "dotnet ef database update".<br />
<br />
<b><a href="http://2.bp.blogspot.com/-3LJmD2OPLcg/V6OT6xsxiHI/AAAAAAAAAMk/TrwCJt8lrGMEQwGgd-b0dIU8bEhfGbmOQCK4B/s1600/9.png" imageanchor="1"><img border="0" height="55" src="https://2.bp.blogspot.com/-3LJmD2OPLcg/V6OT6xsxiHI/AAAAAAAAAMk/TrwCJt8lrGMEQwGgd-b0dIU8bEhfGbmOQCK4B/s320/9.png" width="320" /></a></b><br />
<br />
<b>Run the Application</b><br />
Finally, I ran the application with "dotnet run", and visited the site in a browser at http://localhost:5000/. I can now register a new user and login. All of that data is being stored in the PostgreSQL Docker container's database.<br />
<a href="http://1.bp.blogspot.com/-3u9xOzqbH30/V6OUfEWnsII/AAAAAAAAAM4/N3PDQCkDxo0QxBUo4OjKNDfKjFuE32RqwCK4B/s1600/10.png" imageanchor="1"><img border="0" height="46" src="https://1.bp.blogspot.com/-3u9xOzqbH30/V6OUfEWnsII/AAAAAAAAAM4/N3PDQCkDxo0QxBUo4OjKNDfKjFuE32RqwCK4B/s320/10.png" width="320" /></a><br />
<br />
<b>Hosting The ASP.NET Core MVC6 Template in a Docker Container</b><br />
After verifying I could run the application and connect to the database, I wanted to host the application itself in a Docker container. Microsoft has some ready-made Docker images (<a href="https://hub.docker.com/r/microsoft/dotnet/">https://hub.docker.com/r/microsoft/dotnet/</a>) that I used to accomplish this. My DockerFile follows their instructions for copying the application's root into the container, running "dotnet restore" to download all the required NuGet packages, and "dotnet run" to start the application. <br />
<a href="http://3.bp.blogspot.com/-GDTRaVCHxE0/V6OUmftkiMI/AAAAAAAAANA/Lrq_NHpvJDszryRLjwTs6mwYny89DCzBwCK4B/s1600/11.png" imageanchor="1"><img border="0" height="182" src="https://3.bp.blogspot.com/-GDTRaVCHxE0/V6OUmftkiMI/AAAAAAAAANA/Lrq_NHpvJDszryRLjwTs6mwYny89DCzBwCK4B/s320/11.png" width="320" /></a><br />
<br />
Here's how I built and ran that Docker container:<br />
<b></b>
<b> <a href="http://4.bp.blogspot.com/-xSSdTwTtirk/V6OSizkTPiI/AAAAAAAAAK0/yDSZhG0_G5wmn-0fKZA8LCot_xBunys7wCK4B/s1600/12.png" imageanchor="1"><img border="0" height="250" src="https://4.bp.blogspot.com/-xSSdTwTtirk/V6OSizkTPiI/AAAAAAAAAK0/yDSZhG0_G5wmn-0fKZA8LCot_xBunys7wCK4B/s400/12.png" width="400" /></a></b><br />
<a href="http://4.bp.blogspot.com/-rI8gOBrYocQ/V6OVShqEEfI/AAAAAAAAANg/b098mqxG-boZVFb6CeHAjMSc79KoTrEjACK4B/s1600/13.png" imageanchor="1"><img border="0" height="41" src="https://4.bp.blogspot.com/-rI8gOBrYocQ/V6OVShqEEfI/AAAAAAAAANg/b098mqxG-boZVFb6CeHAjMSc79KoTrEjACK4B/s320/13.png" width="320" /></a><br />
Finally, I visited http://localhost:5000 to test that the application is running.<br />
<br />
<b>HAProxy</b><br />
This is the first time I've used HAProxy. Basically, I saw that other people had used it and I thought I would try it. To get it to work, I found an example configuration file, made a casual attempt to understand some of the settings in HAProxy's documentation, and then just messed with it until it worked. I don't recommend doing it this way for a real application. I was very happy when it finally forwarded my traffic correctly!<br />
<br />
One of the key challenges I had was that I couldn't point HAProxy's configuration at localhost:5000 and localhost:5001 for two exposed instances of my ASP.NET Core MVC application. Since I wasn't familiar with Docker, it took a while to figure out what to do. I eventually learned that you can "link" containers together and that some environment variables are automatically populated to help refer to those instances. Also, HAProxy can use environment variables in the configuration file. As a result, here's the haproxy.cfg configuration I came up with:<br />
<br />
<a href="http://4.bp.blogspot.com/-srEZ0iLbCEE/V6OU4yvPd7I/AAAAAAAAANU/OOGGifnkK0ozq-IM_ezyAOW1CDZR3MPlQCK4B/s1600/14.png" imageanchor="1"><img border="0" height="172" src="https://4.bp.blogspot.com/-srEZ0iLbCEE/V6OU4yvPd7I/AAAAAAAAANU/OOGGifnkK0ozq-IM_ezyAOW1CDZR3MPlQCK4B/s320/14.png" width="320" /></a><br />
In the configuration, you can see some references to using a cookie to ensure the same client hits the same server each time. I never actually got that part to work.<br />
<br />
My Dockerfile for HAProxy can be found below. Again, I used an official image (<a href="https://hub.docker.com/_/haproxy/">https://hub.docker.com/_/haproxy/</a>) and followed their instructions.<br />
<a href="http://4.bp.blogspot.com/-NrvSiFZZfeA/V6OVcACmPBI/AAAAAAAAANo/yvvl7FU-uzgK020GI9RagGA43uIuwbl0QCK4B/s1600/15.png" imageanchor="1"><img border="0" height="53" src="https://4.bp.blogspot.com/-NrvSiFZZfeA/V6OVcACmPBI/AAAAAAAAANo/yvvl7FU-uzgK020GI9RagGA43uIuwbl0QCK4B/s320/15.png" width="320" /></a><br />
<br />
Here are the commands I ran to bring up the web servers, the load balancer, and link them together.<br />
<a href="http://1.bp.blogspot.com/-2SAwDT3m8us/V6OVeYWK2fI/AAAAAAAAANw/tP5qrM0iXVkniqXVsURdkBNrfD31N0DQwCK4B/s1600/16.png" imageanchor="1"><img border="0" height="208" src="https://1.bp.blogspot.com/-2SAwDT3m8us/V6OVeYWK2fI/AAAAAAAAANw/tP5qrM0iXVkniqXVsURdkBNrfD31N0DQwCK4B/s320/16.png" width="320" /></a><br />
<br />
<br />
<a href="http://2.bp.blogspot.com/-WwvuYR0hIQ4/V6OVlDwM7GI/AAAAAAAAAN4/M4nInbyo3xUegLIMjA07dvPfTej-UjJdgCK4B/s1600/18.png" imageanchor="1"><img border="0" height="32" src="https://2.bp.blogspot.com/-WwvuYR0hIQ4/V6OVlDwM7GI/AAAAAAAAAN4/M4nInbyo3xUegLIMjA07dvPfTej-UjJdgCK4B/s320/18.png" width="320" /></a><br />
Next, I visited http://localhost:80 to see that everything worked.<br />
<br />
<b>Jenkins</b><br />
I used Jenkins throughout this process as a continuous integration server. There's a whole lot I don't know how to do correctly with Jenkins, but I will share the build scripts I used. I have a separate batch file for each step of the build process so that each return code is evaluated and the build fails if any step returns a non-zero value. Here's a screenshot of the Jenkins configuration for running those batch files:<br />
<a href="http://1.bp.blogspot.com/-ItPbIBu_YLM/V6OVq-5sgrI/AAAAAAAAAOE/X9t4HTsmXgA06LufGur3FL5r5LkNtqV1wCK4B/s1600/19.png" imageanchor="1"><img border="0" height="320" src="https://1.bp.blogspot.com/-ItPbIBu_YLM/V6OVq-5sgrI/AAAAAAAAAOE/X9t4HTsmXgA06LufGur3FL5r5LkNtqV1wCK4B/s320/19.png" width="126" /></a><br />
And here are those build scripts:<br />
<br />
<a href="http://1.bp.blogspot.com/-F-BCuA4gA8o/V6OVtg6dbCI/AAAAAAAAAOM/87WXZCjN9AsBE3ufkc09y8wjPstcw5kuQCK4B/s1600/20.png" imageanchor="1"><img border="0" height="189" src="https://1.bp.blogspot.com/-F-BCuA4gA8o/V6OVtg6dbCI/AAAAAAAAAOM/87WXZCjN9AsBE3ufkc09y8wjPstcw5kuQCK4B/s320/20.png" style="cursor: move;" width="320" /></a><br />
<div class="separator" style="clear: both; text-align: center;">
</div>
I don't think this one is actually necessary. I think I was experimenting with publishing, but I'm going to include it here just in case.<br />
<a href="http://2.bp.blogspot.com/-QFfzyilToDA/V6OV02Sx6HI/AAAAAAAAAOU/w2T_m1VxrAccD_k2OBdGLvpmaDo0Z5qRwCK4B/s1600/21.png" imageanchor="1"><img border="0" height="37" src="https://2.bp.blogspot.com/-QFfzyilToDA/V6OV02Sx6HI/AAAAAAAAAOU/w2T_m1VxrAccD_k2OBdGLvpmaDo0Z5qRwCK4B/s320/21.png" width="320" /></a><br />
<br />
<a href="http://3.bp.blogspot.com/-p3Ryjzb4pM8/V6OV8VzsZDI/AAAAAAAAAOk/tDFukpNP5MQwMxNqaUFR9YZTWZVO9lIOACK4B/s1600/22.png" imageanchor="1"><img border="0" height="57" src="https://3.bp.blogspot.com/-p3Ryjzb4pM8/V6OV8VzsZDI/AAAAAAAAAOk/tDFukpNP5MQwMxNqaUFR9YZTWZVO9lIOACK4B/s320/22.png" width="320" /></a><br />
<br />
<a href="http://3.bp.blogspot.com/-_u7khnEeEuc/V6OV-vqHHWI/AAAAAAAAAOs/wk-D2au3kachwo4PSXAkWU3e8RgKdoLoACK4B/s1600/23.png" imageanchor="1"><img border="0" height="12" src="https://3.bp.blogspot.com/-_u7khnEeEuc/V6OV-vqHHWI/AAAAAAAAAOs/wk-D2au3kachwo4PSXAkWU3e8RgKdoLoACK4B/s320/23.png" width="320" /></a><br />
<br />
<a href="http://2.bp.blogspot.com/-NIepjxmkjaQ/V6OWBRzDDWI/AAAAAAAAAO0/17289K7JZFMd6sTLHyrlhxzLqkLBO1SAgCK4B/s1600/24.png" imageanchor="1"><img border="0" height="9" src="https://2.bp.blogspot.com/-NIepjxmkjaQ/V6OWBRzDDWI/AAAAAAAAAO0/17289K7JZFMd6sTLHyrlhxzLqkLBO1SAgCK4B/s320/24.png" width="320" /></a><br />
<br />
<b>Conclusions</b><br />
There's a lot to be improved upon, but for now, I don't have time to work on it further. One thing I really wanted to get working was to "publish" the application instead of simply running the code within the Docker container. It would be nice to deploy the full application without needing to redownload all the NuGet packages every time. I was able to get a basic version of this working, but then I ran into an issue in which my views weren't actually being updated after I modified them. I haven't yet been able to troubleshoot that issue.<br />
<br />
<br />
<b></b><br />Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-7024453948798762772016-07-08T13:44:00.000-05:002020-04-29T17:18:26.413-05:00Nick's KCDC SummaryI attended the <a href="http://www.kcdc.info/">Kansas City Developer Conference</a> in June and was really pleased by the talks. I wanted to share my experience and what I got out of the conference. I also provide links to videos from other conferences that are either the same presentation or a related one.<br />
<br />
I went to the conference with the goals of:<br />
<ul>
<li>Learning more about ASP.NET Core and</li>
<li>Understanding options for deploying software into production on a continuous basis (every check in, every day, once a week, etc...)</li>
</ul>
<b>ASP.NET Core</b><br />
For ASP.NET Core, I attended:<br />
<ul>
<li>Converting to .NET Core: How we did it and what you need to know - Matt Watson (<a href="https://www.youtube.com/watch?v=EkJGPHuKN24">here's someone's unofficial video of it</a> and a <a href="http://stackify.com/15-lessons-learned-converting-from-asp-net-to-net-core/">blog post</a>)</li>
<li>Entity Framework Core 1 (EF7) - Philip Japikse</li>
</ul>
Matt's presentation was particularly detailed, but both presenters provided some great information for understanding whether .NET Core and EF Core were ready to be used in a production application. For .NET Core, Matt related his experience building tools that would run on as wide a variety of <a href="https://github.com/dotnet/corefx/blob/master/Documentation/architecture/net-platform-standard.md">platforms</a> and runtimes as possible. There were some interesting challenges he had to solve to get his code to work. He used compiler flags to conditionally include code for different framework versions and had to choose different libraries because his original ones weren't supported by .NET Core. His advice was to do a lot of research before committing to converting an existing .NET application to a cross-platform .NET Core application. Otherwise, it works great for new development, or you can simply upgrade an existing application to ASP.NET 4.6.3.<br />
<br />
My takeaway from the EF Core talk was that the framework is missing a few features that are present in EF6, and I think the presenter's advice was to wait. But, you can easily make your own decision by looking at the feature comparison chart here: <a href="https://docs.efproject.net/en/latest/efcore-vs-ef6/features.html">https://docs.efproject.net/en/latest/efcore-vs-ef6/features.html</a>. <br />
<br />
I also saw the following related .NET talks:<br />
<ul>
<li>I'll Get Back to You: Task, Await, and Asynchronous Methods - Jeremy Clark (<a href="https://vimeo.com/157300741">video from NDC</a>)</li>
<li>Token Authentication in ASP.NET - Nate Barbettini</li>
</ul>
I really liked Jeremy's talk. After watching it, I decided to look into more Task API related presentations and resources. I found the following videos and resources helpful:<br />
<ul>
<li><a href="https://vimeo.com/171319725">NDC Oslo 2016: Break the chain asynchronously - Daniel Marbach</a></li>
<li><a href="https://vimeo.com/172111826">NDC Oslo 2016: Rearchitect your code towards async/await - Daniel Marbach</a></li>
<li><u><span style="color: #0066cc;"><a href="https://msdn.microsoft.com/en-us/magazine/jj991977.aspx">Async/Await - Best Practices in Asynchronous Programming</a></span></u></li>
<li><a href="http://particular.net/webinars/async-await-best-practices">Async/Await Webinar Series: Best Practices</a></li>
<li><a href="https://msdn.microsoft.com/magazine/gg598924.aspx">Parallel Computing - It's All About the SynchronizationContext</a> </li>
</ul>
Finally, some additional rabbit holes I went down:<br />
<ul>
<li>(ASP.NET Core) <a href="http://www.addskills.se/kunskapsbanken/tidigare-webinars/devsum16-dominick-baier-whats-new-in-asp.net-core-1.0-security">DevSum16: Dominick Baier- What’s new in ASP.NET Core 1.0 Security </a></li>
<li>(ASP.NET Core) <a href="https://vimeo.com/171704554">.NET without Windows - Matt Ellis</a> </li>
<li>(ASP.NET Identity) <a href="https://vimeo.com/172009501">ASP.NET Identity 3 - Brock Allen</a></li>
<li>(OpenID Connect) <a href="https://vimeo.com/171942749">Authentication & secure API access for native & mobile Applications - Dominick Baier</a></li>
<li>(OpenID Connect) <a href="https://vimeo.com/131636653">Authentication and authorization in modern JavaScript web applications – how hard can it be? - Brock Allen</a></li>
<li>(OpenID Connect) <a href="https://vimeo.com/113604459">Unifying Authentication & Delegated API Access for Mobile, Web and the Desktop with OpenID Connect and OAuth2 by Dominick Baier</a></li>
</ul>
<b>Continuous Deployment/Delivery</b><br />
Damian Brady had two great talks at KCDC that introduced me to some new vocabulary and ways of thinking about deployment. His two talks were:<br />
<ul>
<li><a href="https://vimeo.com/171704607">Deploying Straight to Prod: A guide to the Holy Grail - Damian Brady</a></li>
<li><a href="https://vimeo.com/171950824">.NET Deployment Strategies: the Good, the Bad and the Ugly - Damian Brady</a></li>
</ul>
I didn't retain as much as I would have liked from his talks because they were so full of information, but they did get me fired up about doing my own experiments. I wanted to deploy not only code but also infrastructure, all built and deployed automatically by a continuous integration server. I ended up writing an ASP.NET Core application plus Dockerfiles for PostgreSQL, HAProxy, and the Linux containers running ASP.NET Core. Then, I had Jenkins build and deploy them. I hope to write another blog post about that in the future. <br />
<br />
During that process, I found a few more presentations that I really enjoyed:<br />
<ul>
<li><a href="https://vimeo.com/171704656">Deploying Docker Containers on Windows Server 2016 - Ben Hall</a></li>
<li><a href="https://vimeo.com/171317281">Continuous Integration for Open Source Projects with Travis CI - Kyle Tyacke</a></li>
<li><a href="https://vimeo.com/171317249">Getting Into the Zero Downtime Deployment World - Tugberk Ugurlu</a><br /><a href="https://vimeo.com/171704555">Continuous Integration and Delivery - from the trenches at www.lego.com - Kristian Bank Erbou</a> </li>
</ul>
Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-11855665342219914702015-10-09T16:12:00.000-05:002020-04-29T17:18:14.343-05:00Grab a Cup of Coffee with the Security PS Team: Ask Your Security Questions and Get Advice<div class="separator" style="clear: both; text-align: center;">
<a href="https://www.securityps.com/img/SDLC%20sq.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://www.securityps.com/img/SDLC%20sq.png" /></a></div>
<br />
9AM to 11AM, Wednesday October 14th and Wednesday October 28th<br />
Revocup<br />
11030 Quivira Road<br />
Overland Park, KS 66210 <br />
<br />
This month is <a href="http://www.dhs.gov/national-cyber-security-awareness-month">Cyber Security Awareness Month</a>. In an effort to contribute to our community, Security PS is offering two opportunities to discuss application security challenges you are experiencing and receive free advice. Come get a cup of coffee at Revocup and ask our experts questions. Feel free to bring application code, architecture and design documentation, vulnerability results, or just general questions. We would be happy to have a conversation with you.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-61858757893966414742014-06-22T15:53:00.001-05:002015-04-21T11:30:45.598-05:00OAuth Resource Owner Password Credentials Grant Implementation in WebAPI 2<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_MXbCzj-yStvEBiCnxvBxQfMv73H9KSE2pKjoBnptdo9o8ph0ov17SnhPMID2wI0r6dtjE6lKBFkmnVhDANrEhgyPZsvKo1nt73i8Vobh3NDvc4rFNbQMhM93P3i5Z2_1ZWX5fg/s1600/3343062926_77bc534b31_m.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_MXbCzj-yStvEBiCnxvBxQfMv73H9KSE2pKjoBnptdo9o8ph0ov17SnhPMID2wI0r6dtjE6lKBFkmnVhDANrEhgyPZsvKo1nt73i8Vobh3NDvc4rFNbQMhM93P3i5Z2_1ZWX5fg/s1600/3343062926_77bc534b31_m.jpg" height="200" width="200" /></a></div>
A few customers have been asking about the proper implementation of an OAuth server using Microsoft's WebAPI 2. I spent some time implementing one (just to become familiar with both OAuth and WebAPI) and struggled to find really good resources for using the OWIN OAuth 2.0 Authorization Server (and middleware). I was able to piece together information from a variety of blogs, forum posts, and other sources, but I realized partway through that there was a need to publish additional information to help others. I have provided the source code for a Visual Studio 2013 Express project implementing the Resource Owner Password Credentials Grant, Refresh Token Grant, and an endpoint for revoking access tokens. <br />
<br />
Before you dig into the code, I want to stress that I'm not done! Because of project work and a period of vacation, I will not be able to continue working on it for a month or so. But, I wanted to provide what I had so far. Currently, the code is functional and the example requests and documentation on the Google code page (linked to at the bottom) work. It's ready to be used as a platform to learn on.<br />
<br />
I am not a full-time developer; I just happen to like writing C# code. That means I may not have the prettiest, most efficient code. Also, this code may not be secure. I used it to learn with, and yes, I considered security requirements while developing it, but I haven't had the chance to review it for security vulnerabilities.<br />
<br />
The Google Code Project Home page contains:<br />
<ul>
<li>Request and response examples for each endpoint</li>
<li>A sequence diagram showing which methods are called on a particular provider</li>
<li>A list of top blogs, videos, or other resources I used </li>
<li>A list of all the files I remember modifying when implementing the OAuth server </li>
</ul>
I hope the following resource helps others learn to write OAuth servers using WebAPI 2:<br />
<ul>
<li><a href="https://code.google.com/p/nicksoauthserver/">https://code.google.com/p/nicksoauthserver/</a></li>
</ul>
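For orientation while reading the code, the Resource Owner Password Credentials grant is simple at the wire level (RFC 6749, section 4.3). The /Token path, port, and credential values below are illustrative, not taken from the project:

```
POST /Token HTTP/1.1
Host: localhost:44300
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=alice&password=P@ssw0rd

HTTP/1.1 200 OK
Content-Type: application/json

{"access_token":"<opaque-or-encrypted-token>","token_type":"bearer",
 "expires_in":1199,"refresh_token":"<refresh-token>"}
```

The refresh token grant hits the same endpoint with grant_type=refresh_token and the refresh_token parameter instead of the user's credentials.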
<br />Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-23520533.post-19681552434358463342013-07-09T10:09:00.000-05:002015-04-21T11:31:06.546-05:00Forms Authentication Token Termination in ASP.NET WCF Services<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrc-MEM6sn3WHwciqbUrWGRUyFOVHg84wbW7cFw9czzYe_dlwJMT1NC3EgFcITtxPQhJKut_rVFgC35ipVF0URYNbP8SG8_dVqvevI6xN3Crl-tzRqviDBFOyyWe_D-91K2OEqFg/s1600/BustedBankForm.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrc-MEM6sn3WHwciqbUrWGRUyFOVHg84wbW7cFw9czzYe_dlwJMT1NC3EgFcITtxPQhJKut_rVFgC35ipVF0URYNbP8SG8_dVqvevI6xN3Crl-tzRqviDBFOyyWe_D-91K2OEqFg/s1600/BustedBankForm.png" height="200" width="172" /></a></div>
In my last post (<a href="http://blog.securityps.com/2013/06/session-fixation-forms-authentication.html">Session Fixation & Forms Authentication Token Termination in ASP.NET</a>), I talked about ways to mitigate two types of session-related vulnerabilities in an ASP.NET MVC 4 application. One of those vulnerabilities is also present in many WCF web services. In one mode of operation, WCF web services can authenticate users and issue forms authentication cookies. Since this token contains an encrypted set of values and resides only on the client side, the server cannot invalidate the token and end a user’s authenticated session. This allows attackers to continue using stolen tokens, even after the user logs out.<br />
<br />
One solution for fixing this vulnerability is to issue an ASP.NET_SessionId cookie and to tightly couple it with the forms authentication cookie <a href="http://blog.securityps.com/2013/06/session-fixation-forms-authentication.html">as described previously</a>. Whenever a web service request is issued, the ASP.NET_SessionId value should be used to check whether the session store contains a username and whether it matches the value stored in the forms authentication token. This approach is sound; however, my implementation is experimental. I’m not a WCF or ASP.NET expert.<br />
<br />
In my first attempt to implement this model, I tried using the built-in <a href="http://msdn.microsoft.com/en-us/library/bb386582(v=vs.100).aspx">Windows Communication Foundation Authentication Service</a> (<a href="http://msdn.microsoft.com/en-US/library/system.web.applicationservices.authenticationservice">System.Web.ApplicationServices.AuthenticationService</a>). There were some critical modifications I needed to make to this service for it to satisfy all of my security needs; however, due to its construction and scope, I couldn’t find a good way to extend it or to wrap it with a decorator. Instead, I chose to write my own authentication service. The code can be found below:<br />
<br />
AuthenticationService.cs:<br />
<ul>
<li><a href="https://gist.github.com/sekhmetn/1e6166fd3a5c1f017232">https://gist.github.com/sekhmetn/1e6166fd3a5c1f017232</a></li>
</ul>
<br />
<script src="https://gist.github.com/sekhmetn/1e6166fd3a5c1f017232.js"></script><br />
<br />
Next, I extended the ServiceAuthorizationManager class to provide the capability to validate users’ session and forms authentication cookies for web service calls. It ensures the user has authenticated, and that the identity in the session store matches the identity in the forms authentication token.<br />
<br />
MyServiceAuthorizationManager.cs<br />
<ul>
<li><a href="https://gist.github.com/sekhmetn/5700581">https://gist.github.com/sekhmetn/5700581</a></li>
</ul>
<br />
<script src="https://gist.github.com/sekhmetn/5700581.js"></script><br />
<br />
Finally, in the web.config file, I created two service behaviors: one for unauthenticated access to the login service (anonymousServiceBehavior) and one for authenticated access to all other web services (authenticatedServiceBehavior). Then, in each service definition, I applied the appropriate behavior using the behaviorConfiguration attribute.<br />
<br />
Web.config<br />
<ul>
<li><a href="https://gist.github.com/sekhmetn/5700627">https://gist.github.com/sekhmetn/5700627</a></li>
</ul>
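The gist has the full file; the key wiring is the serviceAuthorization element pointing at the custom manager, applied per service via behaviorConfiguration. A trimmed sketch, in which the type, assembly, and service names are illustrative:

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="anonymousServiceBehavior">
      <!-- no authorization manager: the login service must accept unauthenticated calls -->
    </behavior>
    <behavior name="authenticatedServiceBehavior">
      <!-- every call is routed through the custom ServiceAuthorizationManager -->
      <serviceAuthorization
          serviceAuthorizationManagerType="MyApp.MyServiceAuthorizationManager, MyApp" />
    </behavior>
  </serviceBehaviors>
</behaviors>

<services>
  <service name="MyApp.AuthenticationService"
           behaviorConfiguration="anonymousServiceBehavior">...</service>
  <service name="MyApp.IngredientsService"
           behaviorConfiguration="authenticatedServiceBehavior">...</service>
</services>
```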
<br />
<script src="https://gist.github.com/sekhmetn/5700627.js"></script>
The result is that all WCF calls to the IngredientsService and the ShoppingListService require authentication and ensure that each user's forms authentication token is tightly coupled with the ASP.NET_SessionId.Anonymousnoreply@blogger.com0