I bought a shared hosting plan that lasted me almost a year before I decided to get something more powerful. Today I’ll tell you what I did to get a respectable HTTPS configuration on my recently deployed cloud server.

Motivation

Having a shared hosting plan is great. You log in to cPanel, you have tons of menu options and a search bar to look for settings, and you can call yourself a certified system administrator from day one… On the other hand, you can only serve static content and a bundle of common software options, and your performance depends on the server you’re hosted on: if it has poor specs (or too many clients on it), you’ll see your blog, forum or e-commerce store drag with lag beyond what any user would bear. Besides my static content, I had my own little blog, but even that was too much to ask. I was accessing my blog’s admin page via HTTP, sending my (weakest) password in clear text over the wire. I wanted to add security and serve content only via HTTPS. To do that, my monthly fee would rise from 2€/month to 6€/month, with no added performance benefits. This was the final nail in the coffin for my would-be server. I needed a real server, no shackles included. Fast forward a couple of weeks and I’m currently very happy using DigitalOcean’s cheapest droplet.

Planning ahead for the move

Over the last few months I had a growing interest in NodeJS. I wanted to build a website with it, and I knew I wouldn’t stop thinking about it until I actually had it. Suffice to say that well before my move to a cloud server, I already knew which backend would serve my content. But there were other concerns! What about speed? Node is all about speed, but there’s no built-in cache module. No listening on port 80 or 443 out of the box either, not without running it as root. I searched for a better way and found that I could and should leverage a bulletproof, battle-hardened web server like Apache or NGINX to have requests cached and even handle SSL for me. In turn, this would make my actual server code ever so simple. But which one to choose? I had worked with Apache before, but I had heard of NGINX as a more scalable option with excellent caching capabilities. Having looked through sample configuration files, I found them intuitive enough, and the choice was made. As soon as I got my Droplet and a server certificate I could start working on making everything stick together.

More on the case of NGINX

Node has an HTTPS module, and there are ways to run a Node server on port 443. But does it really make sense to do so? This is my first server, and I might/will use it for more than one project. So how would I ever run multiple servers on standard web-server ports? And what about caching? If I had 10 Node servers running, I’d have some serious code plumbing work to cache requests for all of them. Don’t even get me started on SSL handling. I think you get my point. Node alone is probably not the best way to deploy things: NGINX can act as a reverse proxy. Think about it: SSL settings and caching are all handled for me. I start my Node server(s) on unprivileged ports, then set up config files for my website that redirect all requests to the port on which the Node server is running. This way I can have multiple sites configured, and multiple servers per site (e.g. requests to goncalotomas.com/api-for-a-random-project might get redirected to the Node server running on port 9090).
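To make the routing idea concrete, here’s a rough sketch of what it could look like (the second location path and port 9091 are purely illustrative; only port 9090 appears elsewhere in this post):

```nginx
server {
  listen 443 ssl;
  server_name goncalotomas.com;

  # main site, proxied to a Node process on port 9090
  location / {
    proxy_pass http://localhost:9090;
  }

  # a hypothetical second project, served by its own Node process
  location /api-for-a-random-project {
    proxy_pass http://localhost:9091;
  }
}
```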

NGINX setup

So  here we are, setting up website configuration files with a server  certificate in hand. Let’s see what an initial version would look like:

server {
  # HTTP B gone
  listen 80;
  server_name goncalotomas.com;
  return 301 https://$host$request_uri;
}

server {
  # SSL settings
  listen 443 ssl;
  server_name goncalotomas.com;
  ssl_certificate /path/to/your/ca-bundle/file;
  ssl_certificate_key /path/to/your/private/key;

  location / {
    proxy_pass http://localhost:9090;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}

Remember kids: HTTPS only. It’s the safest way to go. That’s what the first server entry is doing there: redirecting all standard HTTP requests to HTTPS. Note that this is a simple config file I copied from the NGINX docs. That doesn’t necessarily mean you get the best HTTPS config just by using these settings. In fact, if you were to visit securityheaders.io with our current configuration, you’d see a big fat F… because we still don’t have any security headers. But as with any security topic: one should not fiddle with what one does not understand. So I will give you a one-liner definition for each of the security headers we will be using:

Content Security Policy (CSP): Basically a way of whitelisting content sources, including images, scripts and styles. Anything you add to your pages that comes from a source not listed in the CSP will not be executed/displayed.
HTTP Strict-Transport-Security (HSTS): A way to tell browsers to keep using HTTPS connections to speak with your server.
X-XSS-Protection: Force enabling a browser’s XSS filter, just in case a user might have disabled it.
X-Content-Type-Options: Makes sure that MIME types are not inferred from content (so-called MIME sniffing). If the declared content-type is “text/html”, it will always be treated as “text/html”.
X-Frame-Options: Whitelist of sources where your content can be iframed in, essentially offering protection against clickjacking.
HTTP Public Key Pinning (HPKP): List of hashes containing public keys used in your certificates. Useful for protecting against rogue certificate attacks.
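As a sketch of how most of these could look in the NGINX server block (the values below are common illustrative starting points, not a definitive policy; the CSP in particular must be tailored to your own content sources):

```nginx
# Illustrative values only; tune each policy to your own site
add_header Content-Security-Policy "default-src 'self'" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
```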

I need to warn you about HPKP. The current specification strongly recommends having a backup key and listing its public key as well: if your certificate gets compromised, you still have your backup. It is also recommended to keep the backup key offsite and offline, because if you keep both on the same server and an attacker is able to compromise one key, they will find the backup key in no time. I’m warning you because if you keep your backup in the same place, or if you don’t have a backup and your certificate key is compromised, then deploying a new certificate might make your visitors see nasty warnings in their browsers, since you did not pin the new key in the header. That’s the last thing you want!
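For illustration only, a Public-Key-Pins header with a primary and a backup pin would look roughly like this (both pin values are placeholders, not real hashes; you have to compute yours from your actual keys):

```nginx
# Both pin-sha256 values are placeholders; compute real ones from your keys
add_header Public-Key-Pins 'pin-sha256="PRIMARY_KEY_HASH_PLACEHOLDER="; pin-sha256="BACKUP_KEY_HASH_PLACEHOLDER="; max-age=5184000' always;
```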

I could list all the values for each individual header, but this post is already huge. When you query your own website at securityheaders.io, it tells you what you should have in your missing server headers. It shouldn’t take you more than 30 minutes to set them up. 🙂

But wait… There’s more!

There are more things you should keep track of. Firstly, read this wonderful article. And this one. They tell you just how security headers are nothing but a bunch of bullshit if you don’t watch out for your other settings. A successful attacker seldom uses complex attack vectors; he/she will choose the easiest path. If you’re going to work in or be responsible for security, you need to put in some extra effort. Enter the infamous SSL settings.

I have found two main issues with SSL settings: Diffie-Hellman protocol parameters and the ciphersuites that your server can work with. Let’s talk about both:

Diffie-Hellman parameters

There’s a reasonable chance that this website explains this issue way better than I do, but I’ll try anyway because I’m that guy that tries to explain everything. In short, there are a few groups of 1024-bit prime numbers that are used in the Diffie-Hellman key exchange protocol by millions of servers. Would anything tempt the NSA more than precomputing and brute-forcing these numbers so they could eavesdrop on communications that everyone thought were safe? What you can do to solve this issue is use a stronger, out-of-the-mainstream group of Diffie-Hellman parameters to make sure your server’s key exchange is as safe as can be. I trust your Googling skills to find out how to make it work.
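As a sketch (the file path is just an example), generating a custom 2048-bit group with OpenSSL and pointing NGINX at it looks something like this:

```nginx
# First generate the parameters (this can take a while):
#   openssl dhparam -out /etc/nginx/dhparam.pem 2048
# Then reference them inside the server block:
ssl_dhparam /etc/nginx/dhparam.pem;
```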

Accepted ciphersuites

This one is tricky because people tend to overlook important security news. Security vulnerabilities are discovered every now and then, and it’s your job to make sure that no vulnerable configuration is accepted by your server. That is done by maintaining an updated list of accepted ciphersuites. At the time of writing, here is my “ciphersuite string”:

ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

Again, it is your job to keep this list updated: allow newer protocols in and block old or vulnerable ones. Do it right and it will save your bacon. I like bacon, so I’m going to keep my list updated.
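For reference, the cipher string plugs into NGINX roughly like this (the ssl_protocols line reflects what was reasonable at the time of writing; check current recommendations before copying it):

```nginx
# Drop SSLv2/SSLv3 and prefer the server's cipher order
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:...';  # paste the full string from above
```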

Outcome

After this unbelievably long post and all these complicated notions and settings, what you should get is a top grade on both securityheaders.io and ssltest. This means you have put in a good effort to protect your users’ data across the wire. Some users, especially developers, value this very much. If you’re selling a service or product and you haven’t done this work, one might wonder where else you might have saved time and money… 🤔

Bonus: migration headaches

In the beginning I said I had shared hosting and how easy it was to create and manage email accounts, add DNS settings, mess with databases, etc… Big boys have to do everything by hand, I guess… And for your pleasure and amusement, here I am writing about the nightmare that was understanding and migrating everything correctly.

Firstly, the email. My hosting plan had 2GB of disk space for website files and email combined (freaking scrooges!). After a long session of Googling I found my lord and saviour, Zoho. They have a free hosted email package with 5GB, and all you have to do is follow a migration wizard that basically clones your old mailboxes so you don’t lose anything, and gives you precise and intuitive instructions on how to change your DNS settings to make for a smoother journey. If it took me 10 days to find the courage to start, it took me an hour to completely migrate every account. They are awesome.

Then came the DNS settings. I had already deployed NGINX and set up shop with my Node server, so it was just a matter of switching the A record to my new server’s IP address. I don’t really know if this is standard procedure, but my website’s DNS server was on the cPanel service I wanted to cancel. I had to go to my domain settings (this time outside of cPanel) and set the domain’s DNS server to be my host’s primary DNS server (different from the cPanel server). After that was done, several agonising hours passed until DNS magic happened and goncalotomas.com pointed to its new address.

There is still much to be done: performance optimisations, custom 404 pages, true Material Design looks, and there’s a good chance I’ll never be happy with what I have. But I’m glad I had the will and patience to learn about these security settings. 😴

Thanks for sticking with me ’till the end. You’re either a loyal friend or clearly someone with nothing better to do. Either way, you’re awesome. Subscribe to the RSS feed so you get notifications!