I moved this blog from Tumblr, where it had been hosted for nine years, to a static Jekyll site stored on Amazon S3. The site is generated and deployed automatically by Travis, and served through Amazon Cloudfront to allow SSL/HTTPS.
The setup is very simple, and I like how easy it is to add new posts by doing git push. There were a few steps and a little configuration to get it to work, however, so I thought I should write it down in case someone else (or future me) is interested in doing something similar.
Here’s a quick overview of what I will go through:
Adjust your current Jekyll setup (assuming the site is using Jekyll already).
Create an S3 bucket that will be used to serve the site.
Add a Travis config that will build the Jekyll site and upload (deploy) the result to the S3 bucket.
Create a Cloudfront distribution that serves the S3 bucket, enabling fast content delivery through HTTPS/HTTP2 and custom domain names.
Change your current DNS records to point to the Cloudfront distribution.
Create a certificate for your domain through Let’s Encrypt, download it, and upload it to Amazon’s IAM service.
Change the Cloudfront distribution from step 4 to use the new certificate.
It’s worth mentioning that if you are already hosting your site somewhere under the domain name that you will use in step 6 above, you can go ahead and create the certificate before doing anything else. It will save you some time since you can configure SSL (step 7) while you are creating the Cloudfront distribution (step 4).
You don’t really need to change anything in your Jekyll setup. Travis should be able to build the site if your Gemfile is up to date.
The only thing to think of, if you are using Jekyll, is to make sure that your _config.yml contains these lines:
[code]include:
- .well-known[/code]
This allows Jekyll to include the Let’s Encrypt challenge when your site is generated. Otherwise, directories beginning with a . (period/dot) will not be included in the generated site.
Go into the S3 console and create a new S3 bucket to store your site. The bucket needs to have a bucket policy and to have static website hosting enabled.
Under bucket properties, permissions, click “Edit bucket policy” and enter the following:
[code]{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[your bucket name]/*"
    }
  ]
}[/code]
Under the “Static Website Hosting” pane, enable static website hosting. Set the index document to index.html. If you have a custom error document, you can update the error document setting now as well.
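If you prefer the command line, the same bucket setup can be sketched with the AWS CLI; the bucket name here is hypothetical, and the bucket policy is assumed to be saved in policy.json:

```shell
# Hypothetical bucket name; assumes the bucket policy above is saved as policy.json.
aws s3api put-bucket-policy --bucket my-blog-bucket --policy file://policy.json
# Enable static website hosting with index.html as the index document.
aws s3 website s3://my-blog-bucket/ --index-document index.html
```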
If you upload a document called index.html, your site should be accessible through a URL like [your-bucket-name].s3-website-[aws-region].amazonaws.com. Make sure it works now by opening this address in your browser.
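From the command line, a quick way to check is to request the index document from the website endpoint; the bucket name and region here are placeholders:

```shell
# Should return HTTP 200 once the bucket is set up correctly.
curl -I http://[your-bucket-name].s3-website-[aws-region].amazonaws.com/
```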
Edit your .travis.yml file to look something like this (the deploy section uses Travis’ S3 provider; adjust the details to your setup):
[code]language: ruby
rvm:
- 2.3.1
script:
- bundle exec jekyll build
deploy:
  provider: s3
  bucket: your-bucket-name # Update this
  region: eu-west-1 # Or whatever region you use
  skip_cleanup: true
  local_dir: _site
  on:
    branch: master # You can use another branch if you want
  dot_match: true[/code]
Update the rvm section to match the version of Ruby that you want to use. I’m building from the master branch, but you might want to change that to a different branch name. The dot_match setting makes sure that files beginning with . (period/dot) are uploaded to S3 as well.
Copy your access key id from Amazon (I created a new user for the purpose of serving this site, with access only to the S3 bucket), then paste it when the following command asks for input. Finish by pressing control-D:
travis encrypt --add deploy.access_key_id
Do the same for the secret access key:
travis encrypt --add deploy.secret_access_key
The above should have added two encrypted keys to your .travis.yml file:
[code]deploy:
  provider: s3
  access_key_id:
    secure: ⋯
  secret_access_key:
    secure: ⋯[/code]
Now Travis should be able to build and deploy your site when you push a commit to your remote repo (I use Github).
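If you want to sanity-check the final .travis.yml before pushing, the Travis CLI (the same travis gem used for the encrypt commands above) has a lint command:

```shell
# Validates the syntax and structure of your Travis config.
travis lint .travis.yml
```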
AWS Cloudfront is a CDN that you can use to serve any static content. In our case, it makes it possible to serve pages from S3 with HTTP, HTTPS and HTTP2.
We don’t have a certificate, so we can’t enable SSL/HTTPS yet, but we need to be able to access the site through the custom domain before we generate a Let’s Encrypt certificate. Otherwise the Let’s Encrypt verification won’t work.
If you are already hosting your site somewhere else, you can hold off on this step until you have the Let’s Encrypt certificate.
Now that we have an S3 bucket with a static site, we can go into the Cloudfront console and create a new distribution. Make it a web distribution, and change the following values:
Origin Domain Name: [your-bucket-name].s3-website-[aws-region].amazonaws.com
Don’t select the S3 bucket from the dropdown suggestions; directory indexing will not work if you do.
Default Root Object: index.html
The AWS region in the origin domain name is in the format “eu-west-1”.
Wait a couple of minutes for the new distribution to be created. Check the status column to know when it’s active. According to Amazon, it should take less than 15 minutes.
Create or update your DNS records. Make a CNAME record that points to the cloudfront.net domain visible in the Cloudfront console. For example:
[code]www.laszlo.nu. CNAME d19chveh49i9t2.cloudfront.net.[/code]
Let’s Encrypt
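Before moving on to the certificate, it’s worth checking that the new record resolves; the hostname here is an example:

```shell
# Should print the cloudfront.net domain that the CNAME points to.
dig +short www.example.com CNAME
```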
It’s probably easier to use AWS Certificate Manager, but I like Let’s Encrypt, so I generated a new certificate for my site using Certbot in manual mode. Then I uploaded the certificate to IAM using the AWS command line tool.
Get Certbot if you don’t have it installed. There are installation instructions for many OSes and distributions on the Certbot site.
Once you have Certbot, the following command will generate and download the new certificate to /tmp/certbot/config/live/example.com:
[code]certbot certonly \
  --manual \
  --config-dir=/tmp/certbot/config[/code]
Enter your domain name, or several domain names. To verify that you are the owner of the domain, you have to create files on your site under the /.well-known/acme-challenge/ directory. You will get one token per domain name; just create each file with the specified content:
[code]Make sure your web server displays the following content at
http://laszlo.nu/.well-known/acme-challenge/[some long token] before continuing:
[a long string of random letters][/code] If you get an error saying The client sent an unacceptable anti-replay nonce while generating your certificates, just start over. I haven’t figured out why this happens, but it seems like a pretty common issue. It might be timing related.
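Since the generated site includes the .well-known directory (the _config.yml setting from earlier) and dot files are uploaded thanks to the dot_match deploy option, one way to publish the challenge is to commit it to your Jekyll source; the token and contents below are hypothetical placeholders:

```shell
# Hypothetical token and contents; certbot prints the real values for your domain.
TOKEN='some-long-token'
CONTENTS='a-long-string-of-random-letters'
mkdir -p .well-known/acme-challenge
printf '%s' "$CONTENTS" > ".well-known/acme-challenge/$TOKEN"
# Commit and push; Travis deploys the file to S3:
#   git add .well-known && git commit -m 'Add ACME challenge' && git push
```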
Upload the certificate to IAM
To use the certificate with a Cloudfront distribution, it has to be uploaded to AWS. One option is to upload it to IAM using the AWS command line tool:
[code]aws iam upload-server-certificate \
  --server-certificate-name ExampleComCert \
  --certificate-body file://cert.pem \
  --certificate-chain file://chain.pem \
  --private-key file://privkey.pem \
  --path /cloudfront/examplecom/[/code]
Replace example.com with your domain name, or use some other identifier that helps you keep track of your certificates.
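To verify the upload, you can list the server certificates stored under the Cloudfront path:

```shell
# Requires AWS credentials with IAM read access.
aws iam list-server-certificates --path-prefix /cloudfront/
```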
Now that we have the certificate in AWS, we can go into the Cloudfront console and change the distribution that we created earlier. Edit the following value:
SSL Certificate: Custom SSL Certificate; select your cert (example.com) in the dropdown.
Save your distribution, then wait for it to be deployed across Cloudfront. In a couple of minutes, you should be able to access your site through https://example.com without getting certificate warnings. If you inspect the certificate in your browser, it should show Let’s Encrypt as the issuer.
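You can also inspect the served certificate from the command line; the domain here is a placeholder:

```shell
# Prints the issuer and validity dates of the certificate Cloudfront serves.
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```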
Now everything is done, and you should be able to browse your site using HTTPS or even HTTP2 and push new content with a simple git push. I really like how everything fits together.
There are a lot of things that could be tweaked or changed to your needs. Here are some ideas:
Make Cloudfront only serve your site using HTTPS/HTTP2.
Use another static site generator than Jekyll.
Use Amazon’s Route 53 for DNS. I’m currently using Zoneedit, because I got a free account ages ago and never felt the need to switch.
Use Amazon’s certificate manager to manage certificates. This is probably easier, but personally I didn’t want to go all AWS yet.
Deploy to something other than S3. Cloudfront can easily be pointed to some other backend. One feature I miss in Travis’ S3 plugin is the ability to remove old files that are no longer used.
If you ran into any problems or think that I missed something important, please contact me. Good luck!