Commodore Maze Generator on AWS Lambda via API Gateway

The Commodore Maze Generator Program

I needed something “useful” to output from my Commodore VICE Lambda Custom Runtime that also didn’t require much (if any) input from the requester. Otherwise, I’d have to sanitize the input being passed to a shell script and potentially validate it if I didn’t want unpredictable behavior. I settled on a Commodore 128 version of the Commodore Maze Generator. (The fast and slow commands will be unnecessary once we launch from the Lambda with the -warp option on VICE, but I never changed the program, so they’re left in here.) The fast and slow commands and everything before line 50 are Commodore 128 BASIC 7.0-specific; a Commodore 64 version just requires substituting POKE commands instead. Line 70 is the classic “10 PRINT” trick: chr$(205.5+rnd(1)) picks PETSCII character 205 or 206 (the two diagonal line characters) at random. (See the vice-lambda repo on GitHub for the latest version.)

5 fast
10 scnclr
20 color 0,2
30 color 4,2
40 color 5,1
50 for y=1 to 25
60 for x=1 to 40
70 print chr$(205.5+rnd(1));
80 next x
90 next y
95 slow
100 goto 100

Making sure that the Lambda returns an image for API Gateway

I want to be able to use Lambda Proxy Integration when setting up the API Gateway, so I need to return a format that maps the information properly. The response JSON needs to have:

  • isBase64Encoded set to true (necessary for the translation of the PNG back from base64)
  • headers with "Content-type" and "content-disposition" set (image/png and inline for this example)
  • statusCode of 200 (right now, if we have an error it’s either not caught or happens at the Lambda invocation level)
  • body containing the base64 data. Note the -w 0 argument to turn off line breaks when wrapping… Lambda Proxy Integration doesn’t handle that well.
function handler() {
  cd vice
  # negative seed for BASIC's rnd() so each invocation draws a different maze
  SEED=$((0 - `od -An -N2 -i /dev/random`))
  # feed the program through the keyboard buffer, run it, and screenshot the
  # VIC-II display after 20M cycles (which pass quickly under -warp);
  # silence both stdout and stderr so only $RESPONSE reaches the runtime
  ./x128 -silent -sound -keybuf "
  3 i=rnd($SEED)
  `cat ../maze.bas`
  run
  " -warp -limitcycles 20000000 -exitscreenshotvicii /tmp/$1.png >/dev/null 2>&1
  cd ..

  RESPONSE="{\"isBase64Encoded\": true, \"headers\": {\"Content-type\": \"image/png\", \"content-disposition\":\"inline\"}, \"statusCode\":200, \"body\":\"`base64 -w 0 /tmp/$1.png`\"}"

  echo $RESPONSE
}

Also note that we’re passing in a SEED value because, otherwise, the Commodore 128 will generate the same maze every time.
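For reference, here’s roughly what that seed expression expands to, checked in isolation (the value will vary with each read of /dev/random):

od -An -N2 -i /dev/random
# e.g. prints:  23815
# so SEED becomes $((0 - 23815)) = -23815; a negative argument to rnd()
# re-seeds the C128's random number generator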

Repackage the Lambda and upload as in the Commodore VICE Lambda Custom Runtime blog post.

Setting up the API Gateway

Go to API Gateway in the AWS Console and select [Create API]

APIs Create API

Choose [Build] under “REST API” (we won’t be using OIDC/OAuth2/CORS…)

REST API -> Build

Choose REST, New API, and name your API:

REST, New API, API Name

Create a GET Method and check the checkmark

Create Method
API Method verb selection -> GET
Check the checkmark to commit
  • Choose “Lambda Function” for your Integration Type
  • Check “Use Lambda Proxy Integration”
  • Type the name of the Lambda that you’ll be calling, select it, click [Save], and confirm that you want to add the role/permission for the Lambda.
GET API configuration

Deploy the API

Back on your API view, select the [Actions] dropdown and [Deploy API]

API Actions "Deploy API"

Just create a new dev stage… we’re not going to “production” with this experiment

Create new dev stage

and [Deploy]

It’s broken

If you click the Invoke URL presented, you will get a broken image

Invoke URL
Broken image from the API

Since we’re only sending back PNG files, we can just tell the API that */* is a Binary Media Type by going to Settings on the left sidebar for the API and [(+) Add Binary Media Type]

Add Binary Media Type
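If you’d rather script this step, the same setting can be applied with the AWS CLI; a sketch, with a placeholder REST API id (*~1* is */* with the slash escaped for the patch path):

aws apigateway update-rest-api \
  --rest-api-id abc123 \
  --patch-operations op='add',path='/binaryMediaTypes/*~1*'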

Be sure to save the settings, redeploy the API, and be sure to use the Invoke URL for the correct API! (I had two identical APIs and was “fixing” the wrong one… hence the different API URLs in the screenshots.)

Commodore Maze Generator PNG 1

If you refresh, you should get a new maze from the Commodore maze generator program:

Commodore Maze Generator run 2
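You can also grab a maze from the command line; a quick check against a placeholder invoke URL:

# fetch a freshly generated maze (substitute your stage's invoke URL)
curl -o maze.png https://abc123.execute-api.us-east-1.amazonaws.com/dev
file maze.png   # should report PNG image data if the binary handling is correct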

Some Troubleshooting

  • Be sure to save your settings
  • Test your API call to Lambda in the Method Execution view (Resources -> GET -> Test [lightning bolt])
  • Be sure to redeploy your API after settings change.

Commodore 128 (VICE) Custom Runtime on AWS Lambda

Why a Commodore 128 Custom Runtime??

The first reason I wanted to do a Commodore 128 Custom Runtime on AWS Lambda is that it’s an absolutely ridiculous thing to do. The Commodore 128’s 8502 processor runs at 1 MHz (2 MHz in “fast” mode) natively, vs. the multi-GHz scale of modern hardware.

Another reason is that I’ve had challenges with the slight differences between Amazon Linux and Ubuntu on EC2 instances. This seemed like a good way to exercise working through those differences.

The last reason is that I wanted a platform that would require me to dig a little deeper into how things are managed for a custom Lambda runtime. The only writable directory is /tmp. There are specific conventions used to honor the function.handler format. And the responses (if you don’t have a supported runtime) are handled through callback URLs.

Compiling VICE

It’s a bit of an adventure getting VICE running on Amazon Linux. Ubuntu provides a package for VICE (of course, you’re on your own for the ROMs themselves). With Amazon Linux, you’ll need to download the tarball and compile. Here’s my video recreating that process, through to getting the Lambda running:

Compile VICE and Building the Custom Runtime

The steps… fire up an Amazon Linux EC2 instance and:

  • sudo yum update
  • sudo yum install links -y
  • links https://vice-emu.sourceforge.io and download the vice-3.5.tar.gz
  • tar zxvf vice-3.5.tar.gz from the home directory
  • sudo yum install -y gcc gcc-c++ flex bison dos2unix libpng-devel.x86_64
  • links https://www.floodgap.com/retrotech/xa/ and download xa-2.3.11.tar.gz or whatever version is available
  • tar zxvf xa-2.3.11.tar.gz
  • cd xa-2.3.11 and make
  • Add xa to the path with PATH=~/xa-2.3.11:$PATH
  • cd ~
  • cd vice-3.5 and ./configure --disable-pdf-docs --enable-headlessui --without-pulse --without-alsa and make

You should be able to cd ~/vice-3.5/data/C128 and run:

../src/x128 -silent -sound -ntsc -keybuf "10 graphic 1
20 scnclr
30 circle 1,100,100,20
run
" -warp -limitcycles 8000000 -exitscreenshotvicii image.png

and be able to scp down the image.png, open it, and see a screenshot something like the one below (bump -limitcycles to 10,000,000 if necessary). (Edit: use -warp, not +warp, to enable warp mode.)

Testing x128 for Custom Runtime
C128 Circle

Packaging the Necessary Pieces

Make a vice-package directory (or similar) and grab the contents of vice-3.5/data/C128 (some files, such as build files, can be excluded if you want to trim the package down) along with the vice-3.5/src/x128 binary. Then tar the vice-package directory (tar cvf vice-package.tar vice-package/*) and scp it down to your local computer.
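A sketch of that packaging sequence (paths assume the build locations above; your EC2 address will differ):

# on the EC2 instance
mkdir vice-package
cp vice-3.5/data/C128/* vice-package/    # ROMs, keymaps, etc.
cp vice-3.5/src/x128 vice-package/       # the emulator binary
tar cvf vice-package.tar vice-package/*

# then from your local machine
scp ec2-user@your-ec2-ip:~/vice-package.tar .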

Get the Sample Lambda for Custom Runtime

Go to Lambda -> Functions -> Create Function -> Author from Scratch and select the “Provide your own bootstrap on Amazon Linux 2” option:

Custom Runtime provide your own bootstrap
Custom Runtime template

Copy the bootstrap.sample to the root of the Lambda package you are going to create and name it bootstrap, and copy hello.sh.sample as function.sh (or whatever the first part of your function.handler name is, plus .sh; see the Runtime settings below the code window):

Runtime settings where your function.handler is named.

Constructing the bootstrap

There are a couple of environment variables (XDG_CACHE_HOME and XDG_CONFIG_HOME) that have to be set to /tmp so that VICE can write to them. Be sure the handler in Runtime settings matches <script_name>.<bash_function_name>, or else Lambda won’t be able to find it to invoke. (Strictly speaking, it’s the bootstrap below that won’t find it; you can skip using $_HANDLER and hard-code the name, but then the AWS console’s function configuration won’t do anything useful.) I disabled the -e option because we’re actually going to exit VICE ungracefully on purpose, for simplicity. Be aware that this is the ON ERROR RESUME NEXT (or “try with empty catch block”) of shell scripting: your code will ignore all the other potential failures along the way.

#!/bin/sh
# set -euo pipefail
# we're going to exit VICE on clock cycles so -e option would fail in this case
set -uo pipefail

# otherwise vice tries to write to the 'home' directory that isn't a [writeable] thing in Lambda
export XDG_CACHE_HOME=/tmp
export XDG_CONFIG_HOME=/tmp

# Handler format: <script_name>.<bash_function_name>
#
# The script file <script_name>.sh  must be located at the root of your
# function's deployment package, alongside this bootstrap executable.
source $(dirname "$0")/"$(echo $_HANDLER | cut -d. -f1).sh"

while true
do
    # Request the next event from the Lambda runtime
    HEADERS="$(mktemp)"
    EVENT_DATA=$(curl -v -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
    INVOCATION_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

    # Execute the handler function from the script
    RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) $INVOCATION_ID "$EVENT_DATA")

    # Send the response to Lambda runtime
    curl -v -sS -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$INVOCATION_ID/response" -d "$RESPONSE"
done

Constructing the handler

The handler for this setup needs to output only what is intended as the response. I’m redirecting stderr and stdout to /dev/null because some messages pop up in the current state of the emulator. I’m also using the -silent option to suppress all the errors about missing disk drive and other device ROMs that I don’t care about for this case.

function handler () {
  EVENT_DATA=$2

  cd vice-package

  # feed the program through the keyboard buffer, run for 8M cycles in warp
  # mode, then exit with a screenshot of the VIC-II display in /tmp (the only
  # writable directory in Lambda); silence both streams so only $RESPONSE
  # reaches the runtime
  ./x128 -silent -sound -ntsc -keybuf "10 graphic 1
    20 scnclr
    30 circle 1,100,100,20
    run
  " -warp -limitcycles 8000000 -exitscreenshotvicii /tmp/$1.png >/dev/null 2>&1
  cd ..

  RESPONSE="{\"isBase64Encoded\": true, \"headers\": {\"Content-type\": \"image/png\", \"content-disposition\":\"attachment; filename=$1.png\"}, \"statusCode\":200, \"body\":\"`base64 /tmp/$1.png`\"}"

  echo $RESPONSE
}

The response

The above response is intended to output JSON in preparation for API Gateway Lambda integration. statusCode is required; isBase64Encoded and the Content-type header are needed to convert the body back into an image; content-disposition tells the browser to download it. All of this gets POSTed back by the bootstrap script to the invocation response callback. The body is the base64-encoded PNG file, but in the current invocation on Amazon Linux, I’m getting newlines in the output, so that’s a problem to debug before attaching to API Gateway.

One more missing piece

We also need to pull libpng from our EC2 instance and place it in the root of the Lambda function at the same level as the bootstrap file. Just scp ec2-user@ec2IPaddress:/usr/lib64/libpng\* . for that.

Structure of the zip file and Deploy

Zip up the following pieces into your lambda.zip (the zip file name doesn’t really matter, just the organization of the contents); a sample command follows the list:

  • bootstrap
  • function.sh
  • libpng*
  • vice-package/*
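Assuming those pieces sit at the top level of your working directory, the packaging looks something like:

# -r to recurse into vice-package/; bootstrap and function.sh must be at the zip root
zip -r lambda.zip bootstrap function.sh libpng* vice-package/
unzip -l lambda.zip   # sanity check: bootstrap at the root, not inside a subfolder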

Once uploaded, you should be able to [Test] the function and check the logs. Add a set -x to your bootstrap if things aren’t behaving. You may need to chmod +x your bootstrap if you haven’t tried to run it locally for testing.

kubectl and eks… You must be logged in to the server (Unauthorized)

Say you have an EKS setup that uses IAM API keys with Admin permissions, and you’re using an AWS profile that you’ve confirmed can retrieve the EKS config with aws eks update-kubeconfig --name context-name --region appropriate-region-name-n.

But then kubectl get pods --context context-name still fails, oddly, with “You must be logged in to the server (Unauthorized)”.

Similarly, kubectl describe configmap -n kube-system aws-auth fails with the same message.

Is your username and/or role included/spelled correctly?

If you have access to all of the resources used by EKS then perhaps the ConfigMap is the issue. Check out How do I resolve an unauthorized server error when I connect to the Amazon EKS API server? for more details, and presumably the “You’re not the cluster creator” section.

Debugging steps:

  • Have the cluster creator or a member of the system:masters group run kubectl describe configmap -n kube-system aws-auth and verify that the mapUsers or mapRoles section mentions the identity (with adequate permissions) returned by aws sts get-caller-identity (see the sketch after this list)
  • Double check that if the identity/roles are there that they have adequate permissions.
  • Double check that the identity/role ARN and username actually match. (This was a relatively simple setup, so in my case the cause was just a misspelled username.)
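A minimal sketch of that verification flow, assuming you (or a teammate) can act as the cluster creator; the ARN and username below are placeholders:

# who does AWS think you are?
aws sts get-caller-identity

# as the cluster creator (or a system:masters member), inspect the mapping
kubectl describe configmap -n kube-system aws-auth

# the mapUsers entry should match the caller identity exactly, e.g.:
#   mapUsers: |
#     - userarn: arn:aws:iam::123456789012:user/deploy-user
#       username: deploy-user
#       groups:
#         - system:masters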

Automatically Moving Files Between S3 Buckets with Lambda (part 1)

I have an S3 bucket that I want to attach to an application’s upload area, but I want to move files out of the application-accessible bucket after they’ve been uploaded. Eventually, I want this to happen after a small delay, but initially I wanted to test out the concept itself.

Step 1: Have source and destination buckets in S3

Create buckets for source and destination. The ACLs on both of the buckets are the same (non-public) in my case.

Step 2: Create a Lambda Execution Role

  • Go to IAM > Roles > Create Role
  • Choose Lambda as a Use Case
  • Next: Permissions
  • Search for S3 and check AmazonS3FullAccess
AmazonS3FullAccess selection
  • Search for “lambdabasic” and check AWSLambdaBasicExecutionRole (for CloudWatch logs)
AWSLambdaBasicExecutionRole selection
  • Click [Next: Tags] > [Next: Review] and give your role a name and verify that the S3 and Lambda policies are added:
Verify policies and name role
  • Click [Create Role]

Step 3: Prep the Lambda Code

  • Clone https://github.com/stringsn88keys/move_file_on_upload
  • Be sure to have the correct ruby version (2.7.0 at the time of writing) installed
  • Change into move_file_on_upload folder
  • bundle install locally
  • bundle config set --local deployment 'true' to vendor the gems for the AWS Lambda
  • zip ../move_file_on_upload.zip ** to package the zip

Step 4: Create the Lambda

  • Go to AWS Lambda in the AWS Console and click [Create Function]
  • Name the function, set Ruby 2.7 as the runtime, and use the role you created
Function name, Runtime, and Permissions
  • [Create function]

Step 5: Add S3 Trigger

  • Click [+ Add Trigger]
  • Search and select S3
  • Fill in your source bucket and select all object create events
  • If you get this error (“configurations overlap”), select your bucket in S3, click the Properties tab, and you’ll see an Event Notification that’s been orphaned by a previous config (be sure to delete the dependent Lambda as well if it exists):
Configurations overlap error

Step 6: Upload your code

  • Go back to the [Code] tab for your lambda and select the [Upload from] dropdown and select zip file.
Upload .zip file
  • Go to the [Configuration] section and add FROM_BUCKET and TO_BUCKET environment variables so your Lambda knows what to process (a CLI equivalent is sketched below)
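Those environment variables can also be set from the CLI; a sketch with placeholder bucket names:

aws lambda update-function-configuration \
  --function-name move_file_on_upload \
  --environment "Variables={FROM_BUCKET=my-source-bucket,TO_BUCKET=my-destination-bucket}"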

Step 7: Monitor and test

  • You can test the Lambda execution via the Test dropdown
  • S3 put is one of the available test templates
  • Click “Create” after selecting S3 Put and you’ll be able to watch the event get executed.
  • Go to CloudWatch -> Logs -> Log Groups and you should see a log group for /aws/lambda/your_function_goes_here
  • If all else is successful, you should see “the specified key does not exist” (the S3 Put test template references a key that isn’t actually in your bucket)

Step 8: Test it live

  • Create a folder in your source bucket.
  • Go into that folder
  • Upload a file.
  • The file should pretty quickly be copied to the destination bucket in the same folder structure and removed from its original location.
  • The top-level folder under the bucket should remain. (A CLI version of this test is sketched below.)
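The live test can be scripted as well; a sketch using the placeholder bucket names from earlier (aws logs tail requires AWS CLI v2):

# upload a file into a folder in the source bucket...
aws s3 cp test.txt s3://my-source-bucket/somefolder/test.txt

# ...then confirm it moved to the destination and left the source
aws s3 ls s3://my-destination-bucket/somefolder/
aws s3 ls s3://my-source-bucket/somefolder/

# and watch the function's log output
aws logs tail /aws/lambda/move_file_on_upload --follow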

AWS Certified Developer – Associate (DVA-C01) Experience

It’s been a little over 1 month since my AWS Certified Solutions Architect – Associate (SAA-C02) exam. After finishing the Solutions Architect Associate exam, I immediately registered for the Developer Associate exam, which I took yesterday and passed.

Wasting Time

I took a practice exam (through Linux Academy) shortly after passing the SAA exam, having already gone through most of a Developer Associate training course. DO NOT DO THIS UNLESS YOU ARE READY TO DIVE DEEP INTO ANYTHING YOU MISSED. I managed about 72% on the practice exam, which is essentially a passing score, so I let myself lose focus over the holidays.

My next mistake was going through a couple of versions of the course (30 hours each) instead of just using the practice test results to read the white papers, FAQs, and AWS service pages I needed to review. (Linux Academy/A Cloud Guru courses provide links to the associated AWS references for this.)

Foundations in SAA

I highly recommend getting the Solutions Architect – Associate first. It covers a good breadth of topics that will give you a solid foundation. I recommend the Linux Academy courses that have labs that you can walk through via a transcript just to traverse all the areas you need to know.

Differences from SAA

All of the AWS Code* services factor into a significant chunk of the DVA exam. If you have solid experience with docker, k8s, and git, you’ll probably have a better than 50% chance on questions about services that you didn’t look into, but there are also things that are about the “AWS way” or the “way that AWS services push you to do things” that I shook my head at. (Physical team organization? Wat?)

Also…

  • Learn how to set up X-Ray on the various compute environments.
  • Create and deploy a Serverless Application Model project and dig into the configuration files for it.
  • 0 bytes is the minimum object size; 100 MB is the recommended threshold for using multipart upload to S3; at 5 GB multipart is required; 5 TB is the object size limit.
  • cfn-init
  • View Protocol Policy in CloudFront (https/http)
  • Bucket policies for enforcing server side encryption
  • Learn supported platforms for CodeDeploy
  • Learn lifecycle hooks for CodeDeploy (just look at the diagrams a few times)
  • Learn which services trigger Lambda asynchronously and synchronously
  • Lambda requires dependency packaging, and CPU is controlled by RAM allocation.
  • Dead letter queues
  • Lambda, by definition, does not do in-place deploy (because in-place requires a tangible server)

AWS Certified Solutions Architect Associate (SAA-C02) Experience

Prep

Online Exam

  • PearsonVue browserlock kept detecting itself on Catalina on my MacBook Air and exiting, so I ended up using a Windows laptop instead.
  • OnVue app on Windows kept detecting “gamebar”, which I had to disable (and reboot Windows to fully take effect)
  • Make sure the laptop that you’re using can be maneuvered to show all the workspace within reach.
  • Have a phone handy for check-in, but also a convenient place to stash it out of sight/sound range.
  • Make sure anything with any print or writing is out of view and out of reach of the workspace area before check-in.

Contents That I Could’ve Been More Prepared For

  • SQS vs. Kinesis use cases
  • Priority queueing – I think this requires either Amazon MQ or separate SQS queues (depending on priority levels)
  • ECS launch types. ECS vs. Fargate vs. EKS
  • Amazon RDS Read Replicas vs Aurora Global Database

Deploying a Minecraft Server to EC2 (with some cost analysis)

Caveat… this is likely not the option you want to pursue for a small-to-medium-scale Minecraft server. Lightsail will be more cost-effective due to the amount of data transfer involved. Also, this is partly an exercise in navigating Amazon Linux as a largely Ubuntu user. If you really want to host your own Minecraft server on a virtual server, I’ve also done this exercise (minus 90% of the steps) on a 2GB Linode (get a $100, 60-day credit through that referral link), and you will not get a huge egress bill for the insane amount of data you transfer out. OR, if you want a fairly plug-and-play solution, PebbleHost offers Minecraft hosting for as little as $1/month (a $3–5 basic plan is recommended, depending on your needs).

Purchase a Domain

I’m going through Namecheap for a .online domain because… well… it’s cheap… and registering revrestfarcenim.online (a .online domain through Route 53 would be $39 vs. $1.88 for the first year with Namecheap).

Launch an EC2 instance

If you’re trying to use free-tier resources for this, you’ll want to go for a t2.micro, but you’ll also need to modify the java parameters for the server to fit within those memory limits.

  • Go to Services -> EC2
  • Scroll down in the main window to “Launch instance” and click [Launch Instance]
  • Select Amazon Linux 2 AMI (should be the first option)
  • I’m selecting t3a.small for this to be similar in “virtual” resources as the $10-15/mo virtual server hosting providers (including Amazon Lightsail).
  • Click [Next: Configure Instance Details]
  • You’ll get a default vpc and subnet created and selected… if you don’t want to use these, you can click the available links to create new ones.
  • For “Auto-assign Public IP”, I have “Use subnet setting (Enable)” because I’m going to want to have this publicly accessible.
  • Scroll to near the bottom of the “Step 3” form and find “T2/T3 Unlimited” and uncheck “Enable” unless you want to run up a bill because you forgot about the Minecraft server.
  • Click [Next: Add Storage] to configure your space.
  • Click [Next: Add Tags]
  • Click [Next: Configure Security Group]
  • Click [Add Rule] and add a Custom TCP Rule that allows traffic to port 25565 (the port for Minecraft) from 0.0.0.0/0
  • Click [Review and Launch]
  • Click [Launch] and choose “Create a new key pair” named “minecraftserver”
  • Click on Instances and select your new instance, noting the IPv4 Public IP in the instance details below (leave tab open for reference in the next section)

Create a Hosted Zone in Route 53

  • In a new tab, go to Services -> Route 53 -> Hosted Zones
  • Click [Create hosted zone]
  • Type in your domain name for your server and select “Public hosted zone”
  • Copy the values from the NS record and populate those as the nameservers for your domain (for me, this is on Namecheap):
Route 53 NS record
Namecheap Custom DNS settings
  • Now go back to Route 53 and [Create Record]
  • Choose “Simple routing”
  • Click [Define simple record]
  • Leave the record name blank.
  • Under “Value/Route traffic to” select “IP address or another value depending on the record type” and paste your IP in.
  • Be sure “Record type” is “A” and click [Define simple record]
  • Click [Create records] on the “Configure records” screen. (A quick DNS check follows below.)
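Once the NS change has propagated, you can verify the new A record resolves to your instance before moving on:

dig +short revrestfarcenim.online   # should print the instance's public IPv4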

Install and setup Minecraft

  • Using your keypair from instance creation, ssh into your instance
chmod 400 minecraftserver.pem # or whatever the filename is
ssh -i minecraftserver.pem ec2-user@revrestfarcenim.online
  • Create minecraft server folder and user
sudo mkdir /srv/minecraft-server # assuming EBS mount
sudo adduser --system --home /srv/minecraft-server minecraft
sudo chown minecraft:minecraft /srv/minecraft-server
  • Do updates and install java
sudo yum update
sudo amazon-linux-extras install java-openjdk11
  • Download minecraft server, run for the first time, and set the eula.txt
cd /srv/minecraft-server # download into the directory the service will run from
sudo -u minecraft wget https://launcher.mojang.com/v1/objects/a412fd69db1f81db3f511c1463fd304675244077/server.jar
sudo -u minecraft java -Xmx1024M -Xms1024M -jar server.jar nogui
# you'll get an error, so edit the eula.txt
sudo -u minecraft nano eula.txt # or vi, just set eula=true
# run minecraft again and try to connect to server at revrestfarcenim.online

Making Minecraft a service

  • Run sudo nano /lib/systemd/minecraft-server.service
  • Paste a config similar to:
[Unit]
Description=start and stop the minecraft-server

[Service]
WorkingDirectory=/srv/minecraft-server
User=minecraft
Group=minecraft
Restart=on-failure
RestartSec=20
ExecStart=/usr/bin/java -Xms1024M -Xmx1024M -jar server.jar nogui

[Install]
WantedBy=multi-user.target
  • Now, enable the service with sudo systemctl enable /lib/systemd/minecraft-server.service
  • And start the service with sudo systemctl start minecraft-server.service
  • As it’s starting, you can check the status with sudo systemctl status minecraft-server.service.
  • With this systemctl setup, you should also be able to reboot the instance and have the Minecraft server come back up

Troubleshooting

  • If you don’t go with the default gateway and subnets created with the instance, you may find yourself having to explicitly set up an Internet Gateway and a Route Table (0.0.0.0/0 to your igw)
  • Make sure the associated security group is allowing port 25565 to connect (especially if SSH is working); see the check below
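A quick way to test that port rule from your local machine without launching the Minecraft client:

# -v verbose, -z scan without sending data; succeeds if 25565 is reachable
nc -vz revrestfarcenim.online 25565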

Teardown

Be sure to delete your hosted zone (you’ll need to delete the A record before deleting the zone) and terminate your instance to avoid running up charges for things you’re not using. I deleted the VPC as well just to avoid clutter and half-baked subnets and security groups, but that’s only because I have nothing of long-term value in the account.

Cost Analysis

There are multiple hits that you’re going to take by hosting this on EC2:

  • egress costs (9¢ per GB after your first GB): in my limited tests, the Minecraft worlds we started up required 100–200 MB to initially download per session. It’s unclear if that’s the case for every session, but if 20 of your friends each use the server for one session, that might be 9¢ × 0.20 GB × 20 = 36¢… that could add up quickly. By contrast, a different hosting provider (including Lightsail) bundles 2 TB of transfer instead.
  • hosted zone cost (50¢ for a distinct domain’s hosted zone)
  • If you use a separate EBS volume, minimum cost there is 80¢.
  • t3a.small costs 1.88¢ per hour, or $13.53/month… you could shut the server down during off-hours, but then you’re not comparing against “always-on” options.

Monitoring your S3 buckets for #omgcat usage

If you’re serving up a static website from S3, especially if you have larger assets stored there, you may want to put monitoring on the requests or bytes downloaded from S3, just to make sure someone’s not running up terabytes of transfers or millions of requests.

Enabling Metrics on S3

You will have to enable metrics on S3 in order to get CloudWatch alarms on them.

  • Go to Services -> S3 -> Buckets and select the bucket for your static site.
  • Select the Management tab, then [Metrics], and click the pencil icon next to the bucket icon.
  • Once enabled, the metrics will take a bit to populate. (A CLI sketch for enabling them follows.)
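Request metrics can also be enabled from the CLI; a sketch with a placeholder bucket name, using a filter that covers the entire bucket:

aws s3api put-bucket-metrics-configuration \
  --bucket my-static-site-bucket \
  --id EntireBucket \
  --metrics-configuration '{"Id": "EntireBucket"}'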

Setting up Simple Notification Service (SNS)

(These are the same instructions as in Monitoring your CloudFront #omgcat usage)

Set up a topic

  • First, you’re going to want to be notified. Go to Services -> Simple Notification Service to set up a pathway for that to happen.
  • Next, click “Topics” and then [Create topic]
  • Name your topic something that adequately describes the purpose (I just used domainname-com)
  • Scroll down to [Create Topic]

Set up a subscription

  • Under “Amazon SNS” left sidebar, click “Subscriptions” and [Create Subscription]
  • Click on the Topic ARN field and you should see an ARN ending with your topic name. Click that ARN.
  • Under Protocol, select your preferred method of notification (I’m going with SMS).
  • Under Endpoint, enter your cell number, including country code (+18005551212 for (800) 555-1212 in the US). The CLI equivalent is sketched below.
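For reference, the topic and SMS subscription can also be created from the CLI (topic name and phone number are placeholders):

# create the topic and capture its ARN
TOPIC_ARN=$(aws sns create-topic --name domainname-com --output text --query 'TopicArn')

# subscribe a phone number for SMS delivery
aws sns subscribe --topic-arn "$TOPIC_ARN" \
  --protocol sms --notification-endpoint +18005551212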

Setting up a CloudWatch Alarm

  • Go to Services -> CloudWatch -> Alarms and [Create alarm]
  • [Select metric] and select S3
  • If you don’t see “Request Metrics per Filter” then the metrics haven’t started populating yet.
  • Check “GetRequests” or “BytesDownloaded” and [Select Metric]
  • Set conditions as you would like to have flag any anomalies and click [Next]
  • Choose “In Alarm” and “Select an existing SNS topic” and click in the box below “Send Notification To…” to get suggestions and select the SNS topic corresponding to the notification method you set up. Click [Next]
  • Name your alarm and click [Next]
  • Review the summary and click [Create Alarm]

Monitoring your CloudFront #omgcat usage

Ok, so you’ve created a lovely static site and/or set up a CloudFront distribution for https for it. But CloudFront bills by the GB. What if somebody decides your assets are perfect to hotlink to, or just straight up makes an insane boatload of requests? How do you protect yourself from getting a frightening bill before AWS Budgets can even notify you?

Setting up Simple Notification Service (SNS)

Set up a topic

  • First, you’re going to want to be notified. Go to Services -> Simple Notification Service to set up a pathway for that to happen.
  • Next, click “Topics” and then [Create topic]
  • Name your topic something that adequately describes the purpose (I just used domainname-com)
  • Scroll down to [Create Topic]

Set up a subscription

  • Under “Amazon SNS” left sidebar, click “Subscriptions” and [Create Subscription]
  • Click on the Topic ARN field and you should see an ARN ending with your topic name. Click that ARN.
  • Under Protocol, select your preferred method of notification (I’m going with SMS).
  • Under Endpoint, enter your cell number, including country code (+18005551212 for (800) 555-1212 in the US)

Get Notified

  • Go to Services -> CloudFront -> Alarms
  • [Create Alarm]
  • Under Metric, choose the threshold that you want to detect on… maybe it’s Requests, maybe it’s Bytes Downloaded…
  • Select the distribution that you want to watch ( domainname.com should be mentioned in the dropdown)
  • For “Send a notification to”, select the SNS topic that corresponds to the notification method you set up.
  • Since mine is a dev/test site, I don’t expect more than a request/second, so I set the threshold low.
  • Finally, [Create Alarm]

Testing the Alarm

  • If you have a low enough threshold you can probably just hold down F5 (or whatever your refresh key is) for a few seconds. (Word of caution: Don’t do this with a page that downloads a lot of assets!)
  • In bash you can also do the following.
for i in {1..61}
do
  curl https://domainname.com
done
  • If your notifications are working, you should get a message through your preferred notification method.
  • Under Services -> CloudWatch -> Alarms, you should also see your Alarm count be > 0.

Adding a https to an S3 static site via CloudFront

Ok, so we’ve set up a static site hosted from an S3 bucket with a custom domain using Route 53. But sadly, it’s:

Not Secure

Request a Certificate in Certificate Manager

  • Go to Services -> Certificate Manager
  • Click [Request a Certificate]
  • In the window that opens from “Request or Import a Certificate with ACM”, enter your domain name (domainname.com) and click [Next]
  • Select DNS validation and click [Next]
  • Click [Review]
  • Click [Confirm and Request] if the details look correct.
  • Expand the domain in validation:
  • Click [Create record in Route 53] and confirm by clicking [Create] again.
  • You’ll be waiting from several minutes to half an hour for the validation to happen, during which time status will display as “Pending validation”
  • Click [Continue] to finish the request process and go back to the Certificate Manager main screen.
  • Click the (refresh icon) button to update status, and when status turns to “Issued” you are ready to use it in CloudFront.
Pending validation
Ready for use

Setting up a CloudFront distribution

  • In the AWS Console, go to Services → CloudFront
  • Click [Create Distribution]
  • Click [Get Started] under Web

Create Distribution

  • Under “Origin Domain Name” select the selection under “Amazon S3 buckets” that corresponds to your static web site bucket. (e.g., domainname.com.s3.amazonaws.com)
  • Optional: Restrict Bucket Access [Yes] so that you can control access through the CloudFront distribution alone.
    • Set “Origin Access Identity” to “Create a new identity”
    • Set “Grant Read Permissions on Bucket” to “Yes, Update Bucket Policy”
  • Under Viewer Protocol Policy I select “Redirect HTTP to HTTPS” just to keep things uniform.

Set up SSL

  • Under Alternate Domain Names, enter your domain name (e.g., domainname.com)
  • Select “Custom SSL Certificate”
  • Click “Request or Import a Certificate with ACM”
  • If you go back to CloudFront you should be able to select “Custom SSL Certificate” now and the certificate corresponding to your domain name should show up in suggestions:
  • Scroll down and leave defaults until you get to “Default Root Object”. You’ll want to set this to the name of the document to bring up (e.g., index.html) if the user browses to / on the domain.
  • Optional: I set Logging to On and selected my logging bucket that I used for the static site as the bucket, adding a log prefix for it.
  • To finish, click [Create Distribution]
  • You may be waiting quite a while for the changes to propagate to the edge locations, but at some point before “In Progress” changes to “Deployed”, you will be able to pull the site up via the domain listed under the “Domain Name” column in your list of CloudFront distributions.

Pointing the domain name at your distribution

  • Go back to Route 53 and go into the hosted zone for your domain name
  • Check the checkbox next to your A record and then go up to Actions -> Edit
  • Change “Value/Route traffic to” from “Alias to S3 endpoint” to “Alias to CloudFront distribution” in the “Choose Distribution” input box.
  • Enter the distribution’s domain name (“asdfkjdfasoiadsf9u.cloudfront.net.”) as the value. (The new interface wasn’t suggesting distributions like the last version of the interface did… it may change next week, of course.)

Locking down S3

If you selected “Restrict bucket access” and had CloudFront update your S3 policy, the public access setting on the bucket is still unaffected. You’ll want to remove that public access:

  • Go back to Services -> Amazon S3
  • Go to your domainname.com bucket
  • Click Permissions
  • Click Block public access
  • Check “Block all public access” and click [Save]. (A CLI equivalent follows.)
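The equivalent of that last checkbox via the CLI, with a placeholder bucket name:

aws s3api put-public-access-block \
  --bucket domainname.com \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true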

Some other details

If you want JavaScript and forms to function properly, you’ll want to set up a CORS configuration by going to your S3 bucket, selecting the Permissions tab, and clicking CORS configuration:

<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>https://thomaspowell.work</AllowedOrigin>
   <AllowedMethod>PUT</AllowedMethod>
   <AllowedMethod>POST</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>

   <AllowedHeader>*</AllowedHeader>
 </CORSRule>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
 </CORSRule>
</CORSConfiguration>

Some mistakes I made:

  • A certificate for *.domainname.com does not cover domainname.com. You have to add both if you want wildcard and domainname.com itself covered.

Next up… preventing someone from running up a $1,000 AWS bill by hammering your site (i.e., monitoring your site’s access… with better granularity than AWS Budgets).