Commodore 128 (VICE) Custom Runtime on AWS Lambda

Why a Commodore 128 Custom Runtime?

The first reason I wanted to do a Commodore 128 Custom Runtime on AWS Lambda is that it’s an absolutely ridiculous thing to do. The Commodore 128’s 8502 processor runs natively at 1 MHz (2 MHz in “fast” mode) vs. the multi-GHz scale of the hardware Lambda runs on.

Another reason is that I’ve had challenges with the slight differences between Amazon Linux and Ubuntu on EC2 instances. This seemed like a good way to exercise working through those differences.

The last reason is that I wanted a platform that would require me to dig a little deeper into how things are managed for a custom Lambda runtime. The only writable directory is /tmp. There are specific conventions used to honor the function.handler format. And responses (if you don’t have a supported runtime) are handled through callback URLs.

Compiling VICE

It’s a bit of an adventure getting VICE running on Amazon Linux. Ubuntu provides a package for VICE (of course, you’re on your own for the ROMs themselves). With Amazon Linux, you’ll need to download the tarball and compile it. Here’s my video recreating that process through to getting the Lambda running:

Compiling VICE and Building the Custom Runtime

The steps (also consolidated into a script sketch after this list)… fire up an Amazon Linux EC2 instance and:

  • sudo yum update
  • sudo yum install links -y
  • links https://vice-emu.sourceforge.io and download the vice-3.5.tar.gz
  • tar zxvf vice-3.5.tar.gz from the home directory
  • sudo yum install -y gcc gcc-c++ flex bison dos2unix libpng-devel.x86_64
  • links https://www.floodgap.com/retrotech/xa/ and download xa-2.3.11.tar.gz or whatever version is available
  • tar zxvf xa-2.3.11.tar.gz
  • cd xa-2.3.11 and make
  • Add xa to the path with PATH=~/xa-2.3.11:$PATH
  • cd ~
  • cd vice-3.5 and ./configure --disable-pdf-docs --enable-headlessui --without-pulse --without-alsa and make
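
For reference, here’s the whole build as a single script. This is a sketch: the xa download URL is an assumption (the steps above locate the tarballs by browsing with links), so adjust versions and paths as needed.

#!/bin/sh
# sketch: build headless VICE 3.5 on a fresh Amazon Linux EC2 instance
sudo yum update -y
sudo yum install -y gcc gcc-c++ flex bison dos2unix libpng-devel.x86_64

# xa cross-assembler (the VICE build needs it on the PATH)
curl -LO https://www.floodgap.com/retrotech/xa/dists/xa-2.3.11.tar.gz   # URL is an assumption
tar zxvf xa-2.3.11.tar.gz
(cd xa-2.3.11 && make)
PATH=~/xa-2.3.11:$PATH

# VICE itself, configured for headless use (no GUI toolkit, no audio backend)
tar zxvf vice-3.5.tar.gz   # tarball downloaded from https://vice-emu.sourceforge.io
cd vice-3.5
./configure --disable-pdf-docs --enable-headlessui --without-pulse --without-alsa
make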

You should be able to cd ~/vice-3.5/data/C128 and run:

../src/x128 -silent -sound -ntsc -keybuf "10 graphic 1
20 scnclr
30 circle 1,100,100,20
run
" -warp -limitcycles 8000000 -exitscreenshotvicii image.png

and be able to scp image.png down, open it, and see a screenshot something like the one below (bump -limitcycles to 10,000,000 if necessary). (Edit: use -warp, not +warp, to enable warp mode.)

Testing x128 for Custom Runtime
C128 Circle

Packaging the Necessary Pieces

Make a vice-package directory (or similar), grab the contents of vice-3.5/data/C128 (some files, such as build files, can be excluded if you want to trim the package down) along with the vice-3.5/src/x128 binary, tar the vice-package directory (tar cvf vice-package.tar vice-package/*), and scp it down to your local computer.
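
A minimal sketch of that packaging, assuming the build tree from the previous section (the EC2 address is a placeholder):

# on the EC2 instance
mkdir ~/vice-package
cp -r ~/vice-3.5/data/C128/* ~/vice-package/   # optionally prune build files here
cp ~/vice-3.5/src/x128 ~/vice-package/
cd ~ && tar cvf vice-package.tar vice-package/*

# then, from your local machine
scp ec2-user@ec2IPaddress:~/vice-package.tar .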

Get the Sample Lambda for Custom Runtime

Go to Lambda -> Functions -> Create Function -> Author from Scratch and select the “Provide your own bootstrap on Amazon Linux 2” option:

Custom Runtime provide your own bootstrap
Custom Runtime template

Copy the bootstrap.sample to the root of the Lambda package you are going to create and name it bootstrap, and copy the hello.sh.sample as function.sh (or whatever the first part of your function.handler name is, plus .sh; see the Runtime settings below the code window):

Runtime settings where your function.handler is named.

Constructing the bootstrap

There are a couple of environment variables (XDG_CACHE_HOME and XDG_CONFIG_HOME) that have to be set to /tmp so that VICE can write to them. Be sure the handler in Runtime settings matches <script_name>.<bash_function_name>, or else Lambda won’t be able to find it to invoke (actually, the bootstrap below won’t… you can skip using $_HANDLER and hard-code the name, but then the AWS console won’t help you with function configuration). I disabled the -e option because we’re going to exit VICE ungracefully on purpose for simplicity. Be aware that this is the ON ERROR RESUME NEXT (or “try with an empty catch block”) of error handling: your code will ignore all the other potential failures along the way.

#!/bin/sh
# set -euo pipefail
# we're going to exit VICE on clock cycles so -e option would fail in this case
set -uo pipefail

# otherwise vice tries to write to the 'home' directory that isn't a [writeable] thing in Lambda
export XDG_CACHE_HOME=/tmp
export XDG_CONFIG_HOME=/tmp

# Handler format: <script_name>.<bash_function_name>
#
# The script file <script_name>.sh  must be located at the root of your
# function's deployment package, alongside this bootstrap executable.
source $(dirname "$0")/"$(echo $_HANDLER | cut -d. -f1).sh"

while true
do
    # Request the next event from the Lambda runtime
    HEADERS="$(mktemp)"
    EVENT_DATA=$(curl -v -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
    INVOCATION_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

    # Execute the handler function from the script
    RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) $INVOCATION_ID "$EVENT_DATA")

    # Send the response to Lambda runtime
    curl -v -sS -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$INVOCATION_ID/response" -d "$RESPONSE"
done

Constructing the handler

The handler for this setup needs to output only what is intended as the response. I’m redirecting stderr and stdout to /dev/null because some messages pop up in the current state of the emulator. I’m also using the -silent option to suppress all the errors about missing disk drive and other device ROMs that I don’t care about for this case.

function handler () {
  EVENT_DATA=$2

  cd vice-package

  # -warp (not +warp) enables warp mode; see the edit note above
  # redirect stdout first, then stderr, so emulator chatter doesn't leak into the captured response
  ./x128 -silent -sound -ntsc -keybuf "10 graphic 1
    20 scnclr
    30 circle 1,100,100,20
    run
  " -warp -limitcycles 8000000 -exitscreenshotvicii /tmp/$1.png >/dev/null 2>&1
  cd ..


  # base64 -w 0 (GNU coreutils) disables the default 76-column line wrapping
  RESPONSE="{\"isBase64Encoded\": true, \"headers\": {\"Content-type\": \"image/png\", \"content-disposition\":\"attachment; filename=$1.png\"}, \"statusCode\":200, \"body\":\"$(base64 -w 0 /tmp/$1.png)\"}"

  echo "$RESPONSE"   # quoted so the JSON comes through exactly as built
}

The response

The above response is intended to output JSON in preparation for API Gateway Lambda integration. statusCode is required; isBase64Encoded and the Content-type header are needed so the response can be converted back into an image, and content-disposition tells the browser to download it. All of this gets POSTed back by the bootstrap script to the invocation response callback. The body is the base64-encoded PNG file; note that on Amazon Linux, GNU base64 wraps its output at 76 columns by default, which injects literal newlines into the JSON, so the handler above passes -w 0 to disable the wrapping before attaching this to API Gateway.

One more missing piece

We also need to pull libpng from our EC2 instance and place it in the root of the Lambda function at the same level as the bootstrap file. Just scp ec2-user@ec2IPaddress:/usr/lib64/libpng\* . for that.

Structure of the zip file and Deploy

Zip up the following pieces into your lambda.zip (the zip file name doesn’t really matter, just the organization of the contents; a packaging sketch follows the list):

  • bootstrap
  • function.sh
  • libpng*
  • vice-package/*
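
A packaging sketch, assuming the layout above (x128 and the scripts need their executable bits to survive the zip):

chmod +x bootstrap function.sh vice-package/x128
zip -r lambda.zip bootstrap function.sh libpng* vice-package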

Once uploaded, you should be able to [Test] the function and check the logs. Add a set -x to your bootstrap if things aren’t behaving. You may need to chmod +x your bootstrap if you haven’t tried to run it locally for testing.
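
You can also invoke the function from the CLI instead of the console’s [Test] button; a sketch (the function name is illustrative):

aws lambda invoke --function-name c128-circle response.json
# response.json will contain the handler's JSON (or the error details)
# with AWS CLI v2, add --cli-binary-format raw-in-base64-out if you pass a raw JSON --payload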

kubectl and eks… You must be logged in to the server (Unauthorized)

Say you have a setup with EKS using IAM API keys with Admin permissions and are using an AWS profile that you’ve confirmed can retrieve the EKS config with aws eks update-kubeconfig --name context-name --region appropriate-region-name-n.

But then kubectl get pods --context context-name still fails, oddly, with “You must be logged in to the server (Unauthorized)”.

Similarly, kubectl describe configmap -n kube-system aws-auth fails with the same message.

Is your username and/or role included/spelled correctly?

If you have access to all of the resources used by EKS then perhaps the ConfigMap is the issue. Check out How do I resolve an unauthorized server error when I connect to the Amazon EKS API server? for more details, and presumably the “You’re not the cluster creator” section.

Debugging steps:

  • Have the cluster creator or a member of the system:masters group run kubectl describe configmap -n kube-system aws-auth and verify that the mapUsers or mapRoles section mentions the identity (with adequate permissions) returned by aws sts get-caller-identity, as sketched after this list.
  • Double check that if the identity/roles are there that they have adequate permissions.
  • Double check that the identity/role ARN and username actually match. (This was a relatively simple setup, so in my case it was just a misspelling of the username that was the cause.)
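
A sketch of that comparison, run as the cluster creator or another working identity (the account ID, ARN, and username are illustrative):

# what identity are your aws/kubectl calls actually using?
aws sts get-caller-identity

# what does the cluster think is allowed?
kubectl describe configmap -n kube-system aws-auth

# a matching mapUsers entry would look something like:
#   mapUsers: |
#     - userarn: arn:aws:iam::123456789012:user/deploy-user
#       username: deploy-user
#       groups:
#         - system:masters
# an off-by-one-letter username here is enough to produce "Unauthorized"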

Ruby on Jets Webpacker errors and Invalid Configuration Object

The problem

tl;dr to the solution that worked for me

I was trying to get Ruby on Jets up and running and ran into webpacker errors including “CLI for webpack must be installed” (and others) all the way to “Invalid configuration object”.

Creating a new app with jets (3.0.8), npm 7.11.2, node 16.1.0, and yarn 1.22.5, and generating a basic blog CRUD app as follows:

 jets new blog_app --database=postgresql
 cd blog_app
 jets generate scaffold post title:string post:text
 jets db:create
 jets db:migrate
 jets server

I get an error when browsing to localhost:8888/posts:

ActionView::Template::Error at /posts
Webpacker can't find application.js in /mnt/c/Users/twill/projects/jets/blog_app/public/packs/manifest.json. Possible causes: 1. You want to set webpacker.yml value of compile to true for your environment unless you are using the `webpack -w` or the webpack-dev-server. 2. webpack has not yet re-run to reflect updates. 3. You have misconfigured Webpacker's config/webpacker.yml file. 4. Your webpack configuration is not creating a manifest. Your manifest contains: { }

On the back end, I could see:

[Webpacker] Compilation failed:
warning package.json: No license field
CLI for webpack must be installed.
webpack-cli (https://github.com/webpack/webpack-cli)
We will use "npm" to install the CLI via "npm install -D webpack-cli".
Do you want to install 'webpack-cli' (yes/no):

Adding dependencies one by one

I added webpack-cli with yarn add webpack-cli and:

[webpack-cli] Failed to load '/mnt/c/Users/twill/projects/jets/blog_app/config/webpack/development.js' config
[webpack-cli] Error: Cannot find module '@rails/webpacker'

Ok, yarn add @rails/webpacker, run jets server again, and reload the page:

[Webpacker] Compilation failed:
 warning package.json: No license field
 [webpack-cli] Invalid configuration object. Webpack has been initialized using a configuration object that does not match the API schema.
 configuration.node should be one of these: false | object { __dirname?, __filename?, global? } -> Include polyfills or mocks for various node stuff. Details: configuration.node has an unknown property 'dgram'. These properties are valid:
 object { __dirname?, __filename?, global? }
 -> Options object for node compatibility features.
 configuration.node has an unknown property 'fs'. These properties are valid:
 object { __dirname?, __filename?, global? }
 -> Options object for node compatibility features.
 configuration.node has an unknown property 'net'. These properties are valid:
 object { __dirname?, __filename?, global? }
 -> Options object for node compatibility features.
 configuration.node has an unknown property 'tls'. These properties are valid:
 object { __dirname?, __filename?, global? }
 -> Options object for node compatibility features.
 configuration.node has an unknown property 'child_process'. These properties are valid:
 object { __dirname?, __filename?, global? }
 -> Options object for node compatibility features.  

That eventually led me to the Webpack 5: configuration.node has an unknown property ‘dgram’. These properties are valid: issue comment on GitHub, with its mention of Rails’ webpacker and that “Rails’ webpacker 5.x.x is only compatible with webpack 4.x.x”.

The Solution

After lots of teardowns and tweaks to the project to troubleshoot, I finally ended up with a sequence that produces a scaffold for posts that renders with webpack:

jets new blog_app --database=postgresql
cd blog_app
jets generate scaffold post title:string post:text

# jets db:drop # used while experimenting with yarn packages below
jets db:create
jets db:migrate

# this is the key line that makes the currently installed @rails/webpacker work
yarn add webpack@4 
yarn add webpack-cli
yarn add @rails/webpacker
jets server
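
To confirm the pin took, you can check what yarn actually resolved (a quick sanity check, not a required step):

yarn list --pattern webpack
# expect webpack@4.x.x and @rails/webpacker@5.x.x in the output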

Automatically Moving Files Between S3 Buckets with Lambda (part 1)

I have an S3 bucket that I want to attach to an application’s upload area, but I want to move uploaded files out of the application-accessible bucket after they’ve been uploaded. Eventually I want this to happen after a small delay, but initially I wanted to test out the concept itself.

Step 1: Have source and destination buckets in S3

Create buckets for source and destination. The ACLs on both of the buckets are the same (non-public) in my case.

Step 2: Create a Lambda Execution Role

  • Go to IAM > Roles > Create Role
  • Choose Lambda as a Use Case
  • Next: Permissions
  • Search for S3 and check AmazonS3FullAccess
AmazonS3FullAccess selection
  • Search for “lambdabasic” and check AWSLambdaBasicExecutionRole (for CloudWatch logs)
AWSLambdaBasicExecutionRole selection
  • Click [Next: Tags] > [Next: Review] and give your role a name and verify that the S3 and Lambda policies are added:
Verify policies and name role
  • Click [Create Role]

Step 3: Prep the Lambda Code

  • Clone https://github.com/stringsn88keys/move_file_on_upload
  • Be sure to have the correct ruby version (2.7.0 at the time of writing) installed
  • Change into move_file_on_upload folder
  • bundle install locally
  • bundle config set --local deployment 'true' to vendor the gems for the AWS Lambda
  • zip ../move_file_on_upload.zip ** to package the zip (the whole sequence is consolidated in the sketch after this list)
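
Consolidated, the prep looks something like this sketch (assuming ruby 2.7.0 is already active; zip -r . is one way to package the contents, vendored gems included):

git clone https://github.com/stringsn88keys/move_file_on_upload
cd move_file_on_upload
bundle install                               # resolve gems and create Gemfile.lock
bundle config set --local deployment 'true'  # vendor gems into vendor/bundle…
bundle install                               # …on this second install
zip -r ../move_file_on_upload.zip .          # package everything for upload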

Step 4: Create the Lambda

  • Go to AWS Lambda in the AWS Console and click [Create Function]
  • Name the function, set Ruby 2.7 as the runtime, and use the role you created
Function name, Runtime, and Permissions
  • [Create function]

Step 5: Add S3 Trigger

  • Click [+ Add Trigger]
  • Search and select S3
  • Fill in your source bucket and select all object create events
  • If you get a “configurations overlap” error, select your bucket in S3, click the Properties tab, and you’ll see an Event Notification that’s been orphaned by a previous config (be sure to delete the dependent Lambda as well if it exists):
Configurations overlap error

Step 6: Upload your code

  • Go back to the [Code] tab for your lambda and select the [Upload from] dropdown and select zip file.
Upload .zip file
  • Go to the [Configuration] section and add FROM_BUCKET and TO_BUCKET environment variables so your Lambda knows which buckets to process (these can also be set from the CLI, as sketched below)
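
If you prefer the CLI for that last step, something like this should work (the function and bucket names are illustrative):

aws lambda update-function-configuration \
  --function-name move_file_on_upload \
  --environment "Variables={FROM_BUCKET=my-source-bucket,TO_BUCKET=my-destination-bucket}"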

Step 7: Monitor and test

  • You can test the Lambda execution via the Test dropdown
  • S3 put is one of the available test templates
  • Click “Create” after selecting S3 Put and you’ll be able to watch the event get executed.
  • Go to CloudWatch -> Logs -> Log Groups and you should see a log group for /aws/lambda/your_function_goes_here
  • If all else is successful, you should see “the specified key does not exist” (the S3 Put template references a placeholder bucket and key, so the copy fails at retrieval, which still proves the trigger and code path ran)

Step 8: Test it live

  • Create a folder in your source bucket.
  • Go into that folder
  • Upload a file.
  • The file should pretty quickly be copied to the destination bucket in the same folder structure and removed from its original location.
  • The top level folder under the bucket should remain.

Adding HTTPS to an S3 static site via CloudFront

Ok, so we’ve set up a static site hosted from an S3 bucket with a custom domain using Route 53. But sadly, it’s:

Not Secure

Request a Certificate in Certificate Manager

  • Go to Services -> Certificate Manager
  • Click [Request a Certificate]
  • In the window that opens from “Request or Import a Certificate with ACM”, enter your domain name (domainname.com) and click [Next]
  • Select DNS validation and click [Next]
  • Click [Review]
  • Click [Confirm and Request] if the details look correct.
  • Expand the domain in validation:
  • Click [Create record in Route 53] and confirm by clicking [Create] again.
  • You’ll be waiting from several minutes to half an hour for the validation to happen, during which time status will display as “Pending validation”
  • Click [Continue] to finish the request process and go back to the Certificate Manager main screen.
  • Click the (refresh icon) button to update the status; when it turns to “Issued” you are ready to use it in CloudFront. (A CLI alternative for polling the status follows the screenshots.)
Pending validation
Ready for use
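
Note that a certificate used by CloudFront must be in us-east-1. If you’d rather poll the status from the CLI than click the refresh icon, a sketch (the certificate ARN is illustrative):

aws acm list-certificates --region us-east-1
aws acm describe-certificate --region us-east-1 \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/abcd1234-ef56-7890-abcd-ef1234567890 \
  --query 'Certificate.Status'
# PENDING_VALIDATION -> ISSUED once the DNS validation record has propagated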

Setting up a CloudFront distribution

  • In the AWS Console, go to Services -> CloudFront
  • Click [Create Distribution]
  • Click [Get Started] under Web

Create Distribution

  • Under “Origin Domain Name” select the selection under “Amazon S3 buckets” that corresponds to your static web site bucket. (e.g., domainname.com.s3.amazonaws.com)
  • Optional: Restrict Bucket Access [Yes] so that you can control access through the CloudFront distribution alone.
    • Set “Origin Access Identity” to “Create a new identity”
    • Set “Grant Read Permissions on Bucket” to “Yes, Update Bucket Policy”
  • Under Viewer Protocol Policy I select “Redirect HTTP to HTTPS” just to keep things uniform.

Set up SSL

  • Under Alternate Domain Names, enter your domain name (e.g., domainname.com)
  • Select “Custom SSL Certificate”
  • Click “Request or Import a Certificate with ACM”
  • If you go back to CloudFront you should be able to select “Custom SSL Certificate” now and the certificate corresponding to your domain name should show up in suggestions:
  • Scroll down and leave defaults until you get to “Default Root Object”. You’ll want to set this to the name of the document to bring up (e.g., index.html) if the user browses to / on the domain.
  • Optional: I set Logging to On and selected my logging bucket that I used for the static site as the bucket, adding a log prefix for it.
  • To finish, click [Create Distribution]
  • You may be waiting quite a while for changes to propagate to the edge locations, but at some point before “In Progress” changes to “Deployed” you will be able to pull the site up via the domain listed under the “Domain Name” column in your list of CloudFront distributions.

Pointing the domain name at your distribution

  • Go back to Route 53 and go into the hosted zone for your domain name
  • Check the checkbox next to your A record and then go up to Actions -> Edit
  • Change “Value/Route traffic to” from “Alias to S3 endpoint” to “Alias to CloudFront distribution”.
  • In the “Choose Distribution” input box, enter your distribution’s domain name (e.g., “asdfkjdfasoiadsf9u.cloudfront.net.”). (The new interface wasn’t suggesting distributions like the last version of the interface did… it may change next week, of course.) A verification sketch follows the list.
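
Once the record is saved, a quick check from the command line (the domain is a placeholder):

dig +short domainname.com
# should return CloudFront edge IPs once the alias record has propagated
curl -I https://domainname.com
# expect a 200 (or a redirect to HTTPS if you test the http:// URL)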

Locking down S3

If you selected “Restrict bucket access” and had CloudFront update your S3 policy, the public access setting on the bucket is still unaffected. You’ll want to remove that public access:

  • Go back to Services -> Amazon S3
  • Go to your domainname.com bucket
  • Click Permissions
  • Click Block public access
  • Check “Block all public access” and click [Save]

Some other details

If you want JavaScript and forms to function properly, you’ll want to set up a CORS configuration by going to your S3 bucket, selecting the Permissions tab, and clicking CORS configuration:

<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>https://thomaspowell.work</AllowedOrigin>
   <AllowedMethod>PUT</AllowedMethod>
   <AllowedMethod>POST</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>

   <AllowedHeader>*</AllowedHeader>
 </CORSRule>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
 </CORSRule>
</CORSConfiguration>

Some mistakes I made:

  • A certificate for *.domainname.com does not cover domainname.com. You have to add both if you want wildcard and domainname.com itself covered.

Next up… preventing someone from running up a $1,000 AWS bill by hammering your site (i.e., monitoring your site’s access with better granularity than AWS Budgets).

Hosting a static site on S3 with a purchased domain

S3 Static Site Setup

Creating buckets

  • In your AWS Console, go to Services -> S3
  • Optional:
    • Click [ + Create Bucket ]
    • Type in a bucket name for static site logging (e.g., domainname.com-logs)
    • Accept the defaults ([Next]) all the way to Create Bucket
  • Click [ + Create Bucket ]
  • Type in your domain name (for example, “domainname.com”)
  • If you created a logs bucket:
    • Check “Log requests for access to your bucket” under “Server access logging”
    • Enter the logging bucket name under the “Target bucket” field.
  • Hit [Next] for Configure Options
  • Under Set Permissions, uncheck “Block all public access” and check the box that says “I acknowledge that the current settings may result in this bucket and the objects within becoming public”
  • Click Create Bucket

Creating a static site

  • Drag and drop your static HTML files and assets from the site root of your project into S3. Be sure you have the HTML files you want to use as an index page and as an error page.
  • Click [Next] on the Upload modal.
  • Under “Manage public permissions” change “Do not grant public read access to this object(s)” to “Grant public read access to this object(s)” and click [Next]
  • Click [Upload] from the “Set Properties” step (skip Storage Class configuration, etc. screen).
  • Under the “Properties” tab for your bucket, click on the “Static website hosting” tile
  • Select “Use this bucket to host a website”, enter the name of your index and error documents, and click [Save].
  • You should be able to click the link under endpoint and see your index page. (A CLI sketch of the upload and hosting steps follows.)
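
The console steps above can also be scripted. A sketch with the AWS CLI (the bucket name and paths are placeholders):

# upload the site with public-read objects
aws s3 sync ./site s3://domainname.com --acl public-read

# enable static website hosting with index and error documents
aws s3 website s3://domainname.com --index-document index.html --error-document error.html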

Create a hosted zone for your domain in Route 53

  • In Route 53, select Hosted zone link on the left of the console
  • Click [Create hosted zone]
  • Enter your Domain name
  • Select “Public hosted zone”
  • Click “Create hosted zone”
  • If your domain was purchased somewhere other than AWS, copy the name servers under the “Hosted zone details” and set them on your domain. (e.g., on Namecheap, it’s under Domain -> Nameservers -> Custom DNS… BE SURE TO HIT THE GREEN CHECKMARK AFTER EDITS!!)

Create a Record in the Hosted Zone

  • Select “Create record”
  • Select “Simple record”
  • Under “Define simple record”
    • leave the record name blank
    • Value/Route traffic to
      • Alias to S3 website endpoints
      • Select your region
      • In the field that says “Choose S3 Bucket”, you should see your bucket as an option:
  • To finish, click [Define simple record] and then [Create records]