kubectl and eks… You must be logged in to the server (Unauthorized)

Say you have a setup with EKS using IAM API keys with Admin permissions, and you’re using an AWS profile that you’ve confirmed can retrieve the EKS kubeconfig with aws eks update-kubeconfig --name context-name --region appropriate-region-name-n.

But then kubectl get pods --context context-name still fails, oddly, with “You must be logged in to the server (Unauthorized)”

Similarly, kubectl describe configmap -n kube-system aws-auth fails with the same message.

Is your username and/or role included/spelled correctly?
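
A quick way to confirm exactly which identity your profile resolves to (the profile name below is a placeholder); the Arn in the output is what has to be accounted for in the cluster’s aws-auth ConfigMap:

aws sts get-caller-identity --profile your-profile-name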

If you have access to all of the resources used by EKS, then perhaps the aws-auth ConfigMap is the issue. Check out “How do I resolve an unauthorized server error when I connect to the Amazon EKS API server?” for more details; in this scenario, the “You’re not the cluster creator” section is most likely the relevant one.

Debugging steps:

  • Have the cluster creator or a member of the system:masters group run kubectl describe configmap -n kube-system aws-auth and verify that the mapUsers or mapRoles section mentions the identity returned by aws sts get-caller-identity, with adequate permissions (see the sketch after this list for what an entry looks like)
  • Double-check that, if the identity/role is there, it has adequate permissions.
  • Double-check that the identity/role ARN and username actually match. (This was a relatively simple setup, so in my case it was just a misspelling of the username that was the cause.)
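
For reference, a mapUsers entry in aws-auth might look roughly like the following (the account ID, username, and group are placeholders, not taken from any real setup):

mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/some-user
    username: some-user
    groups:
      - system:masters

In a simple setup like this, the username is typically just the IAM user name from the ARN; as noted above, a misspelling there was the whole problem in my case.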


Ruby on Jets Webpacker errors and Invalid Configuration Object

The problem

tl;dr: skip down to The Solution below for what worked for me.

I was trying to get Ruby on Jets up and running and ran into webpacker errors, ranging from “CLI for webpack must be installed” (and others) all the way to “Invalid configuration object”.

I created a new app with jets (3.0.8), npm 7.11.2, node 16.1.0, and yarn 1.22.5, and generated a basic blog CRUD app as follows:

 jets new blog_app --database=postgresql
 cd blog_app
 jets generate scaffold post title:string post:text
 jets db:create
 jets db:migrate
 jets server

I got an error when browsing to localhost:8888/posts:

ActionView::Template::Error at /posts
Webpacker can't find application.js in /mnt/c/Users/twill/projects/jets/blog_app/public/packs/manifest.json. Possible causes: 1. You want to set webpacker.yml value of compile to true for your environment unless you are using the `webpack -w` or the webpack-dev-server. 2. webpack has not yet re-run to reflect updates. 3. You have misconfigured Webpacker's config/webpacker.yml file. 4. Your webpack configuration is not creating a manifest. Your manifest contains: { }

On the back end, I could see:

[Webpacker] Compilation failed:
warning package.json: No license field
CLI for webpack must be installed.
webpack-cli (https://github.com/webpack/webpack-cli)
We will use "npm" to install the CLI via "npm install -D webpack-cli".
Do you want to install 'webpack-cli' (yes/no):

Adding dependencies one by one

I added webpack-cli with yarn add webpack-cli and got:

[webpack-cli] Failed to load '/mnt/c/Users/twill/projects/jets/blog_app/config/webpack/development.js' config
[webpack-cli] Error: Cannot find module '@rails/webpacker'

OK, so run yarn add @rails/webpacker, then jets server, and reload the page:

[Webpacker] Compilation failed:
 warning package.json: No license field
 [webpack-cli] Invalid configuration object. Webpack has been initialized using a configuration object that does not match the API schema.
 configuration.node should be one of these: false | object { __dirname?, __filename?, global? } -> Include polyfills or mocks for various node stuff. Details: configuration.node has an unknown property 'dgram'. These properties are valid:
 object { __dirname?, __filename?, global? }
 -> Options object for node compatibility features.
 configuration.node has an unknown property 'fs'. These properties are valid:
 object { __dirname?, __filename?, global? }
 -> Options object for node compatibility features.
 configuration.node has an unknown property 'net'. These properties are valid:
 object { __dirname?, __filename?, global? }
 -> Options object for node compatibility features.
 configuration.node has an unknown property 'tls'. These properties are valid:
 object { __dirname?, __filename?, global? }
 -> Options object for node compatibility features.
 configuration.node has an unknown property 'child_process'. These properties are valid:
 object { __dirname?, __filename?, global? }
 -> Options object for node compatibility features.  

That eventually led me to a comment on the GitHub issue “Webpack 5: configuration.node has an unknown property ‘dgram’”, which mentions Rails webpacker and notes that “Rails’ webpacker 5.x.x is only compatible with webpack 4.x.x”

The Solution

After lots of teardowns and tweaks to the project to troubleshoot, I finally ended up with a sequence that produces a scaffold for posts that renders with webpack:

jets new blog_app --database=postgresql
cd blog_app
jets generate scaffold post title:string post:text

# jets db:drop # used while experimenting with yarn packages below
jets db:create
jets db:migrate

# this is the key line that makes the currently installed @rails/webpacker work
yarn add webpack@4 
yarn add webpack-cli
yarn add @rails/webpacker
jets server
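
If you want to confirm the pin took effect, a quick sanity check (assuming yarn 1.x, as above):

# should show webpack resolved to a 4.x release
yarn list --pattern webpack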

Automatically Moving Files Between S3 Buckets with Lambda (part 1)

I have an S3 bucket that I want to attach to an application’s upload area, but I want to move uploaded files out of the application-accessible bucket after they’ve been uploaded. Eventually I want this to happen after a small delay, but initially I wanted to test out the concept itself.

Step 1: Have source and destination buckets in S3

Create buckets for source and destination. The ACLs on both of the buckets are the same (non-public) in my case.

Step 2: Create a Lambda Execution Role

  • Go to IAM > Roles > Create Role
  • Choose Lambda as a Use Case
  • Next: Permissions
  • Search for S3 and check AmazonS3FullAccess
(screenshot: AmazonS3FullAccess selection)
  • Search for “lambdabasic” and check AWSLambdaBasicExecutionRole (for CloudWatch logs)
(screenshot: AWSLambdaBasicExecutionRole selection)
  • Click [Next: Tags] > [Next: Review] and give your role a name and verify that the S3 and Lambda policies are added:
(screenshot: verify policies and name role)
  • Click [Create Role]

Step 3: Prep the Lambda Code

  • Clone https://github.com/stringsn88keys/move_file_on_upload
  • Be sure to have the correct ruby version (2.7.0 at the time of writing) installed
  • Change into move_file_on_upload folder
  • bundle install locally (deployment mode in the next step needs the resulting Gemfile.lock)
  • bundle config set --local deployment 'true' and bundle install again to vendor the gems for the AWS Lambda
  • zip -r ../move_file_on_upload.zip . to package the zip (a sketch of the handler’s core logic follows this list)
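
For context, the core of a handler like this boils down to a copy followed by a delete. Here’s a minimal sketch (my own illustration, not necessarily identical to the repo’s code) that assumes the FROM_BUCKET and TO_BUCKET environment variables described in Step 6:

require 'aws-sdk-s3'
require 'cgi'

# Minimal sketch of an S3 move-on-upload handler.
def handler(event:, context:)
  s3 = Aws::S3::Client.new
  event['Records'].each do |record|
    # S3 event notifications URL-encode object keys (spaces arrive as '+')
    key = CGI.unescape(record['s3']['object']['key'])
    # copy to the destination bucket under the same key...
    s3.copy_object(
      bucket: ENV['TO_BUCKET'],
      copy_source: "#{ENV['FROM_BUCKET']}/#{key}",
      key: key
    )
    # ...then delete the original so the object effectively moves
    s3.delete_object(bucket: ENV['FROM_BUCKET'], key: key)
  end
end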

Step 4: Create the Lambda

  • Go to AWS Lambda in the AWS Console and click [Create Function]
  • Name the function, set Ruby 2.7 as the runtime, and use the role you created
(screenshot: function name, runtime, and permissions)
  • [Create function]

Step 5: Add S3 Trigger

  • Click [+ Add Trigger]
  • Search and select S3
  • Fill in your source bucket and select all object create events
  • If you get this error (“configurations overlap”), select your bucket in S3, click the Properties tab, and you’ll see an Event Notifications entry that’s been orphaned by a previous config (be sure to delete the dependent Lambda as well if it exists):
(screenshot: configurations overlap error)

Step 6: Upload your code

  • Go back to the [Code] tab for your lambda, select the [Upload from] dropdown, and choose .zip file.
(screenshot: Upload .zip file)
  • Go to the [Configuration] section and add FROM_BUCKET and TO_BUCKET environment variables so your Lambda knows which buckets to process (a CLI equivalent follows below)
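
If you’d rather script this part, the same environment variables can be set via the AWS CLI (function and bucket names are placeholders):

aws lambda update-function-configuration \
  --function-name your_function_goes_here \
  --environment "Variables={FROM_BUCKET=source-bucket-name,TO_BUCKET=destination-bucket-name}"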

Step 7: Monitor and test

  • You can test the Lambda execution via the Test dropdown
  • S3 put is one of the available test templates
  • Click “Create” after selecting S3 Put and you’ll be able to watch the event get executed.
  • Go to CloudWatch -> Logs -> Log Groups and you should see a log group for /aws/lambda/your_function_goes_here
  • If all else is successful, you should see “the specified key does not exist” (the sample S3 Put event references a bucket and key that don’t exist in your account, so the lookup fails)

Step 8: Test it live

  • Create a folder in your source bucket.
  • Go into that folder
  • Upload a file.
  • The file should pretty quickly be copied to the destination bucket in the same folder structure and removed from its original location. (A CLI version of this test follows the list.)
  • The top level folder under the bucket should remain.
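
The same live test can be driven from the CLI (bucket and file names are placeholders):

# upload into a folder in the source bucket
aws s3 cp hello.txt s3://source-bucket-name/somefolder/hello.txt
# shortly after, the file should appear in the destination...
aws s3 ls s3://destination-bucket-name/somefolder/
# ...and be gone from the source
aws s3 ls s3://source-bucket-name/somefolder/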

Adding HTTPS to an S3 static site via CloudFront

Ok, so we’ve set up a static site hosted from an S3 bucket with a custom domain using Route 53. But sadly, it’s:

Not Secure

Request a Certificate in Certificate Manager

  • Go to Services -> Certificate Manager
  • Click [Request a Certificate]
  • In the window that opens from “Request or Import a Certificate with ACM”, enter your domain name (domainname.com) and click [Next]
  • Select DNS validation and click [Next]
  • Click [Review]
  • Click [Confirm and Request] if the details look correct.
  • Expand the domain entry in the validation step:
  • Click [Create record in Route 53] and confirm by clicking [Create] again.
  • You’ll be waiting anywhere from several minutes to half an hour for the validation to happen, during which time the status will display as “Pending validation”
  • Click [Continue] to finish the request process and go back to the Certificate Manager main screen.
  • Click the (refresh icon) button to update the status; when it turns to “Issued”, you’re ready to use the certificate in CloudFront. (A CLI alternative follows below.)
(screenshots: “Pending validation” status, then ready for use)
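
If you prefer the CLI, the same request can be made as follows (the domain is a placeholder; note that certificates used by CloudFront have to be requested in us-east-1). You still have to create the DNS validation records and wait for “Issued”:

aws acm request-certificate \
  --region us-east-1 \
  --domain-name domainname.com \
  --validation-method DNS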

Setting up a CloudFront distribution

  • In the AWS Console, go to Services -> CloudFront
  • Click [Create Distribution]
  • Click [Get Started] under Web

(screenshot: Create Distribution)

  • Under “Origin Domain Name”, select the entry under “Amazon S3 buckets” that corresponds to your static website bucket. (e.g., domainname.com.s3.amazonaws.com)
  • Optional: set Restrict Bucket Access to [Yes] so that you can control access through the CloudFront distribution alone.
    • Set “Origin Access Identity” to “Create a new identity”
    • Set “Grant Read Permissions on Bucket” to “Yes, Update Bucket Policy”
  • Under Viewer Protocol Policy I select “Redirect HTTP to HTTPS” just to keep things uniform.

Set up SSL

  • Under Alternate Domain Names, enter your domain name (e.g., domainname.com)
  • Select “Custom SSL Certificate”
  • If you haven’t already requested a certificate, click “Request or Import a Certificate with ACM”
  • When you come back to CloudFront, you should be able to select “Custom SSL Certificate”, and the certificate corresponding to your domain name should show up in the suggestions:
  • Scroll down and leave the defaults until you get to “Default Root Object”. You’ll want to set this to the document to serve (e.g., index.html) when the user browses to / on the domain.
  • Optional: I set Logging to On and selected my logging bucket that I used for the static site as the bucket, adding a log prefix for it.
  • To finish, click [Create Distribution]
  • You may be waiting quite a while for changes to propagate to the edge locations, but at some point before “In Progress” changes to “Deployed”, you’ll be able to pull up the site via the domain listed under the “Domain Name” column in your list of CloudFront distributions.

Pointing the domain name at your distribution

  • Go back to Route 53 and go into the hosted zone for your domain name
  • Check the checkbox next to your A record and then go up to Actions -> Edit
  • Change “Value/Route traffic to” from “Alias to S3 endpoint” to “Alias to CloudFront distribution”
  • In the “Choose Distribution” input box, enter your distribution’s domain name (“asdfkjdfasoiadsf9u.cloudfront.net.”). (The new interface wasn’t suggesting distributions like the last version did… it may change next week, of course. A CLI sketch follows below.)
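
If the console refuses to suggest your distribution, the alias record can also be written via the CLI. A sketch, where everything is a placeholder except Z2FDTNDATAQYW2, the fixed hosted zone ID used for CloudFront alias targets:

cat > change-batch.json <<'JSON'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "domainname.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "asdfkjdfasoiadsf9u.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
JSON
aws route53 change-resource-record-sets \
  --hosted-zone-id YOUR_HOSTED_ZONE_ID \
  --change-batch file://change-batch.json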

Locking down S3

If you selected “Restrict bucket access” and had CloudFront update your S3 policy, the public access settings on the bucket itself are still unaffected. You’ll want to lock those down (a CLI equivalent follows the list):

  • Go back to Services -> Amazon S3
  • Go to your domainname.com bucket
  • Click Permissions
  • Click Block public access
  • Check “Block all public access” and click [Save]
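
The same lockdown is a single CLI call if you’d rather script it (the bucket name is a placeholder):

aws s3api put-public-access-block \
  --bucket domainname.com \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true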

Some other details

If you want JavaScript and forms to function properly, you’ll want to set up a CORS configuration: go to your S3 bucket, select the Permissions tab, and click CORS configuration:

<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>https://thomaspowell.work</AllowedOrigin>
   <AllowedMethod>PUT</AllowedMethod>
   <AllowedMethod>POST</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>

   <AllowedHeader>*</AllowedHeader>
 </CORSRule>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
 </CORSRule>
</CORSConfiguration>

Some mistakes I made:

  • A certificate for *.domainname.com does not cover domainname.com. You have to add both if you want wildcard and domainname.com itself covered.

Next up… preventing someone from running up a $1,000 AWS bill by hammering your site (i.e., monitoring your site’s access… with better granularity than AWS Budgets).


Hosting a static site on S3 with a purchased domain

S3 Static Site Setup

Creating buckets

  • In your AWS Console, go to Services -> S3
  • Optional:
    • Click [ + Create Bucket ]
    • Type in a bucket name for static site logging (e.g., domainname.com-logs)
    • Accept the defaults ([Next]) all the way through to Create Bucket
  • Click [ + Create Bucket ]
  • Type in your domain name (for example, “domainname.com”)
  • If you created a logs bucket:
    • Check “Log requests for access to your bucket” under “Server access logging”
    • Enter the logging bucket name under the “Target bucket” field.
  • Hit [Next] for Configure Options
  • Under Set Permissions, uncheck “Block all public access” and check the box that says “I acknowledge that the current settings may result in this bucket and the objects within becoming public”
  • Click Create Bucket

Creating a static site

  • Select your static HTML files and assets from the site root of your project and drag them into S3. Be sure you have the HTML files you want to use as the index page and the error page. (A scripted CLI equivalent of these steps follows this list.)
  • Click [Next] on the Upload modal.
  • Under “Manage public permissions” change “Do not grant public read access to this object(s)” to “Grant public read access to this object(s)” and click [Next]
  • Click [Upload] from the “Set Properties” step (skip Storage Class configuration, etc. screen).
  • Under the “Properties” tab for your bucket, click on the “Static website hosting” tile
  • Select “Use this bucket to host a website”, enter the name of your index and error documents, and click [Save].
  • You should be able to click the link under endpoint and see your index page.
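
For repeat deploys, the upload and hosting steps can be scripted. A rough CLI equivalent (local path, bucket, and document names are placeholders):

# sync the site root up to the bucket, making the objects publicly readable
aws s3 sync ./site-root s3://domainname.com --acl public-read
# enable static website hosting with index and error documents
aws s3 website s3://domainname.com/ --index-document index.html --error-document error.html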

Create a hosted zone for your domain in Route 53

  • In Route 53, select Hosted zone link on the left of the console
  • Click [Create hosted zone]
  • Enter your Domain name
  • Select “Public hosted zone”
  • Click “Create hosted zone”
  • If you have a domain purchased somewhere other than AWS, copy the name servers under the “Hosted zone details” and set them on your domain. (e.g., on Namecheap, it’s under Domain -> Nameservers -> Custom DNS… BE SURE TO HIT THE GREEN CHECKMARK AFTER EDITS!!)

Create a Record in the Hosted Zone

  • Select “Create record”
  • Select “Simple record”
  • Under “Define simple record”
    • Leave the record name blank
    • Value/Route traffic to
      • Alias to S3 website endpoint
      • Select your region
      • In the field that says “Choose S3 Bucket”, you should see your bucket as an option:
  • To finish, click [Define simple record] and then [Create records]
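
Once the record is in place, a quick sanity check from the command line (the domain is a placeholder):

# should resolve to the S3 website endpoint's IP(s)
dig +short domainname.com
# should return your index page's headers
curl -I http://domainname.com/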