kubectl and eks… You must be logged in to the server (Unauthorized)

Say you have a setup with EKS using IAM API keys with Admin permissions, and you’re using an AWS profile that you’ve confirmed can retrieve the EKS kubeconfig with aws eks update-kubeconfig --name context-name --region appropriate-region-name-n.

But then kubectl get pods --context context-name still fails, oddly, with “You must be logged in to the server (Unauthorized)”.

Similarly, kubectl describe configmap -n kube-system aws-auth fails with the same message.

Is your username and/or role included/spelled correctly?

If you have access to all of the resources used by EKS then perhaps the ConfigMap is the issue. Check out How do I resolve an unauthorized server error when I connect to the Amazon EKS API server? for more details, and presumably the “You’re not the cluster creator” section.

Debugging steps:

  • Have the cluster creator or a member of the system:masters group run kubectl describe configmap -n kube-system aws-auth and verify that the mapUsers or mapRoles section mentions, with adequate permissions, the identity returned by aws sts get-caller-identity (an example of what to look for follows this list)
  • Double check that, if the identity/roles are there, they have adequate permissions.
  • Double check that the identity/role ARN and username actually match. (This was a relatively simple setup, so in my case it was just a misspelling of the username that was the cause.)
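
For reference, here’s roughly what to look for (a minimal sketch; the account ID, user ARN, and username below are placeholders and must line up with what aws sts get-caller-identity returns for the locked-out identity):

aws sts get-caller-identity   # note the Arn in the output
kubectl get configmap aws-auth -n kube-system -o yaml
# Expect a mapUsers (or mapRoles) entry whose ARN and username line up, e.g.:
#
#   mapUsers: |
#     - userarn: arn:aws:iam::111122223333:user/deploy-user
#       username: deploy-user
#       groups:
#         - system:masters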


Automatically Moving Files Between S3 Buckets with Lambda (part 1)

I have an S3 bucket that I want to attach to an application’s upload area, but I want to move files out of the application-accessible bucket after they’ve been uploaded. Eventually, I want this to happen after a small delay, but initially I wanted to test out the concept itself.

Step 1: Have source and destination buckets in S3

Create buckets for source and destination. The ACLs on both of the buckets are the same (non-public) in my case.

Step 2: Create a Lambda Execution Role

  • Go to IAM > Roles > Create Role
  • Choose Lambda as a Use Case
  • Next: Permissions
  • Search for S3 and check AmazonS3FullAccess
AmazonS3FullAccess selection
  • Search for “lambdabasic” and check AWSLambdaBasicExecutionRole (for CloudWatch logs)
AWSLambdaBasicExecutionRole selection
  • Click [Next: Tags] > [Next: Review] and give your role a name and verify that the S3 and Lambda policies are added:
Verify policies and name role
  • Click [Create Role]

Step 3: Prep the Lambda Code

  • Clone https://github.com/stringsn88keys/move_file_on_upload
  • Be sure to have the correct ruby version (2.7.0 at the time of writing) installed
  • Change into move_file_on_upload folder
  • bundle install locally
  • bundle config set --local deployment 'true' to vendor the gems for the AWS Lambda
  • zip ../move_file_on_upload.zip ** to package the zip (the full sequence is sketched after this list)
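
Putting those steps together, the packaging looks roughly like this (a sketch; I’ve added -r so the vendored gems under vendor/ actually make it into the zip, and the second bundle install assumes deployment mode wants an existing Gemfile.lock):

git clone https://github.com/stringsn88keys/move_file_on_upload
cd move_file_on_upload
bundle install                               # resolve gems and produce Gemfile.lock
bundle config set --local deployment 'true'  # vendor the gems for the Lambda
bundle install                               # install into vendor/bundle
zip -r ../move_file_on_upload.zip **         # package code plus vendored gems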

Step 4: Create the Lambda

  • Go to AWS Lambda in the AWS Console and click [Create Function]
  • Name the function, set Ruby 2.7 as the runtime, and use the role you created
Function name, Runtime, and Permissions
  • [Create function]

Step 5: Add S3 Trigger

  • Click [+ Add Trigger]
  • Search and select S3
  • Fill in your source bucket and select all object create events
  • If you get this error (“configurations overlap”), select your bucket in S3, click the Properties tab, and you’ll see an Event Notifications that’s been orphaned by a previous config (be sure to delete the dependent Lambda as well if it exists):
Configurations overlap error

Step 6: Upload your code

  • Go back to the [Code] tab for your Lambda, click the [Upload from] dropdown, and select .zip file.
Upload .zip file
  • Go to the [Configuration] section and add FROM_BUCKET and TO_BUCKET environment variables so your Lambda knows which buckets to process (a CLI alternative follows this list)
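
If you’d rather script this step, the equivalent AWS CLI calls look something like this (the function name and bucket names are placeholders):

aws lambda update-function-code \
  --function-name move_file_on_upload \
  --zip-file fileb://../move_file_on_upload.zip
aws lambda update-function-configuration \
  --function-name move_file_on_upload \
  --environment "Variables={FROM_BUCKET=my-source-bucket,TO_BUCKET=my-destination-bucket}"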

Step 7: Monitor and test

  • You can test the Lambda execution via the Test dropdown
  • S3 Put is one of the available test templates
  • Click “Create” after selecting S3 Put and you’ll be able to watch the event get executed.
  • Go to CloudWatch -> Logs -> Log Groups and you should see a log group for /aws/lambda/your_function_goes_here (you can also tail it from the terminal, as shown after this list)
  • If all else is successful, you should see “the specified key does not exist”, which is expected: the sample event references a key that isn’t actually in your bucket, so the error means the function ran and tried to fetch it.
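
To tail the log group from a terminal instead (AWS CLI v2; the function name is a placeholder):

aws logs tail /aws/lambda/your_function_goes_here --follow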

Step 8: Test it live

  • Create a folder in your source bucket.
  • Go into that folder
  • Upload a file.
  • The file should pretty quickly be copied to the destination bucket in the same folder structure and removed from its original location. (A CLI version of this test is sketched after this list.)
  • The top level folder under the bucket should remain.
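
A quick CLI version of the same live test (bucket names are placeholders):

echo 'hello' > test.txt
aws s3 cp test.txt s3://my-source-bucket/somefolder/test.txt
sleep 5                                           # give the Lambda a moment to run
aws s3 ls s3://my-destination-bucket/somefolder/  # the file should show up here
aws s3 ls s3://my-source-bucket/somefolder/       # ...and be gone from here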

AWS Certified Developer – Associate (DVA-C01) Experience

It’s been a little over 1 month since my AWS Certified Solutions Architect – Associate (SAA-C02) exam. After finishing the Solutions Architect Associate exam, I immediately registered for the Developer Associate exam, which I took yesterday and passed.

Wasting Time

I took a practice exam (through Linux Academy) shortly after passing the SAA exam, having already gone through most of a Developer Associate training course. DO NOT DO THIS UNLESS YOU ARE READY TO DIVE DEEP INTO ANYTHING YOU MISSED. I managed to get about 72% on the practice exam, which is essentially a passing score, so I let myself lose focus over the holidays.

My next mistake was going through a couple of versions of the course (30 hours each) instead of just using the practice test results to pick out the white papers, FAQs, and AWS service pages I needed to review. (Linux Academy/A Cloud Guru courses provide links to the associated AWS references for this.)

Foundations in SAA

I highly recommend getting the Solutions Architect – Associate first. It covers a good breadth of topics that will give you a solid foundation. I recommend the Linux Academy courses that have labs that you can walk through via a transcript just to traverse all the areas you need to know.

Differences from SAA

All of the AWS Code* services factor into a significant chunk of the DVA exam. If you have solid experience with docker, k8s, and git, you’ll probably have a better than 50% chance on questions about services that you didn’t look into, but there are also things that are about the “AWS way” or the “way that AWS services push you to do things” that I shook my head at. (Physical team organization? Wat?)

Also…

  • Learn how to set up X-Ray on the various compute environments.
  • Create and deploy a Serverless Application Model project and dig into the configuration files for it.
  • 0 bytes is the minimum object size, 100 MB is the recommended threshold for switching to multipart upload to S3, multipart is required above 5 GB (the single-PUT limit), and 5 TB is the object size limit. (See the CLI snippet after this list.)
  • cfn-init
  • View Protocol Policy in CloudFront (https/http)
  • Bucket policies for enforcing server side encryption
  • Learn supported platforms for CodeDeploy
  • Learn lifecycle hooks for CodeDeploy (just look at the diagrams a few times)
  • Learn which services trigger Lambda asynchronously and synchronously
  • Lambda requires dependency packaging, and CPU is controlled by RAM allocation.
  • Dead letter queues
  • Lambda, by definition, does not do in-place deploy (because in-place requires a tangible server)
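
As a concrete anchor for the multipart numbers above, the AWS CLI exposes these knobs in its s3 config (the values here are illustrative, not recommendations):

aws configure set default.s3.multipart_threshold 100MB  # switch to multipart at 100 MB
aws configure set default.s3.multipart_chunksize 16MB   # size of each uploaded part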


AWS Certified Solutions Architect Associate (SAA-C02) Experience

Prep

Online Exam

  • PearsonVue browserlock kept detecting itself on Catalina on my MacBook Air and exiting, so I ended up using a Windows laptop instead.
  • OnVue app on Windows kept detecting “gamebar”, which I had to disable (and reboot Windows to fully take effect)
  • Make sure the laptop that you’re using can be maneuvered to show all the workspace within reach.
  • Have a phone handy for check-in, but also a convenient place to stash it out of sight/sound range.
  • Make sure anything with any print or writing is out of view and out of reach of the workspace area before check-in.

Contents That I Could’ve Been More Prepared For

  • SQS vs. Kinesis use cases
  • Priority queueing – I think this requires either AmazonMQ or separate SQS queues (one queue per priority level)
  • ECS launch types. ECS vs. Fargate vs. EKS
  • Amazon RDS Read Replicas vs Aurora Global Database

Deploying a Minecraft Server to EC2 (with some cost analysis)

Caveat… this is likely not the option you want to pursue for a small to medium scale Minecraft server… Lightsail will be more cost-effective due to the amount of data transfer involved. Also, this is partly an exercise in navigating Amazon Linux as a largely Ubuntu user. If you really want to host your own Minecraft server on a virtual server, I’ve also done this exercise minus 90% of the steps on a 2GB Linode (get a $100, 60-day credit through that referral link) and you will not get a huge egress bill for the insane amount of data you transfer out. OR if you want a fairly plug-and-play solution, PebbleHost offers Minecraft hosting for as little as $1/month ($3-5 basic plan recommended depending on your needs).

Purchase a Domain

I’m going through Namecheap for a .online domain because… well… it’s cheap… and registering revrestfarcenim.online (.online domain through Route 53 would be $39 vs. $1.88 for the first year with Namecheap).

Launch an EC2 instance

If you’re trying to use free-tier resources for this, you’ll want to go for a t2.micro, but you’ll also need to modify the java parameters for the server to fit within those memory limits.

  • Go to Services -> EC2
  • Scroll down in the main window to “Launch instance” and click [Launch Instance]
  • Select Amazon Linux 2 AMI (should be the first option)
  • I’m selecting t3a.small for this to be similar in “virtual” resources as the $10-15/mo virtual server hosting providers (including Amazon Lightsail).
  • Click [Next: Configure Instance Details]
  • You’ll get a default VPC and subnet created and selected… if you don’t want to use these, you can click the available links to create new ones.
  • For “Auto-assign Public IP”, I have “Use subnet setting (Enable)” because I’m going to want to have this publicly accessible.
  • Scroll to near the bottom of the “Step 3” form and find “T2/T3 Unlimited” and uncheck “Enable” unless you want to run up a bill because you forgot about the Minecraft server.
  • Click [Next: Add Storage] to configure your space.
  • Click [Next: Add Tags]
  • Click [Next: Configure Security Group]
  • Click [Add Rule] and add a Custom TCP Rule that allows traffic to port 25565 (the default Minecraft port) from 0.0.0.0/0. (The same rule can be added from the CLI, as sketched after this list.)
  • Click [Review and Launch]
  • Click [Launch] and choose “Create a new key pair” named “minecraftserver”
  • Click on Instances and select your new instance, noting the IPv4 Public IP in the instance details below (leave tab open for reference in the next section)
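
For reference, the same ingress rule can be added from the CLI once you know the security group ID (the sg- ID below is a placeholder):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 25565 \
  --cidr 0.0.0.0/0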

Create a Hosted Zone in Route 53

  • In a new tab, go to Services -> Route 53 -> Hosted Zones
  • Click [Create hosted zone]
  • Type in your domain name for your server and select “Public hosted zone”
  • Copy the values for the NS record and populate those as the nameservers for your domain (for me, this is on Namecheap):
Route 53 NS record
Namecheap Custom DNS settings
  • Now go back to Route 53 and [Create Record]
  • Choose “Simple routing”
  • Click [Define simple record]
  • Leave the record name blank.
  • Under “Value/Route traffic to” select “IP address or another value depending on the record type” and paste your IP in.
  • Be sure “Record type” is “A” and click [Define simple record]
  • Click [Create records] on the “Configure records” screen. (The same record can be created from the CLI, as sketched after this list.)
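
If you prefer the CLI, the same A record can be created with change-resource-record-sets (the hosted zone ID and IP below are placeholders):

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "revrestfarcenim.online",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'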

Install and setup Minecraft

  • Using your keypair from instance creation, ssh into your instance
chmod 400 minecraftserver.pem # or whatever the filename is
ssh -i minecraftserver.pem ec2-user@revrestfarcenim.online
  • Create minecraft server folder and user
sudo mkdir /srv/minecraft-server # assuming EBS mount
sudo adduser --system --home /srv/minecraft-server minecraft
sudo chown minecraft:minecraft /srv/minecraft-server
  • Do updates and install java
sudo yum update
sudo amazon-linux-extras install java-openjdk11
  • Download the Minecraft server into the server directory, run it for the first time, and set the eula.txt
cd /srv/minecraft-server
sudo -u minecraft wget https://launcher.mojang.com/v1/objects/a412fd69db1f81db3f511c1463fd304675244077/server.jar
sudo -u minecraft java -Xmx1024M -Xms1024M -jar server.jar nogui
# you'll get an error, so edit the eula.txt
sudo -u minecraft nano eula.txt # or vi, just set to eula=true
# run minecraft again and try to connect to server at revrestfarcenim.online

Making Minecraft a service

  • Run sudo nano /lib/systemd/minecraft-server.service
  • Paste a config similar to:
[Unit] 
Description=start and stop the minecraft-server 

[Service]
WorkingDirectory=/srv/minecraft-server
User=minecraft
Group=minecraft
Restart=on-failure
RestartSec=20
ExecStart=/usr/bin/java -Xms1024M -Xmx1024M -jar server.jar nogui

[Install]
WantedBy=multi-user.target
  • Now, enable the service with sudo systemctl enable /lib/systemd/minecraft-server.service
  • And start the service with sudo systemctl start minecraft-server.service
  • As it’s starting, you can check the status with sudo systemctl status minecraft-server.service.
  • With this systemctl setup, you should also be able to reboot the instance and have the Minecraft server come back up. (A couple of handy troubleshooting commands follow this list.)
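
If the service doesn’t come up cleanly, these are useful:

sudo journalctl -u minecraft-server.service -f  # follow the Minecraft server log
sudo systemctl daemon-reload                    # pick up any edits to the unit file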

Troubleshooting

  • If you don’t go with the default VPC and subnets created with the instance, you may find yourself having to explicitly set up an Internet Gateway and a Route Table (0.0.0.0/0 to your igw)
  • Make sure the associated security group is allowing port 25565 to connect (especially if SSH is working)

Teardown

Be sure to delete your hosted zone (you’ll need to delete the A record before deleting the zone) and terminate your instance to avoid running up charges for things you’re not using. I deleted the VPC as well just to avoid clutter and half-baked subnets and security groups, but that’s only because I have nothing of long-term value in the account.

Cost Analysis

There are multiple hits that you’re going to take by hosting this on EC2:

  • egress costs (9¢ per GB after your first GB): In my limited tests, the Minecraft worlds we started up required 100-200MB to initially download per session. It’s unclear if that’s the case for every session, but if 20 of your friends each put in one session, that’s 9¢/GB × 0.20 GB × 20 sessions ≈ 36¢… that could add up quickly. By contrast, you could get a different hosting provider (including Lightsail) to bundle 2TB of transfer instead.
  • hosted zone cost (50¢ for a distinct domain’s hosted zone)
  • If you use a separate EBS volume, minimum cost there is 80¢.
  • t3a.small cost is 1.88¢ per hour or $13.53/month… you could have the server shut down during off-hours, but then you’re not comparing to “always-on” options.

Monitoring your S3 buckets for #omgcat usage

If you’re serving up a static website from S3, especially if you have larger assets stored there, you may want to put monitoring on the requests or bytes downloaded from S3, just to make sure someone’s not running up terabytes of transfers or millions of requests.

Enabling Metrics on S3

You will have to enable request metrics on the bucket in order to get CloudWatch alarms on them.

  • Go to Services -> S3 -> Buckets and select the bucket for your static site.
  • Select Management tab and [Metrics] and then click on the pencil icon next to the bucket icon.
  • Once enabled, the metrics will take a bit to populate. (See the CLI sketch after this list.)
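
Request metrics can also be enabled from the CLI; a minimal sketch (the bucket name is a placeholder, and the Id must match in both places):

aws s3api put-bucket-metrics-configuration \
  --bucket domainname.com \
  --id EntireBucket \
  --metrics-configuration '{"Id": "EntireBucket"}'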

Setting up Simple Notification Service (SNS)

(These are the same instructions as in Monitoring your CloudFront #omgcat usage)

Set up a topic

  • First, you’re going to want to be notified. Go to Services -> Simple Notification Service to set up a pathway for that to happen.
  • Next, click “Topics” and then [Create topic]
  • Name your topic something that adequately describes the purpose (I just used domainname-com)
  • Scroll down to [Create Topic]

Set up a subscription

  • Under “Amazon SNS” left sidebar, click “Subscriptions” and [Create Subscription]
  • Click on the Topic ARN field and you should be able to see an ARN with your topic name as the last part of the ARN. Click that ARN
  • Under Protocol, select your preferred method of notification (I’m going with SMS).
  • Under Endpoint, enter your cell number, including country code (+18005551212 for (800) 555-1212 in the US). (A CLI version of the topic and subscription setup follows this list.)
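
The CLI equivalent, if you’d rather script it (the topic ARN’s account ID and the phone number are placeholders):

aws sns create-topic --name domainname-com   # returns the TopicArn
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:111122223333:domainname-com \
  --protocol sms \
  --notification-endpoint +18005551212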

Setting up a CloudWatch Alarm

  • Go to Services -> CloudWatch -> Alarms and [Create alarm]
  • [Select metric] and select S3
  • If you don’t see “Request Metrics per Filter” then the metrics haven’t started populating yet.
  • Check “GetRequests” or “BytesDownloaded” and [Select Metric]
  • Set conditions as you would like to have flag any anomalies and click [Next]
  • Choose “In Alarm” and “Select an existing SNS topic” and click in the box below “Send Notification To…” to get suggestions and select the SNS topic corresponding to the notification method you set up. Click [Next]
  • Name your alarm and click [Next]
  • Review the summary and click [Create Alarm]. (A rough CLI equivalent follows this list.)
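
Something like the following put-metric-alarm call covers the same ground (the bucket, threshold, and ARNs are placeholders; the FilterId matches the metrics configuration you enabled earlier):

aws cloudwatch put-metric-alarm \
  --alarm-name domainname-com-get-requests \
  --namespace AWS/S3 \
  --metric-name GetRequests \
  --dimensions Name=BucketName,Value=domainname.com Name=FilterId,Value=EntireBucket \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1000 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:domainname-com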


Monitoring your CloudFront #omgcat usage

Ok, so you’ve created a lovely static site and/or set up a CloudFront distribution for https for it. But CloudFront bills by the GB. What if somebody decides your assets are perfect to hotlink to or just straight up makes an insane boatload of requests? How do you protect yourself from getting a frightening bill before AWS Budget can even notify you?

Setting up Simple Notification Service (SNS)

Set up a topic

  • First, you’re going to want to be notified. Go to Services -> Simple Notification Service to set up a pathway for that to happen.
  • Next, click “Topics” and then [Create topic]
  • Name your topic something that adequately describes the purpose (I just used domainname-com)
  • Scroll down to [Create Topic]

Set up a subscription

  • Under “Amazon SNS” left sidebar, click “Subscriptions” and [Create Subscription]
  • Click on the Topic ARN field and you should be able to see an ARN with your topic name as the last part of the ARN. Click that ARN
  • Under Protocol, select your preferred method of notification (I’m going with SMS).
  • Under Endpoint, enter your cell number, including country code (+18005551212 for (800) 555-1212 in the US)

Get Notified

  • Go to Services -> CloudFront -> Alarms
  • [Create Alarm]
  • Under Metric, choose the threshold that you want to detect on… maybe it’s Requests, maybe it’s Bytes Downloaded…
  • Select the distribution that you want to watch (domainname.com should be mentioned in the dropdown)
  • For “Send a notification to”, select the SNS topic that corresponds to the notification method you set up.
  • Since mine is a dev/test site, I don’t expect more than a request/second, so I set the threshold low.
  • Finally, [Create Alarm]

Testing the Alarm

  • If you have a low enough threshold you can probably just hold down F5 (or whatever your refresh key is) for a few seconds. (Word of caution: Don’t do this with a page that downloads a lot of assets!)
  • In bash you can also do the following.
for i in {1..61}
do
  curl https://domainname.com
done
  • If your notifications are working, you should get a message through your preferred notification method.
  • Under Services -> CloudWatch -> Alarms, you should also see your Alarm count be > 0.

Adding a https to an S3 static site via CloudFront

Ok, so we’ve set up a static site hosted from an S3 bucket with a custom domain using Route 53. But sadly, it’s:

Not Secure

Request a Certificate in Certificate Manager

  • Go to Services -> Certificate Manager
  • Click [Request a Certificate]
  • In the window that opens from “Request or Import a Certificate with ACM”, enter your domain name (domainname.com) and click [Next]
  • Select DNS validation and click [Next]
  • Click [Review]
  • Click [Confirm and Request] if the details look correct.
  • Expand the domain in validation:
  • Click [Create record in Route 53] and confirm by clicking [Create] again.
  • You’ll be waiting from several minutes to half an hour for the validation to happen, during which time status will display as “Pending validation”
  • Click [Continue] to finish the request process and go back to the Certificate Manager main screen.
  • Click the (refresh icon) button to update the status, and when the status turns to “Issued” you are ready to use it in CloudFront. (A CLI alternative for the request step is sketched after the screenshots below.)
Pending validation
Ready for use
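
The request step can also be done from the CLI; note that certificates used with CloudFront have to live in us-east-1, and (per the mistakes section further down) you may want the wildcard as a SAN alongside the bare domain:

aws acm request-certificate \
  --region us-east-1 \
  --domain-name domainname.com \
  --subject-alternative-names '*.domainname.com' \
  --validation-method DNS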

Setting up a CloudFront distribution

  • In the AWS Console, go to Services → CloudFront
  • Click [Create Distribution]
  • Click [Get Started] under Web

Create Distribution

  • Under “Origin Domain Name”, choose the entry under “Amazon S3 buckets” that corresponds to your static website bucket (e.g., domainname.com.s3.amazonaws.com)
  • Optional: Restrict Bucket Access [Yes] so that you can control access through the CloudFront distribution alone.
    • Set “Origin Access Identity” to “Create a new identity”
    • Set “Grant Read Permissions on Bucket” to “Yes, Update Bucket Policy”
  • Under Viewer Protocol Policy I select “Redirect HTTP to HTTPS” just to keep things uniform.

Set up SSL

  • Under Alternate Domain Names, enter your domain name (e.g., domainname.com)
  • Select “Custom SSL Certificate”
  • If you haven’t already requested a certificate, click “Request or Import a Certificate with ACM”
  • Once the certificate is issued, go back to CloudFront; you should be able to select “Custom SSL Certificate” now, and the certificate corresponding to your domain name should show up in suggestions:
  • Scroll down and leave defaults until you get to “Default Root Object”. You’ll want to set this to the name of the document to bring up (e.g., index.html) if the user browses to / on the domain.
  • Optional: I set Logging to On and selected my logging bucket that I used for the static site as the bucket, adding a log prefix for it.
  • To finish, click [Create Distribution]
  • You may be waiting quite a while for changes to propagate to the edge locations, but at some point before “In Progress” changes to “Deployed”, you will be able to pull the site up via the domain listed under the “Domain Name” column in your list of CloudFront distributions. (You can also poll the status from the CLI, as sketched after this list.)
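
Polling the deployment status from the CLI instead of refreshing the console (the distribution ID is a placeholder):

aws cloudfront get-distribution --id E1234567890ABC \
  --query 'Distribution.Status' --output text             # "InProgress" or "Deployed"
aws cloudfront wait distribution-deployed --id E1234567890ABC  # blocks until deployed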

Pointing the domain name at your distribution

  • Go back to Route 53 and go into the hosted zone for your domain name
  • Check the checkbox next to your A record and then go up to Actions -> Edit
  • Change “Value/Route traffic to” from “Alias to S3 endpoint” to “Alias to CloudFront distribution”
  • In the “Choose Distribution” input box, enter your distribution’s domain name (e.g., “asdfkjdfasoiadsf9u.cloudfront.net.”). (The new interface wasn’t suggesting distributions like the last version of the interface did… it may change next week, of course.)

Locking down S3

If you selected “Restrict bucket access” and had CloudFront update your S3 policy, the public access settings on the bucket are still unaffected. You’ll want to lock those down:

  • Go back to Services -> Amazon S3
  • Go to your domainname.com bucket
  • Click Permissions
  • Click Block public access
  • Check “Block all public access” and click [Save]

Some other details

If you want JavaScript and forms to function properly, you’ll want to set up a CORS configuration by going to your S3 bucket, selecting the Permissions tab, and clicking CORS configuration:

<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>https://thomaspowell.work</AllowedOrigin>
   <AllowedMethod>PUT</AllowedMethod>
   <AllowedMethod>POST</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>

   <AllowedHeader>*</AllowedHeader>
 </CORSRule>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
 </CORSRule>
</CORSConfiguration>

Some mistakes I made:

  • A certificate for *.domainname.com does not cover domainname.com. You have to add both if you want wildcard and domainname.com itself covered.

Next up… preventing someone from running up a $1,000 AWS bill by hammering your site (i.e., monitoring your site’s access… with better granularity than AWS Budgets).


Hosting a static site on S3 with a purchased domain

S3 Static Site Setup

Creating buckets

  • In your AWS Console, go to Services -> S3
  • Optional:
    • Click [ + Create Bucket ]
    • Type in a bucket name for static site logging (i.e., domainname.com-logs)
    • Accept next all the way to Create Bucket
  • Click [ + Create Bucket ]
  • Type in your domain name (for example, “domainname.com”)
  • If you created a logs bucket:
    • Check “Log requests for access to your bucket” under “Server access logging”
    • Enter the logging bucket name under the “Target bucket” field.
  • Hit [Next] for Configure Options
  • Under Set Permissions, uncheck “Block all public access” and check the box that says “I acknowledge that the current settings may result in this bucket and the objects within becoming public”
  • Click Create Bucket

Creating a static site

  • Drag and drop your static HTML files and assets from the site root of your project to S3. Be sure you have the HTML files you want to use as an index page and as an error page.
  • Click [Next] on the Upload modal.
  • Under “Manage public permissions” change “Do not grant public read access to this object(s)” to “Grant public read access to this object(s)” and click [Next]
  • Click [Upload] from the “Set Properties” step (skip Storage Class configuration, etc. screen).
  • Under the “Properties” tab for your bucket, click on the “Static website hosting” tile
  • Select “Use this bucket to host a website”, enter the name of your index and error documents, and click [Save].
  • You should be able to click the link under Endpoint and see your index page. (The same setup can be scripted from the CLI, as sketched after this list.)
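
The hosting setup and upload can also be done with the AWS CLI (the bucket name and local path are placeholders):

aws s3 website s3://domainname.com/ \
  --index-document index.html --error-document error.html
aws s3 sync ./site s3://domainname.com --acl public-read  # upload with public read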

Create a hosted zone for your domain in Route 53

  • In Route 53, select Hosted zone link on the left of the console
  • Click [Create hosted zone]
  • Enter your Domain name
  • Select “Public hosted zone”
  • Click “Create hosted zone”
  • If you have a domain purchased elsewhere than AWS, copy the name servers under the “Hosted zone details” and set on your domain. (e.g., on Namecheap, it’s under Domain->Nameservers->Custom DNS… BE SURE TO HIT THE GREEN CHECKMARK AFTER EDITS!!)

Create a Record in the Hosted Zone

  • Select “Create record”
  • Select “Simple record”
  • Under “Define simple record”
    • leave the record name blank
    • Value/Route traffic to
      • Alias to S3 website endpoints
      • Select your region
      • In the field that says “Choose S3 Bucket”, you should see your bucket as an option:
  • To finish, click [Define simple record] and then [Create records]