I am a cheapskate, at least when it comes to cloud services.
I will happily shell out for a nice home lab, but there is something about a monthly payment that brings out my frugality. Thus I try to pare down as much usage of cloud resources as I can.
I’ve got a handful of stuff that I host on some EC2 instances. Largest among them is probably my Ubiquiti UniFi controller, which services not only my WiFi installation but also that of some “clients” (read: friends).
My day job is working with Kubernetes. At Rancher Labs, I spend all day talking to clients about Kubernetes – so it only made sense for me to want to host these projects on K8s. However, being the cheapskate that I am, running K8s in the cloud is not what *I* would consider cheap. EKS is about $72/mo just for the control plane – not including any worker nodes. I love Rancher software, but running a full K8s stack would require at least t2.mediums, which would run me about $33/mo each ($0.0464/hr * 24 hrs * 30 days).
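That per-node math is easy to sanity-check. A quick sketch using the on-demand hourly rate quoted above (rates vary by region and change over time):

```shell
# Approximate monthly cost of one t2.medium: hourly rate * 24 hrs * 30 days
RATE=0.0464
awk -v r="$RATE" 'BEGIN { printf "t2.medium: ~$%.2f/mo\n", r * 24 * 30 }'
# -> t2.medium: ~$33.41/mo
```

Multiply by three or four nodes and the savings from $5 Lightsail instances add up quickly.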
Sure I could do spot instances, or long-term contracts, or whatever. But I found a solution I liked a little more: Amazon Lightsail.
If you’re not familiar with Amazon Lightsail, here is a snippet from a description on the AWS website:
"Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan." (https://aws.amazon.com/lightsail/)
What this really means? Cheap virtual machines. A 1GB/1CPU instance with 40GB SSD and 2TB transfer will run you five US dollars per month. A comparable t2-series instance (t2.micro) will cost approximately $8 USD/mo.
1GB/1CPU is not a lot of horsepower, so obviously a full k8s cluster does not make much sense. However, did I mention I work for Rancher Labs? We have this awesome little distribution of Kubernetes called k3s.
If you’re not familiar with k3s, here’s a snippet from the site:
"K3s is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances." (https://k3s.io/)
See that little “resource-constrained” portion? Great! Let’s set up some cheap lightsail instances, and run k3s on them.
You’re going to need an AWS account. I think this can be a lightsail-only account, but if you have a full AWS account, you can use that too.
You’ll also want to get a copy of Alex Ellis’ excellent k3sup tool. This is what we will use to install k3s onto the nodes.
Also have a copy of kubectl handy. The latest release of k3s runs Kubernetes 1.17, so if your kubectl is 1.17 or greater, perfect.
Details such as OS and instance size may be modified to your taste. These are what I used, but feel free to experiment!
- Log onto the Lightsail console and create a new instance. Select the Linux/Unix platform, and then choose Ubuntu 18.04 LTS. For the instance size, select the $5 USD option.
- Create four nodes using this pattern.
- One will be your master node. Call that one "master".
- Three will be agents. Call them "agent" and scale the count to 3.
- Be sure you save your SSH keypair to a well-known location! This is important as we will use that SSH key to connect to the nodes and provision k3s.
- Once all the nodes have been created, let’s give them static IPs. This is important in case you need to stop/start your nodes in the future – we don’t want their IPs to change!
- For each node, click on the name of the node and go to “Networking” tab.
- On the networking tab, click “Create static IP”
- Select your instance, and assign the new static IP to that instance.
- Repeat this process for each node in your cluster (master, agent-1, agent-2, agent-3).
- In order to communicate with our master node, we’ll need to adjust the firewall rules for the node.
- Once again, click on the master node and go to the “Networking” tab.
- Click on “Add Rule”
- Specify “Custom” application, “TCP” protocol, and “6443” as the port.
- Important: Consider restricting this to an IP! By default this will be open to the world and anyone will be able to connect to your Kubernetes API server on 6443. I limit the IP address to my home IP. This can be discovered by going to ipchicken.com.
- Click “Create” to save this rule.
- In order for our agent nodes to communicate with the master (and with each other), we will need to add firewall rules between the nodes. Grab a piece of paper (or text editor) and jot down the private IPs of your nodes – for example, master 172.26.1.104, agent-1 172.26.2.76, and so on.
Now, go node-by-node and set up firewall rules according to the following steps:
- Click on the node, and go to the “Networking” tab
- Click on “Add Rule”
- Specify “All Protocols” application
- Check the Restrict to IP address box, and enter the IP addresses of every node except the node you are editing. For example, if I am configuring the rules for agent-2, I would enter the IPs of master, agent-1, and agent-3.
- Perform these steps for all nodes (master, and all agents).
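The pattern behind these rules is simple: each node's firewall allows the IP of every other node. This little shell sketch prints, for each node, the peer IPs its rule should list (the IPs here are examples – substitute the ones you jotted down):

```shell
# Example private IPs -- replace with your own. Format: name:ip
NODES="master:172.26.1.104 agent-1:172.26.2.76 agent-2:172.26.3.50 agent-3:172.26.4.25"

# For each node, print every peer IP (all nodes except itself).
for node in $NODES; do
  name="${node%%:*}"
  ip="${node##*:}"
  peers=""
  for other in $NODES; do
    [ "$other" = "$node" ] && continue
    peers="$peers ${other##*:}"
  done
  echo "$name ($ip) allows:$peers"
done
```

Handy as a checklist while you click through the Lightsail console – one printed line per node, three peer IPs per line.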
- Now that the nodes are set up, let's head to your command line. We need to install k3s on the master node first. To do so, execute the following command:
k3sup install --ip <master_node_ip> --user ubuntu --ssh-key <path_to_ssh_key> --local-path ~/.kube/lightsail
This will install k3s on the master node and write a kubeconfig file to ~/.kube/lightsail. If that is not a valid location on your system, you may need to tweak this command.
- Once you have a valid kubeconfig file, let's test whether the master is working. Issue the following command:
kubectl get nodes
You should see output similar to:
NAME              STATUS   ROLES    AGE   VERSION
ip-172-26-1-104   Ready    master   2m    v1.17.2+k3s1
Yay, our first k3s node is up!
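If kubectl complains that it cannot reach a cluster, it is probably reading the default ~/.kube/config instead of the file k3sup wrote. One quick fix, assuming the path from the install command above:

```shell
# Point kubectl at the kubeconfig that k3sup wrote for this cluster
export KUBECONFIG="$HOME/.kube/lightsail"
echo "Using kubeconfig: $KUBECONFIG"
```

With KUBECONFIG exported, every kubectl command in this shell session will talk to the new Lightsail cluster.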
- Let’s join the remaining agent nodes. To do so, issue the following command for one of your agent nodes:
k3sup join --server-ip <master_node_ip> --ip <agent_ip> --user ubuntu --ssh-key <path_to_ssh_key>
This should complete quickly, and a new node should join your cluster! To verify, execute kubectl get nodes once again and check the output:
NAME              STATUS   ROLES    AGE   VERSION
ip-172-26-1-104   Ready    master   5m    v1.17.2+k3s1
ip-172-26-2-76    Ready    <none>   1m    v1.17.2+k3s1
- Issue the join command above for the remaining agent nodes. Hooray! You have built a k3s cluster on Lightsail.
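Rather than typing the join command three times, you can loop over the agent IPs. A sketch (the IPs and key path are examples – substitute your static IPs and SSH key; the echo prints each command for review, remove it to actually run k3sup):

```shell
# Example values -- replace with your master's static IP and your key path
MASTER_IP=203.0.113.10
SSH_KEY=~/.ssh/LightsailDefaultKey.pem

# Preview the join command for each agent; drop "echo" to execute for real
for agent_ip in 203.0.113.11 203.0.113.12 203.0.113.13; do
  echo k3sup join --server-ip "$MASTER_IP" --ip "$agent_ip" --user ubuntu --ssh-key "$SSH_KEY"
done
```

One line per agent, and adding a fourth agent later is just one more IP in the list.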