Bootstrap a Kubernetes cluster with custom binaries. No Kubeadm!
Currently, a lot of Kubernetes bootstrappers are black boxes that don't provide much flexibility in how the cluster is set up. Devkube changes that. Based on Kelsey Hightower's Kubernetes The Hard Way (KTHW), it wires together a cluster with custom binaries, complete with Flannel and CoreDNS.
- Use custom binaries for each component (`kube-scheduler`, `kube-apiserver`, `kubelet`, etc.)
  - The `variables.yml` file has links for the components.
  - This is useful if you are developing a new Kubernetes feature and wish to test the change out on a real cluster (see the sketch after this list).
- Change the flags on any component
  - Update the config file for the component and run the relevant playbook to deploy the changes.
- Add new nodes
  - Create the VM, run the relevant playbooks, and the node should automatically join the cluster.
- Use the cluster for testing manifests, etc.
  - Since the cluster can be created in a few minutes on bare VMs, devkube can be used to quickly bring clusters up and down for testing purposes.
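
For example, here is a minimal sketch of feeding the playbooks a locally built `kubelet`. This is an assumed workflow, not devkube functionality: the checkout path, the HTTP server, and the exact key name inside `variables.yml` are all placeholders to adapt.

```sh
# Build a custom kubelet from a local Kubernetes checkout (path is an assumption).
cd ~/go/src/k8s.io/kubernetes
make WHAT=cmd/kubelet

# Serve the built binary over HTTP so the VMs can download it; any endpoint
# reachable from the nodes works, this simple server is only for illustration.
cd _output/bin          # output location may differ depending on your build setup
python3 -m http.server 8000

# Then point the kubelet link in variables.yml at http://<your-machine-ip>:8000/kubelet
# and run the workers.yml playbook to roll it out.
```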
- Provision the VMs on your favorite cloud provider
  - The VMs should be running Ubuntu 18.04 and be able to communicate with each other via private IPs (on all ports).
  - The master node should have port `6443` open for the `api-server`.
- Download the required dependencies on your machine (at minimum, Ansible, since the playbooks are run with `ansible-playbook`)
- Edit the `hosts.ini` file
  - Enter the public IP, the ssh-able username, and the private IP of each server.
  - Make sure each server is ssh-able with `ssh <user>@<public-ip>` and the user has root access (a quick pre-flight check is sketched after this list).
  - The first server (`k8s-node-1`) will become the master and all the other nodes will join as workers.
- Run the `play.yml` playbook
  - `ansible-playbook play.yml -i hosts.ini`
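
Before running the playbook, a quick pre-flight check along these lines (placeholder values, not devkube commands) can save a failed run:

```sh
# Confirm SSH and root access for every host listed in hosts.ini.
ssh <user>@<public-ip> 'sudo whoami'    # should print "root"

# Confirm port 6443 is open on the master: "connection refused" before the
# cluster exists still means the firewall allows it; a timeout means it is blocked.
nc -vz <master-public-ip> 6443
```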
Some of the main files are:
- `variables.yml` - This file has various top-level configs, including `release`.
  - `release` has 2 possible values (see the example after this list):
    - `latest` - pulls in the latest artifact from Kubernetes CI
    - a specific version, e.g. `v1.16.0-beta.1` - can be anything
- `root_certs.yml` - This sets up the root CA and generates the public key and signing key for it.
- `certs.yml` - This sets up all the certificates for the master and worker node components and transfers them to the servers.
- `control_plane.yml` - This downloads the control plane binaries and bootstraps the control plane.
- `workers.yml` - This installs the worker binaries and brings the worker nodes up.
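
For instance, pinning the cluster to a specific build is just an edit to `release` in `variables.yml`. The `sed` one-liners below assume a top-level `release:` key, so check the file's actual layout before using them:

```sh
# Pin to a specific Kubernetes version (assumes a top-level "release:" key).
sed -i 's/^release:.*/release: v1.16.0-beta.1/' variables.yml

# Or track the latest artifact from Kubernetes CI instead.
sed -i 's/^release:.*/release: latest/' variables.yml
```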
- Add a new node to the existing cluster (see the command sketch after this list)
  - Run `certs.yml` to generate the certs for the new node and transfer them to the server.
  - Run `workers.yml` to install the binaries and get the node up.
- Change a flag on the `api-server`
  - Edit the `kube-apiserver.service.j2` file and run the `control_plane.yml` playbook.
- Change the kubelet binary
  - Update the `variables.yml` file with the new binary endpoint.
  - Run the `workers.yml` playbook.
- Avoid downloading worker node binaries
  - This might be required if, for example, you change a flag on the kubelet and don't need a fresh download of the binaries.
  - Run `ansible-playbook workers.yml -i hosts.ini --skip-tags "downloads"`
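
The tasks above boil down to re-running the relevant playbooks. A hedged sketch follows; whether an extra flag such as `--limit` is needed to target only a new node is an assumption to verify against your inventory:

```sh
# Add a new node: generate and copy its certs, then bring the worker up.
ansible-playbook certs.yml -i hosts.ini
ansible-playbook workers.yml -i hosts.ini

# Change an api-server flag: edit kube-apiserver.service.j2, then redeploy the control plane.
ansible-playbook control_plane.yml -i hosts.ini

# Change the kubelet binary: update variables.yml, then re-run the workers playbook.
ansible-playbook workers.yml -i hosts.ini

# Re-run workers without re-downloading binaries (e.g. after only changing a kubelet flag).
ansible-playbook workers.yml -i hosts.ini --skip-tags "downloads"
```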
Please file an issue if you face any problems. Better still, help fix it and make a PR!
