How to connect to Google Kubernetes clusters in parallel
My approach to connecting to multiple clusters across multiple accounts from my terminal
In this post I’m going to talk through the approach I use to switch between multiple Google Kubernetes Engine clusters on the command line. I’d expect a lot of the stuff in here has some benefit for non-GCP Kubernetes clusters too, but the ones I use on a day-to-day basis are all hosted there.
Key outcomes for me were:
- To be able to switch from one cluster to another with only a small number of commands, even if I have to authenticate with different user accounts (as primarily a GCP user, this means different email addresses / GCP projects).
- To be connected to different Kubernetes clusters in different windows - for example tailing logs in a Prod and Non-Prod cluster at the same time - and for that connection to persist across multiple `kubectl` commands.
Spoiler Alert: I solved this with a bit of fiddling of `~/.kube/config` plus the marvellous kubie.
Background
I switched to this approach probably around six months ago. At work we have a relatively small number of clusters so up until that point I was pretty comfortable using what I think is the most common approach of kubectx + kubens and this worked well enough. However I found that I was increasingly getting an inconsistent experience when switching between the clusters I use for work and others I was using for running personal websites (like this one!) and for fiddling with things - so I started looking for alternatives.
For the purposes of explaining things, let's assume a hypothetical setup like this:
- A collection of GKE clusters across several GCP projects at work. Access to these is through a common Production email account, but they are split by project/cluster.
- A sandbox GKE cluster in a different Google organisation where the Prod email account doesn’t have access.
The hierarchy would therefore look a bit like this:
```yaml
- user:
    account: alex@prod.work
    gcp-projects:
      - gcp-project:
          name: prod
          cluster: brie
      - gcp-project:
          name: staging
          cluster: cheddar
- user:
    account: alex@dev.work
    gcp-projects:
      - gcp-project:
          name: sandbox
          cluster: chutney
```
I feel I should point out here that, at work, we do not name our clusters after types of cheese. I just really fancied some cheese when writing this, ok?
Enough background - on with how I set things up.
First, Multiple Google Accounts
My approach here was massively inspired by this blog post by Googler Daz Wilkin. I’m not going to repeat what is explained really well there already so go have a read if you want to understand why the following works!
For a brand new setup, you will need to `gcloud init` once to set up the default configuration. It will also be necessary to `gcloud auth login` on each account used at least once, and this may need refreshing once in a while (but not often enough for me to really notice).
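In other words, the one-off bootstrap amounts to something like this (the email addresses are the hypothetical ones from the earlier hierarchy):

```bash
# Run once per machine
gcloud init

# Run once for each Google account you'll be switching between
gcloud auth login alex@prod.work
gcloud auth login alex@dev.work
```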
I threw away my pre-saved `gcloud` configurations - not gonna need 'em! All I have in `~/.config/gcloud/` is a `config_default`, which gets updated with a simple bash script when I need to switch between Google Accounts/Projects.
A minimal sketch of the switching script follows - with the bits wrapped in `<< >>` to be replaced. I alias this so I would simply do `switch dev` for a pre-saved project, or `switch gcp-project user-email` to activate a new/rarely-used project.
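This version assumes the script's only job is to rewrite the default gcloud configuration; the pre-saved project/account pairs are the hypothetical ones from earlier, so treat it as illustrative and adjust to taste:

```bash
#!/usr/bin/env bash
# switch - point the default gcloud configuration at a different project/account
# Usage: switch <gcp-project> [user-email]
set -euo pipefail

project=$1

# Pre-saved projects have their account looked up; anything else needs the email passed in
case "${project}" in
  prod|staging) account=alex@prod.work ;;
  sandbox)      account=alex@dev.work ;;
  *)            account=${2:?"usage: switch <gcp-project> <user-email>"} ;;
esac

# These writes all land in the single ~/.config/gcloud/configurations/config_default
gcloud config set account "${account}"
gcloud config set project "${project}"
gcloud config set compute/zone "<<your-default-zone>>"
```

Because it rewrites the one default configuration rather than juggling named ones, every shell picks the change up straight away - which matters later on when kubie enters the picture.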
Second, Multiple Kubernetes Contexts
Here, my `~/.kube/config` file does not exist and I set up new configs under `~/.kube/configs/` whenever I have a new cluster I need to deal with. For the number I have to worry about, this is quite manageable, but it could be automated if you had a frequently-changing enough list to make it worthwhile (there's a sketch of that after the template below). The steps look like this (and must be done in a brand new shell not using `kubie` - see below!):
- Auth to the new cluster as normal: `gcloud container clusters get-credentials ${cluster} --project=${project} --zone=${zone}`. This adds an entry to your blank `~/.kube/config`.
- Copy a template config file (see below) into `~/.kube/configs/` with a unique name.
- Take the values for `clusters.cluster.certificate-authority-data` and `clusters.cluster.server` from the no-longer-blank `~/.kube/config` and put them into your new file created from the template.
- Update the `name:` fields for the cluster to reflect what you want it to be known as when you list your contexts - `clusters.name`, `contexts.name` and `contexts.context.cluster`. It does not have to match the cluster name exactly if you want to save typing.
- Delete `~/.kube/config` (unless you want to have a default cluster for when not using `kubie` - but you'll need to keep this file tidy to avoid confusion!).
The template I mentioned for this looks as follows:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <<<your-cert-goes-here>>> # taken from .kube/config
    server: <<<https://(your-apiserver-ip)>>> # taken from .kube/config
  name: <<<name-of-context>>> # your choice of name
contexts:
- context:
    cluster: <<<name-of-context>>> # your choice of name
    user: gcloud-account
  name: <<<name-of-context>>> # your choice of name
kind: Config
preferences: {}
users:
- name: gcloud-account
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud # your path may vary
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
```
While it looks like a few things to change, in practice it only takes a few seconds - and only needs doing for a fresh cluster. Quite ok really.
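That list of steps is also simple enough to script if you end up doing it often - here is a rough, untested sketch, assuming the template above has been saved as `~/.kube/template.yaml` (that filename is just for illustration):

```bash
#!/usr/bin/env bash
# Sketch: create a ~/.kube/configs/<name>.yaml entry for a new GKE cluster
# Usage: new-cluster-config <cluster> <gcp-project> <zone> <context-name>
set -euo pipefail

cluster=$1 project=$2 zone=$3 name=$4
mkdir -p ~/.kube/configs

# Step 1: let gcloud populate a fresh ~/.kube/config
rm -f ~/.kube/config
gcloud container clusters get-credentials "${cluster}" --project="${project}" --zone="${zone}"

# Steps 2-4: lift the cert and server out of it and fill in the template
ca=$(grep 'certificate-authority-data:' ~/.kube/config | awk '{print $2}')
server=$(grep 'server:' ~/.kube/config | awk '{print $2}')

sed -e "s|<<<your-cert-goes-here>>>|${ca}|" \
    -e "s|<<<https://(your-apiserver-ip)>>>|${server}|" \
    -e "s|<<<name-of-context>>>|${name}|g" \
    ~/.kube/template.yaml > ~/.kube/configs/"${name}".yaml

# Step 5: remove the default config again so the per-cluster files are the only way in
rm ~/.kube/config
```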
Using my list of clusters from earlier, I would have `brie.yaml`, `cheddar.yaml` and `chutney.yaml` all in my `.kube/configs/` directory, with valid certs/server details but no mention of any GCP account details (we will just use the current gcloud config at the time we connect to them, thanks to the `switch` script).
Finally, Loading Parallel Kubernetes Contexts
To make use of this shiny new config, we bring in kubie. This tool works in a similar way to `kubectx` + `kubens` - you specify `kubie ctx` to set your current cluster, and `kubie ns` to select a namespace. The difference is that when you run `kubie ctx`, it spawns a new shell within your terminal window, with the context loaded into that shell.
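In terms of what you actually type, a session looks roughly like this (using the cheese clusters from earlier):

```bash
kubie ctx cheddar      # spawns a sub-shell bound to the cheddar context
kubie ns kube-system   # pick a namespace within that shell
kubectl get pods       # runs against cheddar / kube-system
exit                   # drop back to the original, context-free shell
```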
What that means in practice is you can have a terminal on e.g. the left of your screen connected to `prod` and a terminal on the right of your screen connected to `dev`, and both continue to work independently from each other. This is really marvellous.
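One note on discovery: kubie finds contexts by scanning a list of config paths, and as far as I know its defaults already cover a `~/.kube/configs/` directory. If your layout differs, my understanding is the search paths can be adjusted in `~/.kube/kubie.yaml` - roughly as below, though treat the exact keys as an assumption and check the kubie README:

```yaml
configs:
  include:
    - ~/.kube/config
    - ~/.kube/configs/*.yaml
```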
I have sufficient muscle memory that I had to `alias kctx='kubie ctx'` and `alias kns='kubie ns'` to save re-learning / more typing.
There's also a `kubie exec` to run just one command using a different context without swapping out the whole shell if you prefer - for example `kubie exec cheddar kube-system kubectl get pods`. This is really handy if you want to use this in scripts across multiple clusters.
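For instance, a quick (purely illustrative) sweep across the clusters from earlier could look like:

```bash
#!/usr/bin/env bash
# Check kube-system pods on every cluster without leaving the current shell
for ctx in brie cheddar chutney; do
  echo "--- ${ctx} ---"
  kubie exec "${ctx}" kube-system kubectl get pods
done
```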
There's way more info/options available - see the project on GitHub for more ideas.
How This Works in Practice
If working with two clusters and a shared user account, then I simply issue `kctx brie` and `kctx cheddar` in separate terminals and I'm away.
If the second cluster needs a separate user account, then I would `switch sandbox alex@dev.work` first, then `kctx chutney`, and I'm sorted. The only thing I need to keep in mind here is that my gcloud context has switched globally (no equivalent of `kubie` here), so any gcloud SDK commands are going to be against `sandbox` in both terminal windows (unless I switch again) - but my `kubectl` commands are fine (I suspect until the refresh token expires, but in practice I've never had an issue).
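Sketched out as two side-by-side terminals (the deployment name is made up for illustration):

```bash
# Terminal 1 - work account, production cluster
kctx brie
kubectl logs -f deploy/my-app     # hypothetical workload, keeps streaming

# Terminal 2 - point gcloud at the sandbox, then connect
switch sandbox alex@dev.work
kctx chutney
kubectl get pods                  # runs against chutney in this shell only
```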
To show this working in practice:
Before you get any ideas, the Cheddar cluster is long-since deleted - that certificate in the video is useless 😄
And here’s an example with two different clusters being watched at the same time:
And that’s a wrap - hopefully this inspires you to give kubie a try!