
Reliable Kubernetes on a Raspberry Pi Cluster: Security

Scott Jones
Jan 18, 2021

With a fully working k3s cluster at our disposal, and dashboards exposing its contents, we need to think about locking access down, especially before we deploy other (potentially insecure) apps to the cluster.

Part 1: Introduction
Part 2: The Foundations
Part 3: Storage
Part 4: Monitoring
Part 5: Security

The Problem

Everything we have deployed so far is readily accessible to anyone with the URL. This isn’t so bad if it’s only on your local network (let’s ignore, for now, the fact that people can gain access to your local network), but as soon as you expose it to the internet you’re opening yourself up to a whole new can of worms. Let’s say you run a system such as Node-Red, like I do. By default, it provides no authentication. Let’s then say you hook that up to your Home Assistant instance. That’s secure, right? It forces a username and password on you to access it, or long-lived API tokens for things like Node-Red. Hold on a minute, though: what about this?

A possible flow in Node-Red

An anonymous user could set up a flow connecting to my preconfigured Home Assistant service (which, coincidentally, supports auto-complete on entities) and, upon seeing which entities I have, create themselves a one-click way of unlocking my front door. With a bit more work, they could also find out where I live, and when the house is empty. I’m sure you can see why this needs to be locked down!

The (Catch-All) Solution

A very simple way to secure this is to put everything behind an OAuth system. Since I have everything tied to my Google account anyway, Google OAuth is the perfect solution. This way, only I (and my family) will be able to access any of these systems, meaning I’m no longer making the keys to my house publicly available. Dashboard metrics are considerably less risky to expose, but let’s go back to the Traefik dashboard we exposed back in Part 2. Looking back at traefik-dashboard.yaml, we have this:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboardsecure
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.yourdomainhere.com`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
  tls:
    certResolver: cloudflare

The Caveat

Google’s OAuth system requires that we use a valid, publicly registered domain for our SSO provider. This is good practice anyway, since you control the DNS records and don’t have to edit hosts files everywhere, and it is also a prerequisite for getting valid LetsEncrypt certificates for our systems. The rest of the article works on the assumption that you have this set up.
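If you want to sanity-check that side of things first, a couple of quick lookups will confirm your records resolve where you expect (this assumes you have already created DNS records for the subdomains used below; substitute your own domain):

$ dig +short oauth.yourdomainhere.com
$ dig +short traefik.yourdomainhere.com

Both should return the IP address that fronts your Traefik ingress.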

SSO Provider

First things first, we need to deploy an SSO provider to our cluster to handle the authentication. We will put that in traefik-forward-auth.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik-sso
  namespace: traefik
  labels:
    app: traefik-sso
spec:
  selector:
    matchLabels:
      app: traefik-sso
  template:
    metadata:
      labels:
        name: traefik-sso
        app: traefik-sso
    spec:
      containers:
        - name: traefik-sso
          image: thomseddon/traefik-forward-auth:2-arm
          imagePullPolicy: Always
          env:
            - name: PROVIDERS_GOOGLE_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: traefik-sso
                  key: clientid
            - name: PROVIDERS_GOOGLE_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: traefik-sso
                  key: clientsecret
            - name: SECRET
              valueFrom:
                secretKeyRef:
                  name: traefik-sso
                  key: secret
            - name: COOKIE_DOMAIN
              value: yourdomainhere.com
            - name: AUTH_HOST
              value: oauth.yourdomainhere.com
            - name: INSECURE_COOKIE
              value: "false"
            - name: WHITELIST
              value: <<your.email@here.com>>
            - name: LOG_LEVEL
              value: debug
          ports:
            - containerPort: 4181
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-sso
  namespace: traefik
spec:
  selector:
    app: traefik-sso
  ports:
    - protocol: TCP
      port: 4181
      targetPort: 4181
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: sso
  namespace: traefik
spec:
  forwardAuth:
    address: http://traefik-sso.traefik.svc.cluster.local:4181
    authResponseHeaders:
      - "X-Forwarded-User"
    trustForwardHeader: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-sso
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`oauth.yourdomainhere.com`)
      kind: Rule
      services:
        - name: traefik-sso
          port: 4181
      middlewares:
        - name: traefik-sso@kubernetescrd
  tls:
    certResolver: cloudflare

There is a lot going on in here, but it’s not as horrible as it looks. The image we use for our SSO provider takes all of its configuration from environment variables, so we need to set those up. There is an IngressRoute to our new service, which is something we have come across before. The new thing here is the Middleware, which leverages Traefik’s forwardAuth feature. At a high level, all we are doing is telling Traefik that whenever it runs this middleware, it must call out to our service to check whether the request is authenticated.

We cannot apply this manifest yet, because it relies on a secret (traefik-sso) with three keys we have yet to create: clientid, clientsecret, and secret. To get the Google values, we need to head over to the Google Developers Console. When you’re there, hit Credentials on the side, then Create Credentials up top (you may need to create a project if you don’t already have one). The one we want is OAuth client ID.

Creating new credentials

When you’re in there, you need to set a type (Web application), a name (not important, but let’s say Traefik OAuth), and a valid redirect URI (https://oauth.yourdomainhere.com/_oauth). The rest is not necessary.

Valid example setup

Upon creation, you will be given your client ID and secret, which you need for our provider. (Note: the credentials in the image are throwaway and were never used.)

Example OAuth client

Once you have these, we can create our secret in k3s:

$ kubectl create secret generic traefik-sso \
--from-literal=clientid=XXX.apps.googleusercontent.com \
--from-literal=clientsecret=XXX \
--from-literal=secret="$(openssl rand -base64 128)" \
-n traefik
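Before applying the manifest, it’s worth double-checking that the secret landed with the three keys the deployment expects. kubectl describe shows the key names (but not the values):

$ kubectl describe secret traefik-sso -n traefik

You should see clientid, clientsecret, and secret listed under Data.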

Finally, with everything in place, we can apply the manifest we created above

$ sudo kubectl apply -f traefik-forward-auth.yaml
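Give it a moment to pull the image, then check that the pod is running and have a look at the logs. Since we set LOG_LEVEL to debug, the output is fairly chatty and will make any misconfiguration obvious:

$ kubectl get pods -n traefik -l app=traefik-sso
$ kubectl logs -n traefik deployment/traefik-sso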

Now when you go to your Traefik dashboard and take a look at the middlewares, you will see a new one in there.

SSO middleware

So that should all be in place now, but nothing is using it yet. We finally need to apply it to one of our routes, such as our dashboard example. Let’s update traefik-dashboard.yaml:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboardsecure
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.yourdomainhere.com`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
      middlewares:
        - name: traefik-sso@kubernetescrd
  tls:
    certResolver: cloudflare

Notice the new middlewares section that has been added. This tells Traefik to run our new middleware on every request to this route.

Finally, apply it the usual way

$ sudo kubectl apply -f traefik-dashboard.yaml

The Moment of Truth

Now that’s all hooked up and running, go ahead and hit your dashboard at https://traefik.yourdomainhere.com. What you should see is the Google account sign-in page.

Google sign in for Traefik

Drop your credentials in and you will find yourself back at your (now secured!) dashboard.
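If you prefer to verify it from the command line, a request without a valid session cookie should come back as an HTTP redirect (3xx) towards Google’s sign-in rather than the dashboard itself (assuming your DNS and certificates are in place):

$ curl -sI https://traefik.yourdomainhere.com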

Congratulations! You can now secure any or all of your endpoints using the same OAuth flow and keep your web systems safe. As mentioned earlier in the article, the importance of this really cannot be overstated. Even the simplest of dashboards could leak enough information for serious damage to be done in the wrong hands, never mind if you ever get to the point of exposing a mechanism for unlocking your front door to the world! Protecting another route is just a matter of attaching the middleware to it, as shown below.
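As a purely hypothetical example, if you had Node-Red exposed via a Service called nodered on port 1880 in a nodered namespace (names and namespace here are assumptions, not something we set up in this series), the protected route might look like this:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nodered
  namespace: nodered
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`nodered.yourdomainhere.com`)
      kind: Rule
      services:
        - name: nodered
          port: 1880
      middlewares:
        - name: traefik-sso@kubernetescrd
  tls:
    certResolver: cloudflare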
