Introduction
Single sign-on (SSO) through OIDC is everywhere these days, so it is only natural that infrastructure access should be handled the same way. In this post, I'll show a simple example configuration for enabling authentication through GitHub for a K3s Kubernetes cluster.
This is based on the article Kubernetes authentication via GitHub OAuth and Dex by Amet Umerov, but adapted to work better with K3s and without Helm.
Disclaimer
I am by no means an expert in login flows or security. I would not blindly put this into production and expect it to be secure. Do your security homework, people!
With that said, this can be a starting point to develop upon and harden, or a way of learning a little bit about Kubernetes authentication.
What We Will Build
An authentication flow that enables login to a K3s Kubernetes cluster by navigating to a simple web page such as login.k8s.example.com. Authentication is done through GitHub, where organization team members may be given permissions based on their team memberships. However, this may be adapted to work with any OIDC provider, such as Microsoft, Google, GitLab, and more!
Prerequisites
There are a number of prerequisites that I consider to be out of scope of this article:
- A domain that can be pointed to our Kubernetes API and login endpoint. I will use k8s.example.com throughout this article, but this needs to be changed to your own domain if you decide to follow along.
- Cert Manager installed and able to issue valid certificates for the aforementioned domain. There are plenty of guides out there on how to configure a Let's Encrypt ClusterIssuer; a minimal sketch is included below for reference.
- A GitHub organization. We will use the name my-org throughout this article.
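The sketch below assumes HTTP-01 validation through the nginx ingress class and uses a placeholder e-mail address and account-key secret name; adjust both to your environment.

# Minimal Let's Encrypt ClusterIssuer sketch (HTTP-01 via the nginx ingress class).
# The e-mail address and secret name are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx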
Configuration
The manifests in this article will be managed by Kustomize:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
  - dex.yaml
  - dex-k8s-authenticator.yaml
  - certificates.yaml
  - rolebinding.yaml
configMapGenerator:
  - name: dex
    files:
      - config.yaml=dex-config.yaml
  - name: dex-k8s-authenticator
    files:
      - config.yaml=dex-k8s-authenticator-config.yaml
As can be seen, we will configure a number of files. These are:
- dex.yaml - Dex deployment and related Kubernetes resources.
- dex-k8s-authenticator.yaml - Dex K8s Authenticator deployment and related Kubernetes resources.
- certificates.yaml - Cert Manager certificate configuration.
- rolebinding.yaml - Sample Kubernetes RoleBinding for authenticated users.

Dex and the Dex K8s Authenticator are both configured using YAML supplied through ConfigMaps. We define the respective configs in dex-config.yaml and dex-k8s-authenticator-config.yaml and later use Kustomize's ConfigMapGenerator to create the ConfigMaps.
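Once all of the files described in the following sections are in place, the whole stack can be rendered and applied with kubectl's built-in Kustomize support, for example:

# Inspect the rendered manifests first
kubectl kustomize .
# Apply everything to the cluster
kubectl apply -k .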
Dex Deployment
Here's dex.yaml, which contains all the resources related to Dex. See below for details.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dex-auth-sa
---
# This should be configured in a secure way.
# This is left in here as a config hint.
#
# apiVersion: v1
# kind: Secret
# metadata:
#   name: github-client-info
# data:
#   github-client-id: ""
#   github-client-secret: ""
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dex
rules:
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dex
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dex
subjects:
  - kind: ServiceAccount
    name: dex-auth-sa
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dex
  namespace: kube-system
rules:
  - apiGroups:
      - dex.coreos.com
    resources:
      - "*"
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dex
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dex
subjects:
  - kind: ServiceAccount
    name: dex-auth-sa
    namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  name: dex
spec:
  type: ClusterIP
  ports:
    - port: 5556
      targetPort: http
      name: http
  selector:
    app: dex
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dex
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
  replicas: 1
  minReadySeconds: 30
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  selector:
    matchLabels:
      app: dex
  template:
    metadata:
      labels:
        app: dex
    spec:
      volumes:
        - name: config
          configMap:
            name: dex
            items:
              - key: config.yaml
                path: config.yaml
        - name: tls-crt
          hostPath:
            path: /var/lib/rancher/k3s/server/tls/server-ca.crt
        - name: tls-key
          hostPath:
            path: /var/lib/rancher/k3s/server/tls/server-ca.key
      serviceAccountName: dex-auth-sa
      nodeSelector:
        node-role.kubernetes.io/control-plane: "true"
      containers:
        - name: dex
          image: "dexidp/dex:latest"
          imagePullPolicy: IfNotPresent
          command: ["/usr/local/bin/dex", "serve", "/etc/dex/config.yaml"]
          env:
            - name: GITHUB_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: github-client-info
                  key: github-client-id
            - name: GITHUB_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: github-client-info
                  key: github-client-secret
            - name: KUBERNETES_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 5556
          livenessProbe:
            httpGet:
              path: /healthz
              port: 5556
          readinessProbe:
            httpGet:
              path: /healthz
              port: 5556
            initialDelaySeconds: 5
            timeoutSeconds: 1
          volumeMounts:
            - name: config
              mountPath: /etc/dex/config.yaml
              subPath: config.yaml
              readOnly: true
            - name: tls-crt
              readOnly: true
              mountPath: /etc/dex/tls/tls.crt
            - name: tls-key
              readOnly: true
              mountPath: /etc/dex/tls/tls.key
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dex
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - dex.k8s.example.com
      secretName: cert-auth-dex
  rules:
    - host: dex.k8s.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dex
                port:
                  number: 5556
There are a few notable items in the configuration above.
- We first need to configure a GitHub OAuth application for Dex. To do this, navigate to https://github.com/organizations/my-org/settings/applications (change my-org). Create a new application and fill in the following fields after replacing example.com with your own domain:
  - Homepage URL: https://dex.k8s.example.com
  - Callback URL: https://dex.k8s.example.com/callback
- Once the application has been registered, take note of the Client ID and Client secret. They should be supplied to Dex using your secret management workflow of choice. Here, we will simply create a Secret:

export CLIENT_ID="<your id>"
export CLIENT_SECRET="<your secret>"
kubectl create secret generic github-client-info \
  --namespace kube-system \
  --from-literal=github-client-id="$CLIENT_ID" \
  --from-literal=github-client-secret="$CLIENT_SECRET"

- We are reusing K3s' generated server CA for our Dex TLS configuration. Thus, we are locking Dex to only run on control-plane nodes. This simplifies the process massively.
- An Ingress object is configured such that the dex.k8s.example.com route is set up with a valid TLS certificate. This is important: if GitHub does not trust the certificate for the URLs entered when registering the application, the authorization flow will not work. The secret cert-auth-dex will be created later. Once everything is deployed, you can sanity-check the endpoint as shown below.
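A quick way to verify that Dex is reachable and serving its OIDC discovery document on the new hostname is to query the standard discovery endpoint from any machine that can resolve the domain; the response should be a JSON document listing, among other things, the authorization and token endpoints.

curl https://dex.k8s.example.com/.well-known/openid-configuration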
Dex Configuration
The Dex configuration file, dex-config.yaml, may be seen below. See the Dex docs for more information.
issuer: https://dex.k8s.example.com
storage:
  type: kubernetes
  config:
    inCluster: true
web:
  http: 0.0.0.0:5556
frontend:
  theme: "coreos"
  issuer: "Example Co"
  issuerUrl: "https://example.com"
  logoUrl: https://example.com/images/logo-250x25.png
expiry:
  signingKeys: "6h"
  idTokens: "24h"
logger:
  level: debug
  format: json
oauth2:
  responseTypes: ["code", "token", "id_token"]
  skipApprovalScreen: true
connectors:
  - type: github
    id: github
    name: GitHub
    config:
      # These are pulled from environment variables
      clientID: $GITHUB_CLIENT_ID
      clientSecret: $GITHUB_CLIENT_SECRET
      redirectURI: https://dex.k8s.example.com/callback
      orgs:
        - name: my-org
      loadAllGroups: true
      useLoginAsID: true
# The 'id' must match the k8s API server's 'oidc-client-id'
staticClients:
  - id: dex-k8s-authenticator
    name: dex-k8s-authenticator
    secret: "<generated secret>"
    redirectURIs:
      - https://login.k8s.example.com/callback
Notably, the static client dex-k8s-authenticator has a secret field. This should be replaced by a random string, which will also be needed in the next step.
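Any sufficiently long random string works. One way to generate it, assuming openssl is available:

openssl rand -base64 32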
Dex K8s Authenticator Deployment
The Dex K8s Authenticator is what will allow us to navigate to the login.k8s.example.com web page to authenticate with our Kubernetes cluster. Similarly to Dex, we deploy the Dex K8s Authenticator using manifests. They are found in dex-k8s-authenticator.yaml.
---
apiVersion: v1
kind: Service
metadata:
  name: dex-k8s-authenticator
spec:
  type: ClusterIP
  ports:
    - port: 5555
      targetPort: http
      name: http
  selector:
    app: dex-k8s-authenticator
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dex-k8s-authenticator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dex-k8s-authenticator
  template:
    metadata:
      labels:
        app: dex-k8s-authenticator
    spec:
      containers:
        - name: dex-k8s-authenticator
          image: "mintel/dex-k8s-authenticator:latest"
          imagePullPolicy: IfNotPresent
          args: [ "--config", "config.yaml" ]
          ports:
            - name: http
              containerPort: 5555
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
          readinessProbe:
            httpGet:
              path: /healthz
              port: http
          volumeMounts:
            - name: config
              subPath: config.yaml
              mountPath: /app/config.yaml
            - name: tls
              mountPath: /etc/tls/tls.crt
      volumes:
        - name: config
          configMap:
            name: dex-k8s-authenticator
        - name: tls
          hostPath:
            path: /var/lib/rancher/k3s/agent/server-ca.crt
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dex-k8s-authenticator
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - login.k8s.example.com
      secretName: cert-auth-login
  rules:
    - host: login.k8s.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dex-k8s-authenticator
                port:
                  number: 5555
The authenticator also requires its own YAML configuration file, dex-k8s-authenticator-config.yaml:
listen: http://0.0.0.0:5555
web_path_prefix: /
debug: false
clusters:
  - client_id: dex-k8s-authenticator
    client_secret: "<same secret as in the dex configuration>"
    issuer: https://dex.k8s.example.com
    k8s_ca_pem_file: /etc/tls/tls.crt
    k8s_master_uri: https://k8s.example.com:6443
    name: k8s.example.com
    redirect_uri: https://login.k8s.example.com/callback
    short_description: My K3s cluster
    description: My K3s cluster
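After a successful login, the authenticator page generates cluster-specific instructions for configuring kubectl. They boil down to something roughly like the sketch below; the real commands, tokens, and CA settings come from the login page itself, and this is only meant to illustrate what ends up in your kubeconfig. Note that the built-in oidc auth-provider has been removed from recent kubectl releases, which is one more reason kubelogin (see Limitations) may be a better fit.

# Sketch of the kind of credentials entry the login page generates
kubectl config set-credentials "<your github login>" \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://dex.k8s.example.com \
  --auth-provider-arg=client-id=dex-k8s-authenticator \
  --auth-provider-arg=client-secret=<generated secret> \
  --auth-provider-arg=id-token=<id token from the login page> \
  --auth-provider-arg=refresh-token=<refresh token from the login page>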
Certificates
Now that we have seen that we require two certificates, let's configure them in certificates.yaml:
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cert-auth-dex
  namespace: kube-system
spec:
  secretName: cert-auth-dex
  dnsNames:
    - dex.k8s.example.com
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cert-auth-login
  namespace: kube-system
spec:
  secretName: cert-auth-login
  dnsNames:
    - login.k8s.example.com
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
This assumes you have configured a cert-manager ClusterIssuer named letsencrypt, such as the one sketched in the prerequisites.
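Once applied, cert-manager should issue both certificates after a short while. Their status can be checked with:

kubectl get certificate -n kube-system cert-auth-dex cert-auth-login

Both should eventually report READY as True, at which point the cert-auth-dex and cert-auth-login secrets referenced by the Ingresses exist.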
Role Bindings
Now, if you remember from the start of this post, there is only one file referenced by the Kustomization that we have yet to configure: the rolebinding.yaml file. It defines the rights given to people within the GitHub organization. As fine-grained RBAC is somewhat out of scope of this post, the configuration used here gives everyone in the team kubernetes-admins administrative access to the Kubernetes cluster. A more restrictive example follows after the manifest.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dex-cluster-auth
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: Group
    name: "my-org:kubernetes-admins"
    apiGroup: rbac.authorization.k8s.io
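For comparison, here is a sketch of a less privileged binding. It assumes a hypothetical my-org:developers team that should only get read access through the built-in view ClusterRole; the group name follows the same org:team format as above.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dex-cluster-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: Group
    name: "my-org:developers"
    apiGroup: rbac.authorization.k8s.io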
Configuring K3s To Use Our Login Flow
Now that we have everything set up, we need to tell K3s (or more specifically, the Kubernetes API server) to use our authentication configuration. To do this, we need to supply K3s with some additional arguments during startup on the control-plane node. In case of a high availability (HA) setup, this has to be done on all control-plane nodes. A simple way to do this (but perhaps not the best one, depending on your method of deployment) is to edit the systemd service file.
Edit the file /etc/systemd/system/k3s.service on each of the control-plane nodes. Here, locate the ExecStart=/usr/local/bin/k3s \ line and add the following lines after server \:
--tls-san='k8s.example.com' \
--kube-apiserver-arg='--anonymous-auth=false' \
--kube-apiserver-arg='--authorization-mode=RBAC,Node' \
--kube-apiserver-arg='--oidc-client-id=dex-k8s-authenticator' \
--kube-apiserver-arg='--oidc-groups-claim=groups' \
--kube-apiserver-arg='--oidc-issuer-url=https://dex.k8s.example.com' \
--kube-apiserver-arg='--oidc-username-claim=email' \
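Alternatively, if you prefer not to touch the unit file, K3s can read the same settings from its configuration file. A sketch of what I believe to be the equivalent /etc/rancher/k3s/config.yaml:

tls-san:
  - k8s.example.com
kube-apiserver-arg:
  - anonymous-auth=false
  - authorization-mode=RBAC,Node
  - oidc-client-id=dex-k8s-authenticator
  - oidc-groups-claim=groups
  - oidc-issuer-url=https://dex.k8s.example.com
  - oidc-username-claim=email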
Then, stop the service, reload the configuration and restart it again:
systemctl stop k3s
systemctl daemon-reload
systemctl start k3s
And that should be it! Navigate to https://login.k8s.example.com and follow the instructions!
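Once kubectl has been configured with the generated credentials, a quick way to confirm that both authentication and the group-based authorization work:

kubectl get nodes
# Should answer "yes" for members of my-org:kubernetes-admins
kubectl auth can-i '*' '*'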
Limitations
Development on the Dex K8s Authenticator has been rather inactive as of late, and better options likely exist out there. One such example is kubelogin, which is probably a better choice. However, swapping the two should not be a big effort.
Conclusions
We have configured K3s to use a custom login flow using Dex and GitHub. I know this is not perfect and ready for production, but it may point someone in the right direction. If nothing else, this article has served as a way for me to formulate my thoughts and write down my learnings from this process.
Thank you for reading!