
Setting Up Kerbernetes

The first step in deploying Kerbernetes, as with any SPNEGO-enabled server, is configuring DNS records and generating a keytab.

DNS Configuration

To ensure proper Kerberos authentication, you’ll need both forward and reverse DNS records.

  1. Forward DNS (A record): Create an A record that maps your Kerbernetes hostname to the IP address of your HTTP load balancer.

    Example (10.51.3.1 as the load balancer IP):

    $ dig kerbernetes.example.com
    kerbernetes.example.com.  60  IN  A   10.51.3.1
  2. Reverse DNS (PTR record): If reverse DNS checks are enforced by your Kerberos setup, you must also configure a PTR record that resolves the IP back to the hostname.

    Example:

    $ dig -x 10.51.3.1
    1.3.51.10.in-addr.arpa.  60  IN  PTR  kerbernetes.example.com.

Once both forward and reverse lookups resolve consistently, you can proceed to create and configure the Kerberos keytab.
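
A quick way to confirm that both records agree (assuming dig is available) is to compare the short-form lookups; each should return the value configured above:

$ dig +short kerbernetes.example.com
10.51.3.1
$ dig +short -x 10.51.3.1
kerbernetes.example.com.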


Keytab Creation with MIT Kerberos

  1. Create a service principal for HTTP

    kadmin.local -q "addprinc -randkey HTTP/kerbernetes.example.com@EXAMPLE.COM"
  2. Export the principal to a keytab

    kadmin.local -q "ktadd -k /tmp/kerbernetes_krb5.keytab HTTP/kerbernetes.example.com@EXAMPLE.COM"
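
Before moving on, you can check that the exported keytab contains the HTTP principal using klist:

klist -kt /tmp/kerbernetes_krb5.keytab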

Keytab Creation with FreeIPA

  1. Add the host to FreeIPA

    ipa host-add kerbernetes.example.com --ip-address=10.51.3.1
  2. Add the HTTP service principal

    ipa service-add HTTP/kerbernetes.example.com
  3. Retrieve the keytab

    ipa-getkeytab -s ipa.example.com \
      -p HTTP/kerbernetes.example.com@EXAMPLE.COM \
      -k /etc/krb5.keytab

Keytab Creation with Active Directory

I don't know how to do this yet. Contributions are welcome!

Creating Secrets

Kerberos Keytab Secret

Kerbernetes requires a Kerberos keytab to authenticate against the KDC. Create a Kubernetes secret that contains your keytab file (krb5.keytab).

kubectl create secret generic krb5-keytab \
  --from-file=krb5.keytab=/etc/krb5.keytab

Make sure the name of this secret matches the value of secrets.keytabSecret in your Helm chart configuration (default: krb5-keytab).
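
To double-check that the secret exists and exposes the krb5.keytab key created above, you can inspect it (kubectl describe lists key names and sizes without printing their contents):

kubectl describe secret krb5-keytab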


Optional: LDAP Bind Secret

If LDAP integration is enabled and a bind DN/password is required, create a secret containing the bind password:

kubectl create secret generic ldap \
  --from-literal=bindPassword='supersecretpassword'

This secret should match the value of secrets.ldapSecret in your Helm chart configuration (default: ldap).


Example values.yaml (with OpenLDAP and Ingress)

Below is a sample configuration file to deploy Kerbernetes with OpenLDAP integration and an Ingress resource:

replicaCount: 1

token:
  # NOTE:
  # The `audience` depends on your Kubernetes distribution.
  # - For **k3s**, it is usually: `k3s`
  # - For **Talos**, use the Control Plane API URL
  # - For upstream Kubernetes, typically: `https://kubernetes.default.svc.cluster.local`
  #
  # If you encounter `token unauthorized` errors, check the kube-apiserver logs.
  # The logs will indicate the expected audience (scope) required.
  audience: "https://kubernetes.default.svc.cluster.local"

secrets:
  keytabSecret: krb5-keytab
  ldapSecret: ldap

ldap:
  enabled: true
  url: ldaps://ldap.example.com
  userBaseDN: ou=users,dc=example,dc=com
  userFilter: "(uid=%s)" # adapt this to your ldap distribution
  groupBaseDN: ou=groups,dc=example,dc=com
  groupFilter: "(member=%s)" # adapt this to your ldap distribution
  bindDN: cn=read,dc=example,dc=com

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: traefik
  hosts:
    - host: kerbernetes.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - kerbernetes.example.com
      secretName: kerbernetes-tls

You can install the chart directly from the OCI registry (Helm 3.8+ installs OCI charts without a helm repo add step):

helm install kerbernetes oci://ghcr.io/froz42/kerbernetes -f values.yaml
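
Once the release is installed, a quick sanity check:

helm status kerbernetes
kubectl get pods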

Testing Kerberos Authentication

Once Kerbernetes is deployed and the keytab/Ingress are configured, you can test Kerberos authentication using curl with the --negotiate option.

Make sure you have a valid Kerberos ticket for your user principal (obtained with kinit), then run:

curl --negotiate -u : https://kerbernetes.example.com/api/auth/kerberos

If authentication is successful, you should receive a valid response token from Kerbernetes; if not, check the Kerbernetes logs.
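
If the request fails, re-running curl with -v makes the SPNEGO exchange visible: the server should answer the first request with a 401 and a WWW-Authenticate: Negotiate challenge, and curl should retry with an Authorization: Negotiate header built from your ticket:

curl -v --negotiate -u : https://kerbernetes.example.com/api/auth/kerberos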

Optional: LDAP Group Bindings

If LDAP is enabled, you can bind LDAP groups directly to Kubernetes RBAC roles using the LdapGroupBinding CRD provided by Kerbernetes.

For example, to map the LDAP group cn=admin,ou=tech,ou=groups,ou=bocal,dc=42paris,dc=fr to the built-in cluster-admin role:

apiVersion: rbac.kerbernetes.io/v1
kind: LdapGroupBinding
metadata:
  name: admin-binding
spec:
  ldapGroupDN: "cn=admin,ou=tech,ou=groups,ou=bocal,dc=42paris,dc=fr"
  bindings:
    - apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin

Explanation

  • ldapGroupDN → The full distinguished name of the LDAP group.
  • bindings → One or more Kubernetes RBAC roles/cluster roles that members of this LDAP group should inherit.
  • In this example, all users in the admin LDAP group will have full cluster-admin permissions inside Kubernetes.
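
Save the manifest to a file and apply it like any other Kubernetes resource (the filename below is just an example); kubectl get -f reads the object back regardless of the resource name the CRD registers:

kubectl apply -f admin-binding.yaml
kubectl get -f admin-binding.yaml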

Client Setup

To authenticate to Kubernetes using Kerberos, you need the kerbernetes client binary and an exec configuration in your kubeconfig.

1. Install the client

wget https://raw.githubusercontent.com/froz42/kerbernetes/main/client/kerbernetes
chmod +x kerbernetes
sudo mv kerbernetes /usr/local/bin/

Make sure it’s in your $PATH:

kerbernetes

2. Configure kubectl

Instead of editing ~/.kube/config manually, you can add the user with:

kubectl config set-credentials kerbernetes@your-cluster-name \
  --exec-command=kerbernetes \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-arg="https://kerbernetes.example.com/api/auth/kerberos" \
  --exec-arg="your-cluster-name" \
  --exec-interactive-mode="Never"
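
For reference, this produces a user entry in ~/.kube/config equivalent to the following (standard exec credential plugin fields):

users:
- name: kerbernetes@your-cluster-name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kerbernetes
      args:
        - https://kerbernetes.example.com/api/auth/kerberos
        - your-cluster-name
      interactiveMode: Never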

Then set your context to use this user:

kubectl config set-context kerbernetes@your-cluster-name \
  --cluster=your-cluster-name \
  --user=kerbernetes@your-cluster-name
kubectl config use-context kerbernetes@your-cluster-name

3. Test the setup

To verify that everything is working, run:

kubectl get pods
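
To see which username and groups the cluster resolved from your Kerberos identity (useful for verifying LDAP group bindings), recent kubectl releases also provide:

kubectl auth whoami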