Sharding MongoDB using Range strategy

Hi πŸ‘‹πŸ‘‹

In this article I will explore sharding a MongoDB database that runs on Kubernetes. Before we get started, if you want to follow along, please install the tools listed in the prerequisites section, and if you want to learn more about sharding, check out this fantastic article: Sharding Pattern.

Prerequisites

  • A Kubernetes cluster and kubectl
  • Helm
  • mongosh
  • Python 3 with pymongo, for the data-insertion script at the end of the post

Introduction

Let’s install a MongoDB instance on the Kubernetes cluster using helm.

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-mongo bitnami/mongodb-sharded

After the installation completes, save the database’s root password and replica set key. While doing this the first time I messed up and didn’t save them properly.

Run the following commands to print the password and replica set key on the command line. If you’re on Windows, I have provided a PowerShell function for base64 at the end of this post; if you’re on Unix, don’t forget to pass --decode to base64.

kubectl get secret --namespace default my-mongo-mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64
kubectl get secret --namespace default my-mongo-mongodb-sharded -o jsonpath="{.data.mongodb-replica-set-key}" | base64
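
If you’re on Unix, a quick way to keep these values around for the rest of the tutorial is to decode them straight into environment variables (a small sketch, assuming the default namespace and the my-mongo release name used above):

export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default my-mongo-mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
export MONGODB_REPLICA_SET_KEY=$(kubectl get secret --namespace default my-mongo-mongodb-sharded -o jsonpath="{.data.mongodb-replica-set-key}" | base64 --decode)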

Sharding the Database

Verify that all your pods are running and start a shell connection to the mongos server.

@denis ➜ ~ kubectl get pods
NAME                                              READY   STATUS    RESTARTS   AGE
my-mongo-mongodb-sharded-configsvr-0              1/1     Running   0          3m8s
my-mongo-mongodb-sharded-configsvr-1              1/1     Running   0          116s
my-mongo-mongodb-sharded-mongos-c4dd66768-dqlbv   1/1     Running   0          3m8s
my-mongo-mongodb-sharded-shard0-data-0            1/1     Running   0          3m8s
my-mongo-mongodb-sharded-shard0-data-1            1/1     Running   0          103s
my-mongo-mongodb-sharded-shard1-data-0            1/1     Running   0          3m8s
my-mongo-mongodb-sharded-shard1-data-1            1/1     Running   0          93s

kubectl port-forward --namespace default svc/my-mongo-mongodb-sharded 27017:27017
# and in another terminal:
mongosh --host 127.0.0.1 --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

By running sh.status() you should get an output which contains two mongo shards:

shards
[
  {
    _id: 'my-mongo-mongodb-sharded-shard-0',
    host: 'my-mongo-mongodb-sharded-shard-0/my-mongo-mongodb-sharded-shard0-data-0.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017,my-mongo-mongodb-sharded-shard0-data-1.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017',
    state: 1
  },
  {
    _id: 'my-mongo-mongodb-sharded-shard-1',
    host: 'my-mongo-mongodb-sharded-shard-1/my-mongo-mongodb-sharded-shard1-data-0.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017,my-mongo-mongodb-sharded-shard1-data-1.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017',
    state: 1
  }
]

To enable sharding on the database and collection, I’m going to insert some dummy data into the my_data database and the my_users collection. The script used to insert the data is attached at the end of this blog post.

[direct: mongos]> sh.enableSharding("my_data")
{
  ok: 1,
  operationTime: Timestamp(3, 1628345449),
  '$clusterTime': {
    clusterTime: Timestamp(3, 1628345449),
    signature: {
      hash: Binary(Buffer.from("e57c8c37047f7aa170fb59f6b11e22aa65159a30", "hex"), 0),
      keyId: Long("6993682727694237708")
    }
  }
}

[direct: mongos]> use my_data
[direct: mongos]> db.my_users.createIndex({"t": 1})
[direct: mongos]> sh.shardCollection("my_data.my_users", { "t": 1 })

sh.addShardToZone("my-mongo-mongodb-sharded-shard-1", "TSR1")
sh.addShardToZone("my-mongo-mongodb-sharded-shard-0", "TSR2")

If you’ve made it this far, congrats, you’ve enabled sharding. Now let’s define some rules.

Since we’re going to use a range sharding strategy based on the key t, and I have two shards available, I would like my data to be distributed so that documents with t lower than 46 go to zone TSR1 (shard 1) and documents with t of 46 or higher go to zone TSR2 (shard 0):

 sh.updateZoneKeyRange("my_data.my_users", {t: 46}, {t: MaxKey()}, "TSR2")
 sh.updateZoneKeyRange("my_data.my_users", {t: MinKey()}, {t: 46}, "TSR1")

Note: the label on the TSR2 zone in the diagram is wrong; the correct value is 46 ≀ t < 1000.

Running sh.status() should now yield the following output.

    collections: {
      'my_data.my_users': {
        shardKey: { t: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: { shard: 'my-mongo-mongodb-sharded-shard-1', nChunks: 3 },
        chunks: [
          {
            min: { t: MinKey() },
            max: { t: 45 },
            'on shard': 'my-mongo-mongodb-sharded-shard-1',
            'last modified': Timestamp(2, 1)
          },
          {
            min: { t: 46 },
            max: { t: MaxKey() },
            'on shard': 'my-mongo-mongodb-sharded-shard-0',
            'last modified': Timestamp(0, 2)
          }
        ],
        tags: [
          { tag: 'TSR1', min: { t: MinKey() }, max: { t: 46} },
          { tag: 'TSR2', min: { t: 46 }, max: { t: MaxKey() } }
        ]
      }

To test the rules, use the Python script provided at the end of the post: modify the times variable and run it with various values.
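
For instance, assuming you saved the script from the end of this post as insert_users.py (the file name is my own choice), a run that spreads documents across both zones could look like this:

# inside the script, pick t values that hit both zones
times = [12, 45, 46, 999]

# then insert another batch of 1000 documents
python3 insert_users.py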

You can run db.my_users.getShardDistribution() to view the data distribution on the shards.

[direct: mongos]> db.my_users.getShardDistribution()

Shard my-mongo-mongodb-sharded-shard-0 at my-mongo-mongodb-sharded-shard-0/my-mongo-mongodb-sharded-shard0-data-0.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017,my-mongo-mongodb-sharded-shard0-data-1.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017
{
  data: '144KiB',
  docs: 1667,
  chunks: 1,
  'estimated data per chunk': '144KiB',
  'estimated docs per chunk': 1667
}

Shard my-mongo-mongodb-sharded-shard-1 at my-mongo-mongodb-sharded-shard-1/my-mongo-mongodb-sharded-shard1-data-0.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017,my-mongo-mongodb-sharded-shard1-data-1.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017
{
  data: '195KiB',
  docs: 2336,
  chunks: 3,
  'estimated data per chunk': '65KiB',
  'estimated docs per chunk': 778
}

Adding More Shards

To add more shards to the cluster, all we need to do is run helm upgrade. If you don’t mess up the replica set key like I did, it should work on the first run.

helm upgrade my-mongo bitnami/mongodb-sharded --set shards=3,configsvr.replicas=2,shardsvr.dataNode.replicas=2,mongodbRootPassword=tcDMM5sqNC,replicaSetKey=D6BGM2ixd3

If you mess up the key πŸ˜…, then to solve the issue and bring your cluster back online follow these steps.

  1. Downgrade the cluster back to 2 shards.
  2. Exec (kubectl exec) into a pod of an old, still-working shard, shard1 or shard0, and grab the credentials from the environment variables.

The Kubernetes secret and the mongos pod’s credentials have been overwritten by the upgrade, so they are wrong!

MONGODB_ROOT_PASSWORD=tcDMM5sqNC
MONGODB_ENABLE_DIRECTORY_PER_DB=no
MONGODB_SYSTEM_LOG_VERBOSITY=0
MY_MONGO_MONGODB_SHARDED_SERVICE_PORT=27017
KUBERNETES_SERVICE_HOST=10.245.0.1
MONGODB_REPLICA_SET_KEY=D6BGM2ixd3

After you save the correct password and replica set key, search for the volumes that belong to the shards with the wrong replica set key and delete them. In my case I only deleted the volumes belonging to the 3rd shard that I added; since counting starts from 0, I’m looking for shard2 in the name.

@denis ➜ Downloads kubectl get persistentvolumeclaims
NAME                                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
datadir-my-mongo-mongodb-sharded-configsvr-0       Bound    pvc-8e7fa303-9198-419e-a6c1-8de3e6d89962   8Gi        RWO            do-block-storage   132m
datadir-my-mongo-mongodb-sharded-configsvr-1       Bound    pvc-6e3bc70f-83a8-4e80-b856-c44a4295be35   8Gi        RWO            do-block-storage   131m
datadir-my-mongo-mongodb-sharded-shard0-data-0     Bound    pvc-f66647bc-ee3b-4820-b466-a11b197fde74   8Gi        RWO            do-block-storage   132m
datadir-my-mongo-mongodb-sharded-shard0-data-1     Bound    pvc-62257e91-d461-4ddb-af37-4876d2431703   8Gi        RWO            do-block-storage   131m
datadir-my-mongo-mongodb-sharded-shard1-data-0     Bound    pvc-9a062ba5-f320-49c9-ae15-d75e8e5f2cf8   8Gi        RWO            do-block-storage   132m
datadir-my-mongo-mongodb-sharded-shard1-data-1     Bound    pvc-068b04bd-8875-40d7-b47c-40092ceb7973   8Gi        RWO            do-block-storage   130m
datadir-my-mongo-mongodb-sharded-shard2-data-0     Bound    pvc-93d9a238-ae36-49e1-b0b6-f320baf89373   8Gi        RWO            do-block-storage   73m
datadir-my-mongo-mongodb-sharded-shard2-data-1     Bound    pvc-b09a8d0d-5012-4f23-8096-a713f3025521   8Gi        RWO            do-block-storage   50m
@denis ➜ Downloads kubectl get persistentvolumes
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                      STORAGECLASS       REASON   AGE
pvc-068b04bd-8875-40d7-b47c-40092ceb7973   8Gi        RWO            Delete           Bound    default/datadir-my-mongo-mongodb-sharded-shard1-data-1     do-block-storage            131m
pvc-321136d8-8a27-45cb-8ed1-8d636c530859   8Gi        RWO            Delete           Bound    default/datadir-my-release-mongodb-sharded-shard2-data-1   do-block-storage            143m
pvc-42dd7167-5836-4e94-bf42-473c6cea49a4   8Gi        RWO            Delete           Bound    default/datadir-my-release-mongodb-sharded-shard2-data-0   do-block-storage            145m
pvc-48714777-97b3-4acc-8562-7b69a8e3b488   8Gi        RWO            Delete           Bound    default/datadir-my-release-mongodb-sharded-shard1-data-1   do-block-storage            143m
pvc-499797e9-a5df-4c7b-a1fb-482c3dca36a6   8Gi        RWO            Delete           Bound    default/datadir-my-release-mongodb-sharded-shard3-data-1   do-block-storage            143m
pvc-61ec9e04-1bad-4312-ba16-fb24c12efb4b   8Gi        RWO            Delete           Bound    default/datadir-my-release-
...
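
Once you’ve identified the claims with the wrong replica set key, delete them (a sketch for my case, where only the shard2 volumes have to go; the bound persistent volumes are cleaned up automatically because their reclaim policy is Delete):

kubectl delete pvc datadir-my-mongo-mongodb-sharded-shard2-data-0
kubectl delete pvc datadir-my-mongo-mongodb-sharded-shard2-data-1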

After that’s done, run the helm upgrade command again and if everything is working get a mongosh connection πŸ˜€.

Running sh.status() will now show the 3rd shard.

[
  {
    _id: 'my-mongo-mongodb-sharded-shard-0',
    host: 'my-mongo-mongodb-sharded-shard-0/my-mongo-mongodb-sharded-shard0-data-0.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017,my-mongo-mongodb-sharded-shard0-data-1.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017',
    state: 1,
    tags: [ 'TSR2' ]
  },
  {
    _id: 'my-mongo-mongodb-sharded-shard-1',
    host: 'my-mongo-mongodb-sharded-shard-1/my-mongo-mongodb-sharded-shard1-data-0.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017,my-mongo-mongodb-sharded-shard1-data-1.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017',
    state: 1,
    tags: [ 'TSR1' ]
  },
  {
    _id: 'my-mongo-mongodb-sharded-shard-2',
    host: 'my-mongo-mongodb-sharded-shard-2/my-mongo-mongodb-sharded-shard2-data-0.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017,my-mongo-mongodb-sharded-shard2-data-1.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017',
    state: 1
  }
]

Next, update the sharding rules and everything will be working as in the diagram.

sh.addShardToZone("my-mongo-mongodb-sharded-shard-2", "TSR3")
sh.removeRangeFromZone("my_data.my_users", {t: 46}, {t: MaxKey()}, "TSR2")
sh.updateZoneKeyRange("my_data.my_users", {t: 46}, {t 1000}, "TSR2")
sh.updateZoneKeyRange("my_data.my_users", {t: 1000}, {t: MaxKey()}, "TSR3")

Running sh.status() should now show something like this:

        chunks: [
          {
            min: { t: MinKey() },
            max: { t: 46 },
            'on shard': 'my-mongo-mongodb-sharded-shard-1',
            'last modified': Timestamp(0, 5)
          },
          {
            min: { t: 46 },
            max: { t: 1000 },
            'on shard': 'my-mongo-mongodb-sharded-shard-0',
            'last modified': Timestamp(3, 4)
          },
          {
            min: { t: 1000 },
            max: { t: MaxKey() },
            'on shard': 'my-mongo-mongodb-sharded-shard-2',
            'last modified': Timestamp(1, 5)
          }
        ],
        tags: [
          { tag: 'TSR1', min: { t: MinKey() }, max: { t: 46 } },
          { tag: 'TSR2', min: { t: 46 }, max: { t: 1000 } },
          { tag: 'TSR3', min: { t: 1000 }, max: { t: MaxKey() } }
        ]
      }

Conclusions

Sharding a MongoDB database can seem intimidating at first, but with some practice in advance you can do it! If sharding doesn’t work out for you, you can Convert a Sharded Cluster to a Replica Set, but be prepared with some backups.

Thanks for reading πŸ“š and happy hacking! πŸ”©πŸ”¨

Base64 PowerShell Function
function global:Convert-From-Base64 {
  [CmdletBinding()]
  [Alias('base64')]
  param (
    [parameter(ValueFromPipeline,Mandatory=$True,Position=0)]
    [string] $EncodedText
  )
  process {
    [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($EncodedText))
  }
}

Python Script

import random

import pymongo


def do_stuff():
    # Connect to the mongos router that was port-forwarded to localhost.
    client = pymongo.MongoClient("mongodb://root:tcDMM5sqNC@127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000")
    col = client.my_data.my_users

    usernames = ["dovahkiin", "rey", "dey", "see", "mee", "rollin", "they", "hating"]
    hobbies = ["coding", "recording", "streaming", "batman", "football", "sports", "mathematics"]
    ages = [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]
    # The shard key values; tweak these to target different zones.
    # times = [12, 14, 15, 23, 45, 32, 20]
    times = [47, 80, 93, 49, 96, 43]

    # Build 1000 random documents and insert them in a single bulk write.
    buffer = []
    for _ in range(1_000):
        first = random.choice(usernames).capitalize()
        mid = random.choice(usernames).capitalize()
        last = random.choice(usernames).capitalize()
        buffer.append(pymongo.InsertOne({
            "name": f"{first} '{mid}' {last}",
            "age": random.choice(ages),
            "hobbies": random.choice(hobbies),
            "t": random.choice(times)
        }))
    col.bulk_write(buffer)


if __name__ == '__main__':
    do_stuff()

References

https://bitnami.com/stack/mongodb-sharded/helm

https://docs.microsoft.com/en-us/azure/architecture/patterns/sharding

https://docs.mongodb.com/manual/core/zone-sharding/

https://docs.mongodb.com/manual/core/ranged-sharding/

https://docs.mongodb.com/manual/reference/method/sh.updateZoneKeyRange/

https://docs.mongodb.com/v5.0/core/sharding-choose-a-shard-key/

Blue vector created by starline – www.freepik.com

Docker basics for Developers

Introduction

Hello πŸ™‹β€β™‚οΈ

In this article we will discuss a tool called Docker 🐬

Docker is a platform which allows you to package individual applications into containers. This achieves application isolation at the OS level, without the need for virtualization technologies, by making use of the OS’s APIs.

Since it can be a little hard to get into Docker if you are new, I will try to keep things short and concise.

A little example

Let us say you want to deploy two applications on a Linux box, and one of them depends on version A of the imagemagick library while the other one depends on version B.

Since you can have only one version of the library installed at the same time you cannot deploy both applications. ☹

With Docker, you can package the Application and all its dependencies into containers. πŸ“¦

Something you can probably achieve without Docker as well if you try hard enough. But Docker makes this amazingly easy for us.

Once you’ve packaged your application you can publish it on the Docker Hub so other people can make use of it.

Let us recap:

  • 🐬 Docker is a platform that empowers developers to build, share and run applications in containers.
  • ☁ Docker Hub is a registry for sharing Docker images, tools and plugins.
  • πŸ“¦ A container is an abstraction at the application level.

Installing Docker βš™

To install Docker, follow the official guide; it is well written.

If you are on Windows 10 Home, I recommend you install WSL and WSL2 before installing Docker.

Install WSL on Windows 10 | Microsoft Docs

Packaging πŸ“¦

To package an application you will need a Dockerfile. The Dockerfile is a text document that contains all the steps needed to package the application into a container image.

For a Golang application an example Dockerfile would look like this:

# Golang is our base image.
FROM golang:1.7

# Make a directory called simplFT
RUN mkdir -p /go/src/github.com/metonimie/simplFT/

# Copy the current dir contents into simplFT
ADD . /go/src/github.com/metonimie/simplFT/

# Set the working directory to simplFT
WORKDIR /go/src/github.com/metonimie/simplFT/

# Install dependencies
RUN go get "github.com/zyxar/image2ascii/ascii"
RUN go get "github.com/spf13/viper"

# Build the application
RUN go build ./main.go

# Run simplFT when the container launches
CMD ["./main", "-config-name", "docker-config"]

After you have a Dockerfile set up, you can build an image by running: docker build . -f path_to_dockerfile -t my_app

Once the image is built you can use it as a base for containers, and you can run as many as you would like: docker run -d -p 8080:8080 -p 8081:8081 my_app

To see running containers you can write: docker ps

And to stop a container: docker stop container_id

You can also get a shell inside running containers (with docker exec), mount volumes, expose ports, obtain logs and much more.
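
A few quick examples of these (a sketch, assuming a running container named my_app_1 built from the image above):

docker exec -it my_app_1 /bin/sh                          # open a shell inside the container
docker logs -f my_app_1                                   # follow the container's logs
docker run -d -v "$(pwd)/data:/data" -p 8080:8080 my_app  # mount a volume and expose a port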

If you have secrets in the current directory!!!

Create a file called .dockerignore and include in it all files that contain sensitive information. If you do not, you may leak sensitive information into your Docker images.
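
A minimal .dockerignore might look like this (the entries are only examples, adapt them to your project):

.env
*.pem
.git
secrets/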

Docker and Docker-Compose

Docker-compose is another tool which allows you to run recipes that involve multiple containers with ease.

If you are a developer then your application will depend on other services like Nginx, Redis, MongoDB, RabbitMQ and so on. Having to set up your dev environment and install all of these on your machine is painful and boring, and if you install the wrong version of MongoDB your application may crash.

Just like with the Dockerfile, you can create a docker-compose.yaml file in which you can reference third-party images or your own Dockerfiles.

A local development environment for WordPress would look like this:

version: "3.9"
    
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
  wordpress_data: {}

You can run docker-compose up -d to start all services and docker-compose down to stop them.
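
To check on the services you can use the usual Compose commands, for example:

docker-compose ps                 # list the services and their state
docker-compose logs -f wordpress  # follow the logs of the wordpress service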

The full example can be found at Quickstart: Compose and WordPress | Docker Documentation.

Thanks for reading! 🧾

Stay healthy and take care!

Kubernetes OpenID Connect Integration with Resource Owner Flow

Hello πŸ˜„,

In this article, I will demonstrate how to configure Kubernetes (minikube) to use OpenID Connect as an authentication strategy.

We will cover the Resource Owner Password flow. Feel free to choose the right authentication flow depending on your application’s needs.

Please refer to this diagram in order to choose the flow:

Note that the Client Credentials flow is not supported by Kubernetes. According to the official docs:

“To identify the user, the authenticator uses the id_token (not the access_token) from the OAuth2 token response as a bearer token. “

Since the Client Credentials flow only returns an access_token, we won’t be able to use this flow with Kubernetes.

Setup the OAuth client

To play around with these concepts we need to create an OAuth application. I prefer to use Auth0 so that everyone can follow along for free.

  1. Create an application of type Regular Web Application.

  2. Open the newly created application, go to Settings, scroll down and click Advanced Settings. On the Grant Types tab click the Password item and hit Save.

  3. Go to Authentication -> Database and create a new database connection. I named my connection Username-Password-Authentication.

  4. Go to Settings -> Tenant and, in the API Authorization Settings section, set the Default Directory connection name to Username-Password-Authentication.

  5. Go to User Management -> Users and create a new user. I named my user nuculabs-kube@nuculabs.dev and gave it the following password: Pa27rgN9KneN

Next, click on the user and set its email address as verified.

Setup Minikube

We can run Kubernetes locally on our computers using Minikube.

If it is your first time playing with Minikube then follow the installation instructions from: https://minikube.sigs.k8s.io/docs/start/ and start your minikube cluster with minikube start.

After your cluster has started, stop it with minikube stop. This step is necessary; otherwise you will encounter errors when running the next command.

Now, let’s start it with the following command:

 
minikube start --extra-config=apiserver.authorization-mode=RBAC --extra-config=apiserver.oidc-issuer-url="https://xxx.auth0.com/" --extra-config=apiserver.oidc-client-id=1RaJmjhjaapNLGXQjcYViQ15ZYzZoZdL --extra-config=apiserver.oidc-username-claim=email

The --extra-config=apiserver.oidc-issuer-url= must be equal to your Auth0 domain; it must start with https:// and end with a /.

The --extra-config=apiserver.oidc-client-id= will contain the Client ID of the OAuth client.

The --extra-config=apiserver.oidc-username-claim=email will be set to email because we want to map the email of our Auth0 user nuculabs-kube@nuculabs.dev to a user that we will create within Kubernetes.

Auth0 will return an id_token of the form Header.Payload.Signature. In our case, it will have a payload with a field email that will be equal to the user’s email address.

If a token doesn’t include the email claim but does include the name claim, we could use name as our apiserver.oidc-username-claim instead.

To play with JWTs visit: https://jwt.io/#debugger-io
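
If you prefer the command line, here is a small Python sketch (a helper of my own, not part of any Kubernetes tooling) that decodes a JWT’s payload segment without verifying the signature, so you can check which claims your id_token carries:

import base64
import json

def jwt_payload(token: str) -> dict:
    # A JWT is Header.Payload.Signature; grab the payload and restore the base64 padding.
    payload = token.split(".")[1]
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Example: print(jwt_payload(id_token).get("email"))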

If your Minikube cluster has started you should see the following output:

πŸ˜„ minikube v1.20.0 on Microsoft Windows 10 Home 10.0.19042 Build 19042
✨ Using the docker driver based on existing profile
πŸ‘ Starting control plane node minikube in cluster minikube
🚜 Pulling base image …
πŸ”„ Restarting existing docker container for "minikube" …
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 …
β–ͺ apiserver.authorization-mode=RBAC
β–ͺ apiserver.oidc-issuer-url=https://xxx.auth0.com/
β–ͺ apiserver.oidc-client-id=1RaJmjhjaapNLGXQjcYViQ15ZYzZoZdL
β–ͺ apiserver.oidc-username-claim=email
πŸ”Ž Verifying Kubernetes components…
β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Congrats for making it this far! πŸŽ‰βœ¨

The next step is to map our nuculabs-kube@nuculabs.dev user to an admin role inside the Kubernetes cluster.

Open your favorite code editor, create clusterrole.yaml and paste the following contents in it:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

Next, create clusterrolebinding.yaml and paste:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin-binding
subjects:
- kind: User
  name: nuculabs-kube@nuculabs.dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin-role

Apply the role and the role binding in minikube:

kubectl apply -f clusterrole.yaml
kubectl apply -f clusterrolebinding.yaml

So far we’ve interacted with the minikube cluster using a client certificate and key, through the default minikube user:

PS C:\Users\denis\Downloads> cat C:\Users\denis\.kube\config
apiVersion: v1
clusters:
...
users:
- name: minikube
  user:
    client-certificate: C:\Users\denis\.minikube\profiles\minikube\client.crt
    client-key: C:\Users\denis\.minikube\profiles\minikube\client.key

Next, we’ll update kubeconfig to switch to our nuculabs-kube@nuculabs.dev user.

Updating kubeconfig

The kubeconfig file is usually found at ~/.kube/config and it contains information about clusters and users.

To send requests to minikube with the nuculabs-kube@nuculabs.dev user, first we need to grab the id_token:

curl --request POST \
  --url https://xxx.auth0.com/oauth/token \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data grant_type=password \
  --data client_id=1RaJmjhjaapNLGXQjcYViQ15ZYzZoZdL \
  --data client_secret=ZzxjjnOfxIj-__pUu7HCRNC9qknS5jYxOsTxd5Koz1uH7AmeXxtRClwlEW6WJLav \
  --data username=nuculabs-kube@nuculabs.dev \
  --data 'password=Pa27rgN9KneN'

{
  "access_token": "yepJxATWNN2hCZUmP4H4BzBaLycsUsJw",
  "id_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6Ik1FSkVNRVE1TWtFMlFrUTRNVEZFTWpZNE5qSkRORFkxTURBMk16bENORVJET1VNelJUSTBPUSJ9.eyJuaWNrbmFtZSI6Im51Y3VsYWJzLWt1YmUiLCJuYW1lIjoibnVjdWxhYnMta3ViZUBudWN1bGFicy5kZXYiLCJwaWN0dXJlIjoiaHR0cHM6Ly9zLmdyYXZhdGFyLmNvbS9hdmF0YXIvMzQ1NjMxNzQxNjU0ODViYWE1NDBjOTBkODc2MzAyN2I_cz00ODAmcj1wZyZkPWh0dHBzJTNBJTJGJTJGY2RuLmF1dGgwLmNvbSUyRmF2YXRhcnMlMkZudS5wbmciLCJ1cGRhdGVkX2F0IjoiMjAyMS0wNS0xM1QxOTo1Nzo0Mi42ODZaIiwiZW1haWwiOiJudWN1bGFicy1rdWJlQG51Y3VsYWJzLmRldiIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwiaXNzIjoiaHR0cHM6Ly9kbnV0aXUtZGV2LmV1LmF1dGgwLmNvbS8iLCJzdWIiOiJhdXRoMHw2MDlkNzlkMjk2OWU3MjAwNjhhNmFhNjciLCJhdWQiOiIxUmFKbWpoamFhcE5MR1hRamNZVmlRMTVaWXpab1pkTCIsImlhdCI6MTYyMDkzNTg2MiwiZXhwIjoxNjIwOTcxODYyfQ.avnUJv2aM0vpTkcoGiO54N4y765BxpXHV2alYAVXCpFeaNI2ISW-lW0sFUFrU3w35oM4p2Xh2tlMmVoQplSvJL0mJ9qZObMDvdqijFGdUbYN3XnD2F0kI5CCwshGhP59cbS_gdXIwwz3SOwDKKjGvacvEPcOofmhcBxNVW16qP7GS2JaAnrbGygCpj6AOyRcCkAL-jz0rxQCPrwZ5i1E4ofsbH0H8cVprYYazBpNRKPWcadcMqvGAXrtkrkSzHvrmTbQNaXi2rUiDzepDeJvciYJsNefLGmW2iStpB4KuN9M0wpXdV2PAd6lMYAd3sYSn_4NYLQKEbtmL5Sp1lkxow",
  "scope": "openid profile email address phone",
  "expires_in": 86400,
  "token_type": "Bearer"
}

Note that the token expires in 86400 seconds (24 hours); we can only use it in that time interval. Feel free to decode the token on jwt.io to inspect its contents. πŸ˜€

To tell kubectl to use the id_token we’ve just retrieved, we need to update the kubeconfig as follows:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\denis\.minikube\ca.crt
    server: https://127.0.0.1:56840
  name: minikube
contexts:
- context:
    cluster: minikube
    namespace: default
    user: nuculabs-kube@nuculabs.dev
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube2
  user:
    client-certificate: C:\Users\denis\.minikube\profiles\minikube\client.crt
    client-key: C:\Users\denis\.minikube\profiles\minikube\client.key
- name: nuculabs-kube@nuculabs.dev
  user:
    auth-provider:
      name: oidc
      config:
        client-id: 1RaJmjhjaapNLGXQjcYViQ15ZYzZoZdL 
        client-secret: ZzxjjnOfxIj-__pUu7HCRNC9qknS5jYxOsTxd5Koz1uH7AmeXxtRClwlEW6WJLav
        idp-issuer-url: https://xxx.auth0.com/
        id-token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6Ik1FSkVNRVE1TWtFMlFrUTRNVEZFTWpZNE5qSkRORFkxTURBMk16bENORVJET1VNelJUSTBPUSJ9.eyJuaWNrbmFtZSI6Im51Y3VsYWJzLWt1YmUiLCJuYW1lIjoibnVjdWxhYnMta3ViZUBudWN1bGFicy5kZXYiLCJwaWN0dXJlIjoiaHR0cHM6Ly9zLmdyYXZhdGFyLmNvbS9hdmF0YXIvMzQ1NjMxNzQxNjU0ODViYWE1NDBjOTBkODc2MzAyN2I_cz00ODAmcj1wZyZkPWh0dHBzJTNBJTJGJTJGY2RuLmF1dGgwLmNvbSUyRmF2YXRhcnMlMkZudS5wbmciLCJ1cGRhdGVkX2F0IjoiMjAyMS0wNS0xM1QxOTo1Nzo0Mi42ODZaIiwiZW1haWwiOiJudWN1bGFicy1rdWJlQG51Y3VsYWJzLmRldiIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwiaXNzIjoiaHR0cHM6Ly9kbnV0aXUtZGV2LmV1LmF1dGgwLmNvbS8iLCJzdWIiOiJhdXRoMHw2MDlkNzlkMjk2OWU3MjAwNjhhNmFhNjciLCJhdWQiOiIxUmFKbWpoamFhcE5MR1hRamNZVmlRMTVaWXpab1pkTCIsImlhdCI6MTYyMDkzNTg2MiwiZXhwIjoxNjIwOTcxODYyfQ.avnUJv2aM0vpTkcoGiO54N4y765BxpXHV2alYAVXCpFeaNI2ISW-lW0sFUFrU3w35oM4p2Xh2tlMmVoQplSvJL0mJ9qZObMDvdqijFGdUbYN3XnD2F0kI5CCwshGhP59cbS_gdXIwwz3SOwDKKjGvacvEPcOofmhcBxNVW16qP7GS2JaAnrbGygCpj6AOyRcCkAL-jz0rxQCPrwZ5i1E4ofsbH0H8cVprYYazBpNRKPWcadcMqvGAXrtkrkSzHvrmTbQNaXi2rUiDzepDeJvciYJsNefLGmW2iStpB4KuN9M0wpXdV2PAd6lMYAd3sYSn_4NYLQKEbtmL5Sp1lkxow

Note: The authentication won’t work if the user doesn’t have its email verified: E0513 20:03:32.089659 1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, oidc: email not verified]. You will need to create another user or set the user’s email as verified from the Auth0 interface.

To verify that the authentication works save the kubeconfig file and run:

PS C:\Users\denis\Downloads> kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   33m

If you change a single byte in the id-token, then the authentication won’t work anymore.

✨ We’ve successfully interacted with Minikube with our Auth0 user. ✨

Refreshing the token

Since the Resource Owner flow doesn’t return a refresh_token, the oidc authorization provider plugin for kubectl won’t be able to refresh the token, thus a manual refresh is needed.

The Kubernetes documentation offers a solution for this: ExecCredentials. You can use an existing Go plugin or write your own program that gets executed and prints an ExecCredential object to stdout, which looks like this:

{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "my-bearer-token"
  }
}

If an expirationTimestamp is provided along with the token, kubectl will cache the token until it expires; if the expirationTimestamp is missing, kubectl will use the token until the server responds with a 401.
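
For example, a response whose token kubectl may cache until 20:00 UTC would look like this (the timestamp is just an illustration):

{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "my-bearer-token",
    "expirationTimestamp": "2021-05-14T20:00:00Z"
  }
}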

A kubeconfig file made for an ExecCredential scenario would look like this:

users:
- name: nuculabs-kube@nuculabs.dev
  user:
    exec:
      apiVersion: "client.authentication.k8s.io/v1beta1"
      command: "my-program-that-prints-an-credential"
      args:
      - "print-token"

Thank you for reading πŸ“š!

References:

Article cover photo by θ΄θŽ‰ε„Ώ DANIST.