Running the code will return a {"Hello": "World"} JSON response when you visit the root endpoint / at http://127.0.0.1:8000.
When you check the console window, the following log lines are printed:
INFO: Started server process [10276]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:53491 - "GET / HTTP/1.1" 200 OK
Notice the Uvicorn log "GET / HTTP/1.1" 200 OK.
According to Uvicorn’s deployment docs, we should run Uvicorn in a production setting with the following command: gunicorn -k uvicorn.workers.UvicornWorker main:create_app.
(venv2) ➜ FastAPILogging gunicorn -k uvicorn.workers.UvicornWorker main:create_app
[2021-05-17 22:10:44 +0300] [6250] [INFO] Starting gunicorn 20.1.0
[2021-05-17 22:10:44 +0300] [6250] [INFO] Listening at: http://127.0.0.1:8000 (6250)
[2021-05-17 22:10:44 +0300] [6250] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2021-05-17 22:10:44 +0300] [6252] [INFO] Booting worker with pid: 6252
[2021-05-17 22:10:45 +0300] [6252] [WARNING] ASGI app factory detected. Using it, but please consider setting the --factory flag explicitly.
[2021-05-17 22:10:45 +0300] [6252] [INFO] Started server process [6252]
[2021-05-17 22:10:45 +0300] [6252] [INFO] Waiting for application startup.
[2021-05-17 22:10:45 +0300] [6252] [INFO] Application startup complete.
Now, if we visit the root endpoint, the console won’t print “GET / HTTP/1.1” 200 OK anymore.
To fix it we need a custom UvicornWorker and a logging configuration file.
Create a new file and name it logging.yaml, then paste the following contents in it:
version: 1
disable_existing_loggers: false

formatters:
  standard:
    # matches the "2021-05-17 22:31:28,185 - INFO - ..." lines shown further down
    format: "%(asctime)s - %(levelname)s - %(message)s"

handlers:
  console:
    class: logging.StreamHandler
    formatter: standard
    stream: ext://sys.stdout

loggers:
  uvicorn.error:
    propagate: true

root:
  level: INFO
  handlers: [console]
  propagate: no
This file will configure our root logger and our Uvicorn logger. To read more on the topic please see Python logging configuration.
Next, we will create a custom UvicornWorker class that sets log_config to the path of our logging.yaml file, passing the logging configuration that we’ve just made to Uvicorn.
I added the following code in main.py:
from uvicorn.workers import UvicornWorker


class MyUvicornWorker(UvicornWorker):
    # Pass our logging configuration file to Uvicorn through the worker's
    # config kwargs (the path below is specific to my machine).
    CONFIG_KWARGS = {
        "log_config": "/mnt/c/Users/denis/PycharmProjects/FastAPILogging/logging.yaml",
    }
If we run the application with:
gunicorn -k main.MyUvicornWorker main:create_app
We should see the Uvicorn access logs printed in the console:
(venv2) ➜ FastAPILogging gunicorn -k main.MyUvicornWorker main:create_app
[2021-05-17 22:31:28 +0300] [6278] [INFO] Starting gunicorn 20.1.0
[2021-05-17 22:31:28 +0300] [6278] [INFO] Listening at: http://127.0.0.1:8000 (6278)
[2021-05-17 22:31:28 +0300] [6278] [INFO] Using worker: main.MyUvicornWorker
[2021-05-17 22:31:28 +0300] [6280] [INFO] Booting worker with pid: 6280
2021-05-17 22:31:28,185 - WARNING - ASGI app factory detected. Using it, but please consider setting the --factory flag explicitly.
2021-05-17 22:31:28,185 - INFO - Started server process [6280]
2021-05-17 22:31:28,185 - INFO - Waiting for application startup.
2021-05-17 22:31:28,185 - INFO - Application startup complete.
2021-05-17 22:31:30,129 - INFO - 127.0.0.1:54004 - "GET / HTTP/1.1" 200
In this article, I will demonstrate how to configure Kubernetes (minikube) to use OpenID Connect as an authentication strategy.
We will cover the Resource Owner Password flow. Feel free to choose the right authentication flow depending on your application’s needs.
Please refer to a flow-selection diagram, such as the one in Auth0’s documentation, in order to choose the flow.
Note that the Client Credentials flow is not supported by Kubernetes. According to the official docs:
“To identify the user, the authenticator uses the id_token (not the access_token) from the OAuth2 token response as a bearer token.”
Since the Client Credentials flow only returns an access_token, we won’t be able to use this flow with Kubernetes.
Set up the OAuth client
To play around with these concepts we need to create an OAuth application. I prefer to use Auth0 so that everyone can follow along for free.
1. Create an application of type Regular Web Application.
2. Open the newly created application, go to Settings, scroll down and click Advanced Settings. On the Grant Types tab, click the Password item and hit Save.
3. Go to Authentication -> Database and create a new database connection. I named my connection Username-Password-Authentication.
4. Go to Settings -> Tenant and, in the API Authorization Settings section, set the Default Directory connection name to Username-Password-Authentication.
5. Go to User Management -> Users and create a new user. I named my user nuculabs-kube@nuculabs.dev and gave it the following password: Pa27rgN9KneN.
Next, click on the user and set its email address as verified.
Set up Minikube
We can run Kubernetes locally on our computers using Minikube.
If it is your first time playing with Minikube, follow the installation instructions from https://minikube.sigs.k8s.io/docs/start/ and start your Minikube cluster with minikube start.
After your cluster has started, stop it with minikube stop. This step is necessary; otherwise you will encounter errors when running the next command.
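Putting the flags explained below together, the start command should look roughly like this (the issuer URL and client ID here are the same placeholder values that appear in the output further down):

minikube start \
  --extra-config=apiserver.authorization-mode=RBAC \
  --extra-config=apiserver.oidc-issuer-url=https://xxx.auth0.com/ \
  --extra-config=apiserver.oidc-client-id=1RaJmjhjaapNLGXQjcYViQ15ZYzZoZdL \
  --extra-config=apiserver.oidc-username-claim=email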
The --extra-config=apiserver.oidc-issuer-url= value must be equal to your Auth0 domain; it must start with https:// and end with a /.
The --extra-config=apiserver.oidc-client-id= will contain the Client ID of the OAuth client.
The --extra-config=apiserver.oidc-username-claim=email will be set to email because we want to map the email of our Auth0 user nuculabs-kube@nuculabs.dev to a user that we will create within Kubernetes.
Auth0 will return an id_token of the form Header.Payload.Signature. In our case, it will have a payload with a field email that will be equal to the user’s email address.
The following token doesn’t include the email claim, but it does include the name claim; in that case, we could use name as our apiserver.oidc-username-claim.
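For illustration, a decoded payload of that kind could look like this (all of the values below are made up):

{
  "iss": "https://xxx.auth0.com/",
  "sub": "auth0|60a2b3c4d5e6f7a8b9c0d1e2",
  "aud": "1RaJmjhjaapNLGXQjcYViQ15ZYzZoZdL",
  "iat": 1621282244,
  "exp": 1621368644,
  "name": "NucuLabs Kube"
}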
If your Minikube cluster has started you should see the following output:
minikube v1.20.0 on Microsoft Windows 10 Home 10.0.19042 Build 19042
Using the docker driver based on existing profile
Starting control plane node minikube in cluster minikube
Pulling base image ...
Restarting existing docker container for "minikube" ...
Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
  ▪ apiserver.authorization-mode=RBAC
  ▪ apiserver.oidc-issuer-url=https://xxx.auth0.com/
  ▪ apiserver.oidc-client-id=1RaJmjhjaapNLGXQjcYViQ15ZYzZoZdL
  ▪ apiserver.oidc-username-claim=email
Verifying Kubernetes components...
  ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: storage-provisioner, default-storageclass
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Congrats on making it this far!
The next step is to map our nuculabs-kube@nuculabs.dev user to an admin role inside the Kubernetes cluster.
Open your favorite code editor, create clusterrole.yaml and paste the following contents in it:
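A minimal sketch of such a mapping, assuming we bind the user's email to the built-in cluster-admin ClusterRole (the binding name here is my own):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-admin-binding
subjects:
  - kind: User
    # must match the email claim of the id_token
    name: nuculabs-kube@nuculabs.dev
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Apply it with kubectl apply -f clusterrole.yaml. Next we need an id_token; with the Resource Owner Password flow we can request one straight from Auth0's token endpoint. A sketch, using the user created earlier and placeholder client credentials:

curl --request POST \
  --url https://xxx.auth0.com/oauth/token \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data 'grant_type=password' \
  --data 'username=nuculabs-kube@nuculabs.dev' \
  --data 'password=Pa27rgN9KneN' \
  --data 'scope=openid email' \
  --data 'client_id=YOUR_CLIENT_ID' \
  --data 'client_secret=YOUR_CLIENT_SECRET'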
Note that the token expires in 86400 seconds (24 hours); we can only use it in that time interval. Feel free to decode the token on jwt.io to inspect its contents.
To tell kubectl to use the id_token we’ve just retrieved, we need to update the kubeconfig file as follows:
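A sketch of the relevant user entry, using kubectl's built-in oidc auth provider (the user name is my own; the issuer and client id are the placeholders from earlier):

users:
  - name: nuculabs-kube
    user:
      auth-provider:
        name: oidc
        config:
          idp-issuer-url: https://xxx.auth0.com/
          client-id: 1RaJmjhjaapNLGXQjcYViQ15ZYzZoZdL
          # paste the id_token retrieved in the previous step
          id-token: PASTE_THE_ID_TOKEN_HERE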
Note: The authentication won’t work if the user doesn’t have their email verified: E0513 20:03:32.089659 1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, oidc: email not verified]. You will need to create another user or set the user’s email as verified from the Auth0 interface.
To verify that the authentication works save the kubeconfig file and run:
PS C:\Users\denis\Downloads> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 33m
If you change a single byte in the id-token, then the authentication won’t work anymore.
We’ve successfully interacted with Minikube using our Auth0 user.
Refreshing the token
Since the Resource Owner flow doesn’t return a refresh_token, the oidc authorization provider plugin for kubectl won’t be able to refresh the token, thus a manual refresh is needed.
The Kubernetes documentation offers a solution for this: ExecCredentials. You can use an existing Go plugin or write yourself a program that gets executed and prints an ExecCredential object to stdout, which looks roughly like this (the token and timestamp below are placeholders):
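{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "my-bearer-token",
    "expirationTimestamp": "2021-05-18T20:10:44Z"
  }
}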
If an expirationTimestamp is provided along with the token, kubectl will cache the token until it expires; otherwise, if the expirationTimestamp is missing, kubectl will use the token until the server responds with a 401.
A kubeconfig user entry made for an ExecCredential scenario would look roughly like this (the command name below is a hypothetical helper that prints the ExecCredential object):
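users:
  - name: nuculabs-kube
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        # hypothetical helper binary that refreshes the token and prints
        # an ExecCredential object to stdout
        command: refresh-kube-token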