I recently updated the firmware of my RØDE audio interface from 1.12.x to 1.13, and afterwards the device no longer worked properly on Linux. It turned out to be an audio sampling issue; the kernel log was filled with messages like:
[feb21 16:20] usb 1-9.3: 1:1: cannot set freq 44100 to ep 0x82
[ +0,005008] usb 1-9.3: 1:1: cannot set freq 44100 to ep 0x82
[ +0,005019] usb 1-9.3: 1:1: cannot set freq 44100 to ep 0x82
[ +0,004971] usb 1-9.3: 1:1: cannot set freq 44100 to ep 0x82
[ +0,015019] usb 1-9.3: 1:1: cannot set freq 44100 to ep 0x82
This is a short tutorial describing how to monitor your Kubernetes cluster's container logs using the Loki stack. But why? Because it is easier to view and filter your logs in Grafana, and to store them persistently in Loki, than to scroll through them in a terminal.
Let’s get started! Assuming you already have Microk8s installed, enable the following addons:
You can enable an add-on by running microk8s enable, e.g. microk8s enable dns
dns # CoreDNS
ha-cluster # Configure high availability on the current node
metrics-server # K8s Metrics Server for API access to service metrics
storage # Storage class; allocates storage from host directory
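If it helps, the addons above can also be enabled in a single call (a sketch; add-on names may vary slightly between Microk8s versions):

```shell
# Enable all four addons listed above in one go.
microk8s enable dns ha-cluster metrics-server storage
```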
Note: Microk8s comes with a bundled kubectl and helm3. Just run microk8s kubectl or microk8s helm3. If you want to use your host kubectl you can configure it via: microk8s config > ~/.kube/config.
Warning: Be extra careful when running the microk8s config > ~/.kube/config command because it will overwrite the old config file.
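A safer pattern is to back up the existing file first; a minimal sketch, assuming the default kubeconfig path:

```shell
# Back up any existing kubeconfig before overwriting it.
[ -f ~/.kube/config ] && cp ~/.kube/config ~/.kube/config.bak

# Write the Microk8s cluster config for the host kubectl to use.
microk8s config > ~/.kube/config
```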
Then proceed by installing Loki. Loki stores all logs in object storage. This is efficient, but the trade-off is that you can't run complex aggregations and searches against your data. We are going to install Loki for exploration purposes, but if you're looking for a production-ready version, check out the loki-distributed helm chart.
Run the following helm commands to install Loki. You may want to install helm, or use the microk8s helm3 command.
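A minimal sketch of the install, assuming the official Grafana Helm repository and the loki-stack chart (which bundles Loki together with Promtail, the agent that ships container logs into it):

```shell
# Add the Grafana chart repository and refresh the local index.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Install the loki-stack chart; "loki" is the release name assumed here.
helm install loki grafana/loki-stack
```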
Finally, we need to visualize the logs using Grafana. Install it by running the helm command and then, edit the service to change its type from ClusterIP to NodePort.
Changing the service type to NodePort will let you open Grafana in your browser without having to set up an Ingress.
❗❗To use VS Code as the default editor, export the following environment variable: KUBE_EDITOR="code -w" (the quotes are needed because of the space).
helm install grafana grafana/grafana
kubectl edit service/grafana
# Change spec.type to NodePort
# Grab the service's port using kubectl get services and look for 32204:
# grafana NodePort 10.152.183.84 <none> 80:32204/TCP 6d
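If you prefer a non-interactive alternative to kubectl edit, the same change can be applied with kubectl patch (the release name grafana is assumed from the install above):

```shell
# Switch the service type to NodePort without opening an editor.
kubectl patch service/grafana -p '{"spec": {"type": "NodePort"}}'

# Confirm the assigned node port (the 3xxxx value after the colon).
kubectl get service grafana
```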
Note: If you're on Windows, you will need to run kubectl cluster-info and use the cluster's IP address to access the service. On Linux you should be able to open http://localhost:32204 directly.
Kubernetes control plane is running at https://172.20.138.170:16443
Grab your Grafana admin password by following the instructions in the helm notes, which are displayed after Grafana has been installed. If you don't have base64 on your OS, check out CyberChef; it can decode base64 text.
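On Linux, the decode step from the notes usually looks like this (the secret name grafana is assumed to match the release name; check the printed notes for the exact command):

```shell
# Read the admin password from the grafana secret and base64-decode it.
kubectl get secret grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```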
After you’ve successfully logged in, head to Settings -> DataSources and add the Loki data source.
Head back to the Explore menu and view the logs using the Loki data source. You can click Log browser to see all available values for the app label.
Promtail should now be shipping logs into Loki and dynamically creating labels for each newly created container. If you followed along, congratulations!
This is a short story on how I got my pull request merged into Apache Flink.
It started with the need to set CPU and Memory limits to Flink jobs running under Kubernetes.
The first thing I did was join the user mailing list and ask whether anyone had encountered the issue and whether there was a solution to it. The people on the mailing list were very friendly, and they pointed me to an existing ticket on the Flink Jira board, which was exactly what I needed.
After sending the signed document via email, I cloned the Flink project from GitHub and imported it into my IntelliJ IDE. Flink has some great documentation on how to set up your IDE and import the project.
Lastly, I implemented the feature and submitted the PR flink/pull/17098. The first time around I forgot to generate the code docs and got a CI error. After the error was fixed, the PR was merged. It did not speed things up as much as I had initially hoped, since the change only shipped with Flink 1.15. Nonetheless, it was a smooth and fun process, and the code review I received was well done too.
I hope your experience contributing to open-source software will be as fun as mine was.