How to make a RØDE audio interface work on Linux

Hi πŸ‘‹,

I recently updated the firmware of my RØDE audio interface from 1.12.x to 1.13, and the device stopped working properly on Linux. It turned out to be a sample-rate issue: the kernel could no longer set the interface to 44.1 kHz.

dmesg -kH

[feb21 16:20] usb 1-9.3: 1:1: cannot set freq 44100 to ep 0x82
[  +0,005008] usb 1-9.3: 1:1: cannot set freq 44100 to ep 0x82
[  +0,005019] usb 1-9.3: 1:1: cannot set freq 44100 to ep 0x82
[  +0,004971] usb 1-9.3: 1:1: cannot set freq 44100 to ep 0x82
[  +0,015019] usb 1-9.3: 1:1: cannot set freq 44100 to ep 0x82

To fix it, I did the following:

1. Edit PulseAudio’s πŸ“– daemon.conf.

nano /etc/pulse/daemon.conf

And add the following lines:

default-sample-format = s24le
default-sample-rate = 48000
alternate-sample-rate = 48000

2. Disconnect the device, then kill the PulseAudio daemon with πŸ”ͺ pulseaudio -k.

3. Reconnect the device πŸ”Œ; it should now work without problems.
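
If you want to double-check that PulseAudio picked up the new settings, a quick sanity check (assuming pactl is installed) is to look at the default sample specification:

pactl info | grep "Default Sample Specification"
# Expected output, roughly: Default Sample Specification: s24le 2ch 48000Hz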

Thanks for reading and happy hacking! πŸ₯·

Container log monitoring on Microk8s with Loki, Grafana and Promtail

Hi πŸ‘‹

This is a short tutorial describing how to monitor your Kubernetes cluster’s container logs using the Loki stack. But why? Because it is easier to view and filter your logs in Grafana, and to store them persistently in Loki, than to sift through them in a terminal.

Let’s get started! Assuming you already have Microk8s installed, enable the following addons:

You can enable an add-on by running microk8s enable, e.g. microk8s enable dns. To enable everything this tutorial needs in one go, see the snippet after the list below.

addons:
  enabled:
    dns                  # CoreDNS
    ha-cluster           # Configure high availability on the current node
    metrics-server       # K8s Metrics Server for API access to service metrics
    storage              # Storage class; allocates storage from host directory
Note: Microk8s comes with a bundled kubectl and helm3. Just run microk8s kubectl or microk8s helm3. If you want to use your host’s kubectl, you can configure it via: microk8s config > ~/.kube/config.

Warning: Be extra careful when running the microk8s config > ~/.kube/config command because it will overwrite the old config file.
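
If you want to play it safe, back up the existing config first, something along these lines:

cp ~/.kube/config ~/.kube/config.bak   # keep a copy of the current config
microk8s config > ~/.kube/config       # then overwrite it with the MicroK8s one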

Then proceed by installing Loki. Loki will store all the logs using object storage. This is efficient, but the trade-off is that you can’t run complex aggregations and searches against your data. We are going to install Loki for exploration purposes; if you’re looking for a production-ready setup, check out the loki-distributed Helm chart.

Run the following helm commands to install Loki. You may want to install helm or use the microk8s helm3 command.

helm repo add grafana https://grafana.github.io/helm-charts

helm install loki grafana/loki

You should get the following pods and services by running kubectl get pods and kubectl get services:

NAME                        READY   STATUS        RESTARTS   AGE
loki-0                      1/1     Running       0          9m8s

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.152.183.1     <none>        443/TCP    54m
loki-headless   ClusterIP   None             <none>        3100/TCP   9m23s
loki            ClusterIP   10.152.183.187   <none>        3100/TCP   9m23s

Now we can install Promtail. Promtail ships all the container logs to Loki, and it should work auto-magically by auto-discovering all the pods running inside your cluster.

To let Promtail know where our Loki instance lives, we give it the push endpoint of the Loki service: http://loki-headless.default.svc.cluster.local:3100/loki/api/v1/push.

helm install promtail grafana/promtail --set config.lokiAddress=http://loki-headless.default.svc.cluster.local:3100/loki/api/v1/push
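
You can quickly confirm that the Promtail DaemonSet is up and shipping logs; a sanity check, assuming the chart’s default labels:

kubectl get pods -l app.kubernetes.io/name=promtail        # expect one Running promtail pod per node
kubectl logs -l app.kubernetes.io/name=promtail --tail=20  # should show targets being scraped, no push errors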

Finally, we need to visualize the logs using Grafana. Install it by running the helm command below, then edit the service to change its type from ClusterIP to NodePort.

Changing the service type to NodePort will allow you to visit Grafana in your browser without needing to add an ingress.

❗❗ To use VS Code as the default editor, export the following environment variable: KUBE_EDITOR="code -w"

helm install grafana grafana/grafana

kubectl edit service/grafana
# Change spec.type from ClusterIP to NodePort
# Grab the service's port using kubectl get services and look for 32204:
# grafana                         NodePort    10.152.183.84    <none>        80:32204/TCP   6d
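
If you would rather not open an editor at all, the same change can be made with a one-liner; a sketch using kubectl patch, assuming the release is named grafana:

kubectl patch service grafana -p '{"spec": {"type": "NodePort"}}'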

Note: If you’re on Windows, you will need to run kubectl cluster-info and use the cluster’s IP address to access the service. On Linux you should be able to access http://localhost:32204.

kubectl cluster-info
Kubernetes control plane is running at https://172.20.138.170:16443

To access Grafana, visit http://172.20.138.170:32204, where 32204 is the service’s NodePort.

Grab your Grafana admin password by following the instructions from the Helm notes, which are displayed after Grafana has been installed. If you don’t have base64 on your OS, check out CyberChef; it can decode base64 text.
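
The command from the chart’s notes looks roughly like this, assuming the release is named grafana and lives in the default namespace:

kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo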


After you’ve successfully logged in, head to Settings -> Data Sources and add the Loki data source.
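
For the data source URL you can use the Loki service address, http://loki:3100, since Grafana runs in the same namespace as Loki. If you want to confirm that Loki is actually ready before wiring it up, a throwaway curl pod does the trick; a sketch, where the pod name and image are arbitrary choices:

kubectl run loki-check --image=curlimages/curl --rm -it --restart=Never -- curl -s http://loki:3100/ready
# prints "ready" once Loki has finished starting up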

Head back to the Explore menu and display Loki’s logs using the Loki data source in Grafana. You can click log browser to view all available values for the app label.

Promtail should now import logs into Loki and create labels dynamically for each newly created container. If you followed along, congratulations!

Thanks for reading and happy hacking! πŸ”§

How I got my PR merged into Apache Flink

Hi πŸ‘‹

This is a short story on how I got my pull request merged into Apache Flink.

It started with the need to set CPU and memory limits for Flink jobs running on Kubernetes.

The first thing I did was join the user mailing list and ask whether anyone had encountered the issue and whether there was a solution to it. The people on the mailing list were very friendly and pointed me to an existing ticket on the Flink Jira board, which was exactly what I needed.

To speed things up, I decided to implement the ticket myself. I wrote on the mailing list that I wanted to implement FLINK-15648 and started signing the Apache individual contributor license agreement.

After sending the signed document via email, I cloned the Flink project from GitHub and imported it into my IntelliJ IDE. Flink has great documentation on how to set up your IDE and import the project.

Lastly, I implemented the feature and submitted the PR flink/pull/17098. The first time around I forgot to generate the code docs and got a CI error. After the error was fixed, the PR was merged. It did not speed things up as much as I had initially hoped, since it was merged into Flink 1.15. Nonetheless, it was a smooth and fun process, and the code review I received was also well done.

I hope your experience contributing to open-source software will be as fun as mine was.

Thanks for reading and happy hacking!

How to install a specific Python version on Linux

Hello, πŸ‘‹

In this article I will show you how to install specific Python versions on Linux using the following methods: compiling from source, the deadsnakes PPA, and pyenv.

To make things easier, if you want to follow along in an environment that you can safely break, you can create a local Kubernetes cluster using Minikube.

Next, I’m going to use the following YAML file to create an Ubuntu pod:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    app: ubuntu
spec:
  containers:
  - image: ubuntu
    command:
      - "sleep"
      - "604800"
    imagePullPolicy: IfNotPresent
    name: ubuntu
  restartPolicy: Always

Save the above YAML in a file named ubuntu_pod.yaml and run:

kubectl apply -f ./ubuntu_pod.yaml

To get a shell on the Ubuntu pod, run:

kubectl exec -it ubuntu -- /bin/bash

To start from scratch, simply delete the pod with kubectl delete pod/ubuntu and then recreate it.

Compiling Python from source

Before compiling Python, you will need to set up the build environment; thankfully, it is straightforward.

Pyenv has great instructions for this: https://github.com/pyenv/pyenv/wiki#suggested-build-environment.

On Ubuntu, to build Python, install the following packages:

apt-get update; apt-get install make build-essential libssl-dev zlib1g-dev \
libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev

Then, look up the desired Python version on python.org and, for example, to install Python 3.9.9, run:

wget https://www.python.org/ftp/python/3.9.9/Python-3.9.9.tgz
tar -xzf Python-3.9.9.tgz
cd Python-3.9.9

Then, run configure:

./configure --enable-optimizations

Finally, run make install if you want to replace the default Python installation, or make altinstall to install Python under the binary name python3.9:

make altinstall
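
The optimized build can take a while; you can parallelize it across all CPU cores, something along these lines:

make -j "$(nproc)" altinstall   # build and install using all available cores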

To test the installation run:

python3.9 --version
Python 3.9.9

pip3.9 --version
pip 21.2.4 from /usr/local/lib/python3.9/site-packages/pip (python 3.9)

Installing Python via the third-party deadsnakes PPA

To install Python using the deadsnakes PPA, run:

apt-get update
apt-get install software-properties-common
add-apt-repository ppa:deadsnakes/ppa
apt-get update
apt install python3.9 python3-pip

Then, to test the installation run:

root@ubuntu:/# python3.9 --version
Python 3.9.10

root@ubuntu:/# python3.9 -m pip --version
pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.9)

Installing Python via Pyenv

I have already written an article on how to install Python using pyenv; check it out if you wish.

https://nuculabs.wordpress.com//2020/06/27/pyenv-for-linux-users/
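
For completeness, the pyenv route boils down to roughly the following; a sketch using the pyenv-installer script, and after installing you also need to add pyenv’s init lines to your shell profile as the installer instructs:

curl https://pyenv.run | bash   # install pyenv under ~/.pyenv
exec "$SHELL"                   # restart the shell so pyenv is picked up
pyenv install 3.9.9             # download and build Python 3.9.9
pyenv global 3.9.9              # make it the default python for your user
python --version                # Python 3.9.9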

Thanks for reading! πŸ“š