Separate Audio from Video (with ffmpeg)

Hello 👋

In this short article I will show you how to split the audio from a video using ffmpeg.

When I worked on my Udemy course I needed a way to process audio in Audacity and edit the video in Kdenlive.

So I wrote two bash scripts: one to split the audio from the video, and another to combine the processed audio (usually a .wav file with the same name) back with the video.

The result is the following:

split-video.sh

#!/usr/bin/env bash
filename=$(echo "$1" | awk '{gsub(/.*[/]|[.].*/, "", $0)} 1')  # strip directory and extension
ffmpeg -i "$1" -vn -c:a copy "${filename}Temp.m4a"  # keep only the audio stream, without re-encoding
ffmpeg -i "$1" -an -c:v copy "${filename}Temp.mp4"  # keep only the video stream, without re-encoding

combine-video.sh

#!/usr/bin/env bash
filename=$(echo "$1" | awk '{gsub(/.*[/]|[.].*/, "", $0)} 1')  # strip directory and extension
ffmpeg -i "./${filename}Temp.mp4" -i "./${filename}Temp.wav" -c:v copy -c:a aac "./${filename}Final.mp4"  # mux the processed audio back into the video
rm "./${filename}Temp.m4a" "./${filename}Temp.mp4"  # clean up the intermediate files
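
A typical session looks like this (a sketch; lecture01.mp4 is just an example file name, and the scripts are assumed to be executable in the current directory):

./split-video.sh lecture01.mp4    # produces lecture01Temp.m4a and lecture01Temp.mp4
# edit lecture01Temp.m4a in Audacity, then export the result as lecture01Temp.wav
./combine-video.sh lecture01.mp4  # produces lecture01Final.mp4 and removes the temp files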

Thanks for reading and happy hacking! 🏄

Course: Build a Movie Tracking API with FastAPI and Python

Hello everyone!

I’ve created my first course, “Build a Movie Tracking API with FastAPI and Python”.

It will teach you how to build an API from the ground up, and the knowledge you gain will transfer to other projects as well.

It also includes some content on:

  • Docker
  • Kubernetes
  • MongoDB
  • Validating JWT
  • Middleware
  • RESTful advice
  • Repository Pattern
  • Metrics

If you’d like to support my work, you can access the course here. If you can’t buy it, I’ll understand; please comment below and I will share a coupon code when one is available.

Access the course here!

This is a coupon for 100% off: https://www.udemy.com/course/build-a-movie-tracking-api-with-fastapi-and-python/?couponCode=2556ADACE4DE6631E64D

Exec as root user in Kubernetes

Hi 👋,

In this short tutorial I will show you a way of getting a root shell in containers running inside a modern Kubernetes cluster.

Prerequisites:

  • Root access to the cluster node in which the container is running.

Problem Statement

We want root access to a running container, but kubectl exec (aliased to k below) drops us into a shell as a non-root user.

➜  Downloads k get pods
NAME                     READY   STATUS    RESTARTS   AGE
my-release-cassandra-0   1/1     Running   0          2m9s

➜  Downloads k exec -it pod/my-release-cassandra-0 -- /bin/bash
I have no name!@my-release-cassandra-0:/$ whoami
whoami: cannot find name for user ID 1001
I have no name!@my-release-cassandra-0:/$ touch test
touch: cannot touch 'test': Permission denied
I have no name!@my-release-cassandra-0:/$ 

Solution

To obtain root access, first grab the container ID from the pod description:

k describe pod my-release-cassandra-0
Containers:
  cassandra:
    Container ID:  containerd://8fa7af3900d556aa8a91b1ac4cbe46335e8df233f8645b0a2329b2f0e6d76177
    Image:         docker.io/bitnami/cassandra:4.0.7-debian-11-r0
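
If you are not sure which node the pod is scheduled on, kubectl can report it (same k alias as above); the NODE column is the host you need a shell on:

k get pod my-release-cassandra-0 -o wide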

Then, if the ID starts with containerd://, run the following command on the node where the pod is running:

# -u 0 runs the shell as UID 0 (root) inside the container
sudo runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 8fa7af3900d556aa8a91b1ac4cbe46335e8df233f8645b0a2329b2f0e6d76177 /bin/bash

You should get a root shell into the Cassandra container:

root@my-release-cassandra-0:/# whoami
root
root@my-release-cassandra-0:/# touch test
root@my-release-cassandra-0:/# ls
bin	 boot  docker-entrypoint-initdb.d  etc	 lib	media  opt   root  run.sh  srv	test  usr
bitnami  dev   entrypoint.sh		   home  lib64	mnt    proc  run   sbin    sys	tmp   var
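
As an aside, if the container ID had started with docker:// instead, the Docker CLI can do the same thing directly (a sketch; the container ID below is a placeholder):

docker exec -it -u 0 <container-id> /bin/bash  # -u 0 requests UID 0 (root)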

Thanks for reading and happy cloud surfing! 🏄

Apache Flink Checkpoints on S3 and S3 compatible storage

Hello,

Recently someone working at Yahoo emailed me about an old thread I started on the Apache Flink user mailing list. I replied to the email, but decided to also turn the reply into a blog post, because it might help other people as well.

Email

Hi,

I was able to get it working after tinkering with it. The issue was mainly a miscommunication: we didn’t know for sure which authentication method we were using in AWS. We were using only the s3:// scheme.

Here are our configuration options:

On S3 compatible storage:

fs.s3a.access.key: ""
fs.s3a.secret.key: ""
fs.s3a.connection.ssl.enabled: "false"
fs.s3a.endpoint: "ceph-mcr-1.xxx.xxx.xxx:xxx"
fs.s3a.list.version: "1"
s3.path.style.access: "true"
containerized.master.env.ENABLE_BUILT_IN_PLUGINS: "flink-s3-fs-hadoop-1.13.2.jar"
containerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS: "flink-s3-fs-hadoop-1.13.2.jar"
state.backend: "rocksdb"
state.backend.incremental: "true"
state.checkpoints.dir: "s3://bucket-name/checkpoints/$cluster_name$/"
state.savepoints.dir: "s3://bucket-name/savepoints/$cluster_name$/"

On S3 with AWS:

fs.s3a.aws.credentials.provider: "com.amazonaws.auth.WebIdentityTokenCredentialsProvider"
containerized.master.env.ENABLE_BUILT_IN_PLUGINS: "flink-s3-fs-hadoop-1.13.2.jar"
containerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS: "flink-s3-fs-hadoop-1.13.2.jar"
state.backend: "rocksdb"
state.backend.incremental: "true"
state.checkpoints.dir: "s3://xxx/checkpoints/$cluster_name$/"
state.savepoints.dir: "s3://xxx/savepoints/$cluster_name$/"

fs.s3a.aws.credentials.provider was the authentication method (credentials provider) we were missing; it’s not listed in the Hadoop plugin docs[2], but it is in the AWS Java SDK docs[3][4]. AWS mounts the secrets inside the Flink pods, so using this provider should make it work without further configuration.

Note that flink-s3-fs-hadoop-1.13.2.jar needs to be adapted to your Flink version. $cluster_name$ should also be substituted with your cluster/deployment name.
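
If you want to confirm that checkpoints are actually being written, the AWS CLI can list the bucket against a custom endpoint. This is only a sanity-check sketch: it assumes the CLI is installed, the same credentials are exported, and the endpoint from the config above is reachable over plain HTTP (SSL is disabled there):

aws s3 ls "s3://bucket-name/checkpoints/" --recursive --endpoint-url "http://ceph-mcr-1.xxx.xxx.xxx:xxx"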

That’s pretty much it; I’m also attaching the Flink S3 docs[1] to the email. Thanks for reaching out! Hope you’ll figure it out!

Best,

Denis Nutiu

[1] docs/deployment/filesystems/s3/
[2] hadoop-aws/tools/hadoop-aws/index.html#S3A
[3] amazonaws/auth/AWSCredentialsProvider.html
[4] amazonaws/auth/WebIdentityTokenCredentialsProvider.html

As a side note, if you’re using the Flink Operator to deploy your Flink job, you can set the environment variables in the pod template file instead of flink-conf.yaml.
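
Whichever way you set them, you can verify that the variables actually reached the running pods; a minimal check, with a placeholder pod name:

kubectl exec <flink-jobmanager-pod> -- env | grep ENABLE_BUILT_IN_PLUGINS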

Thanks for reading!