Exec as root user in Kubernetes

Hi 👋,

In this short tutorial I will show you a way of getting a root shell in containers running inside a modern Kubernetes cluster.

Prerequisites:

  • Root access to the cluster node in which the container is running.

Problem Statement

We want root access to a running container, but kubectl exec (aliased to k in the snippets below) gives us a non-root user.

➜  Downloads k get pods
NAME                     READY   STATUS    RESTARTS   AGE
my-release-cassandra-0   1/1     Running   0          2m9s

➜  Downloads k exec -it pod/my-release-cassandra-0 -- /bin/bash
I have no name!@my-release-cassandra-0:/$ whoami
whoami: cannot find name for user ID 1001
I have no name!@my-release-cassandra-0:/$ touch test
touch: cannot touch 'test': Permission denied
I have no name!@my-release-cassandra-0:/$ 

Solution

To obtain root access, first grab the container ID from the pod:

k describe pod my-release-cassandra-0
Containers:
  cassandra:
    Container ID:  containerd://8fa7af3900d556aa8a91b1ac4cbe46335e8df233f8645b0a2329b2f0e6d76177
    Image:         docker.io/bitnami/cassandra:4.0.7-debian-11-r0
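
The container ID can also be pulled directly with jsonpath, and -o wide shows which node the pod is scheduled on, i.e. the node you need a shell on (both commands assume the pod name from above):

k get pod my-release-cassandra-0 -o jsonpath='{.status.containerStatuses[0].containerID}'
k get pod my-release-cassandra-0 -o wide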

Then, if the ID starts with containerd://, run the following command on the node the pod is running on:

sudo runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 8fa7af3900d556aa8a91b1ac4cbe46335e8df233f8645b0a2329b2f0e6d76177 /bin/bash

You should get a root shell into the Cassandra container:

root@my-release-cassandra-0:/# whoami
root
root@my-release-cassandra-0:/# touch test
root@my-release-cassandra-0:/# ls
bin	 boot  docker-entrypoint-initdb.d  etc	 lib	media  opt   root  run.sh  srv	test  usr
bitnami  dev   entrypoint.sh		   home  lib64	mnt    proc  run   sbin    sys	tmp   var

Thanks for reading and happy cloud surfing! 🏄

Apache Flink Checkpoints on S3 and S3 compatible storage

Hello,

Recently someone working at Yahoo emailed me regarding an old thread I had started on the Apache Flink user mailing list. I replied to the email, but also decided to turn the reply into a blog post, because it might help other people as well.

Email

Hi,

I was able to get it working after tinkering with it. The issue was mainly a miscommunication; we didn’t formally know which authentication method we were using in AWS. We were using only s3://.

Here are our configuration options:

On S3 compatible storage:

fs.s3a.access.key: ""
fs.s3a.secret.key: ""
fs.s3a.connection.ssl.enabled: "false"
fs.s3a.endpoint: "ceph-mcr-1.xxx.xxx.xxx:xxx"
fs.s3a.list.version: "1"
s3.path.style.access: "true"
containerized.master.env.ENABLE_BUILT_IN_PLUGINS: "flink-s3-fs-hadoop-1.13.2.jar"
containerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS: "flink-s3-fs-hadoop-1.13.2.jar"
state.backend: "rocksdb"
state.backend.incremental: "true"
state.checkpoints.dir: "s3://bucket-name/checkpoints/$cluster_name$/"
state.savepoints.dir: "s3://bucket-name/savepoints/$cluster_name$/"

On S3 with AWS:

fs.s3a.aws.credentials.provider: "com.amazonaws.auth.WebIdentityTokenCredentialsProvider"
containerized.master.env.ENABLE_BUILT_IN_PLUGINS: "flink-s3-fs-hadoop-1.13.2.jar"
containerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS: "flink-s3-fs-hadoop-1.13.2.jar"
state.backend: "rocksdb"
state.backend.incremental: "true"
state.checkpoints.dir: "s3://xxx/checkpoints/$cluster_name$/"
state.savepoints.dir: "s3://xxx/savepoints/$cluster_name$/"

fs.s3a.aws.credentials.provider was the authentication method (credentials provider) that we were missing; it’s not found in the Hadoop plugin docs[2], but it is found in the AWS Java SDK docs[3][4]. AWS mounts secrets inside Flink pods, so using this provider should make it work without further configuration.
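
For context, that credentials provider picks up the web identity token and role that EKS injects into the pod. Roughly, it expects environment variables along these lines to be present (the values below are purely illustrative):

AWS_ROLE_ARN=arn:aws:iam::123456789012:role/flink-checkpoints
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token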

Note that flink-s3-fs-hadoop-1.13.2.jar needs to be adapted to your Flink version. $cluster_name$ should also be substituted with your cluster/deployment name.

That’s pretty much it; I’m also attaching the Flink S3 docs[1] to the email. Thanks for reaching out! Hope you’ll figure it out!

Best,

Denis Nutiu

[1] docs/deployment/filesystems/s3/
[2] hadoop-aws/tools/hadoop-aws/index.html#S3A
[3] amazonaws/auth/AWSCredentialsProvider.html
[4] amazonaws/auth/WebIdentityTokenCredentialsProvider.html

As a side note, if you’re using the Flink Operator to deploy your Flink job, you can set environment variables in the pod template instead of flink-conf.yaml.
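
For illustration, with the official Apache Flink Kubernetes Operator the plugin variable could be set on the main container through spec.podTemplate. This is only a sketch; the deployment name is made up, and the container has to be named flink-main-container so the operator merges it into the generated pods:

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: my-flink-job                       # illustrative name
spec:
  podTemplate:
    apiVersion: v1
    kind: Pod
    spec:
      containers:
        # merged into both JobManager and TaskManager pods
        - name: flink-main-container
          env:
            - name: ENABLE_BUILT_IN_PLUGINS
              value: flink-s3-fs-hadoop-1.13.2.jar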

Thanks for reading!

envsubst – Substitute Variables for Environment Variables

Hi 👋,

Introduction

In this short article I want to showcase a nice and useful Linux command.

The command 🥁 envsubst 🥁. In short, it substitutes the values of environment variables.

It’s great for populating configuration files with values from environment variables, a common operation for developers containerizing their applications.

Example Use Case

Let’s say we are containerizing an application that has the following configuration.yaml file, and we want to modify the values of the environment and log_level fields without the additional complexity of mounting the configuration.yaml file into the container/pod.

configuration:
  server: "the-app"
  environment: "production"
  log_level": "info"

To change the values of the environment and log_level fields, we create a configuration-template.yaml like so:

configuration:
    server: "the-app"
    environment: "$ENV_ENVIRONMENT"
    log_level": "$ENV_LOGLEVEL"

Then we run envsubst to substitute the configuration values:

export ENV_ENVIRONMENT=staging
export ENV_LOGLEVEL=trace

envsubst < configuration-template.yaml > configuration.yaml

The resulting configuration.yaml file will contain:

configuration:
    server: "the-app"
    environment: "staging"
    log_level: "trace"

Conclusion

The envsubst command enables us to write configuration files with placeholders that are replaced with actual values from environment variables. Before being aware of this command, I always thought that I had to somehow parse the files, substitute the variables dynamically, and ensure that the substitution didn’t change the file format or break something. Now I just envsubst.
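
A related trick: the -v/--variables flag described in the help below lists the variables referenced in a SHELL-FORMAT string, which is handy for checking what a template expects before rendering it. Running it against the template above should print the two variable names:

envsubst --variables "$(cat configuration-template.yaml)"
ENV_ENVIRONMENT
ENV_LOGLEVEL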

Thanks for reading and happy dev-ing! 🖥️🧑‍💻

envsubst help

Usage: envsubst [OPTION] [SHELL-FORMAT]

Substitutes the values of environment variables.

Operation mode:
-v, --variables output the variables occurring in SHELL-FORMAT

Informative output:
-h, --help display this help and exit
-V, --version output version information and exit

In normal operation mode, standard input is copied to standard output,
with references to environment variables of the form $VARIABLE or ${VARIABLE}
being replaced with the corresponding values. If a SHELL-FORMAT is given,
only those environment variables that are referenced in SHELL-FORMAT are
substituted; otherwise all environment variables references occurring in
standard input are substituted.

When --variables is used, standard input is ignored, and the output consists
of the environment variables that are referenced in SHELL-FORMAT, one per line.

Report bugs in the bug tracker at https://savannah.gnu.org/projects/gettext
or by email to bug-gettext@gnu.org.

Using confluent-kafka-go on macOS M1

Hello,

TLDR;

brew install librdkafka openssl@3 pkg-config
export PKG_CONFIG_PATH="/opt/homebrew/opt/openssl@3/lib/pkgconfig"
go test -tags dynamic ./...

I’ve been transitioning from a Linux machine to a macOS M1 machine at work, and when I ran the tests for a Golang project, I noticed that they failed on modules depending on librdkafka.

Initially I had problems with Kafka on macOS M1 on Docker, since I was using an older image version that didn’t have an arm64 build; updating the images to version 7.2.1[1] fixed the issues.

This time, however, the problem was with my Golang dependencies and not the Docker containers. Running the tests resulted in:

go test ./...
[...]
ld: warning: ignoring file /Users/dnutiu/go/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.8.2/kafka/librdkafka_vendor/librdkafka_darwin.a, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
Undefined symbols for architecture arm64:

Given the “Undefined symbols for architecture arm64” error, my guess is that the published confluent-kafka-go package does not (yet) contain symbols for arm64; to fix the issue you can make the module use another librdkafka build.

When installing the librdkafka formula from Homebrew, the library is built for the arm64 architecture. To install it, run:

brew install librdkafka openssl@3 pkg-config

Next, we’ll use a tool called pkg-config to tell the build where librdkafka and the other libraries are located; since librdkafka depends on openssl, we need to export PKG_CONFIG_PATH. To grab the value, run:

brew info openssl
==> openssl@3: stable 3.0.5 (bottled) [keg-only]
Cryptography and SSL/TLS Toolkit
https://openssl.org/
/opt/homebrew/Cellar/openssl@3/3.0.5 (6,444 files, 27.9MB)
  Poured from bottle on 2022-08-31 at 14:10:49
From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/openssl@3.rb
License: Apache-2.0
==> Dependencies
Required: ca-certificates ✘
==> Caveats
A CA file has been bootstrapped using certificates from the system
keychain. To add additional certificates, place .pem files in
  /opt/homebrew/etc/openssl@3/certs

and run
  /opt/homebrew/opt/openssl@3/bin/c_rehash

openssl@3 is keg-only, which means it was not symlinked into /opt/homebrew,
because macOS provides LibreSSL.

If you need to have openssl@3 first in your PATH, run:
  echo 'export PATH="/opt/homebrew/opt/openssl@3/bin:$PATH"' >> ~/.zshrc

For compilers to find openssl@3 you may need to set:
  export LDFLAGS="-L/opt/homebrew/opt/openssl@3/lib"
  export CPPFLAGS="-I/opt/homebrew/opt/openssl@3/include"

For pkg-config to find openssl@3 you may need to set:
  export PKG_CONFIG_PATH="/opt/homebrew/opt/openssl@3/lib/pkgconfig"

Check that everything works by running:

pkg-config --libs --cflags rdkafka

-I/opt/homebrew/Cellar/openssl@3/3.0.5/include -I/opt/homebrew/Cellar/librdkafka/1.9.2/include -I/opt/homebrew/Cellar/zstd/1.5.2/include -I/opt/homebrew/Cellar/lz4/1.9.4/include -L/opt/homebrew/Cellar/librdkafka/1.9.2/lib -lrdkafka

Now, all you need to do is run your tests with the -tags dynamic flag; it instructs confluent-kafka-go to use the librdkafka library installed by Homebrew instead of the bundled static one.
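
To double-check the linking, here is a minimal sketch, assuming confluent-kafka-go v1.x (which uses the github.com/confluentinc/confluent-kafka-go/kafka import path). It only prints the librdkafka version the binary is linked against:

package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	// LibraryVersion reports the librdkafka version this binary is linked against.
	ver, verStr := kafka.LibraryVersion()
	fmt.Printf("librdkafka version: %s (0x%x)\n", verStr, ver)
}

Running it with go run -tags dynamic main.go should report the Homebrew-installed librdkafka (1.9.2 in the pkg-config output above) rather than the version bundled with the Go package.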

Thanks for reading and happy hacking!🫢