When you’re ready to deploy your documentation website, say in Docker with Nginx, the following Dockerfile and Nginx default.conf should do the trick.
Dockerfile
FROM python:3.9 as builder
WORKDIR /app
COPY . .
# Install MkDocs with the Material theme and build the static site into /app/site.
RUN pip install mkdocs mkdocs-material && mkdocs build

FROM nginx as deploy
# Copy the build to the nginx directory.
COPY --from=builder /app/site/ /usr/share/nginx/html/
# Copy the nginx configuration to the nginx config directory.
COPY default.conf /etc/nginx/conf.d/
EXPOSE 8080/tcp
default.conf
server {
    listen 8080;
    root /usr/share/nginx/html/;
    index index.html;
}
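With both files sitting next to each other, building and running the image might look like this (the image name is just a placeholder):
docker build -t docs-site .
docker run --rm -d -p 8080:8080 docs-site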
I thought that making videos would be easier than typing blog posts, but to my surprise the difficulty is a bit higher. Fixing mistakes takes more time with videos, and since I’m not that great of a presenter, I struggle with presenting the content. Hopefully I will improve my skills with time and practice.
Click on mongodb-kafka-connect-mongodb-1.6.0.zip to download it, then unzip it and copy the directory into the plugin path /usr/share/java, as defined by the CONNECT_PLUGIN_PATH: “/usr/share/java,/usr/share/confluent-hub-components” environment variable.
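Assuming the Kafka Connect container from the docker-compose file is named connect (adjust the name to whatever yours is called), copying the plugin in might look like this:
unzip mongodb-kafka-connect-mongodb-1.6.0.zip
docker cp mongodb-kafka-connect-mongodb-1.6.0 connect:/usr/share/java/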
Connect needs to be restarted to pick up the newly installed plugin. Verify that the connector plugin has been installed successfully:
➜ bin curl -s -X GET http://localhost:8083/connector-plugins | jq | head -n 20
[
  {
    "class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "type": "sink",
    "version": "1.6.0"
  },
  {
    "class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "type": "source",
    "version": "1.6.0"
  },
Note: If you don’t have jq installed, you can omit it from the command.
Creating the topics
Before starting the connector, let’s create the Kafka topics events and events.deadletter; they will be used by the connector.
To create the topics, we will need to download the Confluent tools and run kafka-topics.
curl -s -O http://packages.confluent.io/archive/6.2/confluent-community-6.2.0.tar.gz
tar -xzf confluent-community-6.2.0.tar.gz
cd confluent-6.2.0/bin/
./kafka-topics --bootstrap-server localhost:9092 --list
__consumer_offsets
__transaction_state
_confluent-ksql-default__command_topic
_schemas
default_ksql_processing_log
docker-connect-configs
docker-connect-offsets
docker-connect-status
./kafka-topics --bootstrap-server localhost:9092 --create --topic events --partitions 3
Created topic events.
./kafka-topics --bootstrap-server localhost:9092 --create --topic events.deadletter --partitions 3
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues, it is best to use either, but not both.
Created topic events.deadletter.
Note: You will need Java to run the Confluent tools. If you’re on Ubuntu, you can install it with sudo apt install openjdk-8-jdk.
Starting the connector 🚙
To start the connector, it is enough to make a single POST request to the Kafka Connect REST API with the connector’s configuration.
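A minimal sketch of the configuration we will use, saved here as mongo-sink-connector.json, together with the POST request, might look like this (the dead letter queue settings are assumptions based on the description below, so adjust them to your setup):
{
  "name": "mongo-sink-connector",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "tasks.max": "1",
    "topics": "events",
    "connection.uri": "mongodb://mongodb:27017/my_events",
    "database": "my_events",
    "collection": "kafka_events",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "events.deadletter",
    "errors.deadletterqueue.topic.replication.factor": "1"
  }
}
curl -s -X POST -H "Content-Type: application/json" --data @mongo-sink-connector.json http://localhost:8083/connectors | jq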
In short, this POST will create a new connector named mongo-sink-connector using the com.mongodb.kafka.connect.MongoSinkConnector Java class. It will run a single connector task that takes all the messages from the events topic and writes them into the MongoDB instance found at mongodb://mongodb:27017/my_events, into the database named my_events and the collection named kafka_events. Records which fail to be written into the database will be placed on a dead letter topic named events.deadletter; in my opinion this is better than discarding them, since we can inspect the topic to see what went wrong.
To verify that the connector is running, you can retrieve the status of its first task with:
➜ bin curl -s -X GET http://localhost:8083/connectors/mongo-sink-connector/tasks/0/status | jq
{
  "id": 0,
  "state": "RUNNING",
  "worker_id": "connect:8083"
}
Querying the Database 🗃
Now that our Kafka Connect cluster is running and is configured, all that’s left to do is POST some dummy data into Kafka and check for it in the database.
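One way to get a test message into the events topic is the kafka-console-producer from the Confluent tools we downloaded earlier; the payload below is a trimmed version of the json.org sample glossary, and it assumes the Connect worker is configured with a schemaless JSON value converter:
./kafka-console-producer --bootstrap-server localhost:9092 --topic events
>{"glossary": {"title": "example glossary", "GlossDiv": {"title": "S"}}}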
That’s all! 🎉 If we now connect to the database using mongosh or any other client, we can query the data.
mongosh
> use my_events
switched to db my_events
> db.kafka_events.findOne()
{
  _id: ObjectId("6147242856623b0098fc756d"),
  glossary: {
    title: 'example glossary',
    GlossDiv: {
      title: 'S',
      GlossList: {
        GlossEntry: {
          ID: 'SGML',
          SortAs: 'SGML',
          GlossTerm: 'Standard Generalized Markup Language',
          Acronym: 'SGML',
          Abbrev: 'ISO 8879:1986',
          GlossDef: {
            para: 'A meta-markup language, used to create markup languages such as DocBook.',
            GlossSeeAlso: [ 'GML', 'XML' ]
          },
          GlossSee: 'markup'
        }
      }
    }
  }
}
Viewing Kafka Connect JMX Metrics
JConsole is a tool that can be used to view the JMX metrics exposed by Kafka Connect. If you installed openjdk-8, it comes bundled with it.
Start JConsole and connect to localhost:9102. If you get a warning about an insecure connection, accept the connection, and ignore it.
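If you prefer starting it from a terminal, JConsole also accepts the host and port as an argument:
jconsole localhost:9102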
After you’re connected, click the MBeans tab and explore 🦹‍♀️
Summary
Getting into Kafka and Kafka Connect can be a bit overwhelming at first. I hope that this tutorial has provided you with the necessary basics so you can continue to play and explore on your own.
Spinning up a playground for Kafka and Connect using docker-compose isn’t that complicated: you can start from the confluent-cp-community repo, which gives you everything you need to get started. With a few small modifications to the docker-compose file, we’ve spawned a MongoDB instance and exposed the JMX metrics in Kafka Connect.
Next, we’ve installed and configured the MongoDB connector and confirmed that it works as expected.
If you have any questions let me know in the comments.
In this article I will explore the topic of sharding a MongoDB database that runs on Kubernetes. Before we get started, if you want to follow along, please install the tools listed in the prerequisites section, and if you want to learn more about sharding, check out this fantastic article: Sharding Pattern.
After the installation completes, save the database’s root password and replica set key. The first time I did this, I messed up and didn’t save them properly.
Run the following commands to print the password and replica set key on the command line. If you’re on Windows, I have provided a PowerShell function for base64 at the end of this post; if you’re on Unix, don’t forget to pass --decode to base64.
kubectl get secret --namespace default my-release-mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64
kubectl get secret --namespace default my-release-mongodb-sharded -o jsonpath="{.data.mongodb-replica-set-key}" | base64
Sharding the Database
Verify that all your pods are running and start a shell connection to the mongos server.
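One way to do that (the pod name and password are placeholders; copy them from your own kubectl get pods and secret output):
kubectl get pods
kubectl exec -it <mongos-pod-name> -- mongosh -u root -p <root-password> --authenticationDatabase admin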
To enable sharding on the database and collection, I’m first going to insert some dummy data into the my_data database and the my_users collection. The script used to insert the data is attached at the end of this blog post.
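Enabling sharding itself happens from the mongos shell; a minimal sketch, assuming the my_data database and my_users collection with range sharding on the key t (as described below), looks like this:
use my_data
db.my_users.createIndex({ t: 1 })   // the shard key needs an index if the collection already holds data
sh.enableSharding("my_data")
sh.shardCollection("my_data.my_users", { t: 1 })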
If you’ve made it this far, congrats, you’ve enabled sharding. Now let’s define some rules.
Since we’re going to use a range sharding strategy based on the key t, and I have two shards available, I would like the data to be split between the two shards based on ranges of t.
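One way to express such rules is with shard zones; the zone names and the cut-off value for t below are made up, so pick ranges that fit your own data:
sh.addShardToZone("my-mongo-mongodb-sharded-shard-0", "zone-a")
sh.addShardToZone("my-mongo-mongodb-sharded-shard-1", "zone-b")
sh.updateZoneKeyRange("my_data.my_users", { t: MinKey }, { t: 1000 }, "zone-a")
sh.updateZoneKeyRange("my_data.my_users", { t: 1000 }, { t: MaxKey }, "zone-b")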
To test the rules, use the provided Python script: modify the times variable and run it with various values.
You can run db.my_users.getShardDistribution() to view the data distribution on the shards.
[direct: mongos]> db.my_users.getShardDistribution()
Shard my-mongo-mongodb-sharded-shard-0 at my-mongo-mongodb-sharded-shard-0/my-mongo-mongodb-sharded-shard0-data-0.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017,my-mongo-mongodb-sharded-shard0-data-1.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017
{
  data: '144KiB',
  docs: 1667,
  chunks: 1,
  'estimated data per chunk': '144KiB',
  'estimated docs per chunk': 1667
}
Shard my-mongo-mongodb-sharded-shard-1 at my-mongo-mongodb-sharded-shard-1/my-mongo-mongodb-sharded-shard1-data-0.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017,my-mongo-mongodb-sharded-shard1-data-1.my-mongo-mongodb-sharded-headless.default.svc.cluster.local:27017
{
  data: '195KiB',
  docs: 2336,
  chunks: 3,
  'estimated data per chunk': '65KiB',
  'estimated docs per chunk': 778
}
Adding More Shards
To add more shards to the cluster, all we need to do is run helm upgrade. If you don’t mess up the replica set key like I did, it should work on the first run.
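A sketch of that upgrade, assuming the release is called my-mongo, the bitnami/mongodb-sharded chart, and a bump from two to three shards (the exact value names differ between chart versions; newer ones nest them under auth.*):
helm upgrade my-mongo bitnami/mongodb-sharded --set shards=3 --set mongodbRootPassword=<saved-root-password> --set replicaSetKey=<saved-replica-set-key>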
After you save the correct password and replica set key, search for the volumes that belong to the shards with the wrong replica set key and delete them. In my case, I only deleted the volumes belonging to the 3rd shard that I added; since counting starts from 0, I’m looking for shard2 in the name.
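For example, listing the candidates first and deleting only the ones that belong to shard2 (the PVC name below is a placeholder):
kubectl get pvc | grep shard2
kubectl delete pvc <pvc-name-containing-shard2>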
Sharding a MongoDB database can seem intimidating at first, but with some practice in advance, you can do it! If sharding doesn’t work out for you, you can Convert Sharded Cluster to Replica Set, but be prepared with some backups.
Thanks for reading 📚 and happy hacking! 🔩🔨
Base64 PowerShell Function
function global:Convert-From-Base64 {
    [CmdletBinding()]
    [Alias('base64')]
    param (
        [parameter(ValueFromPipeline, Mandatory = $True, Position = 0)]
        [string] $EncodedText
    )
    process {
        [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($EncodedText))
    }
}