How to create your first Kafka Connect runtime in OpenShift

Sean Boyd


This post contains the steps to build your first IBM Event Streams Kafka Connect runtime in OpenShift.

It creates an IBM MQ Source connector runtime in a separate namespace from the Kafka cluster namespace.

This requires more steps and configuration than running the connector in the same namespace as Kafka.

Estimated reading time: 7 minutes

Sample files

A sample YAML file can be downloaded from here.

My environment

For this post, I used a RHEL server to build the Kafka Connect runtime.

Log into the server and create the following directory structure.

mkdir -p ~/kafka/kc

Prerequisites

Ensure the following exists before continuing:

  • The Kafka Connect MQ Source connector image has been built and pushed to the OpenShift registry. If you use an enterprise registry, ensure it has been pushed there.

Log into OpenShift

Log into OpenShift.

oc login -u sean --server=https://api.OPENSHIFT_DNS_NAME:6443

Select your project.

For this post, the project is a separate project from the Kafka project.

oc project YOUR_PROJECT_NAME

Create the secrets

Create two secrets to store:

  • The user credential used to access the Kafka topic.
  • The TLS CA cert used to establish a TLS connection to the Kafka cluster.

This post stores the secrets in OpenShift.

Alternatively, store the secrets in a vault such as HashiCorp Vault. Referencing external secrets is out of scope for this post.
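Later in this post, the Kafka Connect runtime references these two secrets. As a minimal sketch, assuming the secret names used later in this post, the relevant fields of the KafkaConnect spec (following the Strimzi/Event Streams schema) would look like this:

```yaml
# Excerpt from a KafkaConnect spec (secret names are those created below):
spec:
  tls:
    trustedCertificates:
      - secretName: your-es-name-cluster-ca-cert   # secret holding the cluster CA
        certificate: ca.crt                        # key inside that secret
  authentication:
    type: tls
    certificateAndKey:
      secretName: kafkaclientuser-250326           # secret holding the user credential
      certificate: user.crt
      key: user.key
```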

ca.crt used for the TLS connection

The ca.crt file is provided by the Kafka admin.

The Kafka admin may instead send you a PKCS12 file (e.g. ca.p12). In this case, open the PKCS12 file and export the root cert, naming the exported file ca.crt.
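One way to do the export is with openssl. The sketch below first fabricates a sample ca.p12 purely for illustration (the file names and the password "changeit" are assumptions; substitute the values your Kafka admin gave you), then extracts the certificate as ca.crt:

```shell
# Illustration only: fabricate a ca.p12 like the one a Kafka admin might send.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem
openssl pkcs12 -export -nokeys -in /tmp/demo-ca.pem \
  -out /tmp/ca.p12 -passout pass:changeit

# The actual step: export the certificate from the PKCS12 file as ca.crt.
openssl pkcs12 -in /tmp/ca.p12 -nokeys -passin pass:changeit \
  | openssl x509 -out /tmp/ca.crt

# Sanity check: confirm ca.crt parses as an X.509 certificate.
openssl x509 -in /tmp/ca.crt -noout -subject
```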

FTP the ca.crt file to the “~/kafka/kc” directory.

Change into the appropriate directory.

[sean@localhost ~]$ cd ~/kafka/kc/
[sean@localhost kc]$ pwd
/home/sean/kafka/kc
[sean@localhost kc]$ ls -l
total 8
-rw-r--r--. 1 sean sean 4755 Apr 12 21:44 ca.crt

Create a secret in OpenShift to store the ca.crt file.

Change “your-es-name-cluster-ca-cert” to your preferred secret name.

In the command below, “your-es-name” is the IBM Event Streams cluster name, and “cluster-ca-cert” is a suffix to highlight that this is the cluster CA cert.

This example omits a YYMMDD suffix in the secret name; you can add a date suffix for the same reasons given for the user credentials below.

oc create secret generic your-es-name-cluster-ca-cert --from-file=ca.crt
[sean@localhost kc]$ oc create secret generic your-es-name-cluster-ca-cert --from-file=ca.crt
secret/your-es-name-cluster-ca-cert created

Verify the created secret.

oc get secret your-es-name-cluster-ca-cert
[sean@localhost kc]$ oc get secret your-es-name-cluster-ca-cert
NAME                           TYPE     DATA   AGE
your-es-name-cluster-ca-cert   Opaque   1      63s

Issue the following command to view more details.

oc describe secret your-es-name-cluster-ca-cert
[sean@localhost kc]$ oc describe secret your-es-name-cluster-ca-cert
Name:         your-es-name-cluster-ca-cert
Namespace:    YOUR_PROJECT_NAME
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
ca.crt:  1850 bytes

User credentials to access the Kafka topic

The user cert and associated keys are provided by the Kafka admin.

Check the YAML file contents

Ensure the details in the YAML file are similar to the following. If the contents differ, the Kafka admin may have provided the SCRAM credentials instead. When connecting from a separate OpenShift namespace to Kafka running in OpenShift, you need the TLS user keys, not the SCRAM credentials.

kind: Secret
apiVersion: v1
metadata:
  name: kafkaclientuser
  namespace: your-es-name
  :
  labels:
    :
  ownerReferences:
    - apiVersion: eventstreams.ibm.com/v1beta2
      kind: KafkaUser
      name: kafkaclientuser
      :
  managedFields:
    :
data:
  ca.crt: XXX
  user.crt: XXX
  user.key: XXX
  user.p12: XXX
  user.password: XXX
type: Opaque

Modify the YAML file contents

The name “kafkaclientuser” in the YAML file is set automatically when the Kafka admin provides the credentials.

You can keep this name. However, I prefer to add a YYMMDD suffix to cater for key renewal: it gives me the flexibility to add and verify the new secret before activating it.

For this post, rename “kafkaclientuser” to “kafkaclientuser-250326”. You need to change the value in four places in the original file.

Remove the following from the YAML file:

namespace: your-es-name
uid: c434356d-1d6e-1234-8181-4356754b21bc
resourceVersion: '123456789'

Also delete all contents under the following sections:

ownerReferences:
:
managedFields:
:

The namespace (“namespace: your-es-name”) in the original file is the namespace where Kafka is running.

If you leave “namespace: your-es-name” in the file, oc apply will target that namespace. As this example runs the connector in a different namespace, you’ll get an authorization error if you don’t remove the line.
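After these edits, the secret YAML might look like the following sketch. The XXX placeholders stand for the base64 values carried over from the original file, and the label shown is illustrative:

```yaml
# secret-kafkaclientuser.yaml after the edits described above.
kind: Secret
apiVersion: v1
metadata:
  name: kafkaclientuser-250326
  labels:
    eventstreams.ibm.com/cluster: your-es-name
data:
  ca.crt: XXX
  user.crt: XXX
  user.key: XXX
  user.p12: XXX
  user.password: XXX
type: Opaque
```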

Create the secret

FTP the “secret-kafkaclientuser.yaml” file to the “~/kafka/kc” directory.

Change into the appropriate directory.

[sean@localhost ~]$ cd ~/kafka/kc/
[sean@localhost kc]$ pwd
/home/sean/kafka/kc
[sean@localhost kc]$ ls -l
total 12
-rw-r--r--. 1 sean sean 4755 Apr 12 21:44 ca.crt
-rw-r--r--. 1 sean sean  357 Apr 12 22:44 secret-kafkaclientuser.yaml

Create the secret to store the user credentials.

oc apply -f secret-kafkaclientuser.yaml
[sean@localhost kc]$ oc apply -f secret-kafkaclientuser.yaml
secret/kafkaclientuser-250326 created

Verify the created secret.

oc get secret kafkaclientuser-250326
[sean@localhost kc]$ oc get secret kafkaclientuser-250326
NAME                    TYPE    DATA  AGE
kafkaclientuser-250326  Opaque  0     64s

To see more details, issue the following command.

oc describe secret kafkaclientuser-250326
[sean@localhost kc]$ oc describe secret kafkaclientuser-250326
Name:        kafkaclientuser-250326
Namespace:   YOUR_PROJECT_NAME
Labels:      app.kubernetes.io/instance=kafkaclientuser-250326
             app.kubernetes.io/managed-by=strimzi-user-operator
             app.kubernetes.io/name=strimzi-user-operator
             app.kubernetes.io/part-of=eventstreams-kafkaclientuser-250326
             eventstreams.ibm.com/cluster=your-es-name
             eventstreams.ibm.com/kind=KafkaUser
Annotations: <none>

Type: Opaque

Data
====

Deploy the Kafka Connect runtime

Now we are ready to deploy the Kafka Connect runtime.

The file “kc.yaml” contains the Kafka Connect custom resource definition. You can download a sample from here.

Refer to the comments in the file for any changes required.
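The sample kc.yaml is not reproduced in full here, but a minimal KafkaConnect resource along these lines would tie together the image and secrets created earlier. The bootstrap address and topic names are illustrative placeholders; adjust them to your cluster:

```yaml
apiVersion: eventstreams.ibm.com/v1beta2
kind: KafkaConnect
metadata:
  name: my-kafka-connect-name-01
  annotations:
    eventstreams.ibm.com/use-connector-resources: "true"  # manage connectors via KafkaConnector resources
spec:
  replicas: 1
  bootstrapServers: your-es-name-kafka-bootstrap.your-es-name.svc:9093  # placeholder address
  image: registry.apps.OPENSHIFT_DNS_NAME/kc-mq-source-2.3.0:1.0       # the image built in the prerequisites
  tls:
    trustedCertificates:
      - secretName: your-es-name-cluster-ca-cert
        certificate: ca.crt
  authentication:
    type: tls
    certificateAndKey:
      secretName: kafkaclientuser-250326
      certificate: user.crt
      key: user.key
  config:
    group.id: my-kafka-connect-name-01
    config.storage.topic: my-kafka-connect-name-01-configs
    offset.storage.topic: my-kafka-connect-name-01-offsets
    status.storage.topic: my-kafka-connect-name-01-status
```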

FTP the “kc.yaml” file to the “~/kafka/kc” directory.

Change into the appropriate directory.

[sean@localhost ~]$ cd ~/kafka/kc/
[sean@localhost kc]$ pwd
/home/sean/kafka/kc
[sean@localhost kc]$ ls -l
total 20
-rw-r--r--. 1 sean sean 4755 Apr 12 21:44 ca.crt
-rw-r--r--. 1 sean sean 4510 Apr 12 21:24 kc.yaml
-rw-r--r--. 1 sean sean  357 Apr 12 22:44 secret-kafkaclientuser.yaml

Deploy and start the KafkaConnect runtime.

oc apply -f kc.yaml
[sean@localhost kc]$ oc apply -f kc.yaml
kafkaconnect.eventstreams.ibm.com/my-kafka-connect-name-01 created

Monitor the creation activity as follows.

oc get events --sort-by "lastTimestamp"
[sean@localhost kc]$  oc get events --sort-by "lastTimestamp"
LAST SEEN   TYPE    REASON          OBJECT                                                MESSAGE
29s         Normal  Scheduled       pod/my-kafka-connect-name-01-connect-0                Successfully
5m30s       Normal  NoPods          poddisruptionbudget/my-kafka-connect-name-01-connect  No matching pods found
80s         Normal  Killing         pod/kafka-connect-mq-01-connect-0                     Stopping container
30s         Normal  NoPods          poddisruptionbudget/my-kafka-connect-name-01-connect  No matching pods found
27s         Normal  AddedInterface  pod/my-kafka-connect-name-01-connect-0                Add eth0 [10.1.1.1/24] from openshift-sdn
27s         Normal  Pulled          pod/my-kafka-connect-name-01-connect-0                Container image "registry.apps.OPENSHIFT_DNS_NAME/kc-mq-source-2.3.0:1.0" already present on machine
27s         Normal  Created         pod/my-kafka-connect-name-01-connect-0                Created container
27s         Normal  Started         pod/my-kafka-connect-name-01-connect-0                Started container

If you don’t see any errors in the events, you can monitor the pod creation as follows.

watch oc get po

Wait until the pod is running.

[sean@localhost kc]$ oc get po
NAME                                    READY  STATUS   RESTARTS  AGE
pod/my-kafka-connect-name-01-connect-0  1/1    Running  0         2m7s

Last updated: 12 April 2025
