Easily connect from your Kafka connector to MQ using TLS


This post guides you through the steps to easily connect from your Kafka connector to an IBM MQ queue manager using a TLS-secured connection.

The post is primarily for a first-time user like me. However, an experienced user can still use the post as a reference.

You’d think this would be a simple procedure, and it should be. But I faced numerous challenges because the online documentation explained what to do but not how to do it.

As it turns out, once you know the steps, the process is simple.

The hope is that by providing these detailed steps, you can avoid the difficulties I faced.

Finally, I’ve included comments, thoughts and my experiences completing this activity for the first time.


Overview of the steps required

Complete the following steps to connect to an IBM MQ queue manager using a TLS-secured connection:

  • Enable TLS on the IBM MQ client channel.
  • Obtain a JKS keystore from your IBM MQ admin.
  • Create an OpenShift secret to store the keystore and associated password.
  • Update the Kafka Connect YAML file to mount the secret as a volume.
  • Update the Kafka connector YAML file to configure the IBM MQ connection to use TLS.

Prerequisites

Ensure the following prerequisites have been met before proceeding:

  • You have access to an IBM MQ queue manager. This post only provides the steps to create the IBM MQ channel, queues and associated security, not the queue manager itself.
  • You have access to a namespace with the IBM Event Streams operator installed.
  • You already have a Kafka Connect runtime deployed without using an IBM MQ TLS connection. This post primarily covers the changes required to enable TLS on the IBM MQ client connection.
  • You already have a Kafka connector deployed and running with a non-TLS IBM MQ connection, and this connector is already publishing to the Kafka topic. This post primarily covers the changes required to enable TLS on the IBM MQ client connection.
  • You have access to a server where the OpenShift “oc” CLI is installed.

Access your server

Log into your UNIX server.

The server needs the OpenShift “oc” CLI installed.

If you are using Windows, you’ll need to adjust the commands accordingly.

Prepare your queue manager

If you have already completed the IBM MQ configurations, skip this section.

If not, refer to the “Setting up the queue manager” link, or just follow the steps below.

You can use IBM MQ Explorer, RUNMQSC or any tool you use to support IBM MQ. This post provides the RUNMQSC commands.
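
For example, to open a RUNMQSC session on the queue manager host (QM1 is a placeholder; substitute your queue manager name), then type the commands from the sections below and END to exit:

runmqsc QM1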

Create the IBM MQ channel

For easy validation testing, you can define a channel without the complexities of TLS and later switch to TLS. I followed this process.

The benefit of this approach is that you can validate the firewalls are open, the IBM MQ channels, queues and security are configured correctly, and the Kafka topic exists and is accessible.

Once all environmental settings are validated, you can switch to a TLS-enabled channel.

Define a non-TLS enabled channel

Issue the following command to define a non-TLS enabled channel.

Considering this post is about enabling TLS, you can skip this step if you don’t plan to validate without TLS first.

DEFINE CHANNEL('KAFKA.NO.SSL') CHLTYPE(SVRCONN) DESCR('Stream from MQ to Kafka') HBINT(30) MAXINST(20) MAXINSTC(20) MAXMSGL(4194304) MCAUSER('noaccess') SHARECNV(10) SSLCAUTH(REQUIRED) SSLCIPH('') SSLPEER('')

Define a TLS enabled channel

Issue the following command to define a TLS enabled channel.

Change the SSLCIPH and SSLPEER values as per your company policy. I usually set SSLPEER to the DN of the issued client SSL certificate, but without the serial number.

DEFINE CHANNEL('KAFKA.SSL') CHLTYPE(SVRCONN) DESCR('Stream from MQ to Kafka') HBINT(30) MAXINST(20) MAXINSTC(20) MAXMSGL(4194304) MCAUSER('noaccess') SHARECNV(10) SSLCAUTH(REQUIRED) SSLCIPH(TLS_AES_128_GCM_SHA256) SSLPEER("CN=KAFKACLIENT,OU=KAFKA")

Channel settings

For exactly once messaging, it is recommended to set HBINT to 30.

While not relevant for IBM Event Streams Kafka connectors, I recommend you set the following to protect your IBM MQ queue manager from rogue or misconfigured IBM MQ client apps consuming too many channel instances:

  • Set MAXINST to limit the number of client channels that can be created using this channel name.
  • Set MAXINSTC to limit the number of client channels that can be created with this channel name from a single client.

The MCAUSER on the channel has been set to prevent access to the queue manager by default.
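
To sanity-check the channel after defining it, you can display the attributes discussed above. A minimal check, shown here for the TLS channel:

DISPLAY CHANNEL('KAFKA.SSL') MCAUSER HBINT MAXINST MAXINSTC SSLCIPH SSLPEER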

Channel security

The below is a modified version of the “by using local operating system to authenticate” page.

The following is not a recommendation for how you should configure your environment. Always configure your channel auth records according to your company policy.

The MCAUSER should be set to a low-privileged user with the minimal access required to the queue manager.

The ADDRESS has been added to further limit access to the queue manager. You can optionally set this.

The CLNTUSER is set to limit the users who can connect.

In this example, a username and password have not been enforced. The URL above contains the details to configure a username and password. Set this according to your company policy.

Settings for the non-TLS enabled channel.

SET CHLAUTH('KAFKA.NO.SSL') TYPE(ADDRESSMAP) ADDRESS(*) MCAUSER('noaccess') DESCR('Default block')
SET CHLAUTH('KAFKA.NO.SSL') TYPE(BLOCKUSER) USERLIST('noaccess') DESCR('Default block')
SET CHLAUTH('KAFKA.NO.SSL') TYPE(USERMAP) ADDRESS('10.1.1.1-10') CLNTUSER('THE_CLIENT_USER') MCAUSER('mqkafka') DESCR('Allow access')

Settings for the TLS-enabled channel. The SSL configuration itself is set on the channel definition, so the rules are the same.

SET CHLAUTH('KAFKA.SSL') TYPE(ADDRESSMAP) ADDRESS(*) MCAUSER('noaccess') DESCR('Default block')
SET CHLAUTH('KAFKA.SSL') TYPE(BLOCKUSER) USERLIST('noaccess') DESCR('Default block')
SET CHLAUTH('KAFKA.SSL') TYPE(USERMAP) ADDRESS('10.1.1.1-10') CLNTUSER('THE_CLIENT_USER') MCAUSER('mqkafka') DESCR('Allow access')
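
You can list the rules you created, or test which rule a specific connection would match with MATCH(RUNCHECK); the IP address and client user below are placeholders for your own values:

DISPLAY CHLAUTH('KAFKA.SSL')
DISPLAY CHLAUTH('KAFKA.SSL') MATCH(RUNCHECK) ADDRESS('10.1.1.5') CLNTUSER('THE_CLIENT_USER')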

Define the queue(s)

If you are not using exactly once messaging, you can set the queue’s default message persistence to NO.

DEFINE QLOCAL('TO.KAFKA') DEFPSIST(NO)

If you are using exactly once messaging, it is recommended you set the queue’s default message persistence to YES.

DEFINE QLOCAL('TO.KAFKA.EXACTLY.ONCE') DEFPSIST(YES)
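
You can verify the queue and its default persistence setting with:

DISPLAY QLOCAL('TO.KAFKA.EXACTLY.ONCE') DEFPSIST MAXDEPTH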

Queue manager security

Allow access to the queue manager.

SET AUTHREC OBJTYPE(QMGR) PRINCIPAL('mqkafka') AUTHADD(CONNECT,INQ)

Allow access to the queue(s).

The below example uses the exactly once queue name listed above.

SET AUTHREC PROFILE('TO.KAFKA.EXACTLY.ONCE') OBJTYPE(QUEUE) PRINCIPAL('mqkafka') AUTHADD(ALLMQI)
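
To review the authority records you just granted:

DISPLAY AUTHREC PROFILE('TO.KAFKA.EXACTLY.ONCE') OBJTYPE(QUEUE) PRINCIPAL('mqkafka')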

Accessing the SSL keystore

The MQ SSL keystore and password are accessed from within a Kafka connector. The connector refers to the file mounted in the Kafka Connect runtime.

The IBM Event Streams Kafka Connect deployed runtime contains the configuration to mount and access the keystore and password as an OpenShift secret.

The IBM Event Streams Kafka connector contains the IBM MQ SSL configurations to access the mounted keystore plus the IBM MQ channel TLS settings.

Using a secret allows you to access the SSL keystore without baking any sensitive files into the Docker image.

UNIX configuration

To follow this post, you need to create a directory in your home directory.

mkdir ~/kafka

Change to the kafka directory.

cd ~/kafka

Log into OpenShift

Log into OpenShift.

oc login -u sean --server=https://api.OPENSHIFT_DNS_NAME:6443

Select your project.

oc project YOUR_PROJECT_NAME

Create an OpenShift secret

Create an OpenShift secret to store the MQ SSL keystore and password.

FTP the IBM MQ client keystore

FTP your IBM MQ client keystore to the ‘~/kafka’ directory.

Verify the file.

cd ~/kafka
pwd
ls -l
[sean@localhost ~]$ cd ~/kafka

[sean@localhost kafka]$ pwd
/home/sean/kafka

[sean@localhost kafka]$ ls -l
total 12
-rw-r--r--. 1 sean sean 4755 Jun 17  2023 kafkaclient.jks
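
Before creating the secret, it’s worth confirming the keystore opens with the password you were given. Assuming a JRE (and therefore keytool) is installed on the server, the following prompts for the password and lists the stored certificates:

keytool -list -keystore kafkaclient.jks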

Convert the keystore to an OpenShift secret

Use the following command to create the secret.

Before you issue the command directly, refer to the “recommendation” below.

oc create secret generic mq-client-keystore-20250405 --from-file=client.jks=kafkaclient.jks --from-literal=password=XXX

Recommendation

If you issue the “oc create secret” command directly, it will be saved in the UNIX command history. This includes any cleartext password set in the command.

I rarely issue commands directly where sensitive data is part of the command.

The recommendation is to put the command in a file, execute the file, then delete the file.

For example, using the “vi” editor:

vi secret-jks.sh

Add the command to the file:

oc create secret generic mq-client-keystore-20250405 --from-file=client.jks=kafkaclient.jks --from-literal=password=XXX

Save the file using “:wq”.

Execute the file:

sh secret-jks.sh
[sean@localhost kafka]$ sh secret-jks.sh
secret/mq-client-keystore-20250405 created

Delete the file. I always seem to forget to do this.

rm secret-jks.sh

When you display the history, you’ll notice that the ‘oc’ command is not part of the history.

history
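
As an alternative to the temporary file, you can read the password into a shell variable first; this also keeps it out of the history. A minimal sketch, assuming a bash shell:

read -s -p "Keystore password: " KS_PASS && echo
oc create secret generic mq-client-keystore-20250405 \
  --from-file=client.jks=kafkaclient.jks \
  --from-literal=password="$KS_PASS"
unset KS_PASS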

Command explanations

The following sections describe the command parameters.

mq-client-keystore-20250405

You can name this secret however you want.

In my example, 20250405 is yyyymmdd.

You don’t have to add this suffix – this is my standard.

I always add a date to the secret (or files) to assist in the renewal process. I prefer to create a new secret ahead of time, verify all is ok, and then amend the Kafka Connect deployment to use the new secret in a controlled fashion. This also makes the rollback simpler should there be any problems. Simply point back to the previous secret.

--from-file=client.jks=kafkaclient.jks

client.jks is the file name you’ll see in the “oc describe secret” command. This is the name you’ll reference in the Kafka connector YAML file.

kafkaclient.jks is the actual filename provided by your MQ admin.

These names can both be the same. I use different names here to show that they can be named differently.

In reality, the MQ admin may provide a keystore with a filename matching the username that the keystore was created for. These names may change across environments. For example, kafkadev.jks for DEV and kafkaprd.jks for PROD. For consistency in my Kafka connector YAML files, I prefer to use the same name and not adjust per environment. In my case I prefer to use the name client.jks irrespective of what the underlying username is.

--from-literal=password=XXX

Change XXX to your password.

Validate the secret

Issue the following commands to validate the created secret.

oc get secret mq-client-keystore-20250405
[sean@localhost kafka]$ oc get secret mq-client-keystore-20250405
NAME                          TYPE     DATA   AGE
mq-client-keystore-20250405   Opaque   2      2m53s

To see more details, issue the following command.

oc describe secret mq-client-keystore-20250405
[sean@localhost kafka]$ oc describe secret mq-client-keystore-20250405
Name:         mq-client-keystore-20250405
Namespace:    YOUR_PROJECT_NAME
Labels:       <none>
Annotations:  <none>

Type: Opaque

Data
====

client.jks:  5966 bytes
password:    12 bytes
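
If you want to go one step further, you can decode the stored values and compare them against the originals. Note that the first command prints the cleartext password to your terminal:

oc get secret mq-client-keystore-20250405 -o jsonpath='{.data.password}' | base64 -d
oc extract secret/mq-client-keystore-20250405 --keys=client.jks --to=/tmp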

Deploy the Kafka Connect runtime

To enable SSL on the MQ client channel, you need to mount the keystore secret as a volume in the Kafka Connect runtime.

I have created a sample YAML file containing all the required configurations.

My challenges

Being new to IBM Event Streams, combined with the online documentation being a little light (it told me what to do but not how to do it), this took me considerable time and effort.

It wasn’t until I discovered the following GitHub account that I was able to complete this task.

I even resorted to asking an AI chatbot questions, but without much luck.

I’m not sure whether this was due to me having no idea what I was doing, although that could be the case. The primary problem was that I was trying to add the volume configuration in the wrong section.

All search and AI results stated I needed to add the volume configuration in the YAML “containers” section of “spec”. While this works when the YAML “kind” is “Deployment”, I could not get it to work when the “kind” is “KafkaConnect”. I could not find any online doc that detailed how to complete this configuration.

After locating the GitHub account, I finally found the correct configuration. That was to add two configurations in the Kafka Connect YAML file:

  • The volume definition needs to be added under the “externalConfiguration” section.
  • A “config.providers” setting needs to be added to allow the connector to read the config from a secret.

After many hours of frustration, once I discovered the GitHub account I had everything working within half an hour.

Mount the secret as a volume

To configure the Kafka Connect YAML file to mount the keystore as a volume, add the following two configurations.

Allow the connectors to read the configuration as a secret.

config.providers: file
config.providers.file.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider

Mount the secret as a volume.

externalConfiguration:
  volumes:
    - name: mq-client-keystore
      secret:
        secretName: mq-client-keystore-yyyymmdd

To see where to add these settings, refer to the following skeleton.

apiVersion: eventstreams.ibm.com/v1beta2
kind: KafkaConnect
metadata:
  :
spec:
  authentication:
    :
  bootstrapServers: my-kafka-cluster-internal-bootstrap-url:9093
  config:
    :
    # MQ SSL
    # Refer: https://github.com/dalelane/mq-kafka-connect-tutorial/blob/master/08-setup-kafka-connect/resources/connect-cluster.yaml
    # Allow the connector to read the config from secrets
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider
    # MQ SSL
  image: my-image-repo/kc-mq-source-2.3.0:1.0
  replicas: 1
  resources:
    :
  template:
    pod:
      :
  tls:
    :
  # MQ JKS file
  # Refer: https://github.com/dalelane/mq-kafka-connect-tutorial/blob/master/08-setup-kafka-connect/resources/connect-cluster.yaml
  externalConfiguration:
    volumes:
      - name: mq-client-keystore
        secret:
          secretName: mq-client-keystore-yyyymmdd
  # MQ JKS file

For a full working YAML file, you can refer to the sample YAML file.

If you don’t have access to GitHub, you can access the sample from here.

Deploy the file

Deploy the Kafka Connect runtime containing the MQ SSL configurations.

oc apply -f kc-cr.yaml
[sean@localhost kafka]$ oc apply -f kc-cr.yaml
kafkaconnect.eventstreams.ibm.com/my-kafka-connect-name-01 created

Wait for the pod to start.

Assuming no problems, this can take a minute or so to complete.

To monitor the creation process you can view the events.

oc get events --sort-by=.lastTimestamp
[sean@localhost kafka]$ oc get events --sort-by=.lastTimestamp
LAST SEEN   TYPE    REASON          OBJECT                                                MESSAGE
29s         Normal  Scheduled       pod/my-kafka-connect-name-01-connect-0                Successfully
5m30s       Normal  NoPods          poddisruptionbudget/my-kafka-connect-name-01-connect  No matching pods found
80s         Normal  Killing         pod/kafka-connect-mq-01-connect-0                     Stopping container
30s         Normal  NoPods          poddisruptionbudget/my-kafka-connect-name-01-connect  No matching pods found
27s         Normal  AddedInterface  pod/my-kafka-connect-name-01-connect-0                Add eth0 [10.1.1.1/24] from openshift-sdn
27s         Normal  Pulled          pod/my-kafka-connect-name-01-connect-0                Container image "YOUR_REGISTRY/kc-mq-source-2.3.0:1.0" already present on machine
27s         Normal  Created         pod/my-kafka-connect-name-01-connect-0                Created container
27s         Normal  Started         pod/my-kafka-connect-name-01-connect-0                Started container

I usually wait until I can see the image downloaded, then continue to monitor the process by watching the pod status.

The below uses the ‘watch’ command to keep monitoring the status automatically.

watch oc get po

Wait until the pod is running.

[sean@localhost kafka]$ oc get po
NAME                    READY  STATUS   RESTARTS  AGE
kc-mq-src-connect-0     1/1    Running  0         2m7s
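
If you prefer to block until the pod reports Ready rather than watching, something like the following works (adjust the pod name to match yours):

oc wait --for=condition=Ready pod/kc-mq-src-connect-0 --timeout=300s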

Confirm the secret has been mounted as a volume

Connect to the running pod to verify the MQ SSL keystore has been mounted successfully.

SYNTAX:
oc rsh POD_NAME
oc rsh kc-mq-src-connect-0

This opens a UNIX shell to the running pod.

Display the mounted file systems.

df -h
sh-4.4$ df -h
Filesystem   Size  Used    Avail  Use%  Mounted on               
overlay      300G  235G    65G    79%   /            
tmpfs        64M   0       64M    0%    /dev            
tmpfs        64G   0       64G    0%    /sys/fs/cgroup            
shm          64M   0       64M    0%    /dev/shm            
tmpfs        13G   80M     13G    1%    /etc/passwd            
tmpfs        30M   16K     30M    1%    /tmp            
/dev/sda4    300G  235G    65G    79%   /etc/hosts            
tmpfs        2.0G  4.0K    2.0G   1%    /opt/kafka/connect-certs/ca-cert-for-tls-to-kafka             
tmpfs        2.0G  20K     2.0G   1%    /opt/kafka/connect-certs/kafka-user-keys             
tmpfs        2.0G  12K     2.0G   1%    /opt/kafka/external-configuration/mq-client-keystore             
tmpfs        2.0G  24K     2.0G   1%    /run/secrets/kubernetes.io/serviceaccount             
tmpfs        64G   0       64G    0%    /proc/acpi            
tmpfs        64G   0       64G    0%    /proc/scsi            
tmpfs        64G   0       64G    0%    /sys/firmware            

There are three filesystems of interest.

/opt/kafka/connect-certs/ca-cert-for-tls-to-kafka

This filesystem contains the cert used to establish a TLS connection to the Kafka cluster.

/opt/kafka/connect-certs/kafka-user-keys

This filesystem contains the user (authorization) keys provided by the Kafka admin to connect to the Kafka cluster. This user should have been granted access to the required Kafka topics.

/opt/kafka/external-configuration/mq-client-keystore

This is the filesystem we are looking for.

It contains the MQ SSL keystore that is used when connecting as a TLS enabled MQ client connection to the queue manager.

Let’s check the stored keystore details.

cd /opt/kafka/external-configuration/mq-client-keystore
ls -l
sh-4.4$ cd /opt/kafka/external-configuration/mq-client-keystore
sh-4.4$ ls -l
total 0
lrwxrwxrwx. 1 root 1001410000 17 Apr  4 14:52 client.jks -> ..data/client.jks
lrwxrwxrwx. 1 root 1001410000 15 Apr  4 14:52 password -> ..data/password

As you can see, there are two files:

  • client.jks: The keystore file.
  • password: The password to open the keystore file.
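
Optionally, if the image includes a JRE (and therefore keytool), you can confirm the mounted password opens the mounted keystore from within the pod:

keytool -list -keystore client.jks -storepass "$(cat password)"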

We are done. Exit the UNIX shell.

exit

Create the Kafka MQ Source Connector

To configure the Kafka MQ Source Connector for SSL, the following changes are required to the YAML file.

apiVersion: eventstreams.ibm.com/v1beta2
kind: KafkaConnector
metadata:
:
spec:
  class: com.ibm.eventstreams.connect.mqsource.MQSourceConnector
  config:
    :
    # MQ SSL
    # Refer: https://github.com/dalelane/mq-kafka-connect-tutorial/blob/master/08-setup-kafka-connect/resources/connector.yaml
    mq.ssl.use.ibm.cipher.mappings: false
    mq.ssl.cipher.suite: TLS_AES_128_GCM_SHA256
    mq.ssl.truststore.location: /opt/kafka/external-configuration/mq-client-keystore/client.jks
    mq.ssl.truststore.password: ${file:/opt/kafka/external-configuration/mq-client-keystore:password}
    mq.ssl.keystore.location: /opt/kafka/external-configuration/mq-client-keystore/client.jks
    mq.ssl.keystore.password: ${file:/opt/kafka/external-configuration/mq-client-keystore:password}
    # MQ SSL
    :

A full sample file can be downloaded from here.

If you cannot access GitHub, you can download from here.

Issue the following command to deploy the connector.

oc apply -f kc-mq.yaml
[sean@localhost kafka]$ oc apply -f kc-mq.yaml
kafkaconnector.eventstreams.ibm.com/my-kafka-connector-name-01 created

To check the deployment status, you can log into the OpenShift console and view the pod logs.

Alternatively, you can use the OC CLI.

Get the pod name.

oc get pods
[sean@localhost kafka]$ oc get pods
NAME                                              READY   STATUS    RESTARTS   AGE
my-kafka-connect-name-01-connect-0                1/1     Running   0          31s

You can tail the logs as follows.

Check for any errors.

oc logs -f my-kafka-connect-name-01-connect-0
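
You can also check the connector’s status through its custom resource; the status conditions indicate whether the connector and its task started successfully:

oc get kafkaconnector my-kafka-connector-name-01
oc describe kafkaconnector my-kafka-connector-name-01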

Verification

Assuming no errors in the pod logs, use IBM MQ Explorer to confirm the open input count on the MQ queue is 1.

You should also see the SSL enabled channel running.
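
If you prefer RUNMQSC over IBM MQ Explorer, the equivalent checks are the open input count (IPPROCS) on the queue and the running channel instance:

DISPLAY QSTATUS('TO.KAFKA.EXACTLY.ONCE') TYPE(QUEUE) IPPROCS
DISPLAY CHSTATUS('KAFKA.SSL') STATUS CONNAME SSLPEER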

Problems

This section contains problems I faced.

Password verification failed

After I deployed the Kafka Connect runtime followed by my Kafka connector, I received a “password verification failed” error.

I double-checked the secret, including validating the password, and re-confirmed the secret was mounted as a volume using the “oc rsh” command. All looked ok, yet the problem persisted.

Eventually I realized I missed adding the following to the Kafka Connect deployment.

config.providers: file
config.providers.file.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider

This important configuration allows the connector to read the config from the mounted secret.

Java stack trace

The following Java stack trace shows the error message.

(org.apache.kafka.common.config.AbstractConfig) [task-thread-my-kafka-connector-name-01-0]
2025-04-06 13:23:11,030 ERROR [my-kafka-connector-name-01|task-0] Unexpected connect exception: (com.ibm.eventstreams.connect.mqsource.MQSourceTask) [task-thread-my-kafka-connector-name-01-0]
org.apache.kafka.connect.errors.ConnectException: Error reading keystore /opt/kafka/external-configuration/mq-client-keystore/client.jks
        at com.ibm.eventstreams.connect.mqsource.SSLContextBuilder.loadKeyStore(SSLContextBuilder.java:82)
        at com.ibm.eventstreams.connect.mqsource.SSLContextBuilder.buildSslContext(SSLContextBuilder.java:49)
        at com.ibm.eventstreams.connect.mqsource.JMSWorker.configure(JMSWorker.java:137)
        at com.ibm.eventstreams.connect.mqsource.MQSourceTask.start(MQSourceTask.java:177)
        at com.ibm.eventstreams.connect.mqsource.MQSourceTask.start(MQSourceTask.java:153)
        at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.initializeAndStart(AbstractWorkerSourceTask.java:280)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:202)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
        at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:77)
        at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:236)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:857)
Caused by: java.io.IOException: Keystore was tampered with, or password was incorrect
        at java.base/sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:813)
        at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:221)
        at java.base/java.security.KeyStore.load(KeyStore.java:1473)
        at com.ibm.eventstreams.connect.mqsource.SSLContextBuilder.loadKeyStore(SSLContextBuilder.java:76)
        ... 14 more
Caused by: java.security.UnrecoverableKeyException: Password verification failed
        at java.base/sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:811)
        ... 17 more
2025-04-06 13:23:11,030 INFO 10.1.1.1 - - [06/Apr/2025:13:23:10 +0000] "GET /connectors/my-kafka-connector-name-01/config HTTP/1.1" 200 1103 "-" "-" 39 (org.apache.kafka.connect.runtime.rest.RestServer) [qtp2141149190-36]
2025-04-06 13:23:11,032 ERROR [my-kafka-connector-name-01|task-0] WorkerSourceTask{id=my-kafka-connector-name-01-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask) [task-thread-mq-src-mq-push-notification-ocp
org.apache.kafka.connect.errors.ConnectException: Error reading keystore /opt/kafka/external-configuration/mq-client-keystore/client.jks
        at com.ibm.eventstreams.connect.mqsource.SSLContextBuilder.loadKeyStore(SSLContextBuilder.java:82)
        at com.ibm.eventstreams.connect.mqsource.SSLContextBuilder.buildSslContext(SSLContextBuilder.java:49)
        at com.ibm.eventstreams.connect.mqsource.JMSWorker.configure(JMSWorker.java:137)
        at com.ibm.eventstreams.connect.mqsource.MQSourceTask.start(MQSourceTask.java:177)
        at com.ibm.eventstreams.connect.mqsource.MQSourceTask.start(MQSourceTask.java:153)
        at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.initializeAndStart(AbstractWorkerSourceTask.java:280)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:202)


Last updated: 6 April 2025

