Quick Guide: Configure Apache Kafka SSL/TLS Encryption for Enhanced Security


In this quick guide, we will take you through the steps of configuring Apache Kafka SSL/TLS encryption for enhanced security. By default, Kafka uses PLAINTEXT, that is, all data is sent in clear text. By establishing a trusted communication channel between Kafka brokers and clients, SSL/TLS ensures the confidentiality and integrity of your data. Our step-by-step instructions simplify the setup process, including generating certificates and configuring Kafka.

Configure Apache Kafka SSL/TLS Encryption

Install and Set Up Kafka with the KRaft Algorithm

If you haven't already set up Kafka, check our guide below on how to install and set up Kafka without ZooKeeper, using the KRaft consensus algorithm.

Easy Steps: Install Apache Kafka on Debian 12

Generate SSL/TLS Certificates

To begin with, you need to generate TLS/SSL certificates for Kafka brokers and clients.

In this tutorial, we will be using our own self-signed SSL/TLS certificates. If possible, use certificates signed by a commercially trusted CA instead.

Generate CA Private Key

Run the following OpenSSL command to generate a private key for your CA:

mkdir /etc/ssl/kafka
openssl genpkey -algorithm RSA -out /etc/ssl/kafka/ca.key

The command generates an RSA private key and saves it in the file /etc/ssl/kafka/ca.key.
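You can optionally confirm that the generated key is valid before proceeding;

openssl rsa -in /etc/ssl/kafka/ca.key -check -noout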

Generate CA self-signed certificate

Once you have the private key, you can generate the CA self-signed certificate using the command below. When the command runs, you are prompted to provide information about your CA, such as the Common Name, organization, location, contact email, etc. The Common Name must be provided.

openssl req -x509 -new -key /etc/ssl/kafka/ca.key -days 3650 -out /etc/ssl/kafka/ca.crt

Sample output;


You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []:San Francisco
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Kifarunix-Demo Inc
Organizational Unit Name (eg, section) []:Infrastracture
Common Name (e.g. server FQDN or YOUR name) []:kafka.kifarunix-demo.com
Email Address []:

You can provide all of this information from the command line using the -subj option.

openssl req -x509 -new -key /etc/ssl/kafka/ca.key -days 3650 -out /etc/ssl/kafka/ca.crt \
-subj "/C=US/ST=California/L=San Francisco/O=Kifarunix-Demo Inc/CN=kafka.kifarunix-demo.com/[email protected]"

Note that it is not recommended to use a wildcard CN. Instead, use the SAN extension to define your other domains/IPs.
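You can quickly confirm the details of the CA certificate you just generated;

openssl x509 -in /etc/ssl/kafka/ca.crt -noout -subject -issuer -dates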

Generate Server Private Key and CSR

Next, generate the server private key and certificate signing request (CSR).

openssl req -new -newkey rsa:4096 -nodes -keyout /etc/ssl/kafka/server.key \
-out /etc/ssl/kafka/server.csr \
-subj "/C=US/ST=California/L=San Francisco/O=Kifarunix-Demo Inc/CN=kafka.kifarunix-demo.com/[email protected]"

Generate and Sign Server Certificate

Now, you need to generate the server certificate using the CSR, the CA cert and private key.

Note that since the openssl x509 command does not copy extensions such as Subject Alternative Names from the CSR to the signed certificate, you need to provide this information manually.

The SAN extension allows you to include additional subject names, such as domain names or IP addresses, in a single certificate, thus allowing one certificate to be valid for multiple entities or alternative names.

So, create a CNF file with your SAN extensions;

vim /etc/ssl/kafka/san.cnf

authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1=kifarunix-demo.com
DNS.2=*.kifarunix-demo.com

then generate and sign the server certificate;

openssl x509 -req -in /etc/ssl/kafka/server.csr -CA /etc/ssl/kafka/ca.crt \
-CAkey /etc/ssl/kafka/ca.key -CAcreateserial -out /etc/ssl/kafka/server.crt \
-days 3650 -extfile /etc/ssl/kafka/san.cnf

Sample output;

Certificate request self-signature ok
subject=C = US, ST = California, L = San Francisco, O = Kifarunix-Demo Inc, CN = kafka.kifarunix-demo.com, emailAddress = [email protected]
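You can confirm that the SAN entries were included in the signed certificate;

openssl x509 -in /etc/ssl/kafka/server.crt -noout -text | grep -A1 "Subject Alternative Name"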

Create Kafka Keystore

Now that we have the server certificate and key, we need to generate the Kafka keystore.

Convert Server Certificate to PKCS12

First, convert the server certificate and key into PKCS12 format. When prompted, provide an export password and keep it somewhere you can easily retrieve it; you will need it in subsequent steps.


openssl pkcs12 -export \
	-in /etc/ssl/kafka/server.crt \
	-inkey /etc/ssl/kafka/server.key \
	-name kafka-broker \
	-out /etc/ssl/kafka/kafka.p12
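If you are scripting this step, the export password can also be supplied non-interactively; a sketch using ChangeME as a placeholder password that you should replace with your own:

openssl pkcs12 -export \
	-in /etc/ssl/kafka/server.crt \
	-inkey /etc/ssl/kafka/server.key \
	-name kafka-broker \
	-passout pass:ChangeME \
	-out /etc/ssl/kafka/kafka.p12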

Create Kafka Java KeyStore (JKS)

Next, create the Kafka Java KeyStore (JKS) and import the certificate. You will be required to set the destination keystore password and provide the source keystore password.


keytool -importkeystore \
	-srckeystore /etc/ssl/kafka/kafka.p12 \
	-destkeystore /etc/ssl/kafka/kafka.keystore.jks \
	-srcstoretype pkcs12

Sample output;


Importing keystore /etc/ssl/kafka/kafka.p12 to /etc/ssl/kafka/kafka.keystore.jks...
Enter destination keystore password:  
Re-enter new password: 
Enter source keystore password:  
Entry for alias kafka-broker successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

Create Kafka TrustStore

Similarly, create the Kafka truststore containing your CA root certificate. This allows brokers and clients to verify that the certificates presented during the TLS handshake were signed by your CA.

keytool -keystore /etc/ssl/kafka/kafka.truststore.jks -alias CARoot -import -file /etc/ssl/kafka/ca.crt

When executed, you will be prompted to set the truststore password and asked whether to trust the certificate. And of course, trust it (yes)!

Also, save this password.


Enter keystore password:  
Re-enter new password: 
Owner: CN=kafka.kifarunix-demo.com, OU=Infrastracture, O=Kifarunix-Demo Inc, L=San Francisco, ST=California, C=US
Issuer: CN=kafka.kifarunix-demo.com, OU=Infrastracture, O=Kifarunix-Demo Inc, L=San Francisco, ST=California, C=US
Serial number: 3c91690b7b180a5be423280485b8ea05f3582a6
Valid from: Sun Jul 16 02:02:37 EDT 2023 until: Wed Jul 13 02:02:37 EDT 2033
Certificate fingerprints:
	 SHA1: ED:01:33:C4:32:41:26:A0:2D:24:BC:39:0B:DF:F6:28:A1:5B:F3:0D
	 SHA256: D6:B7:78:58:F3:F6:41:7D:6C:A2:3B:9E:55:D6:1C:13:EA:07:0C:4D:D3:9F:3E:C5:82:EB:03:38:A9:60:1A:78
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

Extensions: 

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: CE 9A E0 3F 0E F5 DF BF   38 F5 AE 5B 33 B9 31 E7  ...?....8..[3.1.
0010: 3C AD A0 13                                        <...
]
]

#2: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
  CA:true
  PathLen: no limit
]

#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: CE 9A E0 3F 0E F5 DF BF   38 F5 AE 5B 33 B9 31 E7  ...?....8..[3.1.
0010: 3C AD A0 13                                        <...
]
]

Trust this certificate? [no]:  yes

Confirm your keystore/truststore details;

keytool -list -v -keystore /etc/ssl/kafka/kafka.keystore.jks
keytool -list -v -keystore /etc/ssl/kafka/kafka.truststore.jks

Configure Apache Kafka SSL/TLS Encryption

It is now time to configure Apache Kafka SSL/TLS Encryption. This can be done by updating the server.properties configuration as follows.

Note that we are running Kafka with the KRaft consensus algorithm in our setup.

Open the Kafka server/broker configuration for updates;

vim /opt/kafka/config/kraft/server.properties

By default, Kafka is set to accept plaintext connections, as you can see under the Socket Server Settings section;


############################# Socket Server Settings #############################

# The address the socket server listens on.
# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
# If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
# with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092,CONTROLLER://:9093

# Name of listener used for communication between brokers.
inter.broker.listener.name=PLAINTEXT

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=PLAINTEXT://localhost:9092

# A comma-separated list of the names of the listeners used by the controller.
# If no explicit mapping set in `listener.security.protocol.map`, default will be using PLAINTEXT protocol
# This is required if running in KRaft mode.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

Thus, to enable SSL/TLS connections, we will update some of the configs here and add a few more SSL settings.

Note that controller.listener.names is required when running in KRaft mode. At the moment, we are just running a single-node Kafka cluster. If you have a multi-node cluster, ensure you configure the SSL/TLS settings on all nodes.

With the comment lines removed, this is how our Socket Server Settings look;


############################# Socket Server Settings #############################

listeners=SSL://kafka.kifarunix-demo.com:9092,CONTROLLER://kafka.kifarunix-demo.com:9093
inter.broker.listener.name=SSL
advertised.listeners=SSL://kafka.kifarunix-demo.com:9092
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:SSL,SSL:SSL


ssl.keystore.location=/etc/ssl/kafka/kafka.keystore.jks
ssl.keystore.password=ChangeME
ssl.key.password=ChangeME
ssl.truststore.location=/etc/ssl/kafka/kafka.truststore.jks
ssl.truststore.password=ChangeME
ssl.client.auth=required

Update the configuration according to your setup.

Note that the line ssl.client.auth=required enforces SSL/TLS client authentication. It specifies that clients connecting to the Kafka brokers must provide a valid client certificate for authentication.
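Since client authentication is enforced, each client needs a certificate signed by a CA the broker trusts. In this guide we simply reuse the broker keystore for testing, but if you would rather issue dedicated client credentials, here is a minimal sketch assuming the same CA and hypothetical client.* file names; adjust the subject and paths to your setup.

# Hypothetical client key, CSR and CA-signed certificate (illustration only)
openssl req -new -newkey rsa:4096 -nodes -keyout /etc/ssl/kafka/client.key \
	-out /etc/ssl/kafka/client.csr -subj "/CN=kafka-client.kifarunix-demo.com"
openssl x509 -req -in /etc/ssl/kafka/client.csr -CA /etc/ssl/kafka/ca.crt \
	-CAkey /etc/ssl/kafka/ca.key -CAcreateserial -out /etc/ssl/kafka/client.crt -days 3650
# Bundle the client key and certificate into a keystore the client can use
openssl pkcs12 -export -in /etc/ssl/kafka/client.crt -inkey /etc/ssl/kafka/client.key \
	-name kafka-client -out /etc/ssl/kafka/client.p12
keytool -importkeystore -srckeystore /etc/ssl/kafka/client.p12 \
	-destkeystore /etc/ssl/kafka/client.keystore.jks -srcstoretype pkcs12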

Also, if you are using KRaft, ensure you update the controller address;


############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller

# The node id associated with this instance's roles
node.id=1

# The connect string for the controller quorum
[email protected]:9093

Save and exit the configuration file.

Test and Validate Kafka SSL/TLS Connection

Restart Kafka Service

You can now restart Kafka service to apply the changes;

systemctl restart kafka

Check the logs;

journalctl -f -u kafka

Check the status;


● kafka.service - Apache Kafka
     Loaded: loaded (/etc/systemd/system/kafka.service; disabled; preset: enabled)
     Active: active (running) since Sun 2023-07-16 07:03:36 EDT; 1min 14s ago
   Main PID: 129624 (java)
      Tasks: 90 (limit: 4642)
     Memory: 715.7M
        CPU: 14.477s
     CGroup: /system.slice/kafka.service
             └─129624 java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true "-Xlog>

Jul 16 07:03:42 kafka.kifarunix-demo.com kafka-server-start.sh[129624]: [2023-07-16 07:03:42,877] INFO Awaiting socket connections on kafka.kifarunix-demo.com:9092. (kafka.network.DataPlaneAcceptor)
Jul 16 07:03:42 kafka.kifarunix-demo.com kafka-server-start.sh[129624]: [2023-07-16 07:03:42,881] INFO [BrokerServer id=1] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
Jul 16 07:03:42 kafka.kifarunix-demo.com kafka-server-start.sh[129624]: [2023-07-16 07:03:42,882] INFO [BrokerServer id=1] Finished waiting for all of the authorizer futures to be completed (kafka.server.Broker>
Jul 16 07:03:42 kafka.kifarunix-demo.com kafka-server-start.sh[129624]: [2023-07-16 07:03:42,882] INFO [BrokerServer id=1] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
Jul 16 07:03:42 kafka.kifarunix-demo.com kafka-server-start.sh[129624]: [2023-07-16 07:03:42,882] INFO [BrokerServer id=1] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.Brok>
Jul 16 07:03:42 kafka.kifarunix-demo.com kafka-server-start.sh[129624]: [2023-07-16 07:03:42,882] INFO [BrokerServer id=1] Transition from STARTING to STARTED (kafka.server.BrokerServer)
Jul 16 07:03:42 kafka.kifarunix-demo.com kafka-server-start.sh[129624]: [2023-07-16 07:03:42,883] INFO Kafka version: 3.5.0 (org.apache.kafka.common.utils.AppInfoParser)
Jul 16 07:03:42 kafka.kifarunix-demo.com kafka-server-start.sh[129624]: [2023-07-16 07:03:42,883] INFO Kafka commitId: c97b88d5db4de28d (org.apache.kafka.common.utils.AppInfoParser)
Jul 16 07:03:42 kafka.kifarunix-demo.com kafka-server-start.sh[129624]: [2023-07-16 07:03:42,883] INFO Kafka startTimeMs: 1689505422882 (org.apache.kafka.common.utils.AppInfoParser)
Jul 16 07:03:42 kafka.kifarunix-demo.com kafka-server-start.sh[129624]: [2023-07-16 07:03:42,884] INFO [KafkaRaftServer nodeId=1] Kafka Server started (kafka.server.KafkaRaftServer)
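You can also check the TLS handshake from the command line using openssl s_client. Since the broker requires client certificates, the session may ultimately be rejected, but you should still see the certificate the broker presents; a quick check, assuming the CA file generated earlier:

openssl s_client -connect kafka.kifarunix-demo.com:9092 -CAfile /etc/ssl/kafka/ca.crt </dev/null 2>/dev/null | \
	openssl x509 -noout -subject -issuer -dates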

Test Client Topic Creation Over SSL/TLS

To simulate how a Kafka client would send a stream of data into Kafka and create a topic over SSL, we will use the kafka-topics.sh command on the Kafka server.

Note that the Kafka server is now using SSL/TLS and requires client authentication via a certificate. Thus, create a properties file to define the client-broker connection properties;

vim ~/kafka-client-ssl-test.properties

Enter the following content and update it accordingly!


security.protocol=SSL
ssl.keystore.location=/etc/ssl/kafka/kafka.keystore.jks
ssl.keystore.password=ChangeME
ssl.truststore.location=/etc/ssl/kafka/kafka.truststore.jks
ssl.truststore.password=ChangeME

Note that if client authentication is not required on the broker, then you don't need the ssl.keystore.* settings.
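For example, if the broker were configured with ssl.client.auth=none or requested, a truststore-only client properties file like the sketch below (same placeholder password) would be enough;

security.protocol=SSL
ssl.truststore.location=/etc/ssl/kafka/kafka.truststore.jks
ssl.truststore.password=ChangeME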

Next, test the SSL/TLS connection to Kafka;


/opt/kafka/bin/kafka-topics.sh --create \
	--topic testssl-topic \
	--bootstrap-server kafka.kifarunix-demo.com:9092  \
	--command-config kafka-client-ssl-test.properties

If all goes well, you should see output like this;

Created topic testssl-topic.

List the topics;

/opt/kafka/bin/kafka-topics.sh --list \
	--bootstrap-server kafka.kifarunix-demo.com:9092  \
	--command-config kafka-client-ssl-test.properties
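Optionally, you can also produce and consume a few test messages over the SSL listener using the console clients that ship with Kafka, pointing them at the same client properties file;

/opt/kafka/bin/kafka-console-producer.sh --topic testssl-topic \
	--bootstrap-server kafka.kifarunix-demo.com:9092 \
	--producer.config kafka-client-ssl-test.properties

/opt/kafka/bin/kafka-console-consumer.sh --topic testssl-topic --from-beginning \
	--bootstrap-server kafka.kifarunix-demo.com:9092 \
	--consumer.config kafka-client-ssl-test.properties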

Exclude Internal Kafka Broker Connections from SSL Authentication

Now, what if you want to exclude connections made locally on the Kafka broker from SSL authentication, so you don't always have to provide a path to the SSL configuration file as we did above when listing the topics?

Edit the properties file and set up a PLAINTEXT listener for localhost on a specific port;

vim /opt/kafka/config/kraft/server.properties

See the configs added below;


listeners=SSL://kafka.kifarunix-demo.com:9092,CONTROLLER://kafka.kifarunix-demo.com:9093,PLAINTEXT://localhost:9094
...
advertised.listeners=SSL://kafka.kifarunix-demo.com:9092,PLAINTEXT://localhost:9094
...
listener.security.protocol.map=CONTROLLER:SSL,SSL:SSL,PLAINTEXT:PLAINTEXT

Restart Kafka;

systemctl restart kafka

Ensure the ports are listening;

ss -altnp | grep :90

LISTEN 0      50     [::ffff:192.168.57.32]:9092             *:*    users:(("java",pid=140837,fd=159))                                                                                                                                     
LISTEN 0      50     [::ffff:192.168.57.32]:9093             *:*    users:(("java",pid=140837,fd=131))                                                                                                                                     
LISTEN 0      50         [::ffff:127.0.0.1]:9094             *:*    users:(("java",pid=140837,fd=161)) 

You can then run Kafka commands internally, without the need for SSL authentication;

/opt/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9094

And that concludes our guide on how to configure Apache Kafka SSL/TLS encryption.

Further Reading

Read more on the Apache Kafka security configuration page.
