Apache Kafka is an open-source distributed event streaming platform that lets you publish, store, and process streams of records. It acts as a messaging system between senders and recipients. Because it is built on a distributed architecture, it provides high fault tolerance and scalability. It was originally developed at LinkedIn and is now a project of the Apache Software Foundation. Apache Kafka provides interfaces to read and write data to Kafka clusters and to import and export data to and from third-party systems.
In this post, we will explain how to install Apache Kafka on Rocky Linux 10.
Step 1 – Install Java
Apache Kafka is a Java-based application, so Java must be installed on your server. If not installed, you can install it using the following command:
dnf update -y
dnf install java-21-openjdk-devel -y
Once Java is installed, verify the Java installation using the following command:
java --version
You will get the Java version in the following output:
openjdk 21.0.8 2025-07-15 LTS
OpenJDK Runtime Environment (Red_Hat-21.0.8.0.9-1) (build 21.0.8+9-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-21.0.8.0.9-1) (build 21.0.8+9-LTS, mixed mode, sharing)
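Kafka 4.x brokers require Java 17 or newer, so it can be useful to check the major version in a script before proceeding. The snippet below is a small sketch that parses the `java --version` output; the variable name `major` is illustrative.

```shell
#!/usr/bin/env bash
# Extract the major Java version from the first line of `java --version`,
# e.g. "openjdk 21.0.8 2025-07-15 LTS" -> 21
major=$(java --version 2>/dev/null | head -n1 | awk '{print $2}' | cut -d. -f1)

# Kafka 4.x brokers need Java 17 or newer; warn if the runtime is older.
if [ "${major:-0}" -ge 17 ]; then
    echo "Java ${major} detected - OK for Kafka 4.x"
else
    echo "Java 17 or newer is required for Kafka 4.x brokers" >&2
fi
```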
Step 2 – Install Apache Kafka on Rocky Linux 10
First, download the latest version of Apache Kafka from the official Apache website using the wget command:
wget https://dlcdn.apache.org/kafka/4.1.0/kafka_2.13-4.1.0.tgz
Once the download is completed, extract the downloaded file using the following command:
tar -xvzf kafka_2.13-4.1.0.tgz
Once the downloaded file is extracted, move the extracted directory to /usr/local directory:
mv kafka_2.13-4.1.0 /usr/local/kafka
Once you are finished, you can proceed to the next step.
Step 3 – Configure Kafka
First, edit the Kafka server.properties file.
nano /usr/local/kafka/config/server.properties
Modify the following lines:
process.roles=broker,controller
node.id=1
controller.listener.names=CONTROLLER
listeners=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
inter.broker.listener.name=PLAINTEXT
controller.quorum.voters=1@localhost:9093
log.dirs=/usr/local/kafka/data
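If any of these settings are unfamiliar, the annotated fragment below summarizes what each one does in KRaft mode (the values match the configuration above):

```ini
# This node acts as both a broker (serves clients) and a controller
# (manages cluster metadata) - typical for a single-node KRaft setup.
process.roles=broker,controller
# Unique numeric id for this node within the cluster.
node.id=1
# Which listener the controller uses.
controller.listener.names=CONTROLLER
# Client traffic on port 9092, controller quorum traffic on port 9093.
listeners=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
# No encryption or authentication on either listener (fine for testing,
# not recommended for production).
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
inter.broker.listener.name=PLAINTEXT
# The voting controllers: node id 1, reachable at localhost:9093.
controller.quorum.voters=1@localhost:9093
# Where Kafka stores topic data on disk.
log.dirs=/usr/local/kafka/data
```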
Next, generate a random UUID for the cluster:
/usr/local/kafka/bin/kafka-storage.sh random-uuid
Output:
SSoviLO8RtmlnOyHEPOcMQ
Initialize the storage directory with that UUID.
/usr/local/kafka/bin/kafka-storage.sh format -t SSoviLO8RtmlnOyHEPOcMQ -c /usr/local/kafka/config/server.properties
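To confirm that the storage directory was formatted correctly, you can inspect it with the `info` subcommand of the same tool (assuming the paths used above):

```shell
# Print metadata about the formatted log directory, including the cluster id.
/usr/local/kafka/bin/kafka-storage.sh info -c /usr/local/kafka/config/server.properties
```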
Step 4 – Create Systemd Service File for Kafka
For a production environment, it is recommended to create a systemd service file so that Kafka runs in the background and starts automatically. Note that Kafka 4.x runs in KRaft mode and no longer requires ZooKeeper.
First, create a systemd service file for Kafka using the following command:
nano /etc/systemd/system/kafka.service
Add the following lines:
[Unit]
Description=Apache Kafka Server
Documentation=http://kafka.apache.org/documentation.html

[Service]
Type=simple
Environment="JAVA_HOME=/usr/lib/jvm/jre-21-openjdk"
ExecStart=/usr/bin/bash /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
ExecStop=/usr/bin/bash /usr/local/kafka/bin/kafka-server-stop.sh

[Install]
WantedBy=multi-user.target
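Optionally, you can tell systemd to restart Kafka automatically if it crashes. A minimal drop-in sketch (the file name restart.conf is arbitrary):

```ini
# /etc/systemd/system/kafka.service.d/restart.conf
[Service]
# Restart only on abnormal exits (crashes, signals), not on clean stops.
Restart=on-abnormal
RestartSec=5
```

After creating the drop-in, run systemctl daemon-reload so systemd picks it up.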
Save and close the file, then reload the systemd daemon with the following command:
systemctl daemon-reload
Next, start the Kafka service and enable it to start at system reboot:
systemctl start kafka
systemctl enable kafka
You can also check Kafka using the following command:
systemctl status kafka
You will get the following output:
● kafka.service - Apache Kafka Server
     Loaded: loaded (/etc/systemd/system/kafka.service; disabled; preset: disabled)
     Active: active (running) since Sun 2025-10-19 01:56:50 EDT; 3s ago
 Invocation: 0fcfe3b308ce4be3b9a8ae56328e5e9a
       Docs: http://kafka.apache.org/documentation.html
   Main PID: 15974 (java)
      Tasks: 48 (limit: 24809)
     Memory: 218.5M (peak: 218.9M)
        CPU: 4.866s
     CGroup: /system.slice/kafka.service
             └─15974 /usr/lib/jvm/jre-21-openjdk/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcur>
Step 5 – Create a Topic on Kafka
To test Apache Kafka, you will need to create at least one topic on the server.
Change the directory to Apache Kafka and create a test topic named topic1 with the following command:
cd /usr/local/kafka/
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic topic1
You can now verify your created topic using the following command:
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
You will get the following output:
topic1
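You can also inspect a topic's partition count, replication factor, and leader assignment with the --describe flag:

```shell
# Show partition, replication, and leader details for topic1.
bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic topic1
```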
Kafka's console tools come in two flavors: a producer, which publishes events to a topic, and a consumer, which reads them back and displays them on the screen.
First, run the console producer to publish events to a topic named event1 (the topic is created automatically on first use if it does not exist):
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic event1
Type some text that you want to stream to the consumer:
>Hi, this is my first event
Sample output (a LEADER_NOT_AVAILABLE warning may appear once while the topic is being auto-created; it can be safely ignored):
[2025-10-22 07:58:05,318] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 3 : {event1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Open another terminal and run the following command to display the generated event data in real-time:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic event1 --from-beginning
You will get the following output:
Hi, this is my first event
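The producer/consumer round trip above can also be scripted non-interactively, which is handy as a quick smoke test after installation. A sketch, assuming the broker is running on localhost:9092:

```shell
cd /usr/local/kafka

# Publish a single message without an interactive prompt.
echo "smoke-test message" | bin/kafka-console-producer.sh \
    --bootstrap-server localhost:9092 --topic event1

# Read it back; --max-messages 1 exits after one record instead of
# blocking forever, and --timeout-ms bounds the wait.
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic event1 --from-beginning --max-messages 1 --timeout-ms 10000
```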
Conclusion
In this guide, you learned how to install Apache Kafka on Rocky Linux 10. For more information, visit the Apache Kafka documentation page. Get started with Apache Kafka on VPS hosting from Atlantic.Net!