Apache Kafka is a free, open-source, distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, and real-time applications. It lets you publish, store, and process streams of events, and its distributed architecture provides high fault tolerance and scalability.
In this post, we will explain how to install Apache Kafka on Oracle Linux 10.
Step 1 – Install Java
Apache Kafka is a Java-based application, so Java must be installed on your server. If it is not already installed, you can install it using the following commands:
dnf update -y
dnf install java-21-openjdk-devel -y
Once Java is installed, verify the Java installation using the following command:
java --version
You will get the Java version in the following output:
openjdk 21.0.8 2025-07-15 LTS
OpenJDK Runtime Environment (Red_Hat-21.0.8.0.9-1) (build 21.0.8+9-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-21.0.8.0.9-1) (build 21.0.8+9-LTS, mixed mode, sharing)
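The systemd service file created later in this guide references the JVM path in JAVA_HOME. The exact directory name can vary between OpenJDK builds, so you may optionally confirm it now (this check is not part of the original steps):

readlink -f "$(which java)"

If the output does not point under /usr/lib/jvm/jre-21-openjdk, adjust the JAVA_HOME line in Step 4 accordingly.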
Step 2 – Install Apache Kafka on Oracle Linux 10
First, download the latest version of Apache Kafka from the official Apache download site using the wget command:
wget https://dlcdn.apache.org/kafka/4.1.1/kafka_2.13-4.1.1.tgz
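Optionally, you can verify the integrity of the download against the SHA-512 checksum that Apache publishes alongside each release. The .sha512 URL below assumes Apache's usual layout; adjust it if the file has moved:

wget https://dlcdn.apache.org/kafka/4.1.1/kafka_2.13-4.1.1.tgz.sha512
sha512sum kafka_2.13-4.1.1.tgz
cat kafka_2.13-4.1.1.tgz.sha512

Compare the two checksums manually; they should match before you continue.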
Once the download is completed, extract the downloaded file using the following command:
tar -xvzf kafka_2.13-4.1.1.tgz
Once the downloaded file is extracted, move the extracted directory to /usr/local/kafka:
mv kafka_2.13-4.1.1 /usr/local/kafka
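Running Kafka as root works for a quick test, but for anything long-lived you may prefer a dedicated system user. The commands below are an optional sketch (the kafka user name is just an example and is not part of the original steps); if you create it, also add a User=kafka line to the [Service] section of the systemd unit in Step 4:

useradd -r -d /usr/local/kafka -s /sbin/nologin kafka
chown -R kafka:kafka /usr/local/kafka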
Once you are finished, you can proceed to the next step.
Step 3 – Configure Kafka
Kafka 4.x runs in KRaft mode, which no longer uses ZooKeeper, so the broker and controller roles are configured directly in server.properties. First, edit the Kafka server.properties file:
nano /usr/local/kafka/config/server.properties
Modify the following lines:
process.roles=broker,controller
node.id=1
controller.listener.names=CONTROLLER
listeners=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
inter.broker.listener.name=PLAINTEXT
controller.quorum.voters=1@localhost:9093
log.dirs=/usr/local/kafka/data
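The listeners above bind to all interfaces, but clients connect to the address the broker advertises. If applications on other hosts will connect to this broker, you may also need to set advertised.listeners; the IP below is a placeholder for your server's address and is not part of the original configuration:

advertised.listeners=PLAINTEXT://192.0.2.10:9092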
Next, generate a random UUID for the cluster:
/usr/local/kafka/bin/kafka-storage.sh random-uuid
Output:
SSoviLO8RtmlnOyHEPOcMQ
Initialize the storage directory with that UUID:
/usr/local/kafka/bin/kafka-storage.sh format -t SSoviLO8RtmlnOyHEPOcMQ -c /usr/local/kafka/config/server.properties
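If the format step succeeds, it writes a meta.properties file into the log directory. You can optionally confirm that the cluster ID recorded there matches the UUID you generated (your UUID will differ from the sample value shown in this guide):

cat /usr/local/kafka/data/meta.properties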
Step 4 – Create Systemd Service File for Kafka
For a production environment, it is recommended to create a systemd service file so that Kafka runs in the background and starts automatically at boot.
First, create a systemd service file for Kafka using the following command:
nano /etc/systemd/system/kafka.service
Add the following lines:
[Unit]
Description=Apache Kafka Server
Documentation=http://kafka.apache.org/documentation.html

[Service]
Type=simple
Environment="JAVA_HOME=/usr/lib/jvm/jre-21-openjdk"
ExecStart=/usr/bin/bash /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
ExecStop=/usr/bin/bash /usr/local/kafka/bin/kafka-server-stop.sh

[Install]
WantedBy=multi-user.target
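Optionally, you can have systemd restart Kafka automatically if the process exits unexpectedly by adding the following lines to the [Service] section. These are standard systemd directives and are not part of the original unit above:

Restart=on-failure
RestartSec=5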
Save and close the file, then reload the systemd daemon with the following command:
systemctl daemon-reload
Next, start the Kafka service and enable it to start at system reboot:
systemctl start kafka
systemctl enable kafka
You can check the status of the Kafka service using the following command:
systemctl status kafka
You will get the following output:
● kafka.service - Apache Kafka Server
Loaded: loaded (/etc/systemd/system/kafka.service; disabled; preset: disabled)
Active: active (running) since Tue 2025-12-30 22:31:17 EST; 4s ago
Invocation: 69b9ca646b9d4a7da680eb793647a8e2
Docs: http://kafka.apache.org/documentation.html
Main PID: 178425 (java)
Tasks: 104 (limit: 24812)
Memory: 366.4M (peak: 366.7M)
CPU: 6.684s
CGroup: /system.slice/kafka.service
└─178425 /usr/lib/jvm/jre-21-openjdk/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcu>
Dec 30 22:31:22 oracle bash[178425]: [2025-12-30 22:31:22,219] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
Dec 30 22:31:22 oracle bash[178425]: [2025-12-30 22:31:22,239] INFO [BrokerServer id=1] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
Dec 30 22:31:22 oracle bash[178425]: [2025-12-30 22:31:22,239] INFO [BrokerServer id=1] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
Dec 30 22:31:22 oracle bash[178425]: [2025-12-30 22:31:22,240] INFO [BrokerServer id=1] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
Dec 30 22:31:22 oracle bash[178425]: [2025-12-30 22:31:22,240] INFO [BrokerServer id=1] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
Dec 30 22:31:22 oracle bash[178425]: [2025-12-30 22:31:22,240] INFO [BrokerServer id=1] Transition from STARTING to STARTED (kafka.server.BrokerServer)
Dec 30 22:31:22 oracle bash[178425]: [2025-12-30 22:31:22,241] INFO Kafka version: 4.1.1 (org.apache.kafka.common.utils.AppInfoParser)
Dec 30 22:31:22 oracle bash[178425]: [2025-12-30 22:31:22,241] INFO Kafka commitId: be816b82d25370ce (org.apache.kafka.common.utils.AppInfoParser)
Dec 30 22:31:22 oracle bash[178425]: [2025-12-30 22:31:22,244] INFO Kafka startTimeMs: 1767151882240 (org.apache.kafka.common.utils.AppInfoParser)
Dec 30 22:31:22 oracle bash[178425]: [2025-12-30 22:31:22,249] INFO [KafkaRaftServer nodeId=1] Kafka Server started (kafka.server.KafkaRaftServer)
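You can also confirm that the broker and controller ports are listening, and, if remote clients need to reach the broker, open port 9092 in the firewall. The firewall-cmd commands below assume firewalld is active, as it usually is by default on Oracle Linux:

ss -tlnp | grep -E '9092|9093'
firewall-cmd --permanent --add-port=9092/tcp
firewall-cmd --reload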
Step 5 – Create a Topic on Kafka
To test Apache Kafka, you will need to create at least one topic on the server.
Change to the Apache Kafka directory and create a test topic named topic1 with the following commands:
cd /usr/local/kafka/
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic topic1
You can now verify your created topic using the following command:
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
You will get the following output:
topic1
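You can also inspect the partition and replica assignment of the topic:

bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic topic1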
Kafka provides two core client APIs: the Producer and the Consumer. The Producer publishes events to a topic, and the Consumer reads them and displays them on the screen.
First, run the following command to start a console producer for a topic named event1:
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic event1
Type some text that you want to stream to the consumer:
>Hi, this is my first event
Sample output. You may see a warning like the following because the event1 topic does not exist yet and is auto-created on first use; it can be safely ignored:
[2025-10-22 07:58:05,318] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 3 : {event1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Open another terminal and run the following command to display the generated event data in real-time:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic event1 --from-beginning
You will get the following output:
Hi, this is my first event
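The console consumer above re-reads everything from the beginning each time it starts. As a further experiment, you can attach the consumer to a named consumer group and then inspect its committed offsets; the group name demo-group below is just an example and is not part of the original steps:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic event1 --group demo-group
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group demo-group

Run the second command from another terminal while the consumer is running to see the group's current offsets and lag.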
Conclusion
In this guide, we explained how to install Apache Kafka on Oracle Linux 10. You can now integrate Apache Kafka with your applications to collect and analyze large amounts of data. For more information, visit the Apache Kafka documentation page. Try it on dedicated hosting from Atlantic.Net!