Kafka Getting Started - Kafka Series - Part 2

This is part two of the Kafka series. If you want to learn how Kafka works and its architecture, read part one here: Kafka Architecture.
In this article, we will see how to configure Kafka locally and run the Kafka server.
Firstly, on a single machine, a 3-broker Kafka cluster is the practical minimum for hassle-free operation. We will also create our topic with a replication factor of 3, so every record is held by all three brokers.
Let's say A, B and C are our Kafka brokers. With a replication factor of 3, the data led by A is also replicated to B and C, the data led by B is replicated to A and C, and the data led by C is replicated to A and B. If one broker dies, the others still hold its data.
Next, extract the Kafka archive to a convenient place and cd into it. We will run the whole Kafka ecosystem from the terminal.
First, we need to run ZooKeeper in the terminal.
ZooKeeper manages service discovery and leader election for the Kafka brokers. It pushes topology changes to Kafka, so each node in the cluster knows when a new broker joins, a broker dies, a topic is removed, a topic is added, and so on. In short, it provides an in-sync view of the Kafka cluster configuration.
Start the ZooKeeper instance with the script bundled in the Kafka archive:

$ bin/zookeeper-server-start.sh config/zookeeper.properties

Then make a copy of the sample broker configuration for each broker:
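Once ZooKeeper is up, you can sanity-check it with one of its four-letter-word commands. This assumes the default client port 2181 and that `nc` is installed; on newer ZooKeeper versions, `ruok` may need to be whitelisted first.

```shell
# Ask ZooKeeper "are you ok?"; a healthy server replies "imok".
echo ruok | nc localhost 2181
```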
$ cp config/server.properties config/server.bk1.properties
$ cp config/server.properties config/server.bk2.properties
$ cp config/server.properties config/server.bk3.properties
In config/server.bk1.properties, set:

# unique id for our broker instance
broker.id=1
# port where this broker listens
listeners=PLAINTEXT://:9093
# log directory in a place that's not volatile
log.dirs=/home/neoito/kafka-logs-1
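server.bk2.properties and server.bk3.properties need their own unique values before all three brokers can run side by side. For example (the ports 9094/9095 and the log paths here are just our choice):

```properties
# server.bk2.properties
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/home/neoito/kafka-logs-2

# server.bk3.properties
broker.id=3
listeners=PLAINTEXT://:9095
log.dirs=/home/neoito/kafka-logs-3
```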
Start the three brokers, each in its own terminal:

$ bin/kafka-server-start.sh config/server.bk1.properties
$ bin/kafka-server-start.sh config/server.bk2.properties
$ bin/kafka-server-start.sh config/server.bk3.properties
After that, we need to create a topic to which a producer can push records and a consumer can subscribe.
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
The above command creates a topic named my-replicated-topic with a replication factor of 3 and a single partition.
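To verify the replication, you can describe the topic; for each partition, the output lists the leader broker plus the replica and in-sync replica (Isr) broker ids. This assumes the cluster from above is running:

```shell
# Show partition leadership and replica placement for the topic.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
```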
bin/kafka-console-producer.sh --broker-list localhost:9093 --topic my-replicated-topic
This command starts a console producer, from which we can push records to the Kafka brokers. After that, we need to start a consumer from the CLI.
bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --from-beginning --topic my-replicated-topic
This starts a consumer that connects to the broker listening on port 9093. The --from-beginning flag tells it to read the records in the topic from the beginning.
That is to say, anything we type in the producer CLI can be read from the consumer's command line.
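The same round trip can be scripted non-interactively as a quick smoke test, assuming the brokers above are running; --max-messages makes the consumer exit after reading one record instead of listening forever.

```shell
# Publish one record, then consume it and exit.
echo "hello kafka" | bin/kafka-console-producer.sh --broker-list localhost:9093 --topic my-replicated-topic
bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --from-beginning \
  --topic my-replicated-topic --max-messages 1
```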
In conclusion, we can now subscribe to a topic and listen to its records without losing any data, which is useful in many scenarios.
We will see how to use Apache Kafka with a web application in the next blog. Stay tuned! :-)