Kafka Consumer Lag Command Line

bin/kafka-consumer-groups.sh is the script for inspecting consumer groups and their lag from the command line. Kafka consumers are very cheap—they can come and go without much impact on the cluster or on other consumers. In the output of the describe command, CURRENT-OFFSET is the last offset committed by a consumer and LOG-END-OFFSET is the offset of the last event written to the partition; the difference between the two is the LAG. For example, running bin/kafka-consumer-groups.sh --zookeeper <host>:2181 --describe --group flume prints something like:

GROUP  TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  OWNER
flume  t1     0          1               3               2    test-consumer-group_postamac

For a consumer to keep up, max lag needs to stay below a threshold and min fetch rate needs to be larger than 0. Monitoring tools build on the same numbers: the Datadog Agent emits an event when the value of the consumer_lag metric goes below 0, tagging it with topic, partition and consumer_group, and Metricly can help monitor the performance and throughput of your Kafka server using its Kafka collector for the Linux agent. Several clients can consume messages from the command line, including the console client shipped with Kafka and kafka-python. This post isn't about installing Kafka or configuring your cluster; it is about watching consumer lag from the command line. A typical symptom looks like this: "My Kafka origin is running with one day of lag; the messages are not getting broadcast, as I can see in the Kafka consumer from the command line."
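The LAG column in the describe output is simply LOG-END-OFFSET minus CURRENT-OFFSET. As a minimal sketch (the sample rows below are hypothetical, shaped like the flume/t1 output above), you can recompute it with awk:

```shell
# Sample rows shaped like `kafka-consumer-groups.sh --describe` output:
# GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET
sample="flume t1 0 1 3
flume t1 1 10 15"

# LAG per partition = LOG-END-OFFSET - CURRENT-OFFSET
echo "$sample" | awk '{ print $1, $2, "partition", $3, "lag:", $5 - $4 }'
# prints:
# flume t1 partition 0 lag: 2
# flume t1 partition 1 lag: 5
```

The same subtraction is what every lag-monitoring tool performs per partition before aggregating.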
The balanced consumer coordinates state for several consumers who share a single topic by talking to the Kafka broker and directly to ZooKeeper. The producer-side APIs add messages to the cluster for a topic, and consumers read them back out. Beware that the consumer commits its offset to ZooKeeper only after a certain interval (default 10 seconds), so if you run the lag-checking command a few times in a row you'll likely see the offset remain constant while lag increases, until a commit from the consumer suddenly brings the offset up, and hence the lag down, significantly in one go. A good monitoring setup can manage hundreds of metrics from all the components of Kafka (broker, producer and consumer) to pinpoint consumer lag; multiple clusters of the same type should be listed under the same `type`. Before diving in, it is important to understand the general architecture of a Kafka deployment; for experimenting, you can run local Kafka and ZooKeeper using docker and docker-compose, and there is a script that simply prints the Kafka version to the command line. IBM Event Streams has its own command-line interface (CLI), which offers many of the same capabilities as the Kafka tools in a simpler form, and kafka-shell provides an interactive wrapper. This material is aimed at people who want to learn about Apache Kafka, ZooKeeper, queues, topics, client-server communication, messaging systems (point-to-point and pub-sub), single-node servers and multi-node Kafka clusters, the command-line producer and consumer, and producer and consumer applications built with the Java APIs. Kafka 0.10 and later versions are highly flexible and extensible; the features include an enhanced configuration API.
The min.compaction.lag.ms setting guarantees a minimum period that must pass before a message can be compacted. On the consuming side, the Kafka parameters likely most influential for spout performance are the fetch settings and the Kafka Consumer instance poll timeout, which is specified for each Kafka spout using the setPollTimeoutMs method. As an application example, we create a message consumer that listens to messages sent to a Kafka topic; a LoanDataKafkaConsumer consumes the loan data messages from the topic "raw_loan_data_ingest". To view offsets on a secure Kafka cluster, the consumer-groups tool has to be run with the command-config option. Apache Kafka itself is a distributed streaming platform designed for high-volume publish-subscribe messages and streams, and its command-line tools live in the bin directory of the Kafka distribution; the console tools also accept a path to a properties file where you can customize the producer. A few other notes: topic deletion is enabled by default in new Kafka versions (from 1.0 and above); the kafka-avro-console-consumer is part of the Confluent suite; and, initially, no cluster is visible in Kafka Manager. This demo will provide a good sense of some of the command-line tools that Kafka provides. This post is Part 1 of a 3-part series about monitoring Kafka: Part 2 is about collecting operational data from Kafka, and Part 3 details how to monitor Kafka with Datadog.
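In newer Kafka versions, min.compaction.lag.ms can be set per topic with kafka-configs.sh. Since altering a config needs a live broker, the sketch below only assembles and prints the command; the topic name and lag value are illustrative assumptions:

```shell
topic="raw_loan_data_ingest"   # hypothetical topic name
lag_ms=86400000                # 24 hours, in milliseconds

# Build the kafka-configs.sh invocation that would enforce a minimum
# period before messages in this topic become eligible for compaction.
cmd="bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
--entity-type topics --entity-name $topic \
--add-config min.compaction.lag.ms=$lag_ms"
echo "$cmd"
```

Against a running cluster you would execute the printed command directly instead of echoing it.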
The consumer is an application that feeds on the entries or records of a topic in a Kafka cluster; bin/kafka-console-producer.sh and bin/kafka-console-consumer.sh in the Kafka bin directory are the tools that help to create a command-line producer and consumer respectively. Kafka stores all data on disk and, like many JVM-based applications, uses JMX (Java Management Extensions) for exposing metrics. Kafka, like almost all modern infrastructure projects, has three ways of building things: through the command line, through programming, and through a web console (in this case the Confluent Control Center). The command line provides a set of commands to manipulate and modify the cluster topology and get metrics for different states: you can check the number of messages read and written, see the lag for each consumer in a specific consumer group, and list topics with bin/kafka-topics.sh --list --zookeeper localhost:2181. A consumer is created by setting the consumer configuration properties and subscribing to the appropriate topic to start receiving messages; the largest delay in bringing up a producer is usually the authentication part. For a non-JVM alternative, docker-kafkacat packages kafkacat, a generic command-line Apache Kafka producer and consumer. In a distributed-tracing demo with Kafka and Jaeger, the example project makes use of a Kafka Stream (in the stream-app), a Kafka Consumer/Producer (in the consumer-app), and a Spring Kafka Consumer/Producer (in the spring-consumer-app). On the monitoring side, an in-sync-replica check allows users to easily see which topics have fewer than the minimum number of in-sync replicas. So far, we have set up a Kafka cluster with an optimal configuration.
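A minimal produce/consume round trip with those two scripts looks like the sketch below. Both commands need a running broker, so they are only assembled and printed here; the topic name is a placeholder:

```shell
topic="demo-topic"   # placeholder topic name

# Terminal 1: read lines from stdin and publish them, one message per line.
producer="bin/kafka-console-producer.sh --broker-list localhost:9092 --topic $topic"

# Terminal 2: replay the topic from the beginning to standard output.
consumer="bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic $topic --from-beginning"

printf '%s\n%s\n' "$producer" "$consumer"
```

Typing into terminal 1 and watching the lines appear in terminal 2 is the quickest sanity check that a cluster is accepting and serving messages.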
Consumer groups are a feature of Apache Kafka which enable multiple consumer processes to divide the work of consuming a Kafka topic. Kafka comes with a command-line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster, and kafka-console-consumer is the matching command-line consumer that reads data from a Kafka topic and writes it to standard output. You can start a consumer with a command such as kafka-console-consumer.sh --zookeeper localhost:2181 --topic test. In this first scenario, we will see how to manage offsets from the command line, which will give us an idea of how to implement the same thing in our application. The equivalent commands to start every service in its own terminal, without using the CLI, begin with starting ZooKeeper. I hope this post will bring you a list of commands for easy copying and pasting.
To view offsets as in the previous example with the ConsumerOffsetChecker, you describe the consumer group using the following command: $ /usr/bin/kafka-consumer-groups --zookeeper zk01.example.com:2181 --describe --group flume. The old ConsumerOffsetChecker tool has been removed in Kafka 1.0, and kafka-consumer-offset-checker is not supported in the new Consumer API, so the consumer-groups tool is the one to use. To run it against a secure cluster, pass a properties file via the command-config option; for example, you could use such a file to set all the properties needed for an SSL/SASL connection that the consumer will invoke. Creating a producer and consumer can be a perfect Hello, World! example to learn Kafka, and there are multiple ways to achieve it: running kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sample --from-beginning starts a console consumer and prints the producer's messages, such as "Hello Kafka!", to the terminal. Remember that all of these command-line tasks can also be done programmatically. You can also use the command-line tools to "tail" a topic, and the ELK Stack can be used to collect and analyze Kafka logs. For monitoring applications on Kubernetes, Lightbend Console (released October 17, 2019) is one option.
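A sketch of such a command-config file, with placeholder credentials (none of these values are real), and the way it would be passed to the tool:

```shell
# Write a client properties file for a SASL_SSL-secured cluster.
# All values below are placeholders, not real credentials.
cat > /tmp/client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="alice" password="alice-secret";
EOF

# The tool would then be invoked like this (requires a live secure cluster):
echo "kafka-consumer-groups --bootstrap-server broker:9093 \
--describe --group flume --command-config /tmp/client.properties"
```

The same properties file works for the console producer and consumer via their respective config options, so one file can serve all the CLI tools.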
Kafka is a publish-subscribe message queuing system that's designed like a distributed commit log. The difference between the committed offset and the most recent offset in each partition is called the consumer lag, and the old /bin/kafka-consumer-offset-checker.sh was long the way to monitor the lag of consumers, even on a kerberized cluster. In the commands throughout this post, localhost and 2181 are the default hostname and port when you are running ZooKeeper locally. To verify that your Kafka setup works you may, for instance, use the command-line utility kafkacat (see its project page for installation instructions). System tools can be run from the command line using the run-class script (bin/kafka-run-class.sh); bin/kafka-topics.sh --list --zookeeper localhost:2181 lists topics, the ~/kafka/bin/kafka-console-producer.sh script can push a file of messages to Kafka, and bin/kafka-console-consumer.sh lists the messages in a topic. Use the kafka-configs tool to change the configuration of brokers and topics while the cluster is up and running. For load testing, the consumer perf-test script accepts arguments such as --messages 50000000 --topic test --threads 1. Kafka, Kafka consumer lag, and ZooKeeper metrics are all collected by the Linux-agent collector mentioned earlier.
Multiple consumer applications can be connected to the Kafka cluster, and in a healthy Kafka cluster all producers are pushing messages into topics while all consumers are pulling those messages at the other end of the topics. We can start the producer on one of our servers; Kafka runs on port 9092, at the IP address of our virtual machine. Typically, you would publish messages using a Kafka client library from within your program, but since that involves different setups for different programming languages, you can use the shell scripts as a language-independent way of producing and consuming. Note, however, that simply sending lines of text will result in messages with null keys. If you have been using Apache Kafka for a while, it is likely that you have developed a degree of confidence in the command-line tools that come with it; tools like the Kafka Plugin, which can read and write between Deepgreen DB and Kafka, build on the same primitives. Kafka exposes its metrics via JMX, but since the aim of the Strimzi project is to offer a Kubernetes-native experience when running Apache Kafka, it exposes the metrics as a Prometheus endpoint instead. Now that we have two brokers running, let's create a Kafka topic on them.
Kafka also has a command to send messages through the command line; the input can be a text file or the console standard input. These scripts read from STDIN and write to STDOUT and are frequently used to send and receive data via Kafka over the command line. Because offsets are stored per group, a consumer can reset to an older offset to reprocess messages. To visualize Kafka cluster data as gathered by Burrow, there are open source projects available, such as the browser-based BurrowUI and burrow-dashboard, the command-line UI tool burrow-client, and various plug-ins to other tools; in such dashboards you can click on LAG to sort on consumer lag across groups. In KSQL, the SHOW TOPICS command has been enhanced to include the number of active consumers and also the number of active consumer groups which are reading the topics. To run a consumer application from the command line, generate the JAR and then run from within Maven (or generate the JAR using Maven, then run in Java by adding the necessary Kafka JARs to the classpath): mvn clean package, followed by mvn exec:java with the appropriate main class. How do I build a system that makes it unlikely for consumers to lag? The answer is that you want to be able to add enough consumers to handle all the incoming data.
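The sizing logic behind "enough consumers" is simple arithmetic: within one consumer group, parallelism is capped by the topic's partition count, so any consumer beyond that sits idle. A quick sketch with illustrative numbers:

```shell
partitions=6   # partitions in the topic (illustrative)
consumers=8    # consumers in the group (illustrative)

# Effective parallelism is min(consumers, partitions);
# any consumer beyond the partition count receives no partitions.
active=$(( consumers < partitions ? consumers : partitions ))
idle=$(( consumers - active ))
echo "active=$active idle=$idle"
# prints: active=6 idle=2
```

In other words, adding consumers only helps up to the partition count; beyond that you need to repartition the topic to absorb more incoming data.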
Datadog will automatically collect the key metrics discussed in parts one and two of this series, and make them available in a template dashboard, as seen above. To check lag yourself, describe the group: bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group user. Interestingly, much of the Kafka tooling also works against Apache Pulsar: the only thing that needs to be adjusted is the configuration, to make sure to point the producers and consumers to the Pulsar service rather than Kafka and to use a particular Pulsar topic. So, just before jumping in head first and fully integrating with Apache Kafka, let's test the water and plan ahead for painless integration. One common stumbling block: when running the quick-start example on the command line, you may find you can't create multiple consumers as expected.
Kafka is the leading open-source, enterprise-scale data streaming technology. My introduction to Kafka was rough, and I hit a lot of gotchas along the way; I want to help others avoid that pain if I can. Kafka provides a command-line utility for creating topics: bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic-name. (Attention: MirrorMaker does not provide the same reliability guarantees as the replication features in MapR Event Store For Apache Kafka.) In this Kafka tutorial, we shall learn to create a Kafka producer and Kafka consumer using the console interface of Kafka. A consumer group registers a group id that is associated with several consumer processes to balance consumption across the topic; you can list the groups with $ kafka-consumer-groups --bootstrap-server localhost:9092 --list (note: this will only show information about consumers that use the Java consumer API, that is, non-ZooKeeper-based consumers). The kafka-avro-console-consumer, for Avro-encoded topics, is shipped with the Confluent distribution rather than with plain Apache Kafka. Kafka Streams, finally, is a client library for processing and analyzing data stored in Kafka.
I am talking about tools that you know and love, such as kafka-console-producer, kafka-console-consumer and many others; we use a couple of Kafka command-line tools that ship with any Kafka installation. Kafka provides a command-line utility named kafka-topics.sh for creating topics, and you can describe a group with kafka-consumer-groups.sh --new-consumer --describe --group consumer-tutorial-group --bootstrap-server localhost:9092. The Kafka consumer started from the console has the group id 'console', and the auto.offset.reset value determines whether a new group starts from the earliest or latest offset. Yahoo's Kafka Manager is worth installing and evaluating; in a containerized setup, KAFKA_BROKER_ID pins the identifier of each broker to its slot-id, which makes the containers identifiable. Metrics like consumer lag (from the queue server and client perspective!) weren't previously available to us in such an organized fashion; the New Relic Kafka on-host integration, for instance, reports metrics and configuration data from your Kafka service, providing insight into brokers, producers, consumers, and topics. Kafka uses the ZooKeeper API, which is a centralized service that maintains configuration information. At this point, the Kafka cluster is running. The Kafka consumer config parameters may also have an impact on the performance of the spout.
When you start a console consumer without specifying a group, a group.id is generated for you in the form console-consumer-${new Random(…)}; on Windows the equivalent command is kafka-console-consumer.bat --zookeeper localhost:2181 --topic test. Once the properties files are ready, we can start the broker instances. Usually when I invite Apache Kafka to a project I end up writing my own wrappers around Kafka's producers and consumers, but the console tools cover a lot of ground. Next, launch a separate terminal window and use the command-line tool to start a consumer that will display the content of new messages received in the app_events category: $ kafka-console-consumer --zookeeper localhost:2181 --topic app_events. Produce some messages from the command-line console producer and check the consumer log; formatter options such as "--property line.separator=XYZ" control how the consumer prints records, and a consumer.config option lets you specify the name of a properties file that contains a set of Kafka consumer configurations. For monitoring, Sematext has an incredibly deep monitoring solution for Kafka, and to try Yahoo's Kafka Manager you change into its directory and run the build command.
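Keyed messages from the console follow the same --property mechanism: on the producer side the relevant properties are parse.key and key.separator. As with the other cluster-bound commands, the sketch below only assembles and prints the invocation; the topic name and separator are illustrative:

```shell
topic="user-events"   # illustrative topic name
sep=":"               # key/value separator within each input line

# With these properties, an input line "user42:clicked" becomes a
# message with key "user42" and value "clicked" instead of a null key.
cmd="bin/kafka-console-producer.sh --broker-list localhost:9092 \
--topic $topic \
--property parse.key=true \
--property key.separator=$sep"
echo "$cmd"
```

Keys matter because Kafka assigns messages to partitions by key, so keyed console input lets you reproduce partition-related behaviour from the command line.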
[UPDATE: Check out the Kafka Web Console that allows you to manage topics and see traffic going through your topics - all in a browser!] When you're pushing data into a Kafka topic, it's always helpful to monitor the traffic using a simple Kafka consumer script; messages should be one per line. With Datadog, you can collect Kafka metrics for visualization, alerting, and full-infrastructure correlation. Kafka Streams builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, exactly-once processing semantics, and simple yet efficient management of application state. To install Confluent.Kafka in a .NET project, use the NuGet Package Manager UI, or run the following command in the Package Manager Console: Install-Package Confluent.Kafka. Because offsets are tracked per group, a consumer that restarts with the same group.id resumes after the last committed offset, e.g. from message 101. In order to do performance testing or benchmarking of a Kafka cluster, we need to consider two aspects: performance at the producer end and performance at the consumer end. It's time to do performance testing before asking developers to start their testing.
Create a Kafka producer with kafkacat: kafkacat -P -b kafka:9092 -t important. You can also build an endpoint that we can pass a message to, to be produced to Kafka. To install the command-line tools, see the section on setting up the Kafka command-line tools; we have already seen some of the popular commands provided by the Apache Kafka command-line interface. Consumers are scalable. On the monitoring side, there is a nice write-up on which consumer metrics are important to track per category, and the Prometheus Kafka Consumer Group Exporter can publish them; a lag-checking tool can also be pointed at the cluster from the command line with arguments such as --brokers kafka01. Burrow has a modular design that includes the following subsystems: clusters run an Apache Kafka client that periodically updates topic lists and the current HEAD offset (the most recent offset) for every partition. Kafka is used in production by over 33% of the Fortune 500 companies, such as Netflix, Airbnb, Uber, Walmart and LinkedIn. The documentation on monitoring of Kafka Streams is a bit sparse, so I will shed some light on interesting metrics to monitor when running Kafka Streams applications.
This is the sixth post in this series, where we go through the basics of using Kafka; if you are not familiar with Apache Kafka or want to learn about it, check out the project site. Burrow is a monitoring companion for Apache Kafka that provides consumer lag checking, and the Lightbend Console provides visibility for KPIs, reactive metrics, monitors and alerting, and includes a large selection of ready-to-use dashboards, including consumer lag per client. To consume messages encoded in Avro, simply run the Avro console consumer to get the decoded messages; for plain text, bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic demo does the job. Hope you were able to set up the basic Kafka messaging system. You can also create a Spring Kafka Kotlin producer if you prefer working from code; the consumer will transparently handle the failure of servers in the Kafka cluster, and adapt as topic partitions are created or migrate between brokers. MicroStrategy can log messages to Kafka, which are stored as text files, and Kafka offers a wide range of config properties; the Apache Kafka tutorial provides details about the design goals and capabilities of Kafka. Find and contribute more Kafka tutorials with Confluent, the real-time event streaming experts.