Kafka Cheat Sheet

Curated by Lenses.io

Setup

The first thing to do is to run the landoop/fast-data-dev Docker image, which bundles Kafka, ZooKeeper, the Schema Registry, Kafka Connect, and a web UI in a single container. For example:

docker run --rm -it -p 2181:2181 -p 3030:3030 -p 8081:8081 -p 8082:8082 -p 8083:8083 -p 9092:9092 -p 9581:9581 -p 9582:9582 -p 9583:9583 -p 9584:9584 -e ADV_HOST=127.0.0.1 landoop/fast-data-dev:latest

Note: follow the instructions in the fast-data-dev README to customize the container. Once it is running, the web UI is available at http://127.0.0.1:3030.

You can get a bash shell with the Kafka command-line tools by starting a second container on the host network:

docker run --rm -it --net=host landoop/fast-data-dev bash

Note: the Kafka command-line utilities are now available in this shell.
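
As a quick sanity check, you can confirm the tools are on the PATH (the exact set of binaries may vary between image versions):

which kafka-topics kafka-console-producer kafka-console-consumer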

Topics

You can create a new Kafka topic named my-topic as follows:

kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic my-topic

You can verify that the my-topic topic was successfully created by listing all available topics:

kafka-topics --list --zookeeper localhost:2181
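
If the topic was created, its name appears in the listing alongside any other topics on the cluster; for example (the neighboring names are illustrative):

__consumer_offsets
_schemas
my-topic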

You can add more partitions (the partition count can only be increased, never decreased) as follows:

kafka-topics --zookeeper localhost:2181 --alter --topic my-topic --partitions 16

You can delete a topic named my-topic, provided the brokers are configured with delete.topic.enable=true, as follows:

kafka-topics --zookeeper localhost:2181 --delete --topic my-topic

You can find more details about a topic named cc_payments as follows:

kafka-topics --describe --zookeeper localhost:2181 --topic cc_payments
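
The output shows the partition count, replication factor, any configuration overrides, and the per-partition leader, replica, and in-sync replica (ISR) assignments. Depending on the Kafka version it looks roughly like this (values are illustrative):

Topic: cc_payments	PartitionCount: 3	ReplicationFactor: 1	Configs:
	Topic: cc_payments	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
	Topic: cc_payments	Partition: 1	Leader: 0	Replicas: 0	Isr: 0
	Topic: cc_payments	Partition: 2	Leader: 0	Replicas: 0	Isr: 0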

You can see the under-replicated partitions for all topics as follows (this example includes a /kafka-cluster ZooKeeper chroot; drop it if your cluster, like the fast-data-dev setup above, does not use one):

kafka-topics --zookeeper localhost:2181/kafka-cluster --describe --under-replicated-partitions

Producers

You can produce messages from standard input as follows:

kafka-console-producer --broker-list localhost:9092 --topic my-topic
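
Each line you type becomes one message on the topic; stop the producer with Ctrl+C. A session looks like this (the > prompt comes from the tool, the message text is illustrative):

> first message
> second message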

You can produce new messages from an existing file named messages.txt as follows:

kafka-console-producer --broker-list localhost:9092 --topic test < messages.txt

You can produce Avro messages as follows:

kafka-avro-console-producer --broker-list localhost:9092 --topic my.Topic --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}' --property schema.registry.url=http://localhost:8081

You can then enter a few records from the console, one JSON document per line:

{"f1": "value1"}

Consumers

Consume messages

You can start a consumer that reads from the beginning of the log as follows:

kafka-console-consumer --bootstrap-server localhost:9092 --topic my-topic --from-beginning

You can consume a single message as follows:

kafka-console-consumer --bootstrap-server localhost:9092 --topic my-topic --max-messages 1
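
If you also want to see the message keys, the console consumer's formatter can print them (print.key and key.separator are standard formatter properties):

kafka-console-consumer --bootstrap-server localhost:9092 --topic my-topic --from-beginning --property print.key=true --property key.separator=: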

You can consume a single message from __consumer_offsets as follows (the formatter class moved to a new package in Kafka 0.11):

Kafka 0.9.x – 0.10.x:

kafka-console-consumer --bootstrap-server localhost:9092 --topic __consumer_offsets --formatter 'kafka.coordinator.GroupMetadataManager$OffsetsMessageFormatter' --max-messages 1

Kafka 0.11.x and later:

kafka-console-consumer --bootstrap-server localhost:9092 --topic __consumer_offsets --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" --max-messages 1

You can consume as part of a specific consumer group as follows:

kafka-console-consumer --topic my-topic --new-consumer --bootstrap-server localhost:9092 --consumer-property group.id=my-group

Consume Avro messages

You can consume 10 Avro messages from a topic named position-reports as follows:

kafka-avro-console-consumer --topic position-reports --new-consumer --bootstrap-server localhost:9092 --from-beginning --property schema.registry.url=http://localhost:8081 --max-messages 10

You can consume all existing Avro messages from a topic named position-reports as follows:

kafka-avro-console-consumer --topic position-reports --new-consumer --bootstrap-server localhost:9092 --from-beginning --property schema.registry.url=http://localhost:8081

Consumer group admin operations

You can list all consumer groups as follows:

kafka-consumer-groups --new-consumer --list --bootstrap-server localhost:9092

You can describe a group named testgroup as follows:

kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group testgroup
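
The output reports per-partition offsets and lag. In recent versions the columns look roughly like this (values are illustrative; the last three columns show - when the group has no active members):

TOPIC     PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
my-topic  0          52              52              0    -            -     -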

Config

You can set the retention for a topic (here 3600000 ms, i.e. one hour) as follows:

kafka-configs --zookeeper localhost:2181 --alter --entity-type topics --entity-name my-topic --add-config retention.ms=3600000

You can print all configuration overrides for a topic named my-topic as follows:

kafka-configs --zookeeper localhost:2181 --describe --entity-type topics --entity-name my-topic

You can delete a configuration override for retention.ms for a topic named my-topic as follows:

kafka-configs --zookeeper localhost:2181 --alter --entity-type topics --entity-name my-topic --delete-config retention.ms 

Performance

Although Kafka is pretty fast by design, it is good to be able to measure its performance. You can benchmark produce performance as follows:

kafka-producer-perf-test --topic position-reports --throughput 10000 --record-size 300 --num-records 20000 --producer-props bootstrap.servers="localhost:9092"
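
A matching consumer benchmark ships with Kafka. Assuming the same topic, you can measure consume performance as follows:

kafka-consumer-perf-test --broker-list localhost:9092 --topic position-reports --messages 20000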

ACLs

You can add a new consumer ACL to an existing topic as follows:

kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --consumer --topic topicA --group groupA

You can add a new producer ACL to an existing topic as follows:

kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --producer --topic topicA

You can list the ACLs of a topic named topicA as follows:

kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --list --topic topicA
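
ACLs are removed with the same syntax, swapping --add for --remove; for example, to drop Bob's producer ACL:

kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Bob --producer --topic topicA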

ZooKeeper

You can enter the ZooKeeper shell as follows:

zookeeper-shell localhost:2181
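
Once inside the shell you can browse the metadata Kafka keeps in ZooKeeper; for example, to list the registered broker ids:

ls /brokers/ids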