ivangfr / spring-cloud-stream-kafka-elasticsearch

Licence: other
The goal of this project is to implement a "News" processing pipeline composed of five Spring Boot applications: producer-api, categorizer-service, collector-service, publisher-api and news-client.

Programming Languages

Java
68154 projects - #9 most used programming language
Shell
77523 projects
HTML
75241 projects
JavaScript
184084 projects - #8 most used programming language

Projects that are alternatives of or similar to spring-cloud-stream-kafka-elasticsearch

twjitm-core
A real-time, long-connection communication system implemented with Netty. The client can be used in any scenario; it supports real-time HTTP, WebSocket, TCP, UDP and broadcast communication over HTTP and RPC protocols, and uses a custom network packet structure to implement a custom network stack.
Stars: ✭ 98 (+122.73%)
Mutual labels:  maven, zookeeper
sample-spring-cloud-stream
sample microservices communicating asynchronously using spring cloud stream, rabbitmq
Stars: ✭ 22 (-50%)
Mutual labels:  spring-cloud-stream, spring-cloud-sleuth
Highdsa
A 2018 undergraduate graduation project, with all development and deployment documentation published. A highly available, high-performance, highly scalable distributed system architecture based on Dubbo, SSM, Shiro, ELK, ActiveMQ, Redis and more, providing common foundational business services exposed through a RESTful API. Implemented so far: email sending, FastDFS file storage, ELK real-time log search, Redis caching, MyBatis persistence, Alibaba SMS push, GoEasy message push, Druid monitoring, ActiveMQ message queue, Shiro authentication and authorization, CAS single sign-on, a web system for permission configuration, and a mobile back-end system. Continuously updated...
Stars: ✭ 385 (+775%)
Mutual labels:  maven, zookeeper
sample-message-driven-microservices
Sample Spring Cloud application that integrates with RabbitMQ through the Spring Cloud Stream framework and shows how to set up message-driven microservices based on the publish-subscribe model and consumer groups
Stars: ✭ 28 (-36.36%)
Mutual labels:  spring-cloud-stream, spring-cloud-sleuth
Cookbook
🎉🎉🎉 A senior Java architect technology stack. Any skill can be mastered through deliberate practice, just like cooking: this is a handbook of Java development techniques, and all you need to add is your own practice. 🏃🏃🏃
Stars: ✭ 428 (+872.73%)
Mutual labels:  maven, zookeeper
vacomall
☀️☀️ A distributed e-commerce platform built on Dubbo.
Stars: ✭ 42 (-4.55%)
Mutual labels:  maven, zookeeper
Mario
A zookeeper monitor platform.
Stars: ✭ 26 (-40.91%)
Mutual labels:  zookeeper
maven-buildtime-profiler
Maven Build Time Profiler
Stars: ✭ 41 (-6.82%)
Mutual labels:  maven
kubernetes-nifi-cluster
Apache Nifi cluster running in kubernetes
Stars: ✭ 81 (+84.09%)
Mutual labels:  zookeeper
kafka-kubernetes
Apache Kafka on Kubernetes
Stars: ✭ 71 (+61.36%)
Mutual labels:  zookeeper
nexus-repository-import-scripts
A few scripts for importing artifacts into Nexus Repository
Stars: ✭ 142 (+222.73%)
Mutual labels:  maven
maven-it-extension
Experimental JUnit Jupiter Extension for writing integration tests for Maven plugins/Maven extensions/Maven Core
Stars: ✭ 56 (+27.27%)
Mutual labels:  maven
dis-seckill-test
⭐⭐⭐ A distributed, high-concurrency flash-sale system built with Spring Boot, Zookeeper and Dubbo
Stars: ✭ 20 (-54.55%)
Mutual labels:  zookeeper
JavaYouth
Articles mainly about the Java technology stack, covering source code, internals and interview topics such as AQS, the JVM, RPC, computer networking and operating systems; articles on MySQL, Redis and ZooKeeper may follow
Stars: ✭ 616 (+1300%)
Mutual labels:  zookeeper
aaocp
A big-data project for analyzing user-behavior logs
Stars: ✭ 53 (+20.45%)
Mutual labels:  zookeeper
lagom-java-maven-chirper-example
No description or website provided.
Stars: ✭ 17 (-61.36%)
Mutual labels:  maven
mini-rpc
A lightweight RPC framework built with Spring, Netty, Protostuff and ZooKeeper: Spring provides dependency injection and configuration, Netty implements NIO-based data transport, Protostuff handles object serialization, and ZooKeeper provides service registration and discovery. With this framework, services can be deployed to any node in a distributed environment; clients invoke server-side implementations through remote interfaces, fully separating server-side and client-side development and providing a foundation for large-scale distributed applications
Stars: ✭ 221 (+402.27%)
Mutual labels:  zookeeper
ImagingKit
Java library for imaging tasks that integrates well with the java.awt.image environment
Stars: ✭ 16 (-63.64%)
Mutual labels:  maven
Corendon-LostLuggage
Java application for automating the process of retrieving lost luggage for the Dutch airline company Corendon.
Stars: ✭ 27 (-38.64%)
Mutual labels:  maven
maven-learning-notes
For more notes, see notes-and-code-about-learning
Stars: ✭ 58 (+31.82%)
Mutual labels:  maven

spring-cloud-stream-kafka-elasticsearch

The goal of this project is to implement a "News" processing pipeline composed of five Spring Boot applications: producer-api, categorizer-service, collector-service, publisher-api and news-client.

Technologies used

  • Spring Cloud Stream to build highly scalable event-driven applications connected with shared messaging systems (a minimal sketch of its functional model is shown after this list);

  • Spring Cloud Schema Registry, which supports schema evolution so that data can evolve over time; it also lets you store schema information in a textual format (typically JSON) and makes that information accessible to the applications that need it to send and receive data in binary format;

  • Spring Data Elasticsearch to persist data in Elasticsearch;

  • Spring Cloud OpenFeign to write web service clients easily;

  • Thymeleaf as the HTML template engine;

  • Zipkin to visualize traces between and within applications;

  • Eureka for service registration and discovery.
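
As a rough illustration of the Spring Cloud Stream functional model mentioned above, the sketch below shows how a processor in the style of categorizer-service could be wired. It is a minimal, hypothetical example: the class, bean and event names are assumptions for illustration only, not code taken from this repository.

    // Minimal, hypothetical Spring Cloud Stream processor; NewsEvent and the
    // "categorize" bean name are illustrative assumptions.
    import java.util.function.Function;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.annotation.Bean;

    @SpringBootApplication
    public class CategorizerSketchApplication {

        // Simplified stand-in for the shared event type provided by commons-news
        public record NewsEvent(String id, String title, String text) {}

        public static void main(String[] args) {
            SpringApplication.run(CategorizerSketchApplication.class, args);
        }

        // Bound to concrete Kafka topics via configuration, e.g.:
        //   spring.cloud.stream.bindings.categorize-in-0.destination=producer.news
        //   spring.cloud.stream.bindings.categorize-out-0.destination=categorizer.news
        @Bean
        public Function<NewsEvent, NewsEvent> categorize() {
            return news -> news; // the categorization logic would go here
        }
    }

The binding properties shown in the comments are the usual way Spring Cloud Stream maps a functional bean's input and output to Kafka topics.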

Note
In docker-swarm-environment repository, it is shown how to deploy this project into a cluster of Docker Engines in swarm mode.

Project Architecture

project diagram

Applications

  • producer-api

    Spring Boot Web Java application that creates news and pushes news events to the producer.news topic in Kafka.

  • categorizer-service

    Spring Boot Web Java application that listens for news events on the producer.news topic in Kafka, categorizes them, and pushes them to the categorizer.news topic.

  • collector-service

    Spring Boot Web Java application that listens for news events on the categorizer.news topic in Kafka, saves them in Elasticsearch, and pushes them to the collector.news topic.

  • publisher-api

    Spring Boot Web Java application that reads directly from Elasticsearch and exposes a REST API. It does not consume from Kafka.

  • news-client

    Spring Boot Web Java application that provides a user interface for viewing the news. It implements a WebSocket that consumes news events from the collector.news topic, so the news on the main page is updated on the fly. In addition, news-client calls publisher-api directly whenever it needs to search for a specific news item or refresh the news (a hedged sketch of such a client is shown after this section).

    The WebSocket operation is shown in the short GIF below: a news item is created in producer-api and is immediately displayed in news-client.

    websocket operation
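
As referenced in the news-client description above, news-client calls publisher-api over plain REST. The sketch below shows one way such a call could be declared with Spring Cloud OpenFeign; the interface name, endpoint path and return type are assumptions for illustration, not the project's actual API.

    // Hypothetical OpenFeign client that resolves publisher-api through Eureka;
    // the "/api/news" path and the String payload are illustrative assumptions.
    import java.util.List;

    import org.springframework.cloud.openfeign.FeignClient;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestParam;

    @FeignClient(name = "publisher-api")
    public interface PublisherApiClient {

        // "publisher-api" is looked up in Eureka, so no host/port needs to be hard-coded
        @GetMapping("/api/news")
        List<String> getNews(@RequestParam(value = "query", required = false) String query);
    }

Such a client also requires @EnableFeignClients on a configuration class so that Spring Cloud scans and instantiates it.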

Generate NewsEvent

  • In a terminal, make sure you are in the spring-cloud-stream-kafka-elasticsearch root folder

  • Run the following command to generate NewsEvent

    ./mvnw clean install --projects commons-news

    It will install commons-news-1.0.0.jar in your local Maven repository, so that it is visible to all the services.

Start Environment

  • Open a terminal and, inside the spring-cloud-stream-kafka-elasticsearch root folder, run

    docker-compose up -d
  • Wait for the Docker containers to be up and running. To check, run

    docker-compose ps

Running Applications with Maven

Inside the spring-cloud-stream-kafka-elasticsearch root folder, run the following Maven commands in different terminals

  • eureka-server

    ./mvnw clean spring-boot:run --projects eureka-server
  • producer-api

    ./mvnw clean spring-boot:run --projects producer-api -Dspring-boot.run.jvmArguments="-Dserver.port=9080"
  • categorizer-service

    ./mvnw clean spring-boot:run --projects categorizer-service -Dspring-boot.run.jvmArguments="-Dserver.port=9081"
  • collector-service

    ./mvnw clean spring-boot:run --projects collector-service -Dspring-boot.run.jvmArguments="-Dserver.port=9082"
  • publisher-api

    ./mvnw clean spring-boot:run --projects publisher-api -Dspring-boot.run.jvmArguments="-Dserver.port=9083"
  • news-client

    ./mvnw clean spring-boot:run --projects news-client

Running Applications as Docker containers

Build Application’s Docker Image

  • In a terminal, make sure you are in the spring-cloud-stream-kafka-elasticsearch root folder

  • To build the applications' Docker images, run the following script

    ./docker-build.sh

Application’s Environment Variables

  • producer-api

    Environment Variable      Description
    KAFKA_HOST                Specify host of the Kafka message broker to use (default localhost)
    KAFKA_PORT                Specify port of the Kafka message broker to use (default 29092)
    SCHEMA_REGISTRY_HOST      Specify host of the Schema Registry to use (default localhost)
    SCHEMA_REGISTRY_PORT      Specify port of the Schema Registry to use (default 8081)
    EUREKA_HOST               Specify host of the Eureka service discovery to use (default localhost)
    EUREKA_PORT               Specify port of the Eureka service discovery to use (default 8761)
    ZIPKIN_HOST               Specify host of the Zipkin distributed tracing system to use (default localhost)
    ZIPKIN_PORT               Specify port of the Zipkin distributed tracing system to use (default 9411)

  • categorizer-service

    Environment Variable      Description
    KAFKA_HOST                Specify host of the Kafka message broker to use (default localhost)
    KAFKA_PORT                Specify port of the Kafka message broker to use (default 29092)
    SCHEMA_REGISTRY_HOST      Specify host of the Schema Registry to use (default localhost)
    SCHEMA_REGISTRY_PORT      Specify port of the Schema Registry to use (default 8081)
    EUREKA_HOST               Specify host of the Eureka service discovery to use (default localhost)
    EUREKA_PORT               Specify port of the Eureka service discovery to use (default 8761)
    ZIPKIN_HOST               Specify host of the Zipkin distributed tracing system to use (default localhost)
    ZIPKIN_PORT               Specify port of the Zipkin distributed tracing system to use (default 9411)

  • collector-service

    Environment Variable      Description
    ELASTICSEARCH_HOST        Specify host of the Elasticsearch search engine to use (default localhost)
    ELASTICSEARCH_NODES_PORT  Specify nodes port of the Elasticsearch search engine to use (default 9300)
    ELASTICSEARCH_REST_PORT   Specify rest port of the Elasticsearch search engine to use (default 9200)
    KAFKA_HOST                Specify host of the Kafka message broker to use (default localhost)
    KAFKA_PORT                Specify port of the Kafka message broker to use (default 29092)
    SCHEMA_REGISTRY_HOST      Specify host of the Schema Registry to use (default localhost)
    SCHEMA_REGISTRY_PORT      Specify port of the Schema Registry to use (default 8081)
    EUREKA_HOST               Specify host of the Eureka service discovery to use (default localhost)
    EUREKA_PORT               Specify port of the Eureka service discovery to use (default 8761)
    ZIPKIN_HOST               Specify host of the Zipkin distributed tracing system to use (default localhost)
    ZIPKIN_PORT               Specify port of the Zipkin distributed tracing system to use (default 9411)

  • publisher-api

    Environment Variable      Description
    ELASTICSEARCH_HOST        Specify host of the Elasticsearch search engine to use (default localhost)
    ELASTICSEARCH_NODES_PORT  Specify nodes port of the Elasticsearch search engine to use (default 9300)
    ELASTICSEARCH_REST_PORT   Specify rest port of the Elasticsearch search engine to use (default 9200)
    EUREKA_HOST               Specify host of the Eureka service discovery to use (default localhost)
    EUREKA_PORT               Specify port of the Eureka service discovery to use (default 8761)
    ZIPKIN_HOST               Specify host of the Zipkin distributed tracing system to use (default localhost)
    ZIPKIN_PORT               Specify port of the Zipkin distributed tracing system to use (default 9411)

  • news-client

    Environment Variable      Description
    KAFKA_HOST                Specify host of the Kafka message broker to use (default localhost)
    KAFKA_PORT                Specify port of the Kafka message broker to use (default 29092)
    SCHEMA_REGISTRY_HOST      Specify host of the Schema Registry to use (default localhost)
    SCHEMA_REGISTRY_PORT      Specify port of the Schema Registry to use (default 8081)
    EUREKA_HOST               Specify host of the Eureka service discovery to use (default localhost)
    EUREKA_PORT               Specify port of the Eureka service discovery to use (default 8761)
    ZIPKIN_HOST               Specify host of the Zipkin distributed tracing system to use (default localhost)
    ZIPKIN_PORT               Specify port of the Zipkin distributed tracing system to use (default 9411)
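
Environment variables like the ones above typically reach a Spring Boot application through its property resolution, which falls back to a default when the variable is not set. The snippet below is a hedged sketch of that pattern; the class and field names are illustrative assumptions, not code from this repository.

    // Hypothetical illustration of how KAFKA_HOST/KAFKA_PORT are usually resolved:
    // the environment variable wins when set, otherwise the default applies.
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.stereotype.Component;

    @Component
    public class KafkaBrokerProperties {

        @Value("${KAFKA_HOST:localhost}")
        private String kafkaHost;

        @Value("${KAFKA_PORT:29092}")
        private int kafkaPort;

        // e.g. "localhost:29092" when no environment variables are set
        public String bootstrapServers() {
            return kafkaHost + ":" + kafkaPort;
        }
    }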

Run Application’s Docker Container

  • In a terminal, make sure you are inside the spring-cloud-stream-kafka-elasticsearch root folder

  • Run the following script

    ./start-apps.sh

Applications URLs

  • Eureka

    Eureka can be accessed at http://localhost:8761

    eureka with apps
  • Zipkin

    Zipkin can be accessed at http://localhost:9411

    The figure below shows an example of the complete flow a news item passes through, from producer-api, where the news is created, to news-client.

    zipkin sample
  • Kafka Topics UI

    Kafka Topics UI can be accessed at http://localhost:8085

  • Kafka Manager

    Kafka Manager can be accessed at http://localhost:9000

    The figure below shows the Kafka topic consumers. As we can see, the consumers are up to date, since the lag is 0.

    kafka manager consumers

    Configuration

    • First, you must create a new cluster. Click on Cluster (dropdown button in the header) and then on Add Cluster

    • Type the name of your cluster in Cluster Name field, for example: MyCluster

    • Type zookeeper:2181 in Cluster Zookeeper Hosts field

    • Enable checkbox Poll consumer information (Not recommended for large # of consumers if ZK is used for offsets tracking on older Kafka versions)

    • Click on Save button at the bottom of the page.

  • Schema Registry UI

    Schema Registry UI can be accessed at http://localhost:8001

    schema registry
  • Elasticsearch REST API

    Check ES is up and running

    curl localhost:9200

    Check indexes in ES

    curl "localhost:9200/_cat/indices?v"

    Check news index mapping

    curl "localhost:9200/news/_mapping?pretty"

    Simple search

    curl "localhost:9200/news/_search?pretty"
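
For comparison with the curl calls above, the sketch below shows how a service could map and query the same news index with Spring Data Elasticsearch; the document fields and the derived query method are assumptions for illustration, not the project's actual model.

    // Hypothetical Spring Data Elasticsearch mapping and repository for the "news"
    // index; the field names and the derived query are illustrative assumptions.
    import java.util.List;

    import org.springframework.data.annotation.Id;
    import org.springframework.data.elasticsearch.annotations.Document;
    import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

    @Document(indexName = "news")
    class News {
        @Id
        private String id;
        private String title;
        private String text;
        // getters and setters omitted for brevity
    }

    interface NewsRepository extends ElasticsearchRepository<News, String> {
        // Derived query: documents whose title contains the given term
        List<News> findByTitleContaining(String title);
    }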

Shutdown

  • To stop the applications

    • If they were started with Maven, go to the terminals where they are running and press Ctrl+C

    • If they were started as Docker containers, go to a terminal and, inside the spring-cloud-stream-kafka-elasticsearch root folder, run the script below

      ./stop-apps.sh
  • To stop and remove the docker-compose containers, network and volumes, go to a terminal and, inside the spring-cloud-stream-kafka-elasticsearch root folder, run the following command

    docker-compose down -v

Cleanup

To remove the Docker images created by this project, go to a terminal and, inside the spring-cloud-stream-kafka-elasticsearch root folder, run the script below

./remove-docker-images.sh