
Sylph

Welcome to Sylph!

Sylph is a streaming job manager: a stream computing platform for big data.

Sylph uses SQL queries to describe computations and binds multiple sources (inputs) and sinks (outputs), so streaming applications can be developed and deployed visually. Its Web IDE makes it easy to develop, deploy, and monitor streaming applications, and to analyze their behavior at any time. Sylph provides rich source/sink support, flexible extension points, and visual lifecycle management for streaming applications.

At its core, Sylph builds distributed applications from workflow descriptions. It supports:

  • Spark-Streaming (Spark1.x)
  • Structured-Streaming (Spark2.x)
  • Flink Streaming

License

Copyright (C) 2018 The Sylph Authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

StreamingSql

create function get_json_object as 'ideal.sylph.runner.flink.udf.UDFJson';

create source table topic1(
    _topic varchar,
    _key varchar,
    _partition integer,
    _offset bigint,
    _message varchar
) with (
    type = 'kafka08',
    kafka_topic = 'event_topic',
    auto.offset.reset = 'latest',
    kafka_broker = 'localhost:9092',
    kafka_group_id = 'test1',
    zookeeper.connect = 'localhost:2181'
);

-- define where the stream is written (the sink)
create sink table event_log(
    key varchar,
    user_id varchar,
    offset bigint
) with (
    type = 'kudu',
    kudu.hosts = 'localhost:7051',
    kudu.tableName = 'impala::test_kudu.log_events',
    kudu.mode = 'INSERT',
    batchSize = 5000
);

insert into event_log
select _key, get_json_object(_message, 'user_id') as user_id, _offset
from topic1;

UDF UDAF UDTF

Custom functions are registered with the same syntax as Hive:

create function get_json_object as 'ideal.sylph.runner.flink.udf.UDFJson';
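The class behind such a registration is ordinary Java. Below is a minimal, self-contained sketch of what a `get_json_object`-style helper could look like; the class name and the naive string-scanning logic are assumptions for illustration, not the actual `ideal.sylph.runner.flink.udf.UDFJson` source. It handles only flat JSON objects, and a production UDF would extend the engine's `ScalarFunction` base class and use a real JSON parser.

```java
// Illustrative sketch only: extracts a top-level field from a flat JSON
// object by string scanning. A real UDF would use a JSON parser and extend
// the streaming engine's ScalarFunction base class.
public class JsonFieldSketch {

    /** Returns the value of a top-level key in a flat JSON object, or null. */
    public static String getJsonObject(String json, String key) {
        if (json == null || key == null) {
            return null;
        }
        String needle = "\"" + key + "\"";
        int k = json.indexOf(needle);
        if (k < 0) {
            return null;
        }
        int colon = json.indexOf(':', k + needle.length());
        if (colon < 0) {
            return null;
        }
        int i = colon + 1;
        while (i < json.length() && Character.isWhitespace(json.charAt(i))) {
            i++;
        }
        if (i < json.length() && json.charAt(i) == '"') {  // quoted string value
            int end = json.indexOf('"', i + 1);
            return end < 0 ? null : json.substring(i + 1, end);
        }
        int end = i;                                       // bare value (number, bool)
        while (end < json.length() && ",}".indexOf(json.charAt(end)) < 0) {
            end++;
        }
        String raw = json.substring(i, end).trim();
        return raw.isEmpty() ? null : raw;
    }

    public static void main(String[] args) {
        String msg = "{\"user_id\": \"u-123\", \"count\": 7}";
        System.out.println(getJsonObject(msg, "user_id")); // u-123
        System.out.println(getJsonObject(msg, "count"));   // 7
    }
}
```

In the StreamingSql example above, the engine would call this method once per record, passing the `_message` column and the literal key `'user_id'`.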

StreamETL

Supported engines: flink-stream, spark-streaming, and spark-structured-streaming (Spark 2.2.x).


Building

Sylph builds with Gradle and requires Java 8.
A Chinese deployment guide (中文部署文档) is also available.

# Build and install distributions
./gradlew clean assemble dist

Running Sylph in your IDE

After building Sylph for the first time, you can load the project into your IDE and run the server. We recommend using IntelliJ IDEA.

After opening the project in IntelliJ, double check that the Java SDK is properly configured for the project:

  • Open the File menu and select Project Structure
  • In the SDKs section, ensure that a 1.8 JDK is selected (create one if none exist)
  • In the Project section, ensure the Project language level is set to 8.0 as Sylph makes use of several Java 8 language features
  • Ensure the following environment variables point at valid installations: HADOOP_HOME (2.6.x+), SPARK_HOME (2.4.x+), FLINK_HOME (1.7.x+)

Sylph comes with sample configuration that should work out-of-the-box for development. Use the following options to create a run configuration:

  • Main Class: ideal.sylph.main.SylphMaster
  • VM Options: -Dconfig=etc/sylph/sylph.properties -Dlogging.config=etc/sylph/logback.xml
  • ENV Options: FLINK_HOME= HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  • Working directory: sylph-dist/build
  • Use classpath of module: sylph-main

Useful mailing lists

  1. [email protected] - For discussions about code, design and features
  2. [email protected] - For discussions about code, design and features
  3. [email protected] - For discussions about code, design and features
