openmrs-fhir-analytics - A collection of tools for extracting FHIR resources and analytics services on top of that data.
Stars: ✭ 55 (+19.57%)
Kartothek - A consistent table management library in Python
Stars: ✭ 144 (+213.04%)
parquet-usql - A custom extractor designed to read Parquet for Azure Data Lake Analytics
Stars: ✭ 13 (-71.74%)
parquet-extra - A collection of Apache Parquet add-on modules
Stars: ✭ 30 (-34.78%)
Schemer - Schema registry for CSV, TSV, JSON, Avro, and Parquet schemas. Supports schema inference and a GraphQL API.
Stars: ✭ 97 (+110.87%)
meepo - Heterogeneous storage data migration
Stars: ✭ 29 (-36.96%)
Bigdata Playground - A complete example of a big data application using Kubernetes (kops/AWS), Apache Spark SQL/Streaming/MLlib, Apache Flink, Scala, Python, Apache Kafka, Apache HBase, Apache Parquet, Apache Avro, Apache Storm, the Twitter API, MongoDB, Node.js, Angular, and GraphQL
Stars: ✭ 177 (+284.78%)
Pystore - Fast data store for Pandas time-series data
Stars: ✭ 325 (+606.52%)
Amazon S3 Find And Forget - A solution for handling data erasure requests from data lakes stored on Amazon S3, for example pursuant to the European General Data Protection Regulation (GDPR)
Stars: ✭ 115 (+150%)
wasp - WASP is a framework for building complex real-time big data applications. It relies on a Kappa/Lambda-style architecture mainly leveraging Kafka and Spark. If you need to ingest huge amounts of heterogeneous data and analyze them through complex pipelines, this is the framework for you.
Stars: ✭ 19 (-58.7%)
Petastorm - A library that enables single-machine or distributed training and evaluation of deep learning models from datasets in Apache Parquet format. It supports ML frameworks such as TensorFlow, PyTorch, and PySpark and can be used from pure Python code.
Stars: ✭ 1,108 (+2308.7%)
HybridBackend - Efficient training of deep recommenders in the cloud.
Stars: ✭ 30 (-34.78%)
qsv - CSVs sliced, diced & analyzed.
Stars: ✭ 438 (+852.17%)
Oap - Optimized Analytics Package for Spark* Platform
Stars: ✭ 343 (+645.65%)
Awkward 0.x - Manipulate arrays of complex data structures as easily as NumPy.
Stars: ✭ 216 (+369.57%)
graphique - GraphQL service for Arrow tables and Parquet data sets.
Stars: ✭ 28 (-39.13%)
Parquetviewer - A simple Windows desktop application for viewing & querying Apache Parquet files
Stars: ✭ 145 (+215.22%)
Devops Python Tools - 80+ DevOps & data CLI tools: AWS, GCP, GCF Python Cloud Function, Log Anonymizer, Spark, Hadoop, HBase, Hive, Impala, Linux, Docker, Spark data converters & validators (Avro/Parquet/JSON/CSV/INI/XML/YAML), Travis CI, AWS CloudFormation, Elasticsearch, Solr, etc.
Stars: ✭ 406 (+782.61%)
Gaffer - A large-scale entity and relation database supporting aggregation of properties
Stars: ✭ 1,642 (+3469.57%)
DaFlow - An Apache Spark based data flow (ETL) framework that supports multiple read and write destinations of different types, as well as multiple categories of transformation rules.
Stars: ✭ 24 (-47.83%)
Parquet Index - Spark SQL index for Parquet tables
Stars: ✭ 109 (+136.96%)
Ratatool - A tool for data sampling, data generation, and data diffing
Stars: ✭ 279 (+506.52%)
Bigdata File Viewer - A cross-platform (Windows, macOS, Linux) desktop application for viewing common big data binary formats like Parquet, ORC, Avro, etc. Supports local file systems, HDFS, AWS S3, Azure Blob Storage, etc.
Stars: ✭ 86 (+86.96%)
odbc2parquet - A command line tool to query an ODBC data source and write the result into a parquet file.
Stars: ✭ 95 (+106.52%)
columnify - Convert record-oriented data to columnar formats.
Stars: ✭ 28 (-39.13%)
Rumble - ⛈️ Rumble 1.11.0 "Banyan Tree" 🌳 for Apache Spark | Run queries on your large-scale, messy JSON-like data (JSON, text, CSV, Parquet, ROOT, AVRO, SVM...) | No install required (just a jar to download) | Declarative Machine Learning and more
Stars: ✭ 58 (+26.09%)
dbd - A database prototyping tool that enables data analysts and engineers to quickly load and transform data in SQL databases.
Stars: ✭ 30 (-34.78%)
albis - High-performance file format for big data systems
Stars: ✭ 20 (-56.52%)
Choetl - ETL framework for .NET / C# (parser/writer for CSV, flat, XML, JSON, key-value, Parquet, YAML, and Avro formatted files)
Stars: ✭ 372 (+708.7%)
centurion - Kotlin Bigdata Toolkit
Stars: ✭ 320 (+595.65%)
miniparquet - Library to read a subset of Parquet files
Stars: ✭ 38 (-17.39%)
Vscode Data Preview - Data Preview 🈸 extension for importing 📤, viewing 🔎, slicing 🔪, dicing 🎲, charting 📊 & exporting 📥 large JSON array/config, YAML, Apache Arrow, Avro, Parquet & Excel data files
Stars: ✭ 245 (+432.61%)
experiments - Code examples for my blog posts
Stars: ✭ 21 (-54.35%)
Parquetjs - A fully asynchronous, pure JavaScript implementation of the Parquet file format
Stars: ✭ 200 (+334.78%)
parquet2 - Fastest and safest Rust implementation of parquet. `unsafe` free. Integration-tested against pyarrow
Stars: ✭ 157 (+241.3%)
Parquet Rs - Apache Parquet implementation in Rust
Stars: ✭ 144 (+213.04%)
Pucket - Bucketing and partitioning system for Parquet
Stars: ✭ 29 (-36.96%)
Eel Sdk - Big Data Toolkit for the JVM
Stars: ✭ 140 (+204.35%)
Spark - Apache Spark is a fast, in-memory data processing engine with elegant and expressive development APIs that let data workers efficiently execute streaming, machine learning, or SQL workloads requiring fast iterative access to datasets. This project contains sample Spark programs written in Scala.
Stars: ✭ 55 (+19.57%)
Parquet4s - Read and write Parquet in Scala. Use Scala classes as the schema. No need to start a cluster.
Stars: ✭ 125 (+171.74%)
Elasticsearch loader - A tool for batch loading data files (JSON, Parquet, CSV, TSV) into Elasticsearch
Stars: ✭ 300 (+552.17%)
Parquet Go - Go package to read and write Parquet files. Parquet is a file format for storing nested data structures in a flat columnar layout; it can be used in the Hadoop ecosystem and with tools such as Presto and AWS Athena.
Stars: ✭ 114 (+147.83%)
Parquet.jl - Julia implementation of a Parquet columnar file format reader
Stars: ✭ 93 (+102.17%)
Kglab - Graph-based data science: an abstraction layer in Python for building knowledge graphs, integrated with popular graph libraries atop Pandas, RDFlib, pySHACL, RAPIDS, NetworkX, iGraph, PyVis, pslpython, pyarrow, etc.
Stars: ✭ 98 (+113.04%)
Iceberg - A table format for large, slow-moving tabular data
Stars: ✭ 393 (+754.35%)
Parquet Mr - Apache Parquet
Stars: ✭ 1,278 (+2678.26%)
hadoop-etl-udfs - Hadoop ETL UDFs, the main way to load data from Hadoop into EXASOL
Stars: ✭ 17 (-63.04%)
IMCtermite - Enables extraction of measurement data from binary files with the 'raw' extension used by the proprietary imcFAMOS/imcSTUDIO software, and facilitates its storage in open source file formats
Stars: ✭ 20 (-56.52%)
Quilt - A self-organizing data hub for S3
Stars: ✭ 1,007 (+2089.13%)
Skale - High performance distributed data processing engine
Stars: ✭ 390 (+747.83%)
Roapi - Create full-fledged APIs for static datasets without writing a single line of code.
Stars: ✭ 253 (+450%)