centurion: Kotlin Bigdata Toolkit
Stars: ✭ 320 (+60%)
Parquet.jl: Julia implementation of a reader for the Parquet columnar file format
Stars: ✭ 93 (-53.5%)
Pucket: Bucketing and partitioning system for Parquet
Stars: ✭ 29 (-85.5%)
albis: High-performance file format for big data systems
Stars: ✭ 20 (-90%)
Rumble: ⛈️ Rumble 1.11.0 "Banyan Tree" 🌳 for Apache Spark | Run queries on your large-scale, messy JSON-like data (JSON, text, CSV, Parquet, ROOT, AVRO, SVM...) | No install required (just a jar to download) | Declarative machine learning and more
Stars: ✭ 58 (-71%)
parquet2: The fastest and safest Rust implementation of Parquet; `unsafe`-free and integration-tested against pyarrow
Stars: ✭ 157 (-21.5%)
Parquet Go: Go package to read and write Parquet files. Parquet is a file format that stores nested data structures in a flat columnar layout. It can be used in the Hadoop ecosystem and with tools such as Presto and AWS Athena.
Stars: ✭ 114 (-43%)
IMCtermite: Extracts measurement data from binary files with the 'raw' extension used by the proprietary imcFAMOS/imcSTUDIO software, and facilitates its storage in open-source file formats
Stars: ✭ 20 (-90%)
Iceberg: A table format for large, slow-moving tabular data
Stars: ✭ 393 (+96.5%)
Elasticsearch loader: A tool for batch loading data files (JSON, Parquet, CSV, TSV) into Elasticsearch
Stars: ✭ 300 (+50%)
miniparquet: Library to read a subset of Parquet files
Stars: ✭ 38 (-81%)
dbd: A database prototyping tool that enables data analysts and engineers to quickly load and transform data in SQL databases
Stars: ✭ 30 (-85%)
Drill: Apache Drill is a distributed MPP query layer for self-describing data
Stars: ✭ 1,619 (+709.5%)
experiments: Code examples for my blog posts
Stars: ✭ 21 (-89.5%)
Node Parquet: Node.js module to access Apache Parquet format files
Stars: ✭ 46 (-77%)
Spark: Apache Spark is a fast, in-memory data processing engine with elegant and expressive development APIs that let data workers efficiently execute streaming, machine learning, or SQL workloads requiring fast iterative access to datasets. This project contains sample Spark programs written in Scala.
Stars: ✭ 55 (-72.5%)
Kartothek: A consistent table management library in Python
Stars: ✭ 144 (-28%)
hadoop-etl-udfs: The Hadoop ETL UDFs are the main way to load data from Hadoop into EXASOL
Stars: ✭ 17 (-91.5%)
columnify: Converts record-oriented data to columnar format
Stars: ✭ 28 (-86%)
Kglab: Graph-based data science: an abstraction layer in Python for building knowledge graphs, integrated with popular graph libraries, atop Pandas, RDFlib, pySHACL, RAPIDS, NetworkX, iGraph, PyVis, pslpython, pyarrow, etc.
Stars: ✭ 98 (-51%)
ChoETL: ETL framework for .NET / C# (parser/writer for CSV, flat, XML, JSON, key-value, Parquet, YAML, and Avro formatted files)
Stars: ✭ 372 (+86%)
PyStore: Fast data store for Pandas time-series data
Stars: ✭ 325 (+62.5%)
openmrs-fhir-analytics: A collection of tools for extracting FHIR resources and analytics services on top of that data
Stars: ✭ 55 (-72.5%)
Bigdata File Viewer: A cross-platform (Windows, macOS, Linux) desktop application to view common big data binary formats like Parquet, ORC, and Avro. Supports local file systems, HDFS, AWS S3, Azure Blob Storage, etc.
Stars: ✭ 86 (-57%)
Ratatool: A tool for data sampling, data generation, and data diffing
Stars: ✭ 279 (+39.5%)
Parquet4s: Read and write Parquet in Scala. Use Scala classes as the schema. No need to start a cluster.
Stars: ✭ 125 (-37.5%)
Roapi: Create full-fledged APIs for static datasets without writing a single line of code
Stars: ✭ 253 (+26.5%)
Petastorm: Enables single-machine or distributed training and evaluation of deep learning models from datasets in Apache Parquet format. It supports ML frameworks such as TensorFlow, PyTorch, and PySpark, and can be used from pure Python code.
Stars: ✭ 1,108 (+454%)
HybridBackend: Efficient training of deep recommenders in the cloud
Stars: ✭ 30 (-85%)
Parquet Rs: Apache Parquet implementation in Rust
Stars: ✭ 144 (-28%)
meepo: Heterogeneous storage data migration
Stars: ✭ 29 (-85.5%)
Gcs Tools: GCS support for avro-tools, parquet-tools, and protobuf
Stars: ✭ 57 (-71.5%)
graphique: GraphQL service for Arrow tables and Parquet data sets
Stars: ✭ 28 (-86%)
Amazon S3 Find and Forget: A solution for handling data erasure requests against data lakes stored on Amazon S3, for example pursuant to the European General Data Protection Regulation (GDPR)
Stars: ✭ 115 (-42.5%)
parquet-usql: A custom extractor designed to read Parquet for Azure Data Lake Analytics
Stars: ✭ 13 (-93.5%)
Quilt: A self-organizing data hub for S3
Stars: ✭ 1,007 (+403.5%)
DaFlow: An Apache Spark-based data flow (ETL) framework that supports multiple read and write destinations of different types, as well as multiple categories of transformation rules
Stars: ✭ 24 (-88%)
wasp: WASP is a framework for building complex real-time big data applications. It relies on a Kappa/Lambda-style architecture, mainly leveraging Kafka and Spark. If you need to ingest huge amounts of heterogeneous data and analyze them through complex pipelines, this is the framework for you.
Stars: ✭ 19 (-90.5%)
odbc2parquet: A command-line tool to query an ODBC data source and write the result to a Parquet file
Stars: ✭ 95 (-52.5%)
Parquet Index: Spark SQL index for Parquet tables
Stars: ✭ 109 (-45.5%)
Devops Python Tools: 80+ DevOps & Data CLI tools - AWS, GCP, GCF Python Cloud Function, Log Anonymizer, Spark, Hadoop, HBase, Hive, Impala, Linux, Docker, Spark data converters & validators (Avro/Parquet/JSON/CSV/INI/XML/YAML), Travis CI, AWS CloudFormation, Elasticsearch, Solr, etc.
Stars: ✭ 406 (+103%)
Eel SDK: Big data toolkit for the JVM
Stars: ✭ 140 (-30%)
parquet-extra: A collection of Apache Parquet add-on modules
Stars: ✭ 30 (-85%)
Skale: High-performance distributed data processing engine
Stars: ✭ 390 (+95%)
qsv: CSVs sliced, diced & analyzed
Stars: ✭ 438 (+119%)
Schemer: Schema registry for CSV, TSV, JSON, Avro, and Parquet schemas. Supports schema inference and a GraphQL API.
Stars: ✭ 97 (-51.5%)
Oap: Optimized Analytics Package for the Spark* platform
Stars: ✭ 343 (+71.5%)
Bigdata Playground: A complete example of a big data application using Kubernetes (kops/AWS), Apache Spark SQL/Streaming/MLlib, Apache Flink, Scala, Python, Apache Kafka, Apache HBase, Apache Parquet, Apache Avro, Apache Storm, the Twitter API, MongoDB, Node.js, Angular, and GraphQL
Stars: ✭ 177 (-11.5%)
ParquetViewer: Simple Windows desktop application for viewing and querying Apache Parquet files
Stars: ✭ 145 (-27.5%)
Gaffer: A large-scale entity and relation database supporting aggregation of properties
Stars: ✭ 1,642 (+721%)