Vscode Data Preview: Extension for importing, viewing, slicing, dicing, charting & exporting large JSON array/config, YAML, Apache Arrow, Avro, Parquet & Excel data files
Awkward 0.x: Manipulate arrays of complex data structures as easily as NumPy.
Parquetjs: Fully asynchronous, pure JavaScript implementation of the Parquet file format
Bigdata Playground: A complete example of a big data application using Kubernetes (kops/AWS), Apache Spark SQL/Streaming/MLlib, Apache Flink, Scala, Python, Apache Kafka, Apache HBase, Apache Parquet, Apache Avro, Apache Storm, Twitter API, MongoDB, NodeJS, Angular, and GraphQL
Parquetviewer: Simple Windows desktop application for viewing & querying Apache Parquet files
Kartothek: A consistent table management library in Python
Eel Sdk: Big Data Toolkit for the JVM
Gaffer: A large-scale entity and relation database supporting aggregation of properties
Parquet4s: Read and write Parquet in Scala. Use Scala classes as schema. No need to start a cluster.
Amazon S3 Find And Forget: A solution to handle data erasure requests from data lakes stored on Amazon S3, for example pursuant to the European General Data Protection Regulation (GDPR)
Parquet Go: Go package to read and write Parquet files. Parquet is a file format that stores nested data structures in a flat columnar layout; it can be used in the Hadoop ecosystem and with tools such as Presto and AWS Athena.
Kglab: Graph-based data science: an abstraction layer in Python for building knowledge graphs, integrated with popular graph libraries, atop Pandas, RDFlib, pySHACL, RAPIDS, NetworkX, iGraph, PyVis, pslpython, pyarrow, etc.
Schemer: Schema registry for CSV, TSV, JSON, AVRO, and Parquet schemas. Supports schema inference and a GraphQL API.
Bigdata File Viewer: A cross-platform (Windows, Mac, Linux) desktop application to view common big data binary formats like Parquet, ORC, AVRO, etc. Supports local file systems, HDFS, AWS S3, Azure Blob Storage, etc.
Petastorm: Enables single-machine or distributed training and evaluation of deep learning models from datasets in Apache Parquet format. It supports ML frameworks such as TensorFlow, PyTorch, and PySpark and can be used from pure Python code.
Rumble: Rumble 1.11.0 "Banyan Tree" for Apache Spark | Run queries on your large-scale, messy JSON-like data (JSON, text, CSV, Parquet, ROOT, AVRO, SVM...) | No install required (just a jar to download) | Declarative machine learning and more
Gcs Tools: GCS support for avro-tools, parquet-tools, and protobuf
Node Parquet: NodeJS module to access Apache Parquet format files
Quilt: A self-organizing data hub for S3
Pucket: Bucketing and partitioning system for Parquet
Devops Python Tools: 80+ DevOps & Data CLI Tools - AWS, GCP, GCF Python Cloud Function, Log Anonymizer, Spark, Hadoop, HBase, Hive, Impala, Linux, Docker, Spark Data Converters & Validators (Avro/Parquet/JSON/CSV/INI/XML/YAML), Travis CI, AWS CloudFormation, Elasticsearch, Solr, etc.
Iceberg: A table format for large, slow-moving tabular data
Skale: High-performance distributed data processing engine
Choetl: ETL framework for .NET / C# (parser/writer for CSV, flat, XML, JSON, key-value, Parquet, YAML, and Avro formatted files)
Oap: Optimized Analytics Package for Spark* Platform
Pystore: Fast data store for Pandas time-series data
Elasticsearch loader: A tool for batch loading data files (JSON, Parquet, CSV, TSV) into Elasticsearch
Ratatool: A tool for data sampling, data generation, and data diffing
Roapi: Create full-fledged APIs for static datasets without writing a single line of code.
Drill: A distributed MPP query layer for self-describing data
dbd: A database prototyping tool that enables data analysts and engineers to quickly load and transform data in SQL databases.
meepo: Data migration across heterogeneous storage systems
graphique: GraphQL service for Arrow tables and Parquet data sets.
parquet2: Fastest and safest Rust implementation of Parquet. `unsafe` free. Integration-tested against pyarrow.
parquet-usql: A custom extractor designed to read Parquet for Azure Data Lake Analytics
Spark: Apache Spark is a fast, in-memory data processing engine with elegant and expressive development APIs that let data workers efficiently execute streaming, machine learning, or SQL workloads requiring fast iterative access to datasets. This project contains sample programs for Spark in Scala.
DaFlow: Apache Spark-based data flow (ETL) framework that supports multiple read and write destinations of different types, as well as multiple categories of transformation rules.
Parquet.jl: Julia implementation of a Parquet columnar file format reader
wasp: A framework for building complex real-time big data applications. It relies on a Kappa/Lambda-style architecture mainly leveraging Kafka and Spark. If you need to ingest huge amounts of heterogeneous data and analyze them through complex pipelines, this is the framework for you.
hadoop-etl-udfs: The main way to load data from Hadoop into EXASOL
odbc2parquet: A command-line tool to query an ODBC data source and write the result into a Parquet file.
IMCtermite: Enables extraction of measurement data from binary files with the extension 'raw' used by the proprietary software imcFAMOS/imcSTUDIO, and facilitates its storage in open-source file formats
columnify: Converts record-oriented data to columnar formats.
Albis: High-performance file format for big data systems