JahstreetOrg / Spark On Kubernetes Helm

Spark on Kubernetes infrastructure Helm charts repo

Projects that are alternatives to or similar to Spark On Kubernetes Helm

  • Almond: A Scala kernel for Jupyter. Stars: ✭ 1,354 (+1371.74%). Mutual labels: spark, jupyter
  • Justenoughscalaforspark: A tutorial on the most important features and idioms of Scala that you need to use Spark's Scala APIs. Stars: ✭ 538 (+484.78%). Mutual labels: spark, jupyter
  • Sparkmonitor: Monitor Apache Spark from Jupyter Notebook. Stars: ✭ 154 (+67.39%). Mutual labels: spark, jupyter
  • Helm Chart: A store of Helm chart tarballs for deploying JupyterHub and BinderHub on a Kubernetes cluster. Stars: ✭ 123 (+33.7%). Mutual labels: jupyter, helm
  • Sparkmagic: Jupyter magics and kernels for working with remote Spark clusters. Stars: ✭ 954 (+936.96%). Mutual labels: spark, jupyter
  • Spark Jupyter Aws: A guide on how to set up Jupyter with Pyspark painlessly on AWS EC2 clusters, with S3 I/O support. Stars: ✭ 259 (+181.52%). Mutual labels: spark, jupyter
  • Enterprise gateway: A lightweight, multi-tenant, scalable and secure gateway that enables Jupyter Notebooks to share resources across distributed clusters such as Apache Spark, Kubernetes and others. Stars: ✭ 412 (+347.83%). Mutual labels: spark, jupyter
  • Stock Analysis Engine: Backtest 1000s of minute-by-minute trading algorithms for training AI with automated pricing data from IEX, Tradier and FinViz. Datasets and trading performance automatically published to S3 for building AI training datasets for teaching DNNs how to trade. Runs on Kubernetes and docker-compose. >150 million trading history rows generated from +5000 algorithms. Heads up: Yahoo's Finance API was disabled on 2019-01-03 (https://developer.yahoo.com/yql/). Stars: ✭ 605 (+557.61%). Mutual labels: jupyter, helm
  • Spark Scala Tutorial: A free tutorial for Apache Spark. Stars: ✭ 907 (+885.87%). Mutual labels: spark, jupyter
  • Elasticsearch Spark Recommender: Use Jupyter Notebooks to demonstrate how to build a recommender with Apache Spark & Elasticsearch. Stars: ✭ 707 (+668.48%). Mutual labels: spark, jupyter
  • Kamu Cli: Next generation tool for decentralized exchange and transformation of semi-structured data. Stars: ✭ 69 (-25%). Mutual labels: spark, jupyter
  • Vagrant Projects: Vagrant projects for various use-cases with Spark, Zeppelin, IPython / Jupyter, SparkR. Stars: ✭ 34 (-63.04%). Mutual labels: spark, jupyter
  • Ds Cheatsheets: List of Data Science Cheatsheets to rule the world. Stars: ✭ 9,452 (+10173.91%). Mutual labels: spark, jupyter
  • Jupyterlab Topbar: JupyterLab Top Bar extension. Stars: ✭ 86 (-6.52%). Mutual labels: jupyter
  • Udacity Data Engineering: Udacity Data Engineering Nano Degree (DEND). Stars: ✭ 89 (-3.26%). Mutual labels: spark
  • Kube Tools: Kubernetes tools for GitHub Actions CI. Stars: ✭ 86 (-6.52%). Mutual labels: helm
  • Helm Www: The Helm website for docs, blog and project info. Stars: ✭ 85 (-7.61%). Mutual labels: helm
  • Sci Pype: A Machine Learning API with native redis caching and export + import using S3. Analyze entire datasets using an API for building, training, testing, analyzing, extracting, importing, and archiving. This repository can run from a docker container or from the repository. Stars: ✭ 90 (-2.17%). Mutual labels: jupyter
  • Helm Charts: Kubernetes Helm Charts for the Center for Open Science. Stars: ✭ 88 (-4.35%). Mutual labels: helm
  • Flint: Webex Bot SDK for Node.js (deprecated in favor of https://github.com/webex/webex-bot-node-framework). Stars: ✭ 85 (-7.61%). Mutual labels: spark

Spark on Kubernetes Cluster Helm Chart

This repo contains the Helm chart for a fully functional and production-ready Spark on Kubernetes cluster setup, integrated with the Spark History Server, JupyterHub and the Prometheus stack.

Refer to the design concept for the implementation details.

Getting Started

Initialize Helm (for Helm 2.x)

In order to use Helm charts for the Spark on Kubernetes cluster deployment, we first need to initialize the Helm client.

kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --upgrade --service-account tiller --tiller-namespace kube-system
kubectl get pods --namespace kube-system -w
# Wait until Pod `tiller-deploy-*` moves to Running state
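
Note that the steps above apply to Helm 2.x only: Helm 3.x removed Tiller entirely, so no `helm init` is required and the client talks to the Kubernetes API server directly. A minimal check on Helm 3.x:

# Helm 3.x: no Tiller is deployed and no `helm init` exists;
# verifying the client version is enough before adding repos
helm version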
Install Livy

The basic Spark on Kubernetes setup consists of just the Apache Livy server deployment, which can be installed with the Livy Helm chart.

helm repo add jahstreet https://jahstreet.github.io/helm-charts
helm repo update
kubectl create namespace livy
helm upgrade --install livy --namespace livy jahstreet/livy \
    --set rbac.create=true # If you are running RBAC-enabled Kubernetes cluster
kubectl get pods --namespace livy -w
# Wait until Pod `livy-0` moves to Running state
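
As a quick sanity check, you can query Livy's REST API for the list of sessions (a sketch, assuming the default Livy port 8998 used throughout this guide and jq installed locally):

kubectl exec --namespace livy livy-0 -- \
    curl -s http://localhost:8998/sessions | jq
# A fresh install returns an empty list, e.g. {"from":0,"total":0,"sessions":[]}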

For more advanced Spark cluster setups refer to the Documentation page.

Run Spark Job

Now that Livy is up and running, we can submit a Spark job via the Livy REST API.

kubectl exec --namespace livy livy-0 -- \
    curl -s -k -H 'Content-Type: application/json' -X POST \
      -d '{
            "name": "SparkPi-01",
            "className": "org.apache.spark.examples.SparkPi",
            "numExecutors": 2,
            "file": "local:///opt/spark/examples/jars/spark-examples_2.11-2.4.5.jar",
            "args": ["10000"],
            "conf": {
                "spark.kubernetes.namespace": "livy"
            }
          }' "http://localhost:8998/batches" | jq
# Record BATCH_ID from the response
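
Since jq already runs on the submitting machine in the snippet above, the batch id can also be captured into a shell variable for the tracking commands below (a convenience sketch; "SparkPi-02" is just a fresh name to avoid clashing with the batch submitted above):

BATCH_ID=$(kubectl exec --namespace livy livy-0 -- \
    curl -s -k -H 'Content-Type: application/json' -X POST \
      -d '{
            "name": "SparkPi-02",
            "className": "org.apache.spark.examples.SparkPi",
            "numExecutors": 2,
            "file": "local:///opt/spark/examples/jars/spark-examples_2.11-2.4.5.jar",
            "args": ["10000"],
            "conf": { "spark.kubernetes.namespace": "livy" }
          }' "http://localhost:8998/batches" | jq -r .id)
echo $BATCH_ID
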
Track running job

To track the running Spark job we can use all the available Kubernetes tools and the Livy REST API.

# Watch running Spark Pods
kubectl get pods --namespace livy -w --show-labels
# Check Livy batch status
kubectl exec --namespace livy livy-0 -- curl -s http://localhost:8998/batches/$BATCH_ID | jq
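
Livy also caches the Spark driver logs and serves them over the same REST API; a minimal sketch using the standard GET /batches/{batchId}/log endpoint:

# Fetch up to 100 driver log lines of the batch, starting from the beginning
kubectl exec --namespace livy livy-0 -- \
    curl -s "http://localhost:8998/batches/$BATCH_ID/log?from=0&size=100" | jq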

To configure Ingress for direct access to the Livy UI and the Spark UI refer to the Documentation page.

Spark on Kubernetes Cluster Design Concept

Motivation

Running Spark on Kubernetes has been available since the Spark v2.3.0 release on February 28, 2018. Spark is now at v2.4.5 and still lacks much compared to the well-known YARN setups on Hadoop-like clusters.

According to the official documentation, users can run Spark on Kubernetes via the spark-submit CLI script, and that is in fact the only Kubernetes-related capability built into Apache Spark, along with some config options. The debugging proposal from the Apache docs is too limited to be used easily and is available only for console-based tools. Scheduler integration is not available either, which makes it tricky to set up convenient pipelines with Spark on Kubernetes out of the box. YARN-based Hadoop clusters, in turn, have all the UIs, proxies, schedulers and APIs to make your life easier.

On the other hand, using Kubernetes clusters as opposed to YARN ones has definite benefits (as of a July 2019 comparison):

  • Pricing. Comparing similar cluster setups on Azure Cloud shows that AKS is about 35% cheaper than HDInsight Spark.
  • Scaling. Kubernetes clusters in the Cloud support elastic autoscaling with many useful related features alongside, e.g. Nodepools. Scaling of Hadoop clusters is not nearly as fast, though; it can be done either manually or automatically (as of July 2019 automatic scaling was in preview).
  • Integrations. You can run any workload in a Kubernetes cluster wrapped into a Docker container. But do you know anyone who has written a YARN app in the modern world?
  • Support. You don't have full control over the cluster setup provided by the Cloud, and the latest versions of software are usually unavailable for months after release. With Kubernetes you can build the image on your own.
  • Other Kubernetes pros. CI/CD with Helm, monitoring stacks ready for use in one click, huge popularity and community support, good tooling and, of course, HYPE.

All that makes it well worth trying to improve Spark on Kubernetes usability, to take full advantage of modern Kubernetes setups.

Design concept

The heart of the solution is Apache Livy. Apache Livy is a service that enables easy interaction with a Spark cluster over a REST interface. It is supported by the Apache Incubator community and by the Azure HDInsight team, which uses it as a first-class citizen in their YARN cluster setup and builds many integrations with it. Watch Spark Summit 2016, Cloudera and Microsoft, Livy concepts and motivation for the details.

The drawback is that Livy was written for YARN. But YARN is just Yet Another resource manager, with a container abstraction that maps well onto Kubernetes concepts. Livy is fully open-sourced as well, and its codebase is RM-aware enough to allow yet another implementation of its interfaces to add Kubernetes support. So why not!? Check the WIP PR with the Kubernetes support proposal for Livy.

The high-level architecture of Livy on Kubernetes is the same as for YARN.

[Diagram: Livy schema]

The Livy server simply wraps all the logic concerning interaction with the Spark cluster and provides a simple REST interface.

For example, to submit a Spark job to the cluster you just need to send `POST /batches` with a JSON body containing the Spark config options, mapped to the analogous `spark-submit` arguments.

$SPARK_HOME/bin/spark-submit \
    --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
    --deploy-mode cluster \
    --name SparkPi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.executor.instances=5 \
    --conf spark.kubernetes.container.image=<spark-image> \
    local:///path/to/examples.jar
 
# Has a similar effect to calling Livy via the REST API
 
curl -H 'Content-Type: application/json' -X POST \
  -d '{
        "name": "SparkPi",
        "className": "org.apache.spark.examples.SparkPi",
        "numExecutors": 5,
        "conf": {
          "spark.kubernetes.container.image": "<spark-image>"
        },
        "file": "local:///path/to/examples.jar"
      }' "http://livy.endpoint.com/batches"

Under the hood, Livy parses the POSTed configs and runs spark-submit for you, adding the other defaults configured for the Livy server.

After the job submission, Livy discovers the Spark driver Pod scheduled to the Kubernetes cluster via the Kubernetes API and starts to track its state, caching Spark Pod logs and detail descriptions and making that information available through the Livy REST API. It also builds routes to the Spark UI, Spark History Server and monitoring systems with Kubernetes Ingress resources (the Nginx Ingress Controller in particular) and displays the links on the Livy Web UI.

By providing a REST interface for Spark job orchestration, Livy allows any number of integrations with web/mobile apps and services, and offers an easy way of setting up flows via job-scheduling frameworks.

Livy has a built-in lightweight Web UI, which makes it genuinely competitive with YARN in terms of navigation, debugging and cluster discovery.

[Screenshots: Livy home, Livy sessions, Livy logs, Livy diagnostics]

Livy supports interactive sessions with Spark clusters, allowing communication between Spark and application servers and thus enabling the use of Spark for interactive web/mobile applications. Using that feature, Livy integrates with Jupyter Notebook through the Sparkmagic kernel out of the box, giving the user an elastic Spark exploratory environment in Scala and Python. Just deploy it to Kubernetes and use it!
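
The same REST API drives interactive sessions as well; a minimal sketch, assuming the standard Livy /sessions endpoints and the illustrative livy.endpoint.com host used above:

# Create an interactive Scala session
curl -s -H 'Content-Type: application/json' -X POST \
  -d '{"kind": "spark"}' "http://livy.endpoint.com/sessions"
# Once the session reaches the idle state, execute a statement in it (session id 0 assumed)
curl -s -H 'Content-Type: application/json' -X POST \
  -d '{"code": "sc.parallelize(1 to 100).sum()"}' \
  "http://livy.endpoint.com/sessions/0/statements"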

[Diagram: Livy schema]

On top of Jupyter it is possible to set up JupyterHub, a multi-user hub that spawns, manages and proxies multiple instances of the single-user Jupyter notebook server. Follow the video PyData 2018, London, JupyterHub from the Ground Up with Kubernetes - Camilla Montonen to learn the details of the implementation. JupyterHub provides a way to set up auth through Azure AD with the AzureAdOAuthenticator plugin, as well as many other OAuthenticator plugins.

[Diagram: JupyterHub architecture]

Monitoring of the Kubernetes cluster itself can be done with the Prometheus Operator stack together with Prometheus Pushgateway and Grafana Loki, using a combined Helm chart that allows the whole setup to be done in one click (see the sketch below). Learn more about the stack from conference videos.

The overall monitoring architecture covers both the pull and the push models of metrics collection from the Kubernetes cluster and the services deployed to it. Prometheus Alertmanager provides an interface to set up alerting.
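
For reference, a one-click install of such a stack could look like the following (a sketch only: it uses the community stable/prometheus-operator chart and Helm 2.x syntax current as of mid-2019; the release name and namespace are arbitrary assumptions, not part of this repo):

kubectl create namespace monitoring
helm install stable/prometheus-operator --name monitoring --namespace monitoring
# Prometheus, Alertmanager and Grafana are deployed with chart defaults; tune them via --values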

[Diagrams: Prometheus architecture, Prometheus Operator schema]

With the help of the JMX Exporter or the Pushgateway Sink we can get Spark metrics into the monitoring system. Grafana Loki provides out-of-the-box log aggregation for all Pods in the cluster and natively integrates with Grafana. Using the Grafana Azure Monitor datasource and the Prometheus Federation feature you can set up a complex global monitoring architecture for your infrastructure.
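
To sketch the JMX Exporter route (the agent jar path, port and config file below are assumptions for illustration; the jar must be baked into the Spark image), the Prometheus JMX agent can be attached to the driver via a Spark conf option:

$SPARK_HOME/bin/spark-submit \
    --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
    --deploy-mode cluster \
    --name SparkPi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.kubernetes.container.image=<spark-image> \
    --conf "spark.driver.extraJavaOptions=-javaagent:/prometheus/jmx_prometheus_javaagent.jar=8090:/prometheus/config.yaml" \
    local:///path/to/examples.jar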

[Diagram: Global monitoring architecture]
