
GoogleCloudPlatform / mlops-with-vertex-ai

License: Apache-2.0
An end-to-end example of MLOps on Google Cloud using TensorFlow, TFX, and Vertex AI

Programming Languages

Jupyter Notebook
11667 projects
Python
139335 projects - #7 most used programming language
HCL
1544 projects
Dockerfile
14818 projects

Projects that are alternatives of or similar to mlops-with-vertex-ai

vertex-ai-samples
Sample code and notebooks for Vertex AI, the end-to-end machine learning platform on Google Cloud
Stars: ✭ 270 (+74.19%)
Mutual labels:  gcp, google-cloud-platform, mlops, vertex-ai
vertex-edge
A tool for training models to Vertex on Google Cloud Platform.
Stars: ✭ 24 (-84.52%)
Mutual labels:  gcp, google-cloud-platform, mlops
terraform-gcp-labs
Terraform templates for GCP provider ☁️
Stars: ✭ 27 (-82.58%)
Mutual labels:  gcp, google-cloud-platform
argon
Campaign Manager 360 and Display & Video 360 Reports to BigQuery connector
Stars: ✭ 31 (-80%)
Mutual labels:  gcp, google-cloud-platform
tfx-kubeflow-pipelines
Kubeflow pipelines built on top of the TensorFlow TFX library
Stars: ✭ 17 (-89.03%)
Mutual labels:  tfx, mlops
iris3
An upgraded and improved version of the Iris automatic GCP-labeling project
Stars: ✭ 38 (-75.48%)
Mutual labels:  gcp, google-cloud-platform
kane
Google Pub/Sub client for Elixir
Stars: ✭ 92 (-40.65%)
Mutual labels:  gcp, google-cloud-platform
terraform-splunk-log-export
Deploy Google Cloud log export to Splunk using Terraform
Stars: ✭ 26 (-83.23%)
Mutual labels:  gcp, google-cloud-platform
drf-angular-docker-tutorial
Dockerized Django Back-end API using DRF with Angular Front-end Tutorial
Stars: ✭ 53 (-65.81%)
Mutual labels:  gcp, google-cloud-platform
associate-cloud-engineer
Resources on preparing for Google Cloud Associate Cloud Engineer certification
Stars: ✭ 142 (-8.39%)
Mutual labels:  gcp, google-cloud-platform
terraformit-gcp
Generating tf files and tfstate from existing GCP resources.
Stars: ✭ 48 (-69.03%)
Mutual labels:  gcp, google-cloud-platform
Google-Cloud-Study-Jams
Resources for 30 Days of Google Cloud program workshops and events conducted by GDSC VJTI
Stars: ✭ 13 (-91.61%)
Mutual labels:  gcp, google-cloud-platform
zorya
Google Cloud Instance Scheduler helping to reduce costs by 60% on average for non-production environments.
Stars: ✭ 127 (-18.06%)
Mutual labels:  gcp, google-cloud-platform
Cloud-Service-Providers-Free-Tier-Overview
Comparing the free tier offers of the major cloud providers like AWS, Azure, GCP, Oracle etc.
Stars: ✭ 226 (+45.81%)
Mutual labels:  gcp, google-cloud-platform
blockchain-etl-streaming
Streaming Ethereum and Bitcoin blockchain data to Google Pub/Sub or Postgres in Kubernetes
Stars: ✭ 57 (-63.23%)
Mutual labels:  gcp, google-cloud-platform
gcp-get-secret
A simple command line utility to get secrets from the Google Secret Manager into your environment
Stars: ✭ 35 (-77.42%)
Mutual labels:  gcp, google-cloud-platform
course-material
Course Material for in28minutes courses on Java, Spring Boot, DevOps, AWS, Google Cloud, and Azure.
Stars: ✭ 544 (+250.97%)
Mutual labels:  gcp, google-cloud-platform
GoogleCloudLogging
Swift (Darwin) library for logging application events in Google Cloud.
Stars: ✭ 24 (-84.52%)
Mutual labels:  gcp, google-cloud-platform
SimpleCSPM
GCP CSPM using Google Sheets
Stars: ✭ 24 (-84.52%)
Mutual labels:  gcp, google-cloud-platform
plantuml-libs
A set of PlantUML libraries and a NPM cli tool to design diagrams which focus on several technologies/approaches: Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), C4 Model or even EventStorming and more.
Stars: ✭ 75 (-51.61%)
Mutual labels:  gcp, google-cloud-platform

MLOps with Vertex AI

This example implements the end-to-end MLOps process using the Vertex AI platform and Smart Analytics technology capabilities. The example uses Keras to implement the ML model, TFX to implement the training pipeline, and the Model Builder SDK to interact with Vertex AI.
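
All of the notebooks interact with Vertex AI through the Python SDK. The following is a minimal sketch of the typical setup, assuming placeholder project, region, and staging-bucket values:

    from google.cloud import aiplatform

    # Placeholder values -- replace with your own project, region, and bucket.
    aiplatform.init(
        project="your-gcp-project",
        location="us-central1",
        staging_bucket="gs://your-staging-bucket",
    )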

MLOps lifecycle

Getting started

  1. Set up your MLOps environment on Google Cloud.

  2. Start your AI Notebook instance.

  3. Open JupyterLab, then open a new Terminal.

  4. Clone the repository to your AI Notebook instance:

    git clone https://github.com/GoogleCloudPlatform/mlops-with-vertex-ai.git
    cd mlops-with-vertex-ai
    
  5. Install the required Python packages:

    pip install tfx==1.2.0 --user
    pip install -r requirements.txt
    

    NOTE: You can ignore the pip dependency issues. These will be fixed when upgrading to a subsequent TFX version.


  6. Upgrade the gcloud components:

    sudo apt-get install google-cloud-sdk
    gcloud components update
    

Dataset Management

The Chicago Taxi Trips dataset is one of the public datasets hosted on BigQuery, which includes taxi trips from 2013 to the present, reported to the City of Chicago in its role as a regulatory agency. The task is to predict whether a given trip will result in a tip > 20%.
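
For illustration, here is a hedged sketch of how a labeled sample could be pulled from the public BigQuery table. The column names come from the public bigquery-public-data.chicago_taxi_trips.taxi_trips table, and the 20% threshold mirrors the task described above:

    from google.cloud import bigquery

    client = bigquery.Client()  # uses your default project and credentials
    sql = """
    SELECT
      trip_miles,
      trip_seconds,
      payment_type,
      IF(SAFE_DIVIDE(tips, fare) > 0.2, 1, 0) AS tip_bin
    FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
    WHERE fare > 0
    LIMIT 1000
    """
    sample_df = client.query(sql).to_dataframe()
    print(sample_df["tip_bin"].value_counts())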

The 01-dataset-management notebook covers:

  1. Performing exploratory data analysis on the data in BigQuery.
  2. Creating a Vertex AI Dataset resource using the Python SDK.
  3. Generating the schema for the raw data using TensorFlow Data Validation (see the sketch below).
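
A short, hedged sketch of steps 2 and 3, assuming placeholder display names, table URIs, and output paths (this is not the notebook's exact code):

    import tensorflow_data_validation as tfdv
    from google.cloud import aiplatform, bigquery

    # Step 2: register a prepared BigQuery table as a Vertex AI Dataset.
    dataset = aiplatform.TabularDataset.create(
        display_name="chicago-taxi-tips",                                # placeholder
        bq_source="bq://your-project.your_dataset.taxi_trips_prepared",  # placeholder
    )

    # Step 3: infer a schema for the raw data from a small sample.
    sample_df = bigquery.Client().query(
        "SELECT trip_miles, trip_seconds, payment_type, tips, fare "
        "FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips` LIMIT 1000"
    ).to_dataframe()
    stats = tfdv.generate_statistics_from_dataframe(sample_df)
    schema = tfdv.infer_schema(stats)
    tfdv.write_schema_text(schema, "raw_schema/schema.pbtxt")  # placeholder path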

ML Development

We experiment with creating a Custom Model using the 02-experimentation notebook, which covers:

  1. Preparing the data using Dataflow.
  2. Implementing a Keras classification model.
  3. Training the Keras model with Vertex AI using a pre-built container (sketched below).
  4. Uploading the exported model from Cloud Storage to Vertex AI.
  5. Extracting and visualizing experiment parameters from Vertex AI Metadata.
  6. Using Vertex AI for hyperparameter tuning.
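
A minimal, illustrative sketch of steps 3 and 4, assuming a hypothetical training script path, a placeholder bucket, and pre-built TensorFlow containers; the notebook's actual code may differ:

    from google.cloud import aiplatform

    # Step 3: submit a custom training job using a pre-built TensorFlow container.
    job = aiplatform.CustomTrainingJob(
        display_name="taxi-tips-training",
        script_path="trainer/task.py",  # hypothetical training entry point
        container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5:latest",
    )
    job.run(replica_count=1, machine_type="n1-standard-4")

    # Step 4: upload the exported SavedModel from Cloud Storage to Vertex AI.
    model = aiplatform.Model.upload(
        display_name="taxi-tips-classifier",
        artifact_uri="gs://your-bucket/taxi-tips/exported_model",  # placeholder
        serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-5:latest",
    )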

We use Vertex TensorBoard and Vertex ML Metadata to track, visualize, and compare ML experiments.

In addition, the training steps are formalized by implementing a TFX pipeline. The 03-training-formalization notebook covers implementing and testing the pipeline components interactively.
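
As a rough illustration of interactive component testing, here is a hedged sketch using the TFX InteractiveContext; CsvExampleGen and the local data directory are stand-ins, not the notebook's actual components:

    from tfx.components import CsvExampleGen, StatisticsGen
    from tfx.orchestration.experimental.interactive.interactive_context import (
        InteractiveContext,
    )

    context = InteractiveContext(pipeline_root="./interactive_pipeline_root")

    # Run each component in isolation and inspect its outputs in the notebook.
    example_gen = CsvExampleGen(input_base="./data")  # placeholder local data dir
    context.run(example_gen)

    statistics_gen = StatisticsGen(examples=example_gen.outputs["examples"])
    context.run(statistics_gen)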

Training Operationalization

The 04-pipeline-deployment notebook covers executing the CI/CD steps for the training pipeline deployment using Cloud Build. The CI/CD routine is defined in the pipeline-deployment.yaml file, and consists of the following steps:

  1. Clone the repository to the build environment.
  2. Run unit tests.
  3. Run a local e2e test of the TFX pipeline.
  4. Build the ML container image for pipeline steps.
  5. Compile the pipeline.
  6. Upload the compiled pipeline to Cloud Storage (see the sketch after this list).
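
A hedged sketch of steps 5 and 6, assuming a hypothetical create_pipeline() factory that stands in for the pipeline definition under src/pipelines, plus placeholder image and bucket names:

    from tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner
    from google.cloud import storage

    from src.pipelines import training_pipeline  # hypothetical module name

    # Step 5: compile the TFX pipeline into a JSON pipeline spec.
    runner = kubeflow_v2_dag_runner.KubeflowV2DagRunner(
        config=kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig(
            default_image="gcr.io/your-project/taxi-tips-tfx:latest",  # image built in step 4
        ),
        output_filename="taxi-tips-pipeline.json",
    )
    runner.run(training_pipeline.create_pipeline())  # create_pipeline(): assumed factory

    # Step 6: copy the compiled spec to Cloud Storage.
    storage.Client().bucket("your-artifacts-bucket").blob(
        "pipelines/taxi-tips-pipeline.json"
    ).upload_from_filename("taxi-tips-pipeline.json")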

Continuous Training

After the pipeline definition is tested, compiled, and uploaded to Cloud Storage, the pipeline is executed in response to a trigger. We use Cloud Functions and Cloud Pub/Sub as the triggering mechanism: the Cloud Function listens to the Pub/Sub topic and runs the training pipeline whenever a message is published to that topic. The Cloud Function is implemented in src/pipeline_triggering.
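
A hedged sketch of what such a Cloud Function can look like (first-generation Pub/Sub trigger); the template path, parameter names, and region are assumptions rather than the exact contents of src/pipeline_triggering:

    import base64
    import json

    from google.cloud import aiplatform


    def trigger_pipeline(event, context):
        """Background Cloud Function triggered by a Pub/Sub message."""
        payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

        aiplatform.init(project="your-gcp-project", location="us-central1")
        job = aiplatform.PipelineJob(
            display_name="taxi-tips-training",
            template_path="gs://your-artifacts-bucket/pipelines/taxi-tips-pipeline.json",
            parameter_values=payload.get("parameter_values", {}),
            enable_caching=False,
        )
        job.submit()  # submit (rather than run) so the function returns quickly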

The 05-continuous-training notebook covers:

  1. Creating a Cloud Pub/Sub topic.
  2. Deploying a Cloud Function.
  3. Triggering the pipeline by publishing a message to the topic (sketched below).
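
A small, hedged example of step 3, publishing a trigger message to the topic; the topic name, project, and payload keys are placeholders:

    import json

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("your-gcp-project", "taxi-tips-training-trigger")

    message = {"parameter_values": {"num_epochs": 10, "learning_rate": 0.001}}
    future = publisher.publish(topic_path, data=json.dumps(message).encode("utf-8"))
    print(f"Published message: {future.result()}")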

The end-to-end TFX training pipeline implementation is in the src/pipelines directory and covers the following steps (a condensed skeleton follows the list):

  1. Receive hyperparameters using the hyperparam_gen custom Python component.
  2. Extract data from BigQuery using the BigQueryExampleGen component.
  3. Validate the raw data using the StatisticsGen and ExampleValidator components.
  4. Process the data on Dataflow using the Transform component.
  5. Train a custom model with Vertex AI using the Trainer component.
  6. Evaluate and validate the custom model using the ModelEvaluator component.
  7. Save the blessed model to the model registry location in Cloud Storage using the Pusher component.
  8. Upload the model to Vertex AI using the vertex_model_pusher custom Python component.
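
A condensed, hedged skeleton of such a pipeline using the TFX 1.2 API; the query, module files, schema location, and serving directory are placeholders, and the repository's custom hyperparam_gen and vertex_model_pusher components are omitted here:

    from tfx import v1 as tfx


    def create_pipeline(pipeline_name: str, pipeline_root: str) -> tfx.dsl.Pipeline:
        # Step 2: extract data from BigQuery.
        example_gen = tfx.extensions.google_cloud_big_query.BigQueryExampleGen(
            query="SELECT * FROM `your-project.your_dataset.taxi_trips_prepared`"
        )
        # Step 3: validate the raw data against the schema from 01-dataset-management.
        statistics_gen = tfx.components.StatisticsGen(
            examples=example_gen.outputs["examples"]
        )
        schema_importer = tfx.dsl.Importer(
            source_uri="raw_schema",  # placeholder schema location
            artifact_type=tfx.types.standard_artifacts.Schema,
        ).with_id("schema_importer")
        example_validator = tfx.components.ExampleValidator(
            statistics=statistics_gen.outputs["statistics"],
            schema=schema_importer.outputs["result"],
        )
        # Step 4: process the data on Dataflow with the Transform component.
        transform = tfx.components.Transform(
            examples=example_gen.outputs["examples"],
            schema=schema_importer.outputs["result"],
            module_file="preprocessing.py",  # placeholder transform module
        )
        # Step 5: train the model.
        trainer = tfx.components.Trainer(
            module_file="model_training.py",  # placeholder training module
            examples=transform.outputs["transformed_examples"],
            transform_graph=transform.outputs["transform_graph"],
            schema=schema_importer.outputs["result"],
            train_args=tfx.proto.TrainArgs(num_steps=1000),
            eval_args=tfx.proto.EvalArgs(num_steps=100),
        )
        # Step 6: evaluate and validate the model.
        evaluator = tfx.components.Evaluator(
            examples=example_gen.outputs["examples"],
            model=trainer.outputs["model"],
        )
        # Step 7: push the blessed model to the registry location in Cloud Storage.
        pusher = tfx.components.Pusher(
            model=trainer.outputs["model"],
            model_blessing=evaluator.outputs["blessing"],
            push_destination=tfx.proto.PushDestination(
                filesystem=tfx.proto.PushDestination.Filesystem(
                    base_directory="gs://your-bucket/model-registry"
                )
            ),
        )
        return tfx.dsl.Pipeline(
            pipeline_name=pipeline_name,
            pipeline_root=pipeline_root,
            components=[example_gen, statistics_gen, schema_importer,
                        example_validator, transform, trainer, evaluator, pusher],
        )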

Model Deployment

The 06-model-deployment notebook covers executing the CI/CD steps for the model deployment using Cloud Build. The CI/CD routine is defined in the build/model-deployment.yaml file and consists of the following steps:

  1. Test the model interface.
  2. Create an endpoint in Vertex AI.
  3. Deploy the model to the endpoint (illustrated in the sketch below).
  4. Test the Vertex AI endpoint.
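
A hedged illustration of steps 2 and 3, assuming placeholder display names and machine type:

    from google.cloud import aiplatform

    aiplatform.init(project="your-gcp-project", location="us-central1")

    # Look up the previously uploaded model (display name is a placeholder).
    model = aiplatform.Model.list(filter='display_name="taxi-tips-classifier"')[0]

    # Step 2: create an endpoint; step 3: deploy the model to it.
    endpoint = aiplatform.Endpoint.create(display_name="taxi-tips-endpoint")
    endpoint.deploy(
        model=model,
        deployed_model_display_name="taxi-tips-classifier-v1",
        machine_type="n1-standard-2",
        traffic_percentage=100,
    )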

Prediction Serving

We serve the deployed model for prediction. The 07-prediction-serving notebook covers the following (a brief example follows the list):

  1. Use the Vertex AI endpoint for online prediction.
  2. Use the Vertex AI uploaded model for batch prediction.
  3. Run the batch prediction using Vertex Pipelines.
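
A brief, hedged example of online and batch prediction with the Vertex AI SDK; the endpoint lookup, instance schema, and Cloud Storage paths are assumptions:

    from google.cloud import aiplatform

    aiplatform.init(project="your-gcp-project", location="us-central1")

    # Online prediction through the deployed endpoint.
    endpoint = aiplatform.Endpoint.list(filter='display_name="taxi-tips-endpoint"')[0]
    instance = {"trip_miles": [1.5], "trip_seconds": [480], "payment_type": ["Credit Card"]}
    print(endpoint.predict(instances=[instance]).predictions)

    # Batch prediction through the uploaded model.
    model = aiplatform.Model.list(filter='display_name="taxi-tips-classifier"')[0]
    batch_job = model.batch_predict(
        job_display_name="taxi-tips-batch",
        gcs_source="gs://your-bucket/serving_data/instances.jsonl",
        gcs_destination_prefix="gs://your-bucket/batch_predictions",
        instances_format="jsonl",
        machine_type="n1-standard-2",
    )
    batch_job.wait()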

Model Monitoring

After a model is deployed for prediction serving, continuous monitoring is set up to ensure that the model continues to perform as expected. The 08-model-monitoring notebook covers configuring Vertex AI Model Monitoring for skew and drift detection (a brief example of managing monitoring jobs follows the list):

  1. Set skew and drift thresholds.
  2. Create a monitoring job for all the models under an endpoint.
  3. List the monitoring jobs.
  4. List artifacts produced by the monitoring job.
  5. Pause and delete the monitoring job.
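
A cautious sketch of steps 3 and 5 only (listing, pausing, and deleting monitoring jobs); creating the job with skew and drift thresholds is left to the notebook, since its configuration objects are more involved and version-dependent:

    from google.cloud import aiplatform

    aiplatform.init(project="your-gcp-project", location="us-central1")

    # Step 3: list the monitoring jobs in the project.
    jobs = aiplatform.ModelDeploymentMonitoringJob.list()
    for job in jobs:
        print(job.display_name, job.resource_name)

    # Step 5: pause and then delete a monitoring job.
    if jobs:
        jobs[0].pause()
        jobs[0].delete()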

Metadata Tracking

You can view the parameters and metrics logged by your experiments, as well as the artifacts and metadata stored by your Vertex Pipelines, in the Cloud Console.
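
Experiment parameters and metrics can also be pulled programmatically; here is a small, hedged example with a placeholder experiment name:

    from google.cloud import aiplatform

    aiplatform.init(
        project="your-gcp-project",
        location="us-central1",
        experiment="taxi-tips-experiments",  # placeholder experiment name
    )

    # Returns a DataFrame with one row per run and param.* / metric.* columns.
    experiment_df = aiplatform.get_experiment_df()
    print(experiment_df.head())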

Disclaimer

This is not an official Google product but sample code provided for educational purposes.


Copyright 2021 Google LLC.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].