LambdaML
LambdaML is a machine learning system built on serverless infrastructure (AWS Lambda). A serverless compute service lets you run code without provisioning or managing servers, writing workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes. Unlike VM-based cloud compute, compute instances in serverless infrastructure cannot communicate with each other directly. To work around this limitation, LambdaML implements various communication patterns using external storage.
Video Tutorials
We provide several video tutorials on YouTube.
- Introduction to LambdaML
- Programming Interface
- Deploying LambdaML with S3
- Deploying LambdaML with ElastiCache
- Deploying LambdaML with DynamoDB
- Deploying LambdaML with Hybrid Parameter Server
Dependencies
- awscli (version 1)
- botocore
- boto3
- numpy
- torch==1.0.1
- thrift
- redis
- grpcio
Environment setup
- Create a Lambda layer with PyTorch 1.0.1.
- Compress the whole project and upload it to Lambda.
- Create a VPC and a security group in AWS.
Programming Interface
LambdaML leverages external storage services, e.g., S3, ElastiCache, and DynamoDB, to implement communication between serverless compute instances. We provide both storage interfaces and communication primitives.
Storage
The storage layer offers basic operations to manipulate external storage.
- S3 (storage/s3/s3_type.py). Storage operations: list/save/load/delete/clear/...
- ElastiCache (storage/memcached/memcached_type.py). Storage operations: list/save/load/delete/clear/...
- DynamoDB (storage/dynamo/dynamo_type.py). Storage operations: list/save/load/delete/clear/...
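To illustrate the shape of this storage interface without requiring AWS credentials, here is a minimal sketch that backs the list/save/load/delete/clear operations with an in-memory dict. The class and method signatures are illustrative assumptions, not the actual code in s3_type.py.

```python
# Minimal sketch of a LambdaML-style storage interface. A plain dict
# stands in for the real backend (S3 / ElastiCache / DynamoDB); the
# class name and signatures are hypothetical.

class DictStorage:
    """Stand-in backend exposing list/save/load/delete/clear."""

    def __init__(self):
        self._objects = {}

    def save(self, key, data):
        # The S3 backend would issue a put_object call here.
        self._objects[key] = data

    def load(self, key):
        # The S3 backend would issue a get_object call here.
        return self._objects.get(key)

    def delete(self, key):
        self._objects.pop(key, None)

    def list(self, prefix=""):
        # List keys under a prefix, as S3 object listing would.
        return sorted(k for k in self._objects if k.startswith(prefix))

    def clear(self):
        self._objects.clear()

if __name__ == "__main__":
    store = DictStorage()
    store.save("w0/grad", b"\x00\x01")
    store.save("w1/grad", b"\x02\x03")
    print(store.list())  # ['w0/grad', 'w1/grad']
```

Swapping the dict for boto3 calls turns the same interface into the S3 backend; the communicators below are built on top of exactly these operations.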
Communication primitives
The communication layer provides popular communication primitives.
- S3 communicator (communicator/s3_comm.py). Primitives: async/reduce/reduce_scatter.
- ElastiCache communicator (communicator/memcached_comm.py). Primitives: async/reduce/reduce_scatter.
- DynamoDB communicator (communicator/dynamo_comm.py). Primitives: async/reduce/reduce_scatter.
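As a sketch of how a reduce primitive works on top of shared storage: each worker writes its local gradient under a well-known key, one worker merges all shards and writes the result back, and every worker reads the merged gradient. The function and key names below are illustrative assumptions, not the actual code in s3_comm.py, and a plain dict again stands in for the storage backend.

```python
# Sketch of a storage-based "reduce": workers publish gradients to
# shared storage, worker 0 merges them, everyone reads the result.
# Names and key layout are hypothetical.

def reduce_via_storage(store, worker_id, n_workers, grad, prefix="grad"):
    # 1. Publish this worker's local gradient.
    store[f"{prefix}/{worker_id}"] = grad
    # 2. Worker 0 acts as the merger. In practice it polls the storage
    #    until all n_workers shards have arrived; here we assume they have.
    if worker_id == 0:
        shards = [store[f"{prefix}/{i}"] for i in range(n_workers)]
        store[f"{prefix}/merged"] = [sum(vals) for vals in zip(*shards)]
    # 3. Read the merged gradient back (None if not yet written; a real
    #    worker would poll until the merged key appears).
    return store.get(f"{prefix}/merged")

if __name__ == "__main__":
    store = {}  # stands in for S3 / ElastiCache / DynamoDB
    # On Lambda the workers run concurrently; here we run them in turn.
    reduce_via_storage(store, 1, 2, [3.0, 4.0])
    print(reduce_via_storage(store, 0, 2, [1.0, 2.0]))  # [4.0, 6.0]
```

reduce_scatter follows the same pattern, except each worker merges only its assigned slice of the gradient, spreading the merge work across workers.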
Hybrid framework
In addition to storage services, LambdaML also implements a hybrid architecture in which one VM acts as a parameter server and the serverless instances communicate with that VM.
- Launch the parameter server: see thrift_ps/start_service.py.
- Communication interfaces: ping/register/pull/push/delete.
Usage
The general usage of LambdaML:
- Partition the dataset and upload the partitions to S3.
- Create a trigger Lambda function and an execution Lambda function.
- Set configurations (e.g., dataset location) and hyperparameters (e.g., learning rate).
- Set the VPC and security group.
- Execute the trigger function.
- See the logs in CloudWatch.
See examples for more details.
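The first step above, partitioning the dataset into one shard per Lambda worker, can be sketched as follows. The helper name and shard-key convention are illustrative assumptions; LambdaML's own partitioning code may differ.

```python
# Sketch of dataset partitioning before upload: split the samples into
# contiguous, near-equal shards, one per worker. Hypothetical helper.

def partition(samples, n_workers):
    """Split samples into n_workers contiguous, near-equal shards."""
    base, extra = divmod(len(samples), n_workers)
    shards, start = [], 0
    for i in range(n_workers):
        size = base + (1 if i < extra else 0)
        shards.append(samples[start:start + size])
        start += size
    return shards

if __name__ == "__main__":
    data = list(range(10))
    for i, shard in enumerate(partition(data, 3)):
        # Each shard would be serialized and uploaded as its own S3
        # object, e.g. under an assumed key like "dataset/part-<i>",
        # so that worker i loads only its own partition.
        print(i, shard)
```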
Contact
If you have any questions or suggestions, feel free to contact [email protected] and [email protected].
Reference
Jiawei Jiang, Shaoduo Gan, Yue Liu, Fanlin Wang, Gustavo Alonso, Ana Klimovic, Ankit Singla, Wentao Wu, Ce Zhang. Towards Demystifying Serverless Machine Learning Training. SIGMOD 2021.