EC2-POWERCYCLE


AWS Lambda function to stop and start EC2 instances based on resource tag using crontab-like expressions

Table of Contents

Usage
Testing and development
Creating a Lambda Deployment Package
Build environment
Serverless build pipeline
Identity and Access Management policy
Creating and scheduling Lambda function

Usage

The Lambda function looks for EC2 instances and Auto Scaling Groups that have the resource tag ec2Powercycle attached to them.

The tag value is a simple JSON document that describes the start and stop schedule in crontab-like expressions (a minimal evaluation sketch follows the examples below).
In the case of ASGs, the tag may also contain the scaling state of the group (the minimum and desired number of instances in the group). If it doesn't, both the minimum and the desired instance counts default to 1.

Examples

  1. EC2 instance stop/start schedule: Mon - Fri, 8.00am - 5.55pm

    ec2Powercycle: { "start": "0 8 * * 1-5", "stop": "55 17 * * 1-5" }
  2. Auto Scaling Group stop/start schedule: Mon-Fri, 9:00am - 11:00pm. The minimum number of instances in the ASG is 2 and the desired number is 3

    ec2Powercycle: { "start": "0 9 * * 1-5", "stop": "00 23 * * 1-5", "min": 2, "desired": 3 }
  3. Auto Scaling Group without the scaling state specified. Both the minimum and the desired number of instances default to 1

    ec2Powercycle: { "start": "0 8 * * 1-5", "stop": "55 17 * * 1-5" }
  4. Auto Scaling Group without the minimum specified. The minimum number of instances defaults to 1

    ec2Powercycle: { "start": "0 8 * * 1-5", "stop": "55 17 * * 1-5", "desired": 3 }
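
As a rough illustration only (not the project's actual code), the sketch below parses a tag value like the ones above and uses croniter to work out whether the resource should currently be running: whichever of the start and stop expressions fired most recently wins. The hard-coded tag value is just an example.

# Hedged sketch: decide whether a resource should currently be running,
# based on an ec2Powercycle-style tag value.
import json
from datetime import datetime
from croniter import croniter

tag_value = '{ "start": "0 8 * * 1-5", "stop": "55 17 * * 1-5" }'  # example tag value

schedule = json.loads(tag_value)
now = datetime.now()

# get_prev(datetime) returns the last time the cron expression fired before 'now'.
last_start = croniter(schedule["start"], now).get_prev(datetime)
last_stop = croniter(schedule["stop"], now).get_prev(datetime)

desired_state = "running" if last_start > last_stop else "stopped"
print(desired_state)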

As of commit 00389de the stop/start schedule can also be defined as a URL to a publicly accessible JSON document. This feature can be handy when managing the schedule for a large number of nodes.

ec2Powercycle: https://raw.githubusercontent.com/Financial-Times/ec2-powercycle/master/json/dev-schedule.json
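
How the URL form could be resolved is sketched below: if the tag value looks like a URL, fetch the JSON document with requests, otherwise parse the tag value itself. This is a hedged sketch for illustration; the function name and error handling are assumptions, not the project's implementation.

# Hedged sketch: turn an ec2Powercycle tag value into a schedule dict,
# resolving it over HTTP when the value is a URL.
import json
import requests

def load_schedule(tag_value):
    if tag_value.startswith(("http://", "https://")):
        response = requests.get(tag_value, timeout=10)
        response.raise_for_status()
        return response.json()
    return json.loads(tag_value)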

Testing and development

To run the ec2-powercycle job in a local dev environment you need to install all of its dependencies, such as boto3 and croniter. The full list of dependencies can be found in the file ec2_powercycle.py.

You also need to set up AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION) in order to interact with the AWS API.

To run the job, first change into the repository directory, then call the handler() function.

cd /path/to/repository
python -c "from ec2_powercycle import * ; handler()"

The function can also be executed in so-called dry-run mode with the following command.

python -c "from ec2_powercycle import * ; handler({ \"DryRun\": \"True\" })"

In dry-run mode the function doesn't stop or start any instances.
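
A minimal illustration of how a handler might honour such a flag is shown below; this is an assumption for illustration, not the project's actual implementation, and the instance id is a placeholder.

# Hedged sketch: skip the real stop/start API calls when the event asks for a dry run.
import boto3

def handler(event=None, context=None):
    dry_run = bool(event) and str(event.get("DryRun", "")).lower() == "true"
    ec2 = boto3.client("ec2")
    instances_to_stop = ["i-0123456789abcdef0"]  # placeholder instance id
    if dry_run:
        print("DryRun: would stop", instances_to_stop)
    else:
        ec2.stop_instances(InstanceIds=instances_to_stop)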

Creating a Lambda Deployment Package

ec2-powercycle uses the 3rd-party libraries croniter and requests, which must be installed before the deployment package is created.

Installing croniter and requests into the lib/ directory

pip3 install croniter requests -t lib/

Creating zip archive

The following command is run in the root of the ec2-powercycle repository. It bundles the ec2-powercycle business logic, its dependencies and the README.md into a zip archive that can be uploaded to Lambda or an S3 bucket.

zip -r ../ec2-powercycle-0.0.1.zip ./*.py lib/ README.md

Build environment

This repository ships with a Dockerfile that can be used for packaging and deployment automation.

Building Docker image

The following command is run in the root of the repository and builds a Docker image called ec2powercycle.

 sudo docker build -t ec2powercycle .

Launching Docker image

When the Docker image runs, it first executes the packaging script package.sh, then the deployment script lambda-deploy-latest.sh, which pushes the ec2-powercycle.zip package to Lambda.

To run lambda-deploy-latest.sh in headless mode, you can provide the AWS credentials as Docker environment variables.

sudo docker run --env "AWS_ACCESS_KEY_ID=<access_key_id>" \
--env "AWS_SECRET_ACCESS_KEY=<access_key_secret>" \
--env "AWS_DEFAULT_REGION=<aws_region_for_s3_bucket>" \
--env "AWS_LAMBDA_FUNCTION=<lambda_function_name>" \
-it ec2powercycle

Launching the Docker image without the environment variables will run post-to-lambda.sh in interactive mode, which prompts the user for AWS credentials.

sudo docker run -it ec2powercycle
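
The deployment scripts themselves are shell and are not reproduced here. In essence, the update they perform against Lambda might look like the following hedged boto3 sketch; the zip name and function name come from the configuration above, but the exact mechanism used by lambda-deploy-latest.sh is an assumption.

# Hedged sketch: push a freshly built package to an existing Lambda function.
import boto3

lambda_client = boto3.client("lambda")
with open("ec2-powercycle.zip", "rb") as package:  # package produced by package.sh
    lambda_client.update_function_code(
        FunctionName="ec2-powercycle",
        ZipFile=package.read(),
        Publish=True,  # publish a new version of the function
    )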

Serverless build pipeline

CircleCI is a hosted CI service that integrates nicely with GitHub and AWS.

Release process

The build pipeline currently has a single workflow with a mandatory build task and, optionally, one of two deployment tasks: Development (red lines) and Production (green lines).

(Build pipeline diagram)

The Development task runs every time the master branch is updated. It creates a deployment package, deploys it to Lambda and invokes the function against the DEV alias.

Once you have completed the development work and wish to "promote" your code to Production, you can trigger the Production task by creating a Git tag with the prefix release- and pushing the tag to the repository.

Use the following commands to create a tag and push it to the repository.

git tag -a release-12 -m "Repoint LIVE alias to release-12 tag"
git push origin release-12

Adding AWS credentials into CircleCI

To enable the CircleCI build job to deploy the deployment package to Lambda, the build job must be configured with AWS credentials.

  • Go to the CircleCI Dashboard and click the cog icon associated with the build job
  • Under the Permissions category click AWS Permissions
  • Fill out Access Key ID and Secret Access Key fields

Identity and Access Management policy

When creating the Lambda function you will be asked to associate an IAM role with it.

IAM policy for Lambda function

The following example policy enables the Lambda function to access these AWS services:

  • CloudWatch - Full access to Amazon CloudWatch for logging and job scheduling
  • EC2 - Access to query instance status and to stop/start instances when the resource tag ec2Powercycle is attached to the instance and the environment tag does not equal p (p = production)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "autoscaling:Describe*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/ec2Powercycle": "*"
        },
        "StringNotEqualsIgnoreCase": {
          "ec2:ResourceTag/environment": "p"
        }
      },
      "Resource": "arn:aws:ec2:*:*:instance/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:UpdateAutoScalingGroup"
      ],
      "Condition": {
        "StringLike": {
          "autoscaling:ResourceTag/ec2Powercycle": "*"
        },
        "StringNotEqualsIgnoreCase": {
          "autoscaling:ResourceTag/environment": "p"
        }
      },
      "Resource": "*"
    }
  ]
}

IAM policy for build pipeline

The following policy enables the build and deployment job to update the Lambda function, invoke it and update its aliases.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:CreateAlias",
                "lambda:GetFunction",
                "lambda:InvokeFunction",
                "lambda:List*",
                "lambda:PublishVersion",
                "lambda:UpdateAlias",
                "lambda:UpdateFunctionCode"
            ],
            "Resource": [
                "arn:aws:lambda:*:*:function:ec2-powercycle"
            ]
        }
    ]
}

Creating and scheduling Lambda function

Once the deployment package has been created, we can create a Lambda function and use a CloudWatch Events rule to run it periodically.

Creating Lambda function

  1. Log on to the AWS console and go to the Lambda configuration menu

  2. Click Create a Lambda function

  3. In the Select blueprint menu choose one of the blueprints (e.g. s3-get-object-python), click the Remove button on the next screen to remove the triggers, then click Next.

  4. On the Configure function page provide the following details:

    • Name*: ec2-powercycle
    • Description: Optional description of the function
    • Runtime*: Python 3.7
  5. In the Lambda function code section select Upload a .ZIP file to upload the ec2-powercycle zip package to Lambda

  6. In the Lambda function handler and role section set the handler name to ec2_powercycle.handler

  7. Select the role that has the above IAM policy attached to it

  8. Set the Timeout value to 1 min

  9. Click Next and Create function
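
If you prefer to create the function from code rather than through the console, a rough boto3 equivalent of the steps above might look like the sketch below; the role ARN is a placeholder and the package name comes from the packaging step earlier.

# Hedged sketch: programmatic equivalent of the console steps above.
import boto3

lambda_client = boto3.client("lambda")
with open("ec2-powercycle-0.0.1.zip", "rb") as package:
    lambda_client.create_function(
        FunctionName="ec2-powercycle",
        Runtime="python3.7",
        Role="arn:aws:iam::123456789012:role/ec2-powercycle-role",  # placeholder role ARN
        Handler="ec2_powercycle.handler",
        Code={"ZipFile": package.read()},
        Timeout=60,  # 1 minute
        Description="Stop and start EC2 instances based on the ec2Powercycle tag",
    )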

Scheduling Lambda function

  1. In the Lambda configuration menu open the ec2-powercycle Lambda function
  2. Go to Triggers tab
  3. Click Add trigger
  4. Select Event source type: CloudWatch Events - Schedule and provide the following details
    • Rule name: any unique name
    • Rule description: optional description of the rule
    • Schedule expression: rate(15 minutes)
  5. Click Submit to create schedule
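
The same schedule can also be set up programmatically. The sketch below is a rough boto3 equivalent of the console steps above; the rule name, region and account id are placeholders.

# Hedged sketch: trigger the function every 15 minutes via a CloudWatch Events rule.
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:eu-west-1:123456789012:function:ec2-powercycle"  # placeholder ARN

# Create (or update) the scheduled rule.
rule = events.put_rule(
    Name="ec2-powercycle-every-15-minutes",  # placeholder rule name
    ScheduleExpression="rate(15 minutes)",
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName="ec2-powercycle",
    StatementId="ec2-powercycle-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the function.
events.put_targets(
    Rule="ec2-powercycle-every-15-minutes",
    Targets=[{"Id": "ec2-powercycle", "Arn": function_arn}],
)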