
D10S0VSkY-OSS / Stack-Lifecycle-Deployment

License: MIT
Open-source self-service infrastructure solution that defines and manages the complete lifecycle of resources provisioned in a cloud. It is a Terraform UI with a REST API for Terraform automation.

Programming Languages

  • CSS: 56736 projects
  • Python: 139335 projects (#7 most used programming language)
  • HTML: 75241 projects
  • JavaScript: 184084 projects (#8 most used programming language)
  • Shell: 77523 projects
  • Dockerfile: 14818 projects

Projects that are alternatives of or similar to Stack-Lifecycle-Deployment

Full Stack Fastapi Postgresql
Full stack, modern web application generator. Using FastAPI, PostgreSQL as database, Docker, automatic HTTPS and more.
Stars: ✭ 7,635 (+8576.14%)
Mutual labels:  celery, fastapi
Terrahub
Terraform Automation and Orchestration Tool (Open Source)
Stars: ✭ 148 (+68.18%)
Mutual labels:  infrastructure, continuous-deployment
Tplan
😃 T计划 (T Plan) is a general-purpose system that integrates task queues, process management, crawler deployment, visual service monitoring, data display, online coding, and remote deployment.
Stars: ✭ 59 (-32.95%)
Mutual labels:  celery, fastapi
vue-element-admin-fastapi
vue-element-admin-fastapi
Stars: ✭ 145 (+64.77%)
Mutual labels:  celery, fastapi
fastapi-celery-redis-rabbitmq
A simple docker-compose app for orchestrating a FastAPI application, a Celery queue with RabbitMQ (broker) and Redis (backend)
Stars: ✭ 81 (-7.95%)
Mutual labels:  celery, fastapi
FastAPI Tortoise template
FastAPI - Tortoise ORM - Celery - Docker template
Stars: ✭ 144 (+63.64%)
Mutual labels:  celery, fastapi
Devtron
Software Delivery Workflow For Kubernetes
Stars: ✭ 130 (+47.73%)
Mutual labels:  continuous-deployment, kubectl
kube-applier
kube-applier enables automated deployment and declarative configuration for your Kubernetes cluster.
Stars: ✭ 27 (-69.32%)
Mutual labels:  infrastructure, continuous-deployment
guane-intern-fastapi
FastAPI-PostgreSQL-Celery-RabbitMQ-Redis backend with Docker containerization
Stars: ✭ 54 (-38.64%)
Mutual labels:  celery, fastapi
fastapi
A base project template built on FastAPI that integrates a Celery-Redis distributed task queue, a JWT user system, ElasticSearch, and encode orm; adapt it to your own needs.
Stars: ✭ 75 (-14.77%)
Mutual labels:  celery, fastapi
headless-wordpress
Headless Wordpress - AWS - Easy Setup
Stars: ✭ 42 (-52.27%)
Mutual labels:  infrastructure, stack
datagov-deploy
Main repository for the data.gov service
Stars: ✭ 156 (+77.27%)
Mutual labels:  infrastructure, stack
fastapi-framework
A FastAPI Framework for things like Database, Redis, Logging, JWT Authentication, Rate Limits and Sessions
Stars: ✭ 26 (-70.45%)
Mutual labels:  fastapi
recruitr
Online Code Judging Tool
Stars: ✭ 25 (-71.59%)
Mutual labels:  celery
fastapi-auth0
FastAPI authentication and authorization using auth0.com
Stars: ✭ 104 (+18.18%)
Mutual labels:  fastapi
hatrack
Fast, multi-reader, multi-writer, lockless data structures for parallel programming
Stars: ✭ 55 (-37.5%)
Mutual labels:  stack
lupyne
Pythonic search engine based on PyLucene.
Stars: ✭ 61 (-30.68%)
Mutual labels:  fastapi
fast-api-sqlalchemy-template
Dockerized web application on FastAPI, sqlalchemy1.4, PostgreSQL
Stars: ✭ 25 (-71.59%)
Mutual labels:  fastapi
K8sSymfonyReact
We've found a ship, a captain, a composer and an orchestra. 🎵
Stars: ✭ 13 (-85.23%)
Mutual labels:  continuous-deployment
rips-old
Rust IP Stack - A userspace IP stack written in Rust (Work in progress)
Stars: ✭ 32 (-63.64%)
Mutual labels:  stack




Stack Lifecycle Deployment

Open-source solution that defines and manages the complete lifecycle of resources provisioned in a cloud!
Explore the docs »

Table of Contents
  1. About SLD
  2. Getting Started
  3. Usage
  4. Custom settings
  5. Architecture
  6. Roadmap
  7. Contributing
  8. License
  9. Contact
  10. Acknowledgements
  11. Built With

About SLD

SLD helps accelerate deployments and makes IaC reusable, generating dynamic forms and maintaining different variables for each environment with the same code. With SLD you can schedule infrastructure deployments as well as their destruction, manage users by role, and separate stacks by squad and environment.

Product screenshot

Main features:

  • Async FastAPI
  • Dashboard / UI
  • Distributed task routing by squad
  • Infrastructure as Code (IaC) based on Terraform
  • Dynamic HTML forms generated from Terraform variables
  • Re-deploy infrastructure keeping the previous parameters
  • Distributed, microservices-based architecture
  • Decoupled tasks and event-driven pattern
  • Resilient: deployment rollback and retry on failure

SLD is the easy way to use your Terraform code!

Getting Started

Prerequisites

You need Docker and docker-compose, or kind (recommended).
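
A quick way to confirm the prerequisites are installed is to run each tool's standard version check (a minimal sketch; the exact docker-compose invocation depends on whether you use the plugin or the standalone binary):

    # check that the required tools are available
    docker --version
    docker-compose --version   # or: docker compose version
    kind --version
    kubectl version --client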

Installation

  1. Clone the SLD repo

    git clone https://github.com/D10S0VSkY-OSS/Stack-Lifecycle-Deployment.git
  2. Deploy SLD in k8s with kind

    cd Stack-Lifecycle-Deployment/play-with-sld/kubernetes 
    sh kplay.sh start

    Result:

    Starting SLD for play
    Creating cluster "kind" ...
    ✓ Ensuring node image (kindest/node:v1.20.2) 🖼
    ✓ Preparing nodes 📦 📦  
    ✓ Writing configuration 📜 
    ✓ Starting control-plane 🕹️ 
    ✓ Installing CNI 🔌 
    ✓ Installing StorageClass 💾 
    ✓ Joining worker nodes 🚜 
    Set kubectl context to "kind-kind"
    You can now use your cluster with:
    
    kubectl cluster-info --context kind-kind
  3. Create init user

    sh kplay.sh init

    Result:

    kind ok
    docker ok
    kubectl ok
    jq ok
    curl ok
    
    init SLD
    #################################################
    #  Now, you can play with SLD 🕹️                #
    #################################################
    API: http://localhost:5000/docs
    DASHBOARD: http://localhost:5000/
    ---------------------------------------------
    username: admin
    password: Password08@
    ---------------------------------------------
    

    List endpoints

    sh kplay.sh list

    Result:

    kind ok
    docker ok
    kubectl ok
    
    List endpoints
    API: http://localhost:8000/docs
    DASHBOARD: http://localhost:5000/
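
When you are done playing, the kind cluster created above can be removed with kind's own CLI (a generic kind command, not an SLD-specific one):

    # tear down the local playground cluster named "kind"
    kind delete cluster --name kind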

Usage

  1. Sign-in to DASHBOARD:

    sign-in

    Click the dashboard link:

    sign-in

  2. Add Cloud account

    sign-in

    Fill in the form with the required data. In our example we will use:

    • Squad: squad1
    • Environment: develop

    By default, the workers run as squad1 and squad2 for play purposes, but you can change this and scale whenever you want.

    When you add an account for a provider (aws, gcp, azure), a squad is created. You must create a worker named after that squad; if you don't, the deployment will remain in a "PENDING" state. Read Workers.

    Finally, add:

    • Access_key_id
    • Secret_access_key
    • Default_region (default eu-west-1). If you use assume role, fill in the rest of the data.
  3. Add terraform module or stack

    sign-in

    • Name: Add the name with a valid prefix according to the cloud provider.

    Prefixes supported: aws_, gcp_, azure_

    You can pass a user and password in the repository URL, e.g. https://username:[email protected]/aws_vpc. For SSH, you can pass the key as a secret in the deployment to the sld user.

    • Branch: Add the branch you want to deploy; the default is master.
    • Squad Access: Assign which squads should have access to this stack.

    '*' gives access to all squads; you can also allow access to one or many squads separated by commas: squad1,squad2

    • tf version: Indicate the version of Terraform required by the module or stack.

    https://releases.hashicorp.com/terraform/

    • Description: Describe the module or stack to help others during implementation.
  4. Deploy your first stack!!!

    List stacks for deploy

    sign-in

    Choose deploy

    sign-in

    SLD will generate a dynamic form based on the stack variables. Fill in the form and press the Deploy button.

    sign-in

    Important! Assign the same squad and environment that you previously created when adding the account (see Add Cloud account).

    Now, the status of the task will change as the deployment progresses.

    sign-in

    You can control the deployment life cycle: you can destroy, re-deploy (SLD will keep the previous values), or edit those values at will. Finally, you can manage the life cycle programmatically and schedule the destruction and creation of the infrastructure, a good practice for cost savings (see the example below).
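
Since SLD exposes its functionality through a FastAPI-based REST API (see the API address printed by kplay.sh list), the programmatic life cycle management above can be scripted against it. As a minimal sketch, assuming the FastAPI default OpenAPI path, you can list the available routes like this:

    # inspect the available API routes (assumes the default /openapi.json path and the port shown by kplay.sh list)
    curl -s http://localhost:8000/openapi.json | jq '.paths | keys'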

Custom settings

Storage backend

SLD uses its own remote backend, so you don't need to configure any backend in your Terraform code. The following example shows a backend configuration:

        terraform {
          backend "http" {
            address = "http://remote-state:8080/terraform_state/aws_vpc-squad1-develop-vpc_core"
            lock_address = "http://remote-state:8080/terraform_lock/aws_vpc-squad1-develop-vpc_core"
            lock_method = "PUT"
            unlock_address = "http://remote-state:8080/terraform_lock/aws_vpc-squad1-develop-vpc_core"
            unlock_method = "DELETE"
          }
        }
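
Because this is a plain HTTP backend, the stored state can also be inspected directly. A minimal sketch, assuming it is run from a container or pod that can resolve the remote-state service:

    # fetch the raw state document for this deployment from the remote-state service
    curl -s http://remote-state:8080/terraform_state/aws_vpc-squad1-develop-vpc_core | jq .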
        

At the moment SLD supports MongoDB, S3, and a local backend (for testing purposes only). To configure MongoDB as the backend, you must pass the following variables as parameters to the remote-state service:

# docker-compose.yaml
    environment:                                                                                                     
      SLD_STORE: mongodb                                                                                             
      SLD_MONGODB_URL: "mongodb:27017/"
      MONGODB_USER: admin
      MONGODB_PASSWD: admin
# k8s yaml
    env:
    - name: SLD_STORE
      value: mongodb
    - name: SLD_MONGODB_URL
      value: "mongodb:27017/"
    - name: MONGODB_USER
      value: admin
    - name: MONGODB_PASSWD
      value: admin

To configure S3 you can pass the AWS access and secret keys; if SLD is running in AWS, it is recommended to use IAM roles instead:

    env:
    - name: SLD_STORE
      value: "S3"
    - name: SLD_BUCKET
      value: "s3-sld-backend-cloud-tf-state"
    - name: AWS_ACCESS_KEY
      value: ""
    - name: AWS_SECRET_ACCESS_KEY
      value: ""

Data remote state

To use the outputs of other stacks, you can configure a terraform_remote_state data source as follows. The key is always the same as the "Task Name":

    stack-name    squad account    env        deploy name
    aws_vpc       squad1           develop    vpc_core
data "terraform_remote_state" "vpc_core" {
  backend = "http"
  config = {
    address = "http://remote-state:8080/terraform_state/aws_vpc-squad1-develop-vpc_core"
  }
}

Test example:

echo "data.terraform_remote_state.vpc_core.outputs"|terraform console

Workers

The workers in SLD are responsible for executing the infrastructure deployments. You can use one or more workers for each account, or a single worker for several accounts at the same time; it all depends on the degree of parallelism and segregation you need.

# Example k8s worker for the squad1 account; change this for each of your accounts
# Stack-Lifecycle-Deployment/play-with-sld/kubernetes/k8s/sld-worker-squad1.yml
# Add replicas to increase parallelism
# Add more squad queues if you want to group accounts in the same worker:
# command: ["celery", "--app", "tasks.celery_worker", "worker", "--loglevel=info", "-c", "1", "-E", "-Q", "squad1,another_squad_account"]

apiVersion: apps/v1
kind: Deployment
metadata:
  name: stack-deploy-worker-squad1
  labels:
    name: stack-deploy-worker-squad1
spec:
  replicas: 1 
  selector:
    matchLabels:
      name: stack-deploy-worker-squad1
  template:
    metadata:
      labels:
        name: stack-deploy-worker-squad1
    spec:
      subdomain: primary
      containers:
        - name: stack-deploy-worker-squad1
          image: d10s0vsky/sld-api:latest
          imagePullPolicy: Always
          env:
          - name: TF_WARN_OUTPUT_ERRORS
            value: "1"
          resources:
            limits:
              memory: 600Mi
              cpu: 1
            requests:
              memory: 300Mi
              cpu: 500m
          command: ["celery", "--app", "tasks.celery_worker", "worker", "--loglevel=info", "-c", "1", "-E", "-Q", "squad1"]

  # Example docker-compose worker for account squad1, change this for each of your accounts
  # Stack-Lifecycle-Deployment/play-with-sld/docker/docker-compose.yml

  worker:
    image: d10s0vsky/sld-api:latest
    entrypoint: ["celery", "--app", "tasks.celery_worker", "worker", "--loglevel=info", "-c", "1", "-E", "-Q", "squad1"]
    environment:
      BROKER_USER: admin
      BROKER_PASSWD: admin
    depends_on:
      - rabbit
      - redis
      - db
      - remote-state
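
With docker-compose you can get the same parallelism by scaling the worker service (a generic docker-compose feature, shown here against the compose file referenced above):

    # start the stack and run two squad1 workers
    cd Stack-Lifecycle-Deployment/play-with-sld/docker
    docker-compose up -d --scale worker=2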

Users roles

SLD has preconfigured user roles to make this easy to manage:

    roles          scope                      description
    yoda           global                     Global scope: can see all squads and is a full admin
    darth_vader    one or many squads         Scope limited to the assigned squads: can see them and is a full manager of only those squads
    stormtrooper   one or many squads         Scope limited to the assigned squads: can only run the deployments assigned within those squads
    R2-D2          all, one or many squads    Identification-only role that must be combined with one of the above; intended for bot users that access the API

Architecture

Architecture diagram

Roadmap

  • Storage backend support for GCP Cloud Storage and Azure Blob Storage
  • LDAP and SSO authentication
  • Slack integration
  • FluentD / Elasticsearch integration
  • InfluxDB integration
  • Prometheus
  • Estimate pricing by stack
  • Anomaly detection
  • Advanced metrics and logs
  • Resource size recommendation based on metrics
  • Shift Left Security deployment
  • Multi tenancy
  • Topology graphs
  • Mutual TLS
  • Add workers automatically by squad
  • Onboarding resources
  • Add more cloud and on-prem providers

Contributing

Contributions are what makes the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Contact

[email protected]

Stack Lifecycle Deployment

Acknowledgements

Built With
