
fjudith / saltstack-kubernetes

License: Apache-2.0
Deploy the lowest-cost, production-ready Kubernetes cluster using Terraform and SaltStack.

Programming Languages

  • Jinja (831 projects)
  • SaltStack (118 projects)
  • Shell (77,523 projects)
  • HCL (1,544 projects)

Projects that are alternatives of or similar to saltstack-kubernetes

Minikube
Run Kubernetes locally
Stars: ✭ 22,673 (+48140.43%)
Mutual labels:  cluster, cncf
K8s Tew
Kubernetes - The Easier Way
Stars: ✭ 269 (+472.34%)
Mutual labels:  cluster, cncf
Kudo
Kubernetes Universal Declarative Operator (KUDO)
Stars: ✭ 849 (+1706.38%)
Mutual labels:  cluster, cncf
CKSS Certified Kubernetes Security Specialist
This repository is a collection of resources to prepare for the Certified Kubernetes Security Specialist (CKSS) exam.
Stars: ✭ 333 (+608.51%)
Mutual labels:  cluster, cncf
kubernetes the easy way
Automating Kubernetes the hard way with Vagrant and scripts
Stars: ✭ 22 (-53.19%)
Mutual labels:  cluster, cncf
endurox-go
Application Server for Go (ASG)
Stars: ✭ 32 (-31.91%)
Mutual labels:  cluster
core
augejs is a progressive Node.js framework for building applications. https://github.com/augejs/augejs.github.io
Stars: ✭ 18 (-61.7%)
Mutual labels:  cluster
racompass
An advanced GUI for Redis. Modern. Efficient. Fast. A faster and more robust Redis management tool for developers that need to manage data with confidence. It supports Redis modules now!
Stars: ✭ 26 (-44.68%)
Mutual labels:  cluster
k8s-lemp
LEMP stack in a Kubernetes cluster
Stars: ✭ 74 (+57.45%)
Mutual labels:  cluster
init ec2
Initialize an EC2 cluster: password-free login (ubuntu and root), hostname setup, and hosts file configuration.
Stars: ✭ 11 (-76.6%)
Mutual labels:  cluster
dlaCluster
Python code for simple diffusion limited aggregation (DLA) simulation. The code provided creates a .gif for cluster growth and calculates fractal dimensionality of the cluster. User can vary the radius of the cluster.
Stars: ✭ 23 (-51.06%)
Mutual labels:  cluster
coredns
CoreDNS is a DNS server that chains plugins
Stars: ✭ 8,962 (+18968.09%)
Mutual labels:  cncf
pacman.store
Pacman Mirror via IPFS for ArchLinux, Endeavouros and Manjaro
Stars: ✭ 65 (+38.3%)
Mutual labels:  cluster
devops
DevOps management system
Stars: ✭ 49 (+4.26%)
Mutual labels:  saltstack
pulseha
PulseHA is an active-passive high availability cluster daemon that uses gRPC and is written in Go.
Stars: ✭ 15 (-68.09%)
Mutual labels:  cluster
graphite-formula
docs.saltstack.com/en/latest/topics/development/conventions/formulas.html
Stars: ✭ 16 (-65.96%)
Mutual labels:  saltstack
siddhi-operator
Operator allows you to run stream processing logic directly on a Kubernetes cluster
Stars: ✭ 16 (-65.96%)
Mutual labels:  cncf
AMapMarker-master
Provides a solution for custom markers on AMap (Gaode Map) and improves the official AMap point-clustering feature.
Stars: ✭ 63 (+34.04%)
Mutual labels:  cluster
kubernetes-marketplace
Marketplace of Kubernetes applications available for quick and easy installation into Civo Kubernetes clusters
Stars: ✭ 136 (+189.36%)
Mutual labels:  cluster
enduser-public
🔚👩🏾‍💻👨🏽‍💻👩🏼‍💻CNCF End User Community
Stars: ✭ 75 (+59.57%)
Mutual labels:  cncf



Saltstack-Kubernetes is an open-source Kubernetes cluster deployment platform that aims to evaluate and run cloud-native applications like those registered in the CNCF landscape. Server provisioning is managed with Terraform, primarily targeting low-cost cloud providers such as Scaleway and Hetzner. The Kubernetes cluster deployment itself is managed with SaltStack, which deploys the software binaries, configuration files, and cloud-native applications required to operate the cluster.


Solution design

The solution design carries the following requirements:

  1. Cloud provider agnostic: Works similarly on any cloud
  2. Networking privacy: All intra-cluster communications are TLS encrypted, the pod network is encrypted, and the firewall is enabled by default
  3. Cluster security: Node security and RBAC are enabled by default
  4. Public endpoint: Leverage two servers standing as edge gateways and allow the use of a single redundant public IP address
  5. Secure admin network: Establish a private mesh VPN between all servers (see the check after this list)
  6. Composable CRI: Support various Container Runtime Interface plugins (see: Features)
  7. Composable CNI: Support various Container Network Interface plugins (see: Features)
  8. Converged storage: Persistent storage provided by cluster nodes
  9. API-driven DNS: DNS records are managed just-in-time during the deployment
  10. Stable: Only leverage stable versions of software components
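
As a quick way to verify requirement 5 once the servers are up, the mesh VPN peers can be listed with the standard wg tool shipped in wireguard-tools. This is a minimal sketch assuming the mesh VPN is WireGuard (suggested by the wireguard-tools prerequisite below) and that edge01 follows the hostnames used later in this document; adjust the domain to your own.

# List the WireGuard peers seen by the first edge server (assumes the mesh VPN is WireGuard).
ssh root@edge01.example.com "wg show"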

Major component versions

Cloud provider
  • hetzner
  • scaleway

DNS provider
  • cloudflare

Kubernetes version
  • 1.16.8
  • 1.17.4
  • 1.18.10
  • 1.19.7
  • 1.22.3

Container runtime
  • docker 19.03.13
  • containerd v1.4.11
  • cri-o 1.15

Container network
  • cni 0.7.5
  • calico 3.16.1
  • canal 3.2.1 (flannel 0.9.1)
  • flannel 0.1.0
  • weave 2.6.5
  • Cilium 1.30.0

Default versions are shown in bold.

Quick start

Prerequisites

Before starting, check that the following requirements are met:

  • Register a public domain name
  • Associate the domain name with Cloudflare (free plan)
  • Register with the cloud provider of your choice (e.g. Scaleway, Hetzner); expect around $100 for a full month
  • Set up terraform/terraform.tfvars with your credentials and configuration using this Example
  • Set up srv/pillar/cluster_config.sls with your credentials and configuration using this Example
    • Use this guide to customize the various credentials
  • Install the required tools (e.g. terraform, jq, wireguard-tools, etc.)
  • Create the SSH key required to send commands to the servers (see the sketch below)
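
The key generation itself is standard OpenSSH; the key path and comment below are illustrative assumptions, not values mandated by the project, so point terraform/terraform.tfvars at whatever public key you actually create.

# Generate the SSH key pair used to reach the servers (path and comment are examples).
ssh-keygen -t ed25519 -f ~/.ssh/saltstack-kubernetes -C "saltstack-kubernetes"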

Notice: The configuration files are listed in the .gitignore file to avoid accidental uploads to the Web.

Server creation

Once the requirements are met, use the following commands to instantiate the servers and the appropriate DNS records.

cd terraform/
terraform init
terraform plan
terraform apply

Fourteen servers are instantiated by default. Terraform task parallelism is constrained in order to limit the load on the cloud provider API.
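
If the provider API still throttles the run, parallelism can also be lowered explicitly on the command line; the value below is illustrative, not the project's default.

# Limit the number of concurrent resource operations handled by Terraform.
terraform apply -parallelism=2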

At the end of the process a similar output should be displayed, listing all the generated servers and their associated IP addresses.

Outputs:

hostnames = [
    edge01,
    edge02,
    etcd01,
    etcd02,
    etcd03,
    master01,
    master02,
    master03,
    node01,
    node02,
    node03,
    node04,
    node05,
    node06
]

...

vpn_ips = [
    172.17.4.251,
    172.17.4.252,
    172.17.4.51,
    172.17.4.52,
    172.17.4.53,
    172.17.4.101,
    172.17.4.102,
    172.17.4.103,
    172.17.4.201,
    172.17.4.202,
    172.17.4.203,
    172.17.4.204,
    172.17.4.205,
    172.17.4.206
]

Kubernetes cluster deployment

The Kubernetes cluster deployment is achieved by connecting to the salt-master server (i.e. edge01) and executing the Salt states.

This can be achieved using the following one-liner...

ssh root@edge01.example.com -C "salt-run state.orchestrate _orchestrate"

... Or by first opening an SSH session to benefit from the Salt state output coloring.

ssh root@edge01.example.com

root@edge01 ~ # salt-run state.orchestrate _orchestrate
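
Optionally, before launching the orchestration, you can confirm that every minion answers the Salt master. This is a standard Salt check rather than a step documented by the project.

root@edge01 ~ # salt '*' test.ping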

Accessing

Replace "example.com" with the "public-domain" value from the Salt pillar.

Retrieve the admin user token stored in the Salt pillar (i.e. /srv/pillar/cluster_config.sls).
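
For example, the token can be read straight from the pillar file on the salt-master; the exact key name inside cluster_config.sls is an assumption here, so adjust the grep pattern to your pillar.

# Print the pillar lines mentioning a token (the key name may differ in your cluster_config.sls).
ssh root@edge01.example.com "grep -i token /srv/pillar/cluster_config.sls"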

Install kubectl.

Download the Kubernetes cluster CA certificate.

export CLUSTER_DOMAIN="example.com"

mkdir -p ~/.kube/ssl/${CLUSTER_DOMAIN}
scp root@edge01.${CLUSTER_DOMAIN}:/etc/kubernetes/ssl/ca.pem ~/.kube/ssl/${CLUSTER_DOMAIN}/

Create the kubectl configuration file.

export CLUSTER_TOKEN=mykubernetestoken
export CLUSTER_NAME="example"
export KUBECONFIG="${HOME}/.kube/config"

kubectl config set-cluster ${CLUSTER_NAME} \
--server=https://kubernetes.${CLUSTER_DOMAIN}:6443 \
--certificate-authority=${HOME}/.kube/ssl/${CLUSTER_DOMAIN}/ca.pem

kubectl config set-credentials admin-${CLUSTER_NAME} \
--token=${CLUSTER_TOKEN}

kubectl config set-context ${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=admin-${CLUSTER_NAME}

kubectl config use-context ${CLUSTER_NAME}
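
As an optional sanity check (not part of the original steps), confirm that the context was created and is now the active one.

kubectl config get-contexts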

Kubernetes cluster access

Check the Kubernetes cluster component health.

kubectl get componentstatus

NAME                 STATUS    MESSAGE              ERROR
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

Check the Kubernetes cluster nodes status.

kubectl get nodes

NAME       STATUS   ROLES          AGE   VERSION
edge01     Ready    ingress,node   32d   v1.22.3
edge02     Ready    ingress,node   32d   v1.22.3
master01   Ready    master         32d   v1.22.3
master02   Ready    master         32d   v1.22.3
master03   Ready    master         32d   v1.22.3
node01     Ready    node           32d   v1.22.3
node02     Ready    node           32d   v1.22.3
node03     Ready    node           32d   v1.22.3
node04     Ready    node           32d   v1.22.3
node05     Ready    node           32d   v1.22.3
node06     Ready    node           32d   v1.22.3

Retrieve the URLs protected by the kube-apiserver.

kubectl cluster-info

Kubernetes control plane is running at https://kubernetes.example.com:6443
Elasticsearch is running at https://kubernetes.example.com:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging:db/proxy
Kibana is running at https://kubernetes.example.com:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
CoreDNS is running at https://kubernetes.example.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Kubectl Proxy

The URLs returned by kubectl cluster-info are protected by mutual TLS authentication, meaning that direct access from your web browser is denied until you register the appropriate client certificate and private key in it.

Prefer the kubectl proxy command, which enables access to the URLs protected by the kube-apiserver. Once launched, the URLs are available from localhost on HTTP port 8001.

e.g. http://localhost:8001/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
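
A minimal usage sketch, using kubectl proxy's default port and the CoreDNS URL listed above:

# Start the proxy in the background, then query a protected service URL through it.
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy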

Kubernetes Dashboard
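
With the proxy from the previous section still running, and assuming the Dashboard is exposed as an HTTPS service named kubernetes-dashboard in the kube-system namespace (an assumption; adjust the namespace and service name to your deployment), it can be reached at:

# Hypothetical service path; change it if the Dashboard runs elsewhere in your cluster.
curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/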


Credits

This project is largely inspired by the following projects:

License

This project is licensed under the Apache-2.0 License.
