redkubes / Otomi Core

Licence: apache-2.0
Otomi Container Platform, a suite of integrated best of breed open source tools combined with automation & self service, all wrapped together and made available as an enterprise ready and single deployable solution

Otomi Core

Otomi Core is the heart of the Otomi Container Platform. Otomi Container Platform offers an out-of-the-box enterprise container management platform (on top of Kubernetes) to increase developer efficiency and reduce complexity. It is a turnkey cloud native solution that integrates upstream Kubernetes with proven open source components. Otomi is made available as a single deployable package with curated, industry-proven applications and policies for better governance and security. With carefully crafted sane defaults at every step, it minimizes configuration effort and time to market. Otomi automates most (if not all) of your cluster operations and includes application lifecycle management at its core. It is open source and transparent, allowing for both customization and extensibility. Incorporating open source standards and best practices, Otomi aims to bring new features and stability with every iteration.

Important features:

  • Single Sign On: Bring your own IDP or use Keycloak
  • Multi Tenancy: Create admins and teams to allow self service of deployments
  • Automatic Ingress Configuration: Easily configure ingress for team services or core apps, allowing access within minutes.
  • Input/output validation: Configuration and output manifests are checked statically for validity and best practices.
  • Policy enforcement: Manifests are checked both statically and on the cluster at runtime for compliance with OPA policies.
  • Automatic Vulnerability Scanning: All configured team service containers get scanned in Harbor.
  • and many more (for a full list see otomi.io)

This repo is also built as an image and published on docker hub at otomi/core. Other parts of the platform:

  • Otomi Tasks: tasks used by core to glue all its pieces together
  • Otomi Clients: clients used by the tasks, generated from vendors' openapi specs

This readme is aimed at development. If you wish to contribute please read our Contributor Code of Conduct and Contribution Guidelines.

To get up and running with the platform please follow the online documentation for Otomi Container Platform. It lists all the prerequisites and tooling expected, so please read up before continuing here.

Development

Editing source files

Most of the code is in go templates: helmfile's *.gotmpl files and the helm charts' templates/*.yaml. Please become familiar with its intricacies by reading our special section on go templating.
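For orientation, helmfile values templates mix YAML with go template expressions. A hypothetical fragment (the value keys here are illustrative, not taken from this repo) could look like:

```gotmpl
{{- /* Hypothetical *.gotmpl fragment; keys are illustrative */ -}}
{{- if .Values.monitoring.enabled }}
prometheus:
  retention: {{ .Values.monitoring.retention | default "10d" }}
{{- end }}
```

Conditionals and `default` pipes like these are what make the sane-defaults approach possible: values only need to be set when they deviate from the defaults.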

For editing the values-schema.yaml please refer to the meta-schema documentation.
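As a sketch of what to expect, values-schema.yaml follows a JSON-Schema style of property definitions. A minimal hypothetical fragment (property names are illustrative, not taken from this repo) might look like:

```yaml
# Hypothetical fragment in the JSON-Schema style of values-schema.yaml
# (property names are illustrative)
properties:
  teamConfig:
    type: object
    properties:
      name:
        type: string
        pattern: "^[a-z0-9-]+$"
        description: Lowercase team identifier
```

The meta-schema documentation describes which keywords are allowed and how they are validated.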

For working with bats and adding tests to bin/tests/*, please refer to the online bats documentation.

You can define OPA policies in policies/*.rego files. These are used both for static analysis (also at build time) and by gatekeeper (at run time) to check whether manifests are conformant.
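For illustration, a Gatekeeper-compatible rule in a policies/*.rego file could look like the following sketch; the package name, rule, and message are hypothetical, not taken from this repo:

```rego
# Hypothetical policy sketch; package, rule, and message are illustrative
package policies.disallow_latest_tag

violation[msg] {
  input.kind == "Deployment"
  container := input.spec.template.spec.containers[_]
  endswith(container.image, ":latest")
  msg := sprintf("container %v must not use the mutable 'latest' tag", [container.name])
}
```

A rule in this style produces a violation message per offending container, which works both for static checks against rendered manifests and for runtime admission checks by gatekeeper.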

1. Validating changes

For the next steps you will need to export ENV_DIR to point to your values folder, and source the aliases:

# assuming you created otomi-values repo next to this:
export ENV_DIR=$PWD/../otomi-values
. bin/aliases

Input

Start by validating the configuration values against the values-schema.yaml with:

# all clusters
otomi validate-values
# For the next step you will also need to export `CLOUD` and `CLUSTER`, as it only validates a configured target cluster:
export CLOUD=google CLUSTER=demo
otomi validate-values 1

Any changes made to the meta-schema will then also be automatically validated.

Output

You can check whether the resulting manifests conform to our specs with:

# all clusters
otomi validate-templates
# For the next step you will also need to export `CLOUD` and `CLUSTER`, as it only validates a configured target cluster:
export CLOUD=google CLUSTER=demo
otomi validate-templates 1

This will check whether CRs match their CRDs, and also validates the k8s manifests against best practices using kubeval.

And to run the policy checks run the following:

# all clusters
otomi check-policies
# For the next step you will also need to export `CLOUD` and `CLUSTER`, as it only validates a configured target cluster:
otomi check-policies 1

2. Diffing changes

To test changes in code against running clusters you will need to export at least ENV_DIR, CLOUD and CLUSTER, and source the aliases:

# assuming you created otomi-values repo next to this:
export ENV_DIR=$PWD/../otomi-values CLOUD=google CLUSTER=demo
. bin/aliases

After changing code you can do a diff to see everything still works and what has changed in the output manifests:

otomi diff
# or target one release:
otomi diff -l name=prometheus-operator

3. Deploying changes

It is preferred to deploy from the values repo, as it is tied only to the clusters listed there and thus has a smaller blast radius. When you feel that you are in control and want fast iteration, you can connect to a values repo directly by exporting ENV_DIR; this is mandatory, and the CLI won't work without it. As a failsafe mechanism, the CLI will also check that you are targeting kubectl's current-context.
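The current-context failsafe can be sketched roughly as follows. This is an illustrative reconstruction, not the actual CLI code; the function and variable names are hypothetical, and in practice the current context would come from `kubectl config current-context`:

```shell
# Illustrative sketch of the current-context failsafe (not the actual CLI code).
# In the real flow the first argument would be: $(kubectl config current-context)
check_context() {
  current="$1"
  expected="$2"
  if [ "$current" != "$expected" ]; then
    echo "Refusing to deploy: current-context is '$current', expected '$expected'" >&2
    return 1
  fi
  echo "Context OK: $current"
}
```

Failing fast on a context mismatch prevents accidentally deploying the stack to whatever cluster kubectl happens to be pointed at.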

To deploy everything in the stack:

# target your cluster
export CLOUD=google CLUSTER=demo
# and deploy
otomi deploy

NOTICE: on GKE this may sometimes result in an access token refresh error, because GKE's token refresh mechanism in .kube/config references the full path to the gcloud binary, which is mounted from the host but inaccessible from within the container. (See bug report: https://issuetracker.google.com/issues/171493249.) Retrying the command usually works, so do that to work around it for now.

It is also possible to target individual helmfile releases from the stack:

otomi apply -l name=prometheus-operator

This will first do a diff and then a sync. But if you expect the helm bookkeeping to not match the current state (because resources were manipulated without helm), then do a sync:

otomi sync -l name=prometheus-operator