
versioneye / ops_contrib

License: MIT License
Infrastructure code to setup your own VersionEye instance.

Programming Languages

shell
77523 projects

Projects that are alternatives of or similar to ops_contrib

litemall-dw
A big data project based on the open source Litemall e-commerce project, including front-end tracking (openresty+lua) and back-end tracking; a data warehouse (five layers), real-time computing and user profiling. The big data platform uses CDH6.3.2 (scripted with vagrant+ansible) and also includes Azkaban workflows.
Stars: ✭ 36 (-56.63%)
Mutual labels:  vagrant
ansible virtualization
Ansible Collection: Virtualization roles
Stars: ✭ 31 (-62.65%)
Mutual labels:  vagrant
kubernetes-cluster
No description or website provided.
Stars: ✭ 17 (-79.52%)
Mutual labels:  vagrant
usergrid-docker
Build and run Usergrid 2.1 using Docker
Stars: ✭ 41 (-50.6%)
Mutual labels:  vagrant
fastdata-cluster
Fast Data Cluster (Apache Cassandra, Kafka, Spark, Flink, YARN and HDFS with Vagrant and VirtualBox)
Stars: ✭ 20 (-75.9%)
Mutual labels:  vagrant
tsharkVM
tshark + ELK analytics virtual machine
Stars: ✭ 51 (-38.55%)
Mutual labels:  vagrant
docker-swarm-mode-getting-started
Repository for my Pluralsight course Getting Started with Docker Swarm Mode
Stars: ✭ 40 (-51.81%)
Mutual labels:  vagrant
vgm
Vagrant Manager – command-line tool to simplify management of vagrant boxes
Stars: ✭ 16 (-80.72%)
Mutual labels:  vagrant
vagrant-ids
An Ubuntu 16.04 build containing Suricata, PulledPork, Bro, and Splunk
Stars: ✭ 21 (-74.7%)
Mutual labels:  vagrant
halcyon-vagrant-kubernetes
Vagrant deployment mechanism for halcyon-kubernetes.
Stars: ✭ 12 (-85.54%)
Mutual labels:  vagrant
vvv-utilities
Official VVV extensions
Stars: ✭ 22 (-73.49%)
Mutual labels:  vagrant
super-duper-vault-train
🚄▼▼▼▼▼▼
Stars: ✭ 19 (-77.11%)
Mutual labels:  vagrant
sitecore-packer
Packer templates for Sitecore development with IIS, SOLR and SQL Server on Windows
Stars: ✭ 19 (-77.11%)
Mutual labels:  vagrant
vim-vagrant
basic vim/vagrant integration
Stars: ✭ 55 (-33.73%)
Mutual labels:  vagrant
packer-ubuntu
No description or website provided.
Stars: ✭ 29 (-65.06%)
Mutual labels:  vagrant
jumbo
🐘 A local Hadoop cluster bootstrapper using Vagrant, Ansible, and Ambari.
Stars: ✭ 17 (-79.52%)
Mutual labels:  vagrant
docker-atlassian
A docker-compose orchestration for JIRA Software and Confluence based on docker containers.
Stars: ✭ 13 (-84.34%)
Mutual labels:  vagrant
wordpress
The WordPress project layout used by many of Seravo's customers, suitable also for local development with Vagrant and git deployment
Stars: ✭ 95 (+14.46%)
Mutual labels:  vagrant
AEM-UP
🚀 AEM Author, Dispatcher and Publisher in one VM managed via Vagrant and provisioned via Ansible
Stars: ✭ 18 (-78.31%)
Mutual labels:  vagrant
ansible-docker-vagrant-example
An example to demonstrate the power of Ansible, Docker and Vagrant
Stars: ✭ 22 (-73.49%)
Mutual labels:  vagrant

ops_contrib

This repo contains scripts to install & operate VersionEye as an on-premises installation. Everybody can contribute!

The software for VersionEye is shipped in multiple Docker images. VersionEye is a distributed system composed of at least 8 Docker images. The Docker images and their relations to each other are described in Docker Compose files. This repository describes how to fetch, start, stop and monitor the VersionEye Docker images.


Starting point

Clone this repository and cd into it:

git clone https://github.com/versioneye/ops_contrib.git && cd ops_contrib

Some of the commands and files below are found in the root of this repository, so cloning the repository is the easiest way to get access to them. Alternatively you can download the files individually or use the repository archive.

There are 2 ways of running the VersionEye software. The simplest is to run the Vagrant box described in the next section; that is perfect for a quick start to try out the software. For production environments we recommend setting up the Docker containers natively. In that case you can skip the Vagrant section.

Vagrant

There is a Vagrantfile in this directory which describes a Vagrant box for VersionEye. Vagrant is a cool technology for describing and managing VMs. If you don't have it yet, please download it from here. By default Vagrant uses VirtualBox as its VM provider. You can download VirtualBox from here. This setup is tested with Vagrant version 1.8.5 and VirtualBox version 5.0.16 r105871.

Open a console, navigate to the root of this git repository and simply run this command:

vagrant up

That will create a new virtual machine in VirtualBox and install the VersionEye Docker images on it. Depending on your internet connection this can take a couple of minutes. Once everything is done you can reach the VersionEye application at http://127.0.0.1:7070.

But keep in mind that this Vagrant setup is just for development and testing. It's not a production setup! If you shut down the Vagrant box, you might lose data!
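
For completeness, these are the standard Vagrant commands for managing the box:

vagrant halt      # shut the VM down; 'vagrant up' boots it again later
vagrant suspend   # pause the VM and save its current state
vagrant destroy   # delete the VM from VirtualBox entirely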

If you don't want to use Vagrant and would rather run the Docker containers natively on your machine, keep reading. The following sections describe how to start, stop and monitor the VersionEye Docker images natively.

System requirements

We recommend a minimum resource configuration of:

  • 2 vCPUs
  • 8 GB of RAM
  • 25 GB of storage

This setup will allow you to get VersionEye off the ground successfully. It's the equivalent of an AWS t2.large. Some customers are using VersionEye to monitor 1500 internal software projects. They are running the software with this hardware setup:

  • 4 vCPUs
  • 16 GB of RAM
  • 100 GB of storage

For a more detailed requirements analysis please contact the VersionEye team at [email protected].

Network configuration

The VersionEye host will need the following ports open:

Port   Protocol   Description
8080   HTTP       Web application
9090   HTTP       API endpoint
22     SSH        Host management

If you configure Nginx in front of the web application and API, you can open the following ports instead:

Port   Protocol   Description
80     HTTP       Web application & API endpoint
443    HTTPS      Web application & API endpoint over SSL
22     SSH        Host management

You might still want to leave ports 8080 and 9090 open if you want direct access to those services.

Environment dependencies

The scripts in this repository are all tested with Docker for Linux on Ubuntu 14.04. This installation guide requires that you have the following tools installed:

  • jq
  • docker
  • docker-compose

Installing jq

On Ubuntu you can install it by running the following command on the terminal:

apt-get install jq

Alternatively you can also check the official jq docs.

Installing docker and docker-compose

Follow these guides to install docker and docker-compose:

Make sure you've tested the docker dependencies before moving to the next step. On Ubuntu you can test them by running:

sudo docker run hello-world

and:

docker-compose --version

Start backend services for VersionEye

VersionEye currently uses these backend systems:

  • MongoDB
  • RabbitMQ
  • ElasticSearch
  • Memcached

These are all available as Docker images from Docker Hub. This repository contains a file versioneye-base.yml for Docker Compose which describes all of them. Start the Docker containers like this:

sudo docker-compose -f versioneye-base.yml up -d

That will start all 4 Docker containers in daemon mode. To stop the backend services you can run:

docker-compose -f versioneye-base.yml stop

The MongoDB & ElasticSearch containers are not persistent by default! If the Docker containers are stopped or killed, the data is lost. For persistence you need to uncomment the mount volumes in the versioneye-base.yml file and adjust the paths to a directory on the host system. Especially the MongoDB container should be adjusted to be persistent:

mongodb:
  image: versioneye/mongodb:3.4
  container_name: mongodb
  restart: always
  volumes:
   - <PERSISTENT_PATH_ON_HOST_SYSTEM>:/data

For example:

mongodb:
  image: versioneye/mongodb:3.4
  container_name: mongodb
  restart: always
  volumes:
   - /mnt/mongodb:/data

The ElasticSearch container should be adjusted in the same fashion (see the sketch below). The other containers in versioneye-base.yml can be adjusted the same way, but they are not critical.
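
A corresponding sketch for ElasticSearch could look like this; the image tag and the data path inside the container are assumptions, so check the commented-out volume entry in versioneye-base.yml for the exact values:

elasticsearch:
  image: versioneye/elasticsearch:2.4                     # tag is an assumption; keep the one from versioneye-base.yml
  container_name: elasticsearch
  restart: always
  volumes:
   - /mnt/elasticsearch:/usr/share/elasticsearch/data     # container data path is an assumption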

Start the VersionEye containers

The next command will start the VersionEye containers. That includes the web application, the API and some background services:

./versioneye-update

This script will:

  • Fetch the newest versions for the Docker images from the VersionEye API
  • Set some environment variables
  • Pull down the Docker images from Docker Hub
  • Start the Docker containers with docker-compose

If everything goes well you can access the VersionEye web application at http://localhost:8080 and should see something like this:

VersionEye Enterprise Landing Page

Stop the VersionEye containers

With this command the VersionEye containers can be stopped:

./versioneye-stop

That will stop the VersionEye containers, but not the backend services.

Clean up unused Docker images

The ./versioneye-update script will always download the newest Docker images from VersionEye, but it doesn't remove old Docker images. The following command removes ALL Docker images which are not currently in use by a running container:

docker rmi `docker images -aq`

If VersionEye is the only application running on the Host, then this command can be added to the last line of the ./versioneye-update script.
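
For example, the last lines of ./versioneye-update could then look like this; a sketch, to be used only if no other application's images live on the host:

# optional cleanup at the very end of ./versioneye-update:
# remove all images not used by a running container; errors for in-use images are ignored
docker rmi `docker images -aq` 2>/dev/null || true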

Automated updates

We are publishing new Docker images almost every day! If you want to keep your instance always up-to-date, it's recommended to run the ./versioneye-update script once a day via a cron job. If you are logged in as admin on the VersionEye server, run this command:

crontab -e

That will open the crontab file for the current user with the default editor. Then add this line to the end of the file and save it:

1 0 * * * cd /opt/ops_contrib/ && ./versioneye-update >/dev/null 2>&1

That will run the ./versioneye-update script every day, 1 minute after midnight. If the absolute path to the ./versioneye-update script is not correct you need to adjust it!

Use Nginx as proxy

By default the VersionEye web app runs on port 8080 and the API on port 9090. It makes sense to put a webserver in front of them on port 80 which forwards the requests to ports 8080 and 9090. Besides that, the webserver can be used for SSL termination. On Ubuntu the Nginx webserver can be installed like this:

apt-get install nginx

Assuming this repository is checked out into /opt/ops_contrib, Nginx can be configured as a proxy for VersionEye by copying these 2 files to the right locations:

sudo cp /opt/ops_contrib/nginx/ansible/roles/nginx/files/nginx.conf /etc/nginx/nginx.conf
sudo cp /opt/ops_contrib/nginx/ansible/roles/nginx/files/default.conf /etc/nginx/conf.d/default.conf

After that Nginx needs to be restarted:

sudo service nginx restart

Now the VersionEye web app should be available on port 80.
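
The shipped default.conf is the authoritative configuration; as a rough sketch, the core of such a proxy is one proxy_pass per upstream port. The hostnames below are hypothetical:

server {
    listen 80;
    server_name versioneye.example.com;        # hypothetical web app hostname

    location / {
        # forward web traffic to the rails_app container
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name api.versioneye.example.com;    # hypothetical API hostname

    location / {
        # forward API traffic to the rails_api container
        proxy_pass http://127.0.0.1:9090;
        proxy_set_header Host $host;
    }
}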

SSL

By default the web application runs on port 8080 and the API on port 9090. Any webserver can be used as a proxy for those ports, and any webserver in front of them can be used for SSL termination. By default we are using Nginx for this job. How to set up Nginx with SSL certificates from Let's Encrypt is described here.
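
One common way to obtain such a certificate is the certbot client with its Nginx plugin. A minimal sketch, assuming the domain already points at the host; package names vary by Ubuntu release and the domain is a placeholder:

sudo apt-get install certbot python3-certbot-nginx
sudo certbot --nginx -d versioneye.example.com    # hypothetical domain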

There are also some Ansible playbooks which automate these steps and contain a role for setting up Nginx with an SSL certificate.

Configure cron jobs for crawling

The Docker image versioneye/crawlj contains the crawlers which enable you to crawl internal Maven repositories such as Sonatype Nexus, JFrog Artifactory or Apache Archiva. Inside of the Docker container the crawlers are triggered by a cron job. The crontab for that can be found here. If you want to trigger the crawlers on a different schedule, you have to mount another crontab file into the Docker container at /mnt/crawl_j/crontab_enterprise, as in the sketch below.
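
A sketch of such a mount in the compose file; the service definition follows the pattern of the other containers in this repository, so treat the image tag variable and service name as assumptions and check docker-compose.yml:

crawlj:
  image: versioneye/crawlj:${VERSION_CRAWLJ}    # tag variable is an assumption
  container_name: crawlj
  restart: always
  volumes:
   - /opt/ops_contrib/crontab_enterprise:/mnt/crawl_j/crontab_enterprise    # your custom schedule on the host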

Importing site certificate into Java Runtime

The crawlj container(s) run on Java. When the Java process attempts to connect to a server that has an invalid or self-signed certificate, such as a Maven repository server (Artifactory or Sonatype Nexus) in a development environment, you might see the following exception:

javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target

To make the Java runtime trust the certificate, it needs to be imported into the JRE certificate store. Here is a detailed tutorial for importing the site certificate into the crawlj container: Import site certs
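
The import itself is done with the JDK's keytool. A minimal sketch, run inside the crawlj container; the repository host is a placeholder, the cacerts path depends on the Java installation, and changeit is the default JRE store password:

# fetch the server certificate from the (hypothetical) repository host
openssl s_client -connect repo.example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > site.crt

# import it into the JRE trust store
keytool -importcert -alias repo.example.com -file site.crt \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit -noprompt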

Importing site certificate into Ruby Runtime

The VersionEye web app, API and background tasks run on a Ruby runtime. If the web app should access an LDAP server with a self-signed certificate, that certificate has to be made available to the Ruby runtime. Ruby uses the native OpenSSL C library for SSL connections. To make a self-signed certificate available to the Ruby runtime, place it as a *.crt file in the /usr/local/share/ca-certificates directory inside of the Docker container.

It is recommended to keep the certificates in a directory on the host system, for example in /certs. That directory can then be mounted into the Docker container. Here is an example for the versioneye/rails_app Docker container:

rails_app:
  image: versioneye/rails_app:${VERSION_RAILS_APP}
  container_name: rails_app
  restart: always
  environment:
    TZ: Europe/London
  ports:
   - "8080:8080"
  volumes:
   - /certs:/usr/local/share/ca-certificates
  external_links:
   - rabbitmq:rm
   - memcached:mc
   - elasticsearch:es
   - mongodb:db

In the volumes section above the directory /certs is mounted to /usr/local/share/ca-certificates inside of the Docker container. This configuration can be applied to these Docker containers:

  • versioneye/rails_app
  • versioneye/rails_api
  • versioneye/tasks

This change in the configuration requires a restart of the Docker containers.

Timezone

Each Docker container has its own time zone settings. By default Ubuntu machines run in the UTC time zone. In the docker-compose files we set the time zone explicitly to the London time zone. That's this part:

  environment:
    TZ: Europe/London

If you want to use a different time zone you need to adjust the TZ variable for all Docker containers. Assuming you want to run the MongoDB container in the Berlin time zone, the configuration for that container in versioneye-base.yml would look like this:

mongodb:
  image: versioneye/mongodb:3.4
  container_name: mongodb
  restart: always
  environment:
    TZ: Europe/Berlin

The last 2 lines in the code example above set the time zone. After each change in versioneye-base.yml and docker-compose.yml the Docker containers need to be restarted.

The changes above let the whole application run in a certain time zone. However, MongoDB always stores dates in UTC. Log in to VersionEye as admin and pick the same time zone in the Global Settings as set in the docker-compose files. With that the application will convert UTC dates/times from MongoDB to the selected target time zone.

Logging

The VersionEye Docker containers use rotating log files with 10 MB per file and a maximum of 10 files. That way the hard disk will not fill up with log files. By default the log files are not persistent. If you want to keep the log files on the host system, you have to adjust the volumes in the docker-compose.yml file. To make the logs from the web application persistent, the volumes section could be adjusted like this:

rails_app:
  image: versioneye/rails_app:${VERSION_RAILS_APP}
  container_name: rails_app
  restart: always
  ports:
   - "8080:8080"
  volumes:
   - /mnt/logs:/app/log
  external_links:
   - rabbitmq:rm
   - memcached:mc
   - elasticsearch:es
   - mongodb:db

Make sure that /mnt/logs is an existing directory on the host system, or adjust the path to an existing directory.
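
To create it, assuming the /mnt/logs path from the example above:

sudo mkdir -p /mnt/logs    # create the log directory on the host if it doesn't exist yet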

If you make these changes to the docker-compose.yml file, you have to restart the Docker containers.

Monitoring

With this command the running containers can be monitored:

./docker-mon

That will display in real time how much CPU, RAM and IO each container is using.

RabbitMQ Management Plugin

By default the RabbitMQ container runs without a UI. But if the management plugin is enabled, a web UI can be used to watch and control the queues. To do that you need to get a shell on the running rabbitmq container:

docker exec -it rabbitmq bash

Then run this command to enable the management plugin:

rabbitmq-plugins enable rabbitmq_management

and leave the container with exit. Now, from your local machine, build up an SSH tunnel to the host and the running container:

ssh -f <USER>@<HOST_IP> -L 15672:<IP_OF_DOCKER_CONTAINER>:15672 -N

For example:

ssh -f [email protected] -L 15672:172.17.0.4:15672 -N
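
The internal IP of the rabbitmq container can be looked up on the host like this:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' rabbitmq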

Now open a browser on your machine and navigate to http://localhost:15672/. You should be able to see the RabbitMQ UI.

Backup your data

The primary database for this application is MongoDB. If you run the MongoDB container with a persistent volume, your MongoDB config in versioneye-base.yml might look like this:

mongodb:
  image: versioneye/mongodb:3.2.8
  container_name: mongodb
  restart: always
  volumes:
   - /mnt/mongodb:/data

In the above configuration we use /mnt/mongodb on the host system to persist the data for MongoDB. To create a dump, get a shell on the running MongoDB container like this:

docker exec -it mongodb bash

Then navigate to the /data directory and create a dump with this command:

mongodump --db veye_enterprise

That will create a complete database dump, which will be persisted in /mnt/mongodb/dump on the host. From there you can zip it and copy it somewhere else.
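
For example, on the host (the backup target is a placeholder):

# archive the dump with a date stamp and copy it off the host
tar -czf veye-dump-$(date +%F).tar.gz -C /mnt/mongodb dump
scp veye-dump-*.tar.gz backup@backup.example.com:/backups/    # hypothetical backup host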

Restore your data

Assume you created a dump of one of your VersionEye instances and now you would like to restore the data on another VersionEye instance. If your Docker container persists the data under /mnt/mongodb on the host, simply copy your dump into that directory. Then get a shell on the running MongoDB container:

docker exec -it mongodb bash

Navigate to /data and run the restore process:

mongorestore --db veye_enterprise dump/veye_enterprise/

Assuming that your MongoDB is empty, this will restore all the data from the previous backup.

Support

For commercial support send a message to [email protected].

License

ops_contrib is licensed under the MIT license!

Copyright (c) 2016 VersionEye GmbH

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].