dennyzhang / Cheatsheet Docker A4

📖 Docker CheatSheets In A4
* Docker CheatSheet                                                   :Cloud:
:PROPERTIES:
:type: kubernetes
:export_file_name: cheatsheet-docker-A4.pdf
:END:


  • PDF Link: [[https://github.com/dennyzhang/cheatsheet-docker-A4/blob/master/cheatsheet-docker-A4.pdf][cheatsheet-docker-A4.pdf]], Category: [[https://cheatsheet.dennyzhang.com/category/cloud/][Cloud]]
  • Blog URL: https://cheatsheet.dennyzhang.com/cheatsheet-docker-A4
  • Related posts: [[https://cheatsheet.dennyzhang.com/kubernetes-yaml-templates][Kubernetes Yaml]], [[https://github.com/topics/denny-cheatsheets][#denny-cheatsheets]]

File me [[https://github.com/dennyzhang/cheatsheet.dennyzhang.com/issues][Issues]] or star [[https://github.com/dennyzhang/cheatsheet.dennyzhang.com][this repo]].

** Docker Trouble Shooting
| Name | Summary |
|------+---------|
| [[https://www.jfrog.com/jira/browse/RTFACT-9957][Docker push: manifest invalid]] | Re-pushing a new version of the same docker tag may fail due to permissions |
| Docker pull: missing signature key | Docker push again to resolve the issue |
| [[https://stackoverflow.com/questions/45089842/docker-cp-error-response-from-daemon-not-a-directory][Docker cp: Error response from daemon: not a directory]] | The container folder is a symbolic link |
| Find process id by container name | =docker top $container_id= or =docker top $container_name= |
| List resource usage by containers | =docker stats= |
| Get dockerd storage driver | =docker info=, then check =Storage Driver= |
| [[https://medium.com/techlogs/docker-how-to-check-your-containers-cpu-usage-8121515a3b8][docker-containerd-shim]] | The four Docker components: =Docker engine=, =containerd=, =containerd-shim= and =runC= |

** Docker Basic
| Name | Summary |
|------+---------|
| Install docker on Ubuntu | =apt-get install docker.io= |
| [[https://docs.docker.com/install/linux/docker-ce/centos/][Install docker on CentOS]] | Use docker repo https://download.docker.com/linux/centos/docker-ce.repo |
| Install docker on Debian 10 | [[https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-debian-10][Link: How To Install and Use Docker on Debian 10]] |
| Install an old docker version | [[https://github.com/dennyzhang/cheatsheet-docker-A4/blob/master/install-old-docker.md.sh][GitHub: install-old-docker.md]] |

** Docker start service
| Name | Summary |
|------+---------|
| Start an ubuntu 16.04 test env | =docker run ubuntu:16.04 /bin/echo hello world= |
| Start an ubuntu 18.04 test env | =docker run ubuntu:18.04 /bin/echo hello world= |
| Start a container and remove it on exit | =docker run --rm ubuntu:18.04 /bin/echo hello world= |
| [[https://hub.docker.com/_/debian][Start a debian 9 test env]] | =docker run debian:9 /bin/echo hello world= |
| Start a centos test env | =docker run centos:centos6 /bin/echo hello world= |
| [[https://github.com/jenkinsci/docker/blob/master/README.md][Start a jenkins server]] | =docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts= |
| Start a nginx server | =docker run -t -d -p 8080:80 --name nginx-test nginx= |
| Start a mysql server | =docker run -e MYSQL_ROOT_PASSWORD=password123 -e MYSQL_DATABASE=wordpress -d mysql:5.7= |
| Start a nexus server | =docker run -d -p 8082:8081 --name nexus -v /data/nexus-data:/nexus-data sonatype/docker-nexus3= |
| Start a sshd server | =docker run -t -d --privileged -p 5022:22 denny/sshd:latest /usr/sbin/sshd -D= |
| Start a ftp server | =docker run -t -d -p 21:21 -p 20:20 -e USERNAME=${username} -e PASSWORD=${password} denny/proftproftpd:v1= |

** Container Runtime
| Name | Summary |
|------+---------|
| dockerd | |
| containerd | |
| [[https://cri-o.io/][cri-o]] | From Red Hat |
| [[https://github.com/rkt/rkt][rkt]] | A pod-native container engine for Linux from CoreOS; maintenance has stopped |
| Amazon ACS | Supports DC/OS, Swarm, Kubernetes |
| CoreOS Fleet | |
| Cloud Foundry Diego | No longer actively maintained |
| Reference | [[https://cheatsheet.dennyzhang.com/cheatsheet-docker-A4][CheatSheet: Docker]], [[https://cheatsheet.dennyzhang.com/cheatsheet-crio-A4][CheatSheet: CRI-O]], [[https://cheatsheet.dennyzhang.com/cheatsheet-rkt-A4][CheatSheet: rkt]], [[https://cheatsheet.dennyzhang.com/cheatsheet-containerd-A4][CheatSheet: containerd]] |

** Container Basic
| Name | Summary |
|------+---------|
| Start docker container | =docker run -p 4000:80 imgname= |
| Start docker container in detached mode | =docker run -d -p 4000:80 imgname= |
| Start container with entrypoint changed | =docker run -t -d --entrypoint=/bin/sh "$docker_image"= |
| Enter a running container | =docker exec -it <container_id> sh= |
| Upload a local file to the container filesystem | =docker cp /tmp/foo.txt mycontainer:/foo.txt= |
| Download a container file to the local filesystem | =docker cp mycontainer:/foo.txt /tmp/foo.txt= |
| Stop container | =docker stop <container_id>= |
| Remove container | =docker rm <container_id>= |
| Remove all containers | =docker rm $(docker ps -a -q)= |
| Force shutdown of a given container | =docker kill <container_id>= |
| Login to docker hub | =docker login= |
| Tag an image | =docker tag <image_id> username/repo:tag= |
| Push a tagged image to a repo | =docker push username/repo:tag= |
| Run image from a given tag | =docker run username/repo:tag= |
| Create docker image | =docker build -t denny/image:test .= |

** Docker Cleanup
| Name | Summary |
|------+---------|
| Remove unused docker images | [[https://github.com/dennyzhang/cheatsheet-docker-A4/blob/master/delete-unused-images.sh#L13][delete-unused-images.sh]] |
| Delete all containers | [[https://github.com/dennyzhang/cheatsheet-docker-A4/blob/master/delete-all-containers.sh#L13][delete-all-containers.sh]] |
| Remove exited containers | =docker rm $(docker ps --filter status=exited -qa)= |
| Prune images | =docker image prune -f= |
| Prune volumes | =docker volume prune -f= |
| Remove the specified image | =docker rmi <image_id>= |
| Remove all docker images | =docker rmi $(docker images -q)= |
| Remove orphaned docker volumes | =docker volume rm $(docker volume ls -qf dangling=true)= |
| Remove dead containers | =docker rm $(docker ps --filter status=dead -qa)= |

** Dockerfile
| Name | Summary |
|------+---------|
| Change entrypoint to run nothing | =entrypoint: ["tail", "-f", "/dev/null"]= |
| [[https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes/683651#683651][Set timezone in Dockerfile]] | =RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone= |
| Define a multi-line command | [[https://github.com/dennyzhang/cheatsheet-docker-A4/blob/master/Dockerfile-example-multiline][GitHub: Dockerfile-example-multiline]] |

** Docker Compose
| Name | Summary |
|------+---------|
| Change restart policy | =restart: always=, [[https://docs.docker.com/compose/compose-file/#restart_policy][Link: Compose file version 3 reference]] |
| Mount a file as a volume | =$PWD/httpd/httpd.conf:/usr/local/apache2/conf/httpd.conf:ro=, [[https://github.com/dennyzhang/cheatsheet-docker-A4/blob/master/sample-mount-file.yml][GitHub: sample-mount-file.yml]] |
| Start compose env | =docker-compose up=, =docker-compose up -d= |
| Stop compose env | =docker-compose down=, =docker-compose down -v= |
| Check logs | =docker-compose logs= |

** Docker Containers
| Name | Summary |
|------+---------|
| Start docker container | =docker run -p 4000:80 <image_name>= |
| Start docker container in detached mode | =docker run -d -p 4000:80 imgname= |
| Start docker container and remove it on exit | =docker run --rm -it <image_name> sh= |
| Enter a running container | =docker exec -it [container-id] sh= |
| Stop container | =docker stop <container_id>= |
| List all containers | =docker ps=, =docker ps -a= |
| Remove container | =docker rm <container_id>=, =docker rm $(docker ps -a -q)= |
| Force shutdown of a given container | =docker kill <container_id>= |
| Login to docker hub | =docker login= |
| Run image from a given tag | =docker run username/repo:tag= |
| Tail container logs | =docker logs --tail 5 $container_name= |
| Check container healthcheck status | =docker inspect --format '{{.State.Health}}' $container_name= |
| List containers by labels | =docker ps --filter "label=org.label-schema.group"= |

** Docker Images
| Name | Summary |
|------+---------|
| List all images | =docker images=, =docker images -a= |
| Create docker image | =docker build -t denny/image:<tag> .= |
| Push a tagged image to a repo | =docker push denny/image:<tag>= |
| Show the history of an image | =docker history <image_name>= |
| Export image to file | =docker save <image_name> > my_img.tar= |
| Load image to local registry | =docker load -i my_img.tar= |
| Tag an image | =docker tag <image_id> username/repo:tag= |

** Docker Socket file
| Name | Summary |
|------+---------|
| Run container mounting the socket file | =docker run -v /var/run/docker.sock:/var/run/docker.sock -it alpine sh= |
| Use a different docker socket file | =export DOCKER_HOST=unix:///my/docker.sock= |
| List containers | =curl -XGET --unix-socket /var/run/docker.sock http://localhost/containers/json= |
| Stop container | =curl -XPOST --unix-socket /var/run/docker.sock http://localhost/containers/<container_id>/stop= |
| Start container | =curl -XPOST --unix-socket /var/run/docker.sock http://localhost/containers/<container_id>/start= |
| List events | =curl --unix-socket /var/run/docker.sock http://localhost/events= |
| Create container | =curl -XPOST --unix-socket /var/run/docker.sock -d '{"Image":"nginx:alpine"}' -H 'Content-Type: application/json' http://localhost/containers/create= |
| Links | [[https://docs.docker.com/develop/sdk/][Link: Develop with Docker Engine SDKs and API]] |

** Docker Conf
| Name | Summary |
|------+---------|
| Docker files | =/var/lib/docker=, =/var/lib/docker/devicemapper/mnt= |
| Docker for Mac | =~/Library/Containers/com.docker.docker/Data/= |

** Ubuntu docker: Install missing packages
| Name | Summary |
|------+---------|
| Pull ubuntu docker image | =docker pull ubuntu= |
| man: command not found | =apt-get update=, =apt-get install man= |
| ping: command not found | =apt-get update=, =apt-get install iputils-ping= |
| dig: command not found | =apt-get install dnsutils= |

** Check Status
| Name | Summary |
|------+---------|
| Tail container logs | =docker logs --tail 5 $container_name= |
| Check container healthcheck status | =docker inspect --format '{{.State.Health}}' $container_name= |
| List containers | =docker ps= |
| List all containers | =docker ps -a= |
| List containers by labels | =docker ps --filter "label=org.label-schema.group"= |
| List all images | =docker images -a= |

** Resource Reference
| Name | Summary |
|------+---------|
| Docker SDK | https://docs.docker.com/develop/sdk/examples/ |
| Docker REST API | https://docs.docker.com/engine/api/v1.27/#tag/Container |
| Docker Hub auto build | https://docs.docker.com/docker-hub/builds/#build-statuses-explained |

** More Resources
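As a worked example of the Engine API referenced above, the same socket endpoints can be wrapped in a small helper. This is a sketch: the function name is my own, it assumes =curl= and the default =/var/run/docker.sock=, and it extracts names crudely with grep rather than a JSON parser.

```shell
# list_container_names: query the Docker Engine API over the unix socket
# and pull out the "Names" fields of the running containers.
# A sketch; assumes curl and the default /var/run/docker.sock.
list_container_names() {
  curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json \
    | grep -o '"Names":\[[^]]*\]'
}
```

For anything beyond quick inspection, a JSON-aware tool or the official SDKs linked above are the better choice.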

License: Code is licensed under [[https://www.dennyzhang.com/wp-content/mit_license.txt][MIT License]].

| Summary | Comment |
|---------+---------|
| tail /var/log/docker.log | |
| docker run -d -t -p 3128:443 denny/chefserver:v1 /usr/sbin/sshd -D | |
| docker run -t -i dennylocal/elasticsearch-mdm:v1 /bin/bash | |
| docker commit -m "initial" -a "Denny[email protected]" 8c0be19ecd87 denny/chefserver:v1 | |
| docker run -d -t --privileged --name sandbox -p 7022:22 denny/dennysandbox:latest /bin/bash | Start with a name |
| docker inspect $container_name \vert grep IPAddress | Get the container's IP |
| docker inspect $container_name | Get detailed info for a given container |
|---------+---------|
| docker save | Save an image to a tar archive |
| docker load | Load an image from a tar archive |
| docker rm $(docker ps -a -q) | Remove all containers |
| docker push denny/XXX | Push an image to docker hub |
| docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py | Mount the host directory /src/webapp to /opt/webapp |
| docker build -t XXX/mdm:v1 --rm=true . | |
| docker run -t -P -i XXX/mdm:v1 /bin/bash | |
| https://status.docker.com | Docker service status |

** manually start docker
start-stop-daemon --start --exec /usr/bin/docker --pidfile /var/run/docker-ssd.pid --make-pidfile -- daemon -p /var/run/docker.pid

/usr/bin/docker -d

** TODO How large disk does docker use?
https://docs.docker.com/userguide/dockervolumes/

There are two primary ways to manage data in Docker: data volumes and data volume containers.
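A minimal sketch of the first approach, a named data volume that survives the containers using it (the volume name =mydata=, the paths, and the =alpine= image are illustrative):

```shell
# volume_demo: write a file through one container, read it back from a
# second one; the named volume persists across both docker runs.
volume_demo() {
  docker volume create mydata
  docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/greeting'
  docker run --rm -v mydata:/data alpine cat /data/greeting  # should print "hello"
  docker volume rm mydata
}
```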

  • Data volumes are designed to persist data, independent of the container's life cycle.

** TODO [#A] How to set port forwarding on the fly: it looks like docker doesn't support this
** TODO Why is my docker image >600 MB, while ubuntu 14.04 is 188 MB?
#+BEGIN_EXAMPLE
macs-air:sandbox-test mac$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 8118ced6048d 3 minutes ago 642.4 MB
<none> <none> 9801d16e6777 33 minutes ago 410.1 MB
XXX/mdm v2 1095634005de About an hour ago 621.5 MB
XXX/mdm v1 3e2418a2e608 30 hours ago 417 MB
ubuntu 14.04 2d24f826cb16 2 weeks ago 188.3 MB
macs-air:sandbox-test mac$ docker images -a
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 8118ced6048d 3 minutes ago 642.4 MB
<none> <none> 6c33d3f87edf 3 minutes ago 642.4 MB
<none> <none> 0cf12fca34f2 3 minutes ago 642.4 MB
<none> <none> 103af91cd39f 3 minutes ago 642.4 MB
<none> <none> 6f319238a116 3 minutes ago 642.1 MB
<none> <none> ce40acdab604 3 minutes ago 642.1 MB
<none> <none> c91fc2e9d563 4 minutes ago 621.5 MB
<none> <none> ac6f52a9123a 4 minutes ago 621.5 MB
<none> <none> a7cae9dbe74e 4 minutes ago 621.5 MB
<none> <none> 9801d16e6777 33 minutes ago 410.1 MB
<none> <none> 37c48e0cf08b 36 minutes ago 255.4 MB
<none> <none> 230d29b0c491 36 minutes ago 255.4 MB
<none> <none> 6a32d937179e 37 minutes ago 255.4 MB
<none> <none> f73c1ad275cf 37 minutes ago 255.4 MB
<none> <none> cd17661eee48 37 minutes ago 255.1 MB
<none> <none> a40862d66601 38 minutes ago 208.9 MB
<none> <none> 23fdfd795f42 40 minutes ago 188.3 MB
<none> <none> 5c83290fadd8 40 minutes ago 188.3 MB
<none> <none> 62613585bbb7 40 minutes ago 188.3 MB
XXX/mdm v2 1095634005de About an hour ago 621.5 MB
<none> <none> c4931b5b9b19 About an hour ago 621.5 MB
<none> <none> 783586e31bf8 About an hour ago 621.5 MB
<none> <none> 7464cae37e74 About an hour ago 444.5 MB
<none> <none> b8c63120077f About an hour ago 423.9 MB
<none> <none> bd9d07ddaedb About an hour ago 423.9 MB
<none> <none> 2f1c4927f244 About an hour ago 413.4 MB
<none> <none> 46bce28312c9 About an hour ago 413.4 MB
<none> <none> cf7c4229feb4 About an hour ago 413.4 MB
<none> <none> 6d350b163e56 About an hour ago 413.4 MB
<none> <none> 639768c629a2 About an hour ago 413.4 MB
<none> <none> 592ce84c9821 About an hour ago 413.4 MB
<none> <none> e363f9241135 11 hours ago 413.4 MB
<none> <none> 1ead0ee488f8 11 hours ago 413.4 MB
<none> <none> 771faf7c579a 11 hours ago 413.4 MB
<none> <none> c94ba7c29f21 11 hours ago 413.4 MB
<none> <none> 141d9606510d 11 hours ago 413.4 MB
<none> <none> ffdfb69480e1 11 hours ago 413.4 MB
<none> <none> 127e86022743 11 hours ago 374.8 MB
<none> <none> b69103f1b7a6 11 hours ago 220.3 MB
<none> <none> b750f0b5b1df 11 hours ago 208.9 MB
<none> <none> fb44a8a71885 11 hours ago 188.3 MB
XXX/mdm v1 3e2418a2e608 30 hours ago 417 MB
<none> <none> 71d449bc0252 32 hours ago 417 MB
<none> <none> 6271cc67a45f 32 hours ago 235.7 MB
<none> <none> f7c8c8c92180 32 hours ago 215.1 MB
<none> <none> 32d35259ec14 32 hours ago 215.1 MB
<none> <none> 679c4ce72e12 32 hours ago 204.6 MB
<none> <none> 65a05ff6ff98 32 hours ago 204.6 MB
<none> <none> a81d5a840c5e 32 hours ago 204.6 MB
<none> <none> da3dc9aa88ea 32 hours ago 204.6 MB
<none> <none> 849953a4d24a 32 hours ago 204.6 MB
<none> <none> ffe4981287bc 32 hours ago 204.6 MB
<none> <none> 83cec26d130b 32 hours ago 204.5 MB
<none> <none> bd30865240be 32 hours ago 204.5 MB
<none> <none> 763e9222a24f 32 hours ago 204.5 MB
<none> <none> f1c759c3a4b5 5 days ago 165.9 MB
<none> <none> 54085386062b 5 days ago 11.42 MB
ubuntu 14.04 2d24f826cb16 2 weeks ago 188.3 MB
<none> <none> 117ee323aaa9 2 weeks ago 188.3 MB
<none> <none> 1c8294cc5160 2 weeks ago 188.3 MB
<none> <none> fa4fd76b09ce 2 weeks ago 188.1 MB
<none> <none> 511136ea3c5a 21 months ago 0 B
macs-air:sandbox-test mac$
#+END_EXAMPLE

** TODO What does -P mean for docker: docker run -d -P training/webapp python app.py
sudo docker run -d -P training/webapp python app.py

Let's review what our command did. We've specified two flags: -d and -P. We've already seen the -d flag, which tells Docker to run the container in the background. The -P flag is new and tells Docker to map any required network ports inside our container to our host. This lets us view our web application.

We've specified an image: training/webapp. This image is a pre-built image we've created that contains a simple Python Flask web application.

Lastly, we've specified a command for our container to run: python app.py. This launches our web application. https://docs.docker.com/userguide/usingdocker/

** TODO [#A] Run docker container in docker server: no more loopback devices available
#+BEGIN_EXAMPLE
for i in {0..30}; do
  mknod -m0660 /dev/loop$i b 7 $i
done
#+END_EXAMPLE
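Before creating nodes by hand, =losetup -f= reports the next unused loop device, which is a quick way to confirm whether the pool is actually exhausted. A small sketch (the function name is my own; assumes =losetup= is available):

```shell
# next_free_loop: print the next unused loop device, or a message when
# none are free (losetup -f exits non-zero in that case).
next_free_loop() {
  losetup -f 2>/dev/null || echo "no free loop devices"
}
```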

http://stackoverflow.com/questions/26239116/run-docker-inside-a-docker-container

https://github.com/openshift/origin/issues/101 http://schnell18.iteye.com/blog/2203452 https://github.com/docker/docker/issues/7058 https://github.com/docker/docker/issues/10880

Someone has hit a similar problem: https://github.com/jpetazzo/dind/issues/19. The thread is long, but the gist is that the loop devices inside the container have been exhausted, so a few more need to be created manually.

#+BEGIN_EXAMPLE
[email protected]:~# docker -d
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
WARN[0000] Udev sync is not supported. This will lead to unexpected behavior, data loss and errors
ERRO[0000] There are no more loopback devices available.
FATA[0000] Shutting down daemon due to errors: error intializing graphdriver: loopback mounting failed
#+END_EXAMPLE

#+BEGIN_EXAMPLE
[email protected]:~# service docker start
 * Starting Docker: docker ...done.
[email protected]:~# tail /var/log/docker.log
time="2015-05-20T17:53:17Z" level=info msg="Listening for HTTP on unix (/var/run/docker.sock)"
time="2015-05-20T17:53:17Z" level=warning msg="Udev sync is not supported. This will lead to unexpected behavior, data loss and errors"
time="2015-05-20T17:53:17Z" level=error msg="There are no more loopback devices available."
time="2015-05-20T17:53:17Z" level=fatal msg="Shutting down daemon due to errors: error intializing graphdriver: loopback mounting failed"
Warning: '--restart' is deprecated, it will be removed soon. See usage.
time="2015-05-20T18:00:28Z" level=info msg="+job serveapi(unix:///var/run/docker.sock)"
time="2015-05-20T18:00:28Z" level=info msg="Listening for HTTP on unix (/var/run/docker.sock)"
time="2015-05-20T18:00:28Z" level=warning msg="Udev sync is not supported. This will lead to unexpected behavior, data loss and errors"
time="2015-05-20T18:00:28Z" level=error msg="There are no more loopback devices available."
time="2015-05-20T18:00:28Z" level=fatal msg="Shutting down daemon due to errors: error intializing graphdriver: loopback mounting failed"
#+END_EXAMPLE

*** web page: docker in docker
http://schnell18.iteye.com/blog/2203452
**** webcontent :noexport:
#+begin_example
I was preparing a Jenkins CI server packaged with docker, running nested docker containers inside it as the build environment. Running docker inside docker hit the following error:

[[email protected] /]# INFO[0000] +job serveapi(unix:///var/run/docker.sock)
WARN[0000] WARNING: Udev sync is not supported. This will lead to unexpected behavior, data loss and errors
ERRO[0000] There are no more loopback devices available.
FATA[0000] loopback mounting failed

Someone hit a similar problem: https://github.com/jpetazzo/dind/issues/19. The thread is long, but the gist is that the loop devices inside the container are exhausted and more need to be created manually. Sample code:

#!/bin/bash
ensure_loop(){
  num="$1"
  dev="/dev/loop$num"
  if test -b "$dev"; then
    echo "$dev is a usable loop device."
    return 0
  fi

  echo "Attempting to create $dev for docker ..."
  if ! mknod -m660 $dev b 7 $num; then
    echo "Failed to create $dev!" 1>&2
    return 3
  fi

  return 0
}

LOOP_A=$(losetup -f)
LOOP_A=${LOOP_A#/dev/loop}
LOOP_B=$(expr $LOOP_A + 1)

ensure_loop $LOOP_A
ensure_loop $LOOP_B

After adding the code above before starting the docker daemon, the next error was:

[[email protected] /]# sh start_docker_daemon.sh
Attempting to create /dev/loop4 for docker ...
Attempting to create /dev/loop5 for docker ...
[[email protected] /]# INFO[0000] +job serveapi(unix:///var/run/docker.sock)
WARN[0000] WARNING: Udev sync is not supported. This will lead to unexpected behavior, data loss and errors
FATA[0000] exec: "mkfs.ext4": executable file not found in $PATH

This one is simple: the container does not have the mkfs.ext4 tool installed, which ships in the e2fsprogs RPM package. The fix is to add e2fsprogs to the packages installed in the Dockerfile. The complete Dockerfile:

FROM centos:7
MAINTAINER [email protected]
ENV REFRESHED_AT 2015-04-14

# e2fsprogs contains mkfs.ext4
RUN yum install -y git curl docker java-1.6.0-openjdk e2fsprogs

ENV JENKINS_HOME /opt/jenkins/data
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org

RUN mkdir -p $JENKINS_HOME/plugins
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war-stable/latest/jenkins.war

RUN for plugin in chucknorris greenballs scm-api git-client git ws-cleanup ; \
    do \
      curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi \
        -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi ; \
    done

ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh

VOLUME /var/lib/docker

EXPOSE 8080

ENTRYPOINT [ "/usr/local/bin/dockerjenkins.sh" ]
#+end_example

** TODO [#A] docker: whether to use multiple container images
#+BEGIN_EXAMPLE
[7/12/15, 11:56:33 AM] jacobzeng-曾瑞林ruiling: Asking for separate images.
[7/12/15, 11:57:49 AM] denny: What do you mean by separate images?
[7/12/15, 12:02:39 PM] denny: My idea is:
1. Set up a private docker hub registry server, deployed in both the US and China, so that this 2-3 GB file can be fetched from a nearby mirror instead of from the official docker site.
2. The sandbox solution automatically starts a squid reverse proxy, which makes downloading large files noticeably faster.
My rough estimate is that this would cut the overall test time to 1/3 or 1/2 of what it was.
[7/12/15, 12:03:22 PM] jacobzeng-曾瑞林ruiling: I mean not all-in-one.
[7/12/15, 12:03:30 PM] jacobzeng-曾瑞林ruiling: Right.
[7/12/15, 12:03:49 PM] denny: Multiple services, multiple containers, is that what you mean?
[7/12/15, 12:06:18 PM] jacobzeng-曾瑞林ruiling: Yes, multiple images, one per service. Easier to maintain, download and extend, and usable not only for the dev environment but directly for deployment too.
[7/12/15, 12:08:10 PM] denny: The pros and cons of that approach.
Pros:
1. Each service has its own container image, which makes testing and deployment simple.
2. Each service is isolated, so the logic stays clear and simple.
Cons:
1. Each image is around 2-3 GB, so downloads are slow.
2. Building and maintaining each image takes quite some time.
3. Application-level communication between containers needs port mapping; with current technology that mostly means manually changing iptables.
[7/12/15, 12:09:07 PM] denny: I discussed this with Chenxue before. Based on the reasoning above, I lean toward one image plus chef scripts.
[7/12/15, 12:09:16 PM] denny: What do you think?
[7/12/15, 12:11:09 PM] jacobzeng-曾瑞林ruiling: 1 is a real problem; 2 is something worth spending time on; for 3 we can pre-configure, or use internal communication between containers.
[7/12/15, 12:11:19 PM] denny: Or we could compromise:
- Use one image + chef in production
- Use multiple images for development
[7/12/15, 12:11:48 PM] denny: Problem 1 can be handled by running our own docker registry.
[7/12/15, 12:12:01 PM] denny: That should shorten the time somewhat.
[7/12/15, 12:12:22 PM] jacobzeng-曾瑞林ruiling: Just an idea; I think it's worth experimenting with.
[7/12/15, 12:13:27 PM] jacobzeng-曾瑞林ruiling: The way we use it now doesn't take advantage of docker being lightweight.
[7/12/15, 12:13:35 PM] jacobzeng-曾瑞林ruiling: Stepping away for a moment.
[7/12/15, 12:13:36 PM] denny: Point 2 can be covered with automation scripts, though that also costs time. My main worry is point 3.
[7/12/15, 12:18:01 PM] denny: What we mainly value in docker today is how quickly it replicates identical environments, avoiding all kinds of odd environment issues. We can pick one project for a POC.
#+END_EXAMPLE

** TODO docker fail rm aufs directory
http://stackoverflow.com/questions/30984569/error-error-creating-aufs-mount-to-when-building-dockerfile
#+BEGIN_EXAMPLE
[email protected]:/tmp# mv /var/lib/docker/aufs /tmp/
mv: cannot move '/var/lib/docker/aufs' to '/tmp/aufs': Device or resource busy
[email protected]:/tmp# rm -rf /var/lib/docker/aufs
rm: cannot remove '/var/lib/docker/aufs': Device or resource busy
#+END_EXAMPLE

** TODO when docker build image: how to enable --privileged
*** DONE Update latest docker image fails at tomcat7: docker needs to use --privileged
CLOSED: [2015-07-19 Sun 13:05]
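=docker build= itself offers no --privileged flag; a common workaround is to do the privileged step in a running container and snapshot it with =docker commit=. A sketch (the function, =base-image=, =result-image=, and the setup command are all illustrative):

```shell
# privileged_build: emulate a privileged build step by running a
# privileged container, then committing its filesystem as a new image.
privileged_build() {
  docker run --privileged --name build-step base-image /bin/sh -c 'echo run privileged setup here'
  docker commit build-step result-image:latest
  docker rm build-step
}
```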

tmux to docker server of osc

touch /var/run/tomcat7.pid
chown tomcat7 /var/run/tomcat7.pid /var/lib/tomcat7/logs/catalina.out
start-stop-daemon --start -b -u tomcat7 -g tomcat7 -c tomcat7 -d /tmp/tomcat7-tmp -p /var/run/tomcat7.pid -x /bin/bash -- -c 'set -a; JAVA_HOME="/usr/lib/jvm/java-8-oracle-amd64"; source "/etc/default/tomcat7"; CATALINA_HOME="/usr/share/tomcat7"; CATALINA_BASE="/var/lib/tomcat7"; JAVA_OPTS="-Xmx128M -Djava.awt.headless=true -XX:+UseConcMarkSweepGC"; CATALINA_PID="/var/run/tomcat7.pid"; CATALINA_TMPDIR="/tmp/tomcat7-tmp"; LANG="en_US.UTF-8"; JSSE_HOME="/usr/lib/jvm/java-8-oracle-amd64/jre/"; cd "/var/lib/tomcat7"; "/usr/share/tomcat7/bin/catalina.sh" start'

  • status=0
  • set +a -e
  • return 0
  • sleep 5
  • start-stop-daemon --test --start --pidfile /var/run/tomcat7.pid --user tomcat7 --exec /usr/lib/jvm/java-8-oracle-amd64/bin/java

#+BEGIN_EXAMPLE
[email protected]:/# ps -ef | grep tomcat
root 5960 1 0 04:08 ? 00:00:00 grep --color=auto tomcat
[email protected]:/# start-stop-daemon --test --start --pidfile /var/run/tomcat7.pid --user tomcat7 --exec /usr/lib/jvm/java-8-oracle-amd64/bin/java
Would start /usr/lib/jvm/java-8-oracle-amd64/bin/java .

[email protected]:/# bash -xe /etc/init.d/tomcat7 start

  • set -e
  • PATH=/bin:/usr/bin:/sbin:/usr/sbin
  • NAME=tomcat7
  • DESC='Tomcat servlet engine'
  • DEFAULT=/etc/default/tomcat7
  • JVM_TMP=/tmp/tomcat7-tomcat7-tmp ++ id -u
  • '[' 0 -ne 0 ']'
  • '[' -r /etc/default/locale ']'
  • . /etc/default/locale ++ LANG=en_US.UTF-8 ++ LC_ALL=en_US.UTF-8
  • export LANG
  • . /lib/lsb/init-functions +++ run-parts --lsbsysinit --list /lib/lsb/init-functions.d ++ for hook in '$(run-parts --lsbsysinit --list /lib/lsb/init-functions.d 2>/dev/null)' ++ '[' -r /lib/lsb/init-functions.d/20-left-info-blocks ']' ++ . /lib/lsb/init-functions.d/20-left-info-blocks ++ for hook in '$(run-parts --lsbsysinit --list /lib/lsb/init-functions.d 2>/dev/null)' ++ '[' -r /lib/lsb/init-functions.d/50-ubuntu-logging ']' ++ . /lib/lsb/init-functions.d/50-ubuntu-logging +++ LOG_DAEMON_MSG= ++ FANCYTTY= ++ '[' -e /etc/lsb-base-logging.sh ']' ++ true
  • '[' -r /etc/default/rcS ']'
  • . /etc/default/rcS ++ UTC=yes
  • TOMCAT7_USER=tomcat7
  • TOMCAT7_GROUP=tomcat7
  • OPENJDKS=
  • find_openjdks
  • for jvmdir in '/usr/lib/jvm/java-7-openjdk-*'
  • '[' -d /usr/lib/jvm/java-7-openjdk-amd64 -a /usr/lib/jvm/java-7-openjdk-amd64 '!=' /usr/lib/jvm/java-7-openjdk-common ']'
  • OPENJDKS=/usr/lib/jvm/java-7-openjdk-amd64
  • for jvmdir in '/usr/lib/jvm/java-6-openjdk-*'
  • '[' -d '/usr/lib/jvm/java-6-openjdk-' -a '/usr/lib/jvm/java-6-openjdk-' '!=' /usr/lib/jvm/java-6-openjdk-common ']'
  • JDK_DIRS='/usr/lib/jvm/default-java /usr/lib/jvm/java-7-openjdk-amd64 /usr/lib/jvm/java-6-openjdk /usr/lib/jvm/java-6-sun /usr/lib/jvm/java-7-oracle'
  • for jdir in '$JDK_DIRS'
  • '[' -r /usr/lib/jvm/default-java/bin/java -a -z '' ']'
  • JAVA_HOME=/usr/lib/jvm/default-java
  • for jdir in '$JDK_DIRS'
  • '[' -r /usr/lib/jvm/java-7-openjdk-amd64/bin/java -a -z /usr/lib/jvm/default-java ']'
  • for jdir in '$JDK_DIRS'
  • '[' -r /usr/lib/jvm/java-6-openjdk/bin/java -a -z /usr/lib/jvm/default-java ']'
  • for jdir in '$JDK_DIRS'
  • '[' -r /usr/lib/jvm/java-6-sun/bin/java -a -z /usr/lib/jvm/default-java ']'
  • for jdir in '$JDK_DIRS'
  • '[' -r /usr/lib/jvm/java-7-oracle/bin/java -a -z /usr/lib/jvm/default-java ']'
  • export JAVA_HOME
  • CATALINA_HOME=/usr/share/tomcat7
  • CATALINA_BASE=/var/lib/tomcat7
  • TOMCAT7_SECURITY=no
  • '[' -z '' ']'
  • JAVA_OPTS='-Djava.awt.headless=true -Xmx128M'
  • '[' -f /etc/default/tomcat7 ']'
  • . /etc/default/tomcat7 ++ TOMCAT7_USER=tomcat7 ++ TOMCAT7_GROUP=tomcat7 ++ JAVA_HOME=/usr/lib/jvm/java-8-oracle-amd64 ++ CATALINA_HOME=/usr/share/tomcat7 ++ CATALINA_BASE=/var/lib/tomcat7 ++ JAVA_OPTS='-Xmx128M -Djava.awt.headless=true' ++ JAVA_OPTS='-Xmx128M -Djava.awt.headless=true -XX:+UseConcMarkSweepGC' ++ TOMCAT7_SECURITY=no ++ JVM_TMP=/tmp/tomcat7-tmp ++ AUTHBIND=no ++ CATALINA_OPTS= ++ JAVA_ENDORSED_DIRS=/usr/share/tomcat6/lib/endorsed
  • '[' '!' -f /usr/share/tomcat7/bin/bootstrap.jar ']'
  • POLICY_CACHE=/var/lib/tomcat7/work/catalina.policy
  • '[' -z '' ']'
  • CATALINA_TMPDIR=/tmp/tomcat7-tmp
  • '[' -n '' ']'
  • SECURITY=
  • '[' no = yes ']'
  • CATALINA_PID=/var/run/tomcat7.pid
  • CATALINA_SH=/usr/share/tomcat7/bin/catalina.sh
  • '[' -z '' -a -r /usr/lib/jvm/java-8-oracle-amd64/jre/lib/jsse.jar ']'
  • JSSE_HOME=/usr/lib/jvm/java-8-oracle-amd64/jre/
  • case "$1" in
  • '[' -z /usr/lib/jvm/java-8-oracle-amd64 ']'
  • '[' '!' -d /var/lib/tomcat7/conf ']'
  • log_daemon_msg 'Starting Tomcat servlet engine' tomcat7
  • '[' -z 'Starting Tomcat servlet engine' ']'
  • log_use_fancy_output
  • TPUT=/usr/bin/tput
  • EXPR=/usr/bin/expr
  • '[' -t 1 ']'
  • '[' xxterm '!=' x ']'
  • '[' xxterm '!=' xdumb ']'
  • '[' -x /usr/bin/tput ']'
  • '[' -x /usr/bin/expr ']'
  • /usr/bin/tput hpa 60
  • /usr/bin/tput setaf 1
  • '[' -z ']'
  • FANCYTTY=1
  • case "$FANCYTTY" in
  • true
  • /usr/bin/tput xenl
    ++ /usr/bin/tput cols
  • COLS=179
  • '[' 179 ']'
  • '[' 179 -gt 6 ']'
    ++ /usr/bin/expr 179 - 7
  • COL=172
  • log_use_plymouth
  • '[' n = y ']'
  • plymouth --ping
  • printf ' * Starting Tomcat servlet engine tomcat7 '
  • Starting Tomcat servlet engine tomcat7
    ++ /usr/bin/expr 179 - 1
  • /usr/bin/tput hpa 178
  • printf ' '
  • start-stop-daemon --test --start --pidfile /var/run/tomcat7.pid --user tomcat7 --exec /usr/lib/jvm/java-8-oracle-amd64/bin/java
  • umask 022
  • echo '// AUTO-GENERATED FILE from /etc/tomcat7/policy.d/'
  • echo ''
  • cat /var/lib/tomcat7/conf/policy.d/01system.policy /var/lib/tomcat7/conf/policy.d/02debian.policy /var/lib/tomcat7/conf/policy.d/03catalina.policy /var/lib/tomcat7/conf/policy.d/04webapps.policy /var/lib/tomcat7/conf/policy.d/50local.policy
  • rm -rf /tmp/tomcat7-tmp
  • mkdir -p /tmp/tomcat7-tmp
  • chown tomcat7 /tmp/tomcat7-tmp
  • catalina_sh start
    ++ echo -Xmx128M -Djava.awt.headless=true -XX:+UseConcMarkSweepGC
    ++ sed 's/"/\"/g'
  • JAVA_OPTS='-Xmx128M -Djava.awt.headless=true -XX:+UseConcMarkSweepGC'
  • AUTHBIND_COMMAND=
  • '[' no = yes -a start = start ']'
  • TOMCAT_SH='set -a; JAVA_HOME="/usr/lib/jvm/java-8-oracle-amd64"; source "/etc/default/tomcat7"; CATALINA_HOME="/usr/share/tomcat7"; CATALINA_BASE="/var/lib/tomcat7"; JAVA_OPTS="-Xmx128M -Djava.awt.headless=true -XX:+UseConcMarkSweepGC"; CATALINA_PID="/var/run/tomcat7.pid"; CATALINA_TMPDIR="/tmp/tomcat7-tmp"; LANG="en_US.UTF-8"; JSSE_HOME="/usr/lib/jvm/java-8-oracle-amd64/jre/"; cd "/var/lib/tomcat7"; "/usr/share/tomcat7/bin/catalina.sh" start'
  • '[' no = yes -a start = start ']'
  • set +e
  • touch /var/run/tomcat7.pid /var/lib/tomcat7/logs/catalina.out
  • chown tomcat7 /var/run/tomcat7.pid /var/lib/tomcat7/logs/catalina.out
  • start-stop-daemon --start -b -u tomcat7 -g tomcat7 -c tomcat7 -d /tmp/tomcat7-tmp -p /var/run/tomcat7.pid -x /bin/bash -- -c ' set -a; JAVA_HOME="/usr/lib/jvm/java-8-oracle-amd64"; source "/etc/default/tomcat7"; CATALINA_HOME="/usr/share/tomcat7"; CATALINA_BASE="/var/lib/tomcat7"; JAVA_OPTS="-Xmx128M -Djava.awt.headless=true -XX:+UseConcMarkSweepGC"; CATALINA_PID="/var/run/tomcat7.pid"; CATALINA_TMPDIR="/tmp/tomcat7-tmp"; LANG="en_US.UTF-8"; JSSE_HOME="/usr/lib/jvm/java-8-oracle-amd64/jre/"; cd "/var/lib/tomcat7"; "/usr/share/tomcat7/bin/catalina.sh" start'
  • status=0
  • set +a -e
  • return 0
  • sleep 5
  • start-stop-daemon --test --start --pidfile /var/run/tomcat7.pid --user tomcat7 --exec /usr/lib/jvm/java-8-oracle-amd64/bin/java
  • '[' -f /var/run/tomcat7.pid ']'
  • rm -f /var/run/tomcat7.pid
  • log_end_msg 1
  • '[' -z 1 ']'
  • '[' 172 ']'
  • '[' -x /usr/bin/tput ']'
  • log_use_plymouth
  • '[' n = y ']'
  • plymouth --ping
  • printf '\r'
  • /usr/bin/tput hpa 172
  • '[' 1 -eq 0 ']'
  • printf '['
  • /usr/bin/tput setaf 1
  • printf fail
  • /usr/bin/tput op
  • echo ']'
  • return 1
[email protected]:/# ps -ef | grep tomcat
tomcat7   5997     1 99 04:09 ?        00:00:07 /usr/lib/jvm/java-8-oracle-amd64/bin/java -Djava.util.logging.config.file=/var/lib/tomcat7/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xmx128M -Djava.awt.headless=true -XX:+UseConcMarkSweepGC -Djava.endorsed.dirs=/usr/share/tomcat6/lib/endorsed -classpath /usr/share/tomcat7/bin/bootstrap.jar:/usr/share/tomcat7/bin/tomcat-juli.jar -Dcatalina.base=/var/lib/tomcat7 -Dcatalina.home=/usr/share/tomcat7 -Djava.io.tmpdir=/tmp/tomcat7-tmp org.apache.catalina.startup.Bootstrap start
root      6038     1  0 04:09 ?        00:00:00 grep --color=auto tomcat
#+END_EXAMPLE
** TODO [#B] docker: How to resolve docker pull fails, if we can't restart docker server
   https://github.com/docker/docker/issues/3115

#+BEGIN_EXAMPLE
5089df36ca81: Downloading 522.8 MB
5089df36ca81: Download complete
96f92ffea108: Download complete
709ece14260e: Downloading 189.8 MB
709ece14260e: Download complete
598ad0443ad3: Error downloading dependent layers
Error pulling image (latest) from docker.io/denny/osc, Untar re-exec error: exit status 1: output: unexpected EOF
[email protected]:# docker ps
CONTAINER ID  IMAGE             COMMAND              CREATED     STATUS     PORTS                                                                                              NAMES
0d8215343ca0  denny/osc:latest  "/usr/sbin/sshd -D"  2 days ago  Up 2 days  0.0.0.0:3128->3128/tcp, 0.0.0.0:28000->28000/tcp, 0.0.0.0:28080->28080/tcp, 0.0.0.0:4022->22/tcp  docker-jenkins
b238788f275c  denny/osc:latest  "/usr/sbin/sshd -D"  4 days ago  Up 4 days  0.0.0.0:9022->22/tcp, 0.0.0.0:11001->10001/tcp                                                     frontend
573193f277ae  denny/osc:latest  "/usr/sbin/sshd -D"  4 days ago  Up 4 days  0.0.0.0:8022->22/tcp                                                                               backend
848e8b95ae8b  denny/osc:latest  "/usr/sbin/sshd -D"  4 days ago  Up 4 days  0.0.0.0:7022->22/tcp                                                                               database
cd851ef5c567  denny/osc:latest  "/usr/sbin/sshd -D"  4 days ago  Up 4 days  0.0.0.0:8080->8080/tcp, 0.0.0.0:5022->22/tcp                                                       iam-registry
[email protected]:# docker pull denny/osc:latest
Repository docker.io/denny/osc already being pulled by another client. Waiting.
#+END_EXAMPLE
** TODO [#A] docker disable people: docker exec -it docker-jenkins bash
** docker
#+begin_example
[email protected]:/tmp# cat ./test.cpp
#include <iostream>
using namespace std;
int main() {
    int i;
    long number;
    number = 9010241024;
    char *str;

    str = new char[number];
    for (i = 0; i < number; i++) str[i] = 1;

    str = new char[number];
    for (i = 0; i < number; i++) str[i] = 1;

    cin >> i;

    return 0;
}

[email protected]:/tmp# pmap -x $(ps -ef | grep a.out | head -n 1 | awk -F' ' '{print $2}')
218: ./a.out
Address           Kbytes     RSS   Dirty Mode  Mapping
0000000000400000       4       4       0 r-x-- a.out
0000000000600000       4       4       0 r---- a.out
0000000000601000       4       4       4 rw--- a.out
00007f3b92ca9000  163848  100000  100000 rw---   [ anon ]
00007f3b9ccab000    1004      64       0 r-x-- libm-2.15.so
00007f3b9cda6000    2044       0       0 ----- libm-2.15.so
00007f3b9cfa5000       4       0       0 r---- libm-2.15.so
00007f3b9cfa6000       4       0       0 rw--- libm-2.15.so
00007f3b9cfa7000    1748     284       0 r-x-- libc-2.15.so
00007f3b9d15c000    2044       0       0 ----- libc-2.15.so
00007f3b9d35b000      16       4       0 r---- libc-2.15.so
00007f3b9d35f000       8       4       4 rw--- libc-2.15.so
00007f3b9d361000      20       4       4 rw---   [ anon ]
00007f3b9d366000      84      12       0 r-x-- libgcc_s.so.1
00007f3b9d37b000    2044       0       0 ----- libgcc_s.so.1
00007f3b9d57a000       4       4       0 r---- libgcc_s.so.1
00007f3b9d57b000       4       0       0 rw--- libgcc_s.so.1
00007f3b9d57c000     916     448       0 r-x-- libstdc++.so.6.0.17
00007f3b9d661000    2044       0       0 ----- libstdc++.so.6.0.17
00007f3b9d860000      32      12       0 r---- libstdc++.so.6.0.17
00007f3b9d868000       8       8       8 rw--- libstdc++.so.6.0.17
00007f3b9d86a000      84       4       0 rw---   [ anon ]
00007f3b9d87f000     136     108       0 r-x-- ld-2.15.so
00007f3b9da97000      20      12       4 rw---   [ anon ]
00007f3b9da9e000      12       4       0 rw---   [ anon ]
00007f3b9daa1000       4       4       0 r---- ld-2.15.so
00007f3b9daa2000       8       8       4 rw--- ld-2.15.so
00007fff2e251000      84       4       4 rw---   [ stack ]
00007fff2e2da000       4       4       0 r-x--   [ anon ]
ffffffffff600000       4       0       0 r-x--   [ anon ]
----------------  ------  ------  ------
total kB          176244  101004  100032
[email protected]:/tmp#
#+end_example
*** TODO How to deal with persistent storage (e.g. databases) in docker
    http://stackoverflow.com/questions/18496940/how-to-deal-with-persistent-storage-e-g-databases-in-docker
*** docker run -link redis:db -i -t ubuntu:12.10 /bin/bash
    docker start be5d4ed8a770
    ls -lt /var/lib/docker/containers
    cat /var/lib/docker/containers/be5d4ed8a770*/config.lxc
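One common answer to the persistent-storage TODO above is to put the database files on a volume so they outlive the container. A minimal sketch; the image name and paths are illustrative placeholders, and the commands are only printed here since they need a Docker daemon to run:

```shell
# Bind-mount variant: data lives at a known host path and survives `docker rm`.
echo 'docker run -d -v /srv/pgdata:/var/lib/postgresql/data postgres'
# Named-volume variant (newer Docker CLIs): Docker manages the storage location.
echo 'docker run -d -v pgdata:/var/lib/postgresql/data postgres'
```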

docker attach --sig-proxy=false be5d4ed8a770
*** docker containers see the same memory as the hosting OS
**** memory in container
#+begin_example
[email protected]:/# cat /proc/meminfo
MemTotal:        5985200 kB
MemFree:          293432 kB
Buffers:          333504 kB
Cached:          4334064 kB
SwapCached:            0 kB
Active:          2648444 kB
Inactive:        2243752 kB
Active(anon):     110420 kB
Inactive(anon):   122744 kB
Active(file):    2538024 kB
Inactive(file):  2121008 kB
Unevictable:        8424 kB
Mlocked:            8424 kB
SwapTotal:       6127608 kB
SwapFree:        6127608 kB
Dirty:               360 kB
Writeback:             0 kB
AnonPages:        232892 kB
Mapped:            37532 kB
Shmem:              1320 kB
Slab:             659948 kB
SReclaimable:     488088 kB
SUnreclaim:       171860 kB
KernelStack:        8616 kB
PageTables:         9128 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     9120208 kB
Committed_AS:    1639724 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      210804 kB
VmallocChunk:   34359474912 kB
HardwareCorrupted:     0 kB
AnonHugePages:     63488 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       10240 kB
DirectMap2M:     6281216 kB
[email protected]:/# free -ml
             total       used       free     shared    buffers     cached
Mem:          5844       5558        286          0        325       4232
Low:          5844       5558        286
High:            0          0          0
-/+ buffers/cache:       1000       4844
Swap:         5983          0       5983
[email protected]:/#
#+end_example
**** memory in hosting OS
#+begin_example
[[email protected] docker]# cat /proc/meminfo
MemTotal:        5985200 kB
MemFree:          293416 kB
Buffers:          333504 kB
Cached:          4334068 kB
SwapCached:            0 kB
Active:          2648380 kB
Inactive:        2243720 kB
Active(anon):     110320 kB
Inactive(anon):   122744 kB
Active(file):    2538060 kB
Inactive(file):  2120976 kB
Unevictable:        8424 kB
Mlocked:            8424 kB
SwapTotal:       6127608 kB
SwapFree:        6127608 kB
Dirty:               368 kB
Writeback:             0 kB
AnonPages:        232748 kB
Mapped:            37532 kB
Shmem:              1320 kB
Slab:             659972 kB
SReclaimable:     488088 kB
SUnreclaim:       171884 kB
KernelStack:        8616 kB
PageTables:         9128 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     9120208 kB
Committed_AS:    1639720 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      210804 kB
VmallocChunk:   34359474912 kB
HardwareCorrupted:     0 kB
AnonHugePages:     63488 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       10240 kB
DirectMap2M:     6281216 kB
[[email protected] docker]# free -ml
             total       used       free     shared    buffers     cached
Mem:          5844       5558        286          0        325       4232
Low:          5844       5558        286
High:            0          0          0
-/+ buffers/cache:       1000       4844
Swap:         5983          0       5983
[[email protected] docker]#
#+end_example
*** web page: Gathering LXC and Docker containers metrics | Docker Blog
    http://blog.docker.io/2013/10/gathering-lxc-docker-containers-metrics/
**** webcontent :noexport:
#+begin_example
Location: http://blog.docker.io/2013/10/gathering-lxc-docker-containers-metrics/


October 8, 2013

Gathering LXC and Docker containers metrics

Linux Containers rely on control groups which not only track groups of processes, but also expose a lot of metrics about CPU, memory, and block I/O usage. We will see how to access those metrics, and how to obtain network usage metrics as well. This is relevant for "pure" LXC containers, as well as for Docker containers.

Locate your control groups

Control groups are exposed through a pseudo-filesystem. In recent distros, you should find this filesystem under /sys/fs/cgroup. Under that directory, you will see multiple sub-directories, called devices, freezer, blkio, etc.; each sub-directory actually corresponds to a different cgroup hierarchy.

On older systems, the control groups might be mounted on /cgroup, without distinct hierarchies. In that case, instead of seeing the sub-directories, you will see a bunch of files in that directory, and possibly some directories corresponding to existing containers.

To figure out where your control groups are mounted, you can run:

grep cgroup /proc/mounts

Control groups hierarchies

The fact that different control groups can be in different hierarchies means that you can use completely different groups (and policies) for e.g. CPU allocation and memory allocation. Let's make up a completely imaginary example: you have a 2-CPU system running Python webapps with Gunicorn, a PostgreSQL database, and accepting SSH logins. You can put each webapp and each SSH session in their own memory control group (to make sure that a single app or user doesn't use up the memory of the whole system), and at the same time, stick the webapps and database on a CPU, and the SSH logins on another CPU.

Of course, if you run LXC containers, each hierarchy will have one group per container, and all hierarchies will look the same.

Merging or splitting hierarchies is achieved by using special options when mounting the cgroup pseudo-filesystems. Note that if you want to change that, you will have to remove all existing cgroups in the hierarchies that you want to split or merge.

Enumerating our cgroups

You can look into /proc/cgroups to see the different control group subsystems known to the system, the hierarchy they belong to, and how many groups they contain.

You can also look at /proc/<pid>/cgroup to see which control groups a process belongs to. The control group will be shown as a path relative to the root of the hierarchy mountpoint; e.g. / means "this process has not been assigned into a particular group", while /lxc/pumpkin means that the process is likely to be a member of a container named pumpkin.

Finding the cgroup for a given container

For each container, one cgroup will be created in each hierarchy. On older systems with older versions of the LXC userland tools, the name of the cgroup will be the name of the container. With more recent versions of the LXC tools, the cgroup will be lxc/<container_name>.

Additional note for Docker users: the container name will be the full ID or long ID of the container. If a container shows up as ae836c95b4c3 in docker ps, its long ID might be something like ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79. You can look it up with docker inspect or docker ps -notrunc.
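To script the short-ID-to-long-ID lookup, something like the following works with current Docker CLIs. The container ID is a placeholder, and the snippet degrades gracefully when no daemon is reachable:

```shell
CID=ae836c95b4c3   # placeholder short ID, as shown by `docker ps`
if command -v docker >/dev/null 2>&1; then
  # --format '{{.Id}}' prints only the 64-character long ID
  LONG_ID=$(docker inspect --format '{{.Id}}' "$CID" 2>/dev/null)
fi
echo "${LONG_ID:-docker unavailable or container not found}"
```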

Putting everything together: on my system, if I want to look at the memory metrics for a Docker container, I have to look at /sys/fs/cgroup/memory/lxc/<longid>/.

Collecting memory, CPU, block I/O metrics

For each subsystem, we will find one pseudo-file (in some cases, multiple) containing statistics about used memory, accumulated CPU cycles, or number of I/O completed. Those files are easy to parse, as we will see.

Memory metrics

Those will be found in the memory cgroup (duh!). Note that the memory control group adds a little overhead, because it does very fine-grained accounting of the memory usage on your system. Therefore, many distros chose to not enable it by default. Generally, to enable it, all you have to do is to add some kernel command-line parameters: cgroup_enable=memory swapaccount=1.

The metrics are in the pseudo-file memory.stat. Here is what it will look like:

cache 11492564992
rss 1930993664
mapped_file 306728960
pgpgin 406632648
pgpgout 403355412
swap 0
pgfault 728281223
pgmajfault 1724
inactive_anon 46608384
active_anon 1884520448
inactive_file 7003344896
active_file 4489052160
unevictable 32768
hierarchical_memory_limit 9223372036854775807
hierarchical_memsw_limit 9223372036854775807
total_cache 11492564992
total_rss 1930993664
total_mapped_file 306728960
total_pgpgin 406632648
total_pgpgout 403355412
total_swap 0
total_pgfault 728281223
total_pgmajfault 1724
total_inactive_anon 46608384
total_active_anon 1884520448
total_inactive_file 7003344896
total_active_file 4489052160
total_unevictable 32768

The first half (without the total_ prefix) contains statistics relevant to the processes within the cgroup, excluding sub-cgroups. The second half (with the total_ prefix) includes sub-cgroups as well.
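These pseudo-files are trivially scriptable; here is a sketch that pulls individual counters out with awk, using an inline sample in place of a real /sys/fs/cgroup/memory/<group>/memory.stat:

```shell
# Write a memory.stat-style sample to a temp file; on a live system,
# point STAT at the real pseudo-file instead.
STAT=$(mktemp)
cat > "$STAT" <<'EOF'
cache 11492564992
rss 1930993664
swap 0
total_rss 1930993664
EOF
# Exact-match on the first column so "rss" does not also catch "total_rss".
RSS=$(awk '$1 == "rss" {print $2}' "$STAT")
CACHE=$(awk '$1 == "cache" {print $2}' "$STAT")
echo "rss=${RSS} bytes, cache=${CACHE} bytes"
rm -f "$STAT"
```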

Some metrics are "gauges", i.e. values that can increase or decrease (e.g. swap, the amount of swap space used by the members of the cgroup). Some others are "counters", i.e. values that can only go up, because they represent occurrences of a specific event (e.g. pgfault, which indicates the number of page faults which happened since the creation of the cgroup; this number can never decrease).

Let's see what those metrics stand for. All memory amounts are in bytes (except for event counters).

  • cache is the amount of memory used by the processes of this control group that can be associated precisely with a block on a block device. When you read and write files from and to disk, this amount will increase. This will be the case if you use "conventional" I/O (open, read, write syscalls) as well as mapped files (with mmap). It also accounts for the memory used by tmpfs mounts. I don't know exactly why; it might be because tmpfs filesystems work directly with the page cache.
  • rss is the amount of memory that doesn't correspond to anything on disk: stacks, heaps, and anonymous memory maps.
  • mapped_file indicates the amount of memory mapped by the processes in the control group. In my humble opinion, it doesn't give you information about how much memory is used; it rather tells you how it is used.
  • pgpgin and pgpgout are a bit tricky. If you are used to vmstat, you might think that they indicate the number of times that a page had to be read and written (respectively) by a process of the cgroup, and that they should reflect both file I/O and swap activity. Wrong! In fact, they correspond to charging events. Each time a page is "charged" (=added to the accounting) to a cgroup, pgpgin increases. When a page is "uncharged" (=no longer "billed" to a cgroup), pgpgout increases.
  • pgfault and pgmajfault indicate the number of times that a process of the cgroup triggered a "page fault" and a "major fault", respectively. A page fault happens when a process accesses a part of its virtual memory space which is inexistent or protected. The former can happen if the process is buggy and tries to access an invalid address (it will then be sent a SIGSEGV signal, typically killing it with the famous Segmentation fault message). The latter can happen when the process reads from a memory zone which has been swapped out, or which corresponds to a mapped file: in that case, the kernel will load the page from disk, and let the CPU complete the memory access. It can also happen when the process writes to a copy-on-write memory zone: likewise, the kernel will preempt the process, duplicate the memory page, and resume the write operation on the process' own copy of the page. "Major" faults happen when the kernel actually has to read the data from disk. When it just has to duplicate an existing page, or allocate an empty page, it's a regular (or "minor") fault.
  • swap is (as expected) the amount of swap currently used by the processes in this cgroup.
  • active_anon and inactive_anon are the amount of anonymous memory that has been identified as respectively active and inactive by the kernel. "Anonymous" memory is the memory that is not linked to disk pages. In other words, that's the equivalent of the rss counter described above. In fact, the very definition of the rss counter is active_anon+inactive_anon-tmpfs (where tmpfs is the amount of memory used up by tmpfs filesystems mounted by this control group). Now, what's the difference between "active" and "inactive"? Pages are initially "active"; and at regular intervals, the kernel sweeps over the memory, and tags some pages as "inactive". Whenever they are accessed again, they are immediately retagged "active". When the kernel is almost out of memory, and time comes to swap out to disk, the kernel will swap "inactive" pages.
  • Likewise, the cache memory is broken down into active_file and inactive_file. The exact formula is cache=active_file+inactive_file+tmpfs. The exact rules used by the kernel to move memory pages between active and inactive sets are different from the ones used for anonymous memory, but the general principle is the same. Note that when the kernel needs to reclaim memory, it is cheaper to reclaim a clean (=non modified) page from this pool, since it can be reclaimed immediately (while anonymous pages and dirty/modified pages have to be written to disk first).
  • unevictable is the amount of memory that cannot be reclaimed; generally, it will account for memory that has been "locked" with mlock. It is often used by crypto frameworks to make sure that secret keys and other sensitive material never gets swapped out to disk.
  • Last but not least, the memory and memsw limits are not really metrics, but a reminder of the limits applied to this cgroup. The first one indicates the maximum amount of physical memory that can be used by the processes of this control group; the second one indicates the maximum amount of RAM+swap.

Accounting for memory in the page cache is very complex. If two processes in different control groups both read the same file (ultimately relying on the same blocks on disk), the corresponding memory charge will be split between the control groups. It's nice, but it also means that when a cgroup is terminated, it could increase the memory usage of another cgroup, because they are not splitting the cost anymore for those memory pages.

CPU metrics

Now that we've covered memory metrics, everything else will look very simple in comparison. CPU metrics will be found in the cpuacct controller.

For each container, you will find a pseudo-file cpuacct.stat, containing the CPU usage accumulated by the processes of the container, broken down between user and system time. If you're not familiar with the distinction, user is the time during which the processes were in direct control of the CPU (i.e. executing process code), and system is the time during which the CPU was executing system calls on behalf of those processes.

Those times are expressed in ticks of 1/100th of second. (Actually, they are expressed in "user jiffies". There are USER_HZ "jiffies" per second, and on x86 systems, USER_HZ is 100. This used to map exactly to the number of scheduler "ticks" per second; but with the advent of higher frequency scheduling, as well as tickless kernels, the number of kernel ticks wasn't relevant anymore. It stuck around anyway, mainly for legacy and compatibility reasons.)
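So converting cpuacct.stat to seconds is just a division by USER_HZ. A sketch; the sample numbers stand in for a real /sys/fs/cgroup/cpuacct/<group>/cpuacct.stat:

```shell
# user/system times are in "user jiffies" (USER_HZ = 100 on x86).
printf 'user 4204\nsystem 1843\n' |
  awk '{ printf "%s %.2f s\n", $1, $2 / 100 }'
```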

Block I/O metrics

Block I/O is accounted in the blkio controller. Different metrics are scattered across different files. While you can find in-depth details in the blkio-controller file in the kernel documentation, here is a short list of the most relevant ones:

  • blkio.sectors contains the number of 512-byte sectors read and written by the processes that are members of the cgroup, device by device. Reads and writes are merged in a single counter.
  • blkio.io_service_bytes indicates the number of bytes read and written by the cgroup. It has 4 counters per device, because for each device, it differentiates between synchronous vs. asynchronous I/O, and reads vs. writes.
  • blkio.io_serviced is similar, but instead of showing byte counters, it will show the number of I/O operations performed, regardless of their size. It also has 4 counters per device.
  • blkio.io_queued indicates the number of I/O operations currently queued for this cgroup. In other words, if the cgroup isn't doing any I/O, this will be zero. Note that the opposite is not true. In other words, if there is no I/O queued, it does not mean that the cgroup is idle (I/O-wise). It could be doing purely synchronous reads on an otherwise quiescent device, which is therefore able to handle them immediately, without queuing. Also, while it is helpful to figure out which cgroup is putting stress on the I/O subsystem, keep in mind that it is a relative quantity. Even if a process group does not perform more I/O, its queue size can increase just because the device load increases due to other devices.

For each file, there is a _recursive variant, that aggregates the metrics of the control group and all its sub-cgroups.

Also, it's worth mentioning that in most cases, if the processes of a control group have not done any I/O on a given block device, the block device will not appear in the pseudo-files. In other words, you have to be careful each time you parse one of those files, because new entries might have appeared since the previous time.
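A defensive way to read these files is to match on the operation column and ignore everything else, so newly appearing device entries don't break the parser. A sketch over sample blkio.io_service_bytes data:

```shell
# Sum Read + Write bytes across all devices; Sync/Async/Total rows
# and the trailing two-field "Total" line are skipped by the match.
sum=$(printf '8:0 Read 4096\n8:0 Write 12288\n8:0 Sync 8192\n8:0 Async 8192\n8:0 Total 16384\nTotal 16384\n' |
  awk '$2 == "Read" || $2 == "Write" { total += $3 } END { print total + 0 }')
echo "bytes transferred: $sum"
```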

Collecting network metrics

Interestingly, network metrics are not exposed directly by control groups. There is a good explanation for that: network interfaces exist within the context of network namespaces. The kernel could probably accumulate metrics about packets and bytes sent and received by a group of processes, but those metrics wouldn't be very useful. You want (at least!) per-interface metrics (because traffic happening on the local lo interface doesn't really count). But since processes in a single cgroup can belong to multiple network namespaces, those metrics would be harder to interpret: multiple network namespaces means multiple lo interfaces, potentially multiple eth0 interfaces, etc.; so this is why there is no easy way to gather network metrics with control groups.

So what shall we do? Well, we have multiple options.

Iptables

When people think about iptables, they usually think about firewalling, and maybe NAT scenarios. But iptables (or rather, the netfilter framework for which iptables is just an interface) can also do some serious accounting.

For instance, you can setup a rule to account for the outbound HTTP traffic on a web server:

iptables -I OUTPUT -p tcp --sport 80

There is no -j or -g flag, so the rule will just count matched packets and go to the following rule.

Later, you can check the values of the counters, with:

iptables -nxvL OUTPUT

(Technically, -n is not required, but it will prevent iptables from doing DNS reverse lookups, which are probably useless in this scenario.)

Counters include packets and bytes. If you want to setup metrics for container traffic like this, you could execute a for loop to add two iptables rules per container IP address (one in each direction), in the FORWARD chain. This will only meter traffic going through the NAT layer; you will also have to add traffic going through the userland proxy.
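The per-container loop described above can be sketched like this. It only prints the rules (the container IPs are placeholders), since actually installing them requires root:

```shell
for ip in 172.17.0.2 172.17.0.3; do      # placeholder container IPs
  echo "iptables -I FORWARD -s $ip"      # count traffic from the container
  echo "iptables -I FORWARD -d $ip"      # count traffic to the container
done
```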

Then, you will need to check those counters on a regular basis. If you happen to use collectd, there is a nice plugin to automate iptables counters collection.

Interface-level counters

Since each container has a virtual Ethernet interface, you might want to check directly the TX and RX counters of this interface. However, this is not as easy as it sounds. If you use Docker (as of current version 0.6) or lxc-start, then you will notice that each container is associated to a virtual Ethernet interface in your host, with a name like vethKk8Zqi. Figuring out which interface corresponds to which container is, unfortunately, difficult. (If you know an easy way, let me know.)

In the long run, Docker will probably take over the setup of those virtual interfaces. It will keep track of their names, and make sure that it can easily associate containers with their respective interfaces.

But for now, the best way is to check the metrics from within the containers. I'm not talking about running a special agent in the container, or anything like that. We are going to run an executable from the host environment, but within the network namespace of a container.

ip-netns magic

To do that, we will use the ip netns exec command. This command will let you execute any program (present in the host system) within any network namespace visible to the current process. This means that your host will be able to enter the network namespace of your containers, but your containers won't be able to access the host, nor their sibling containers. Containers will be able to "see" and affect their sub-containers, though.

The exact format of the command is:

ip netns exec <nsname> <command...>

For instance:

ip netns exec mycontainer netstat -i

How does the naming system work? How does ip netns find mycontainer? Answer: by using the namespaces pseudo-files. Each process belongs to one network namespace, one PID namespace, one mnt namespace, etc.; and those namespaces are materialized under /proc/<pid>/ns/. For instance, the network namespace of PID 42 is materialized by the pseudo-file /proc/42/ns/net.

When you run ip netns exec mycontainer ..., it expects /var/run/netns/mycontainer to be one of those pseudo-files. (Symlinks are accepted.)

In other words, to execute a command within the network namespace of a container, we need to:

  • find out the PID of any process within the container that we want to investigate;
  • create a symlink from /var/run/netns/<name> to /proc/<pid>/ns/net;
  • execute ip netns exec <name> ....

Now, we need to figure out a way to find the PID of a process (any process!) running in the container that we want to investigate. This is actually very easy. You have to locate one of the control groups corresponding to the container. We explained how to locate those cgroups in the beginning of this post, so we won't cover that again.

On my machine, a control group will typically be located in /sys/fs/cgroup/devices/lxc/<longid>. Within that directory, you will find a pseudo-file called tasks. It contains the list of the PIDs that are in the control group, i.e., in the container. We can take any of them; so the first one will do.

Putting everything together, if the "short ID" of a container is held in the environment variable $CID, here is a small shell snippet to put everything together:

TASKS=/sys/fs/cgroup/devices/$CID*/tasks
PID=$(head -n 1 $TASKS)
mkdir -p /var/run/netns
ln -sf /proc/$PID/ns/net /var/run/netns/$CID
ip netns exec $CID netstat -i

The same mechanism is used in Pipework to setup network interfaces within containers from outside the containers.

Tips for high-performance metric collection

Note that running a new process each time you want to update metrics is (relatively) expensive. If you want to collect metrics at high resolutions, and/or over a large number of containers (think 1000 containers on a single host), you do not want to fork a new process each time.

Here is how to collect metrics from a single process. You will have to write your metric collector in C (or any language that lets you do low-level system calls). You need to use a special system call, setns(), which lets the current process enter any arbitrary namespace. It requires, however, an open file descriptor to the namespace pseudo-file (remember: that's the pseudo-file in /proc/<pid>/ns/net).

However, there is a catch: you must not keep this file descriptor open. If you do, when the last process of the control group exits, the namespace will not be destroyed, and its network resources (like the virtual interface of the container) will stay around forever (or until you close that file descriptor).

The right approach would be to keep track of the first PID of each container, and re-open the namespace pseudo-file each time.

Collecting metrics when a container exits

Sometimes, you do not care about real time metric collection, but when a container exits, you want to know how much CPU, memory, etc. it has used.

The current implementation of Docker (as of 0.6) makes this particularly challenging, because it relies on lxc-start, and when a container stops, lxc-start carefully cleans up behind it. If you really want to collect the metrics anyway, here is how. For each container, start a collection process, and move it to the control groups that you want to monitor by writing its PID to the tasks file of the cgroup. The collection process should periodically re-read the tasks file to check if it's the last process of the control group. (If you also want to collect network statistics as explained in the previous section, you should also move the process to the appropriate network namespace.)

When the container exits, lxc-start will try to delete the control groups. It will fail, since the control group is still in use; but that's fine. Your process should now detect that it is the only one remaining in the group. Now is the right time to collect all the metrics you need!

Finally, your process should move itself back to the root control group, and remove the container control group. To remove a control group, just rmdir its directory. It's counter-intuitive to rmdir a directory that still appears to contain files; but remember that this is a pseudo-filesystem, so the usual rules don't apply. After the cleanup is done, the collection process can exit safely.
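The cleanup step, sketched under the same cgroup v1 assumptions (note: in this plain-directory sketch the group directory must actually be empty before rmdir; on the real cgroup pseudo-filesystem the listed pseudo-files do not prevent removal):

```python
import os

def leave_and_remove(container_cgroup, root_cgroup):
    """Called after the final metrics have been read: re-attach
    ourselves to the root control group, then remove the container's
    group. On cgroupfs, rmdir succeeds even though the directory
    still lists pseudo-files."""
    with open(os.path.join(root_cgroup, "tasks"), "w") as f:
        f.write(str(os.getpid()))   # move ourselves back to the root group
    os.rmdir(container_cgroup)      # the now-unused group disappears
```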

As you can see, collecting metrics when a container exits can be tricky; for this reason, it is usually easier to collect metrics at regular intervals (e.g. every minute, with the collectd LXC plugin) and rely on that instead.

Wrapping it up

To recap, we covered:

  • how to locate the control groups for containers;
  • reading and interpreting compute metrics for containers;
  • different ways to obtain network metrics for containers;
  • a technique to gather overall metrics when a container exits.

As we have seen, metrics collection is not insanely difficult, but still involves many complicated steps, with special cases like those for the network subsystem. Docker will take care of this, or at least expose hooks to make it more straightforward. It is one of the reasons why we repeat over and over "Docker is not production ready yet": it's fine to skip metrics for development, continuous testing, or staging environments, but it's definitely not fine to run production services without metrics!

Last but not least, note that even with all that information, you will still need a storage and graphing system for those metrics. There are many such systems out there. If you want something that you can deploy on your own, you can check e.g. collectd or Graphite. There are also "-as-a-Service" offerings that will store your metrics and let you query them in various ways, for a price. Some examples include Librato, AWS CloudWatch, New Relic Server Monitoring, and many more.

About Jérôme Petazzoni


Jérôme is a senior engineer at dotCloud, where he rotates between Ops, Support and Evangelist duties and has earned the nickname of "master Yoda". In a previous life he built and operated large-scale Xen hosting back when EC2 was just the name of a plane, supervised the deployment of fiber interconnects through the French subway, built a specialized GIS to visualize fiber infrastructure, and specialized in commando deployments of large-scale computer systems in bandwidth-constrained environments such as conference centers, among various other feats of technical wizardry. He cares for the servers powering dotCloud, helps our users feel at home on the platform, and documents the many ways to use dotCloud in articles, tutorials and sample applications. He's also an avid dotCloud power user who has deployed just about anything on dotCloud - look for one of his many custom services on our GitHub repository.

Connect with Jérôme on Twitter! @jpetazzo

By Jerome Petazzoni - Posted in Demos, Tutorials - Tagged with cgroups, containers, docker, lxc, metrics



#+end_example
** TODO docker start container with fixed ip
** # --8<-------------------------- separator ------------------------>8-- :noexport:
** DONE ubuntu install docker
CLOSED: [2015-04-19 Sun 10:25]
https://docs.docker.com/installation/ubuntulinux/

Ubuntu 12.04

sudo apt-get update
sudo apt-get install linux-image-generic-lts-trusty
sudo reboot
wget -qO- https://get.docker.com/ | sh

Ubuntu 14.04

wget -qO- https://get.docker.com/ | sh
** DONE [#A] docker storage driver plugin: aufs VS devicemapper :IMPORTANT:
CLOSED: [2015-03-08 Sun 20:56]

  • Prior to 0.7.0, Docker relied upon AUFS as its only storage driver.
  • After 0.7.0, the default storage driver is devicemapper.
  • AUFS is not in the upstream Linux kernel. Why is AUFS chosen as the default storage backend (for example in Ubuntu's Docker)?

http://muehe.org/posts/switching-docker-from-aufs-to-devicemapper/ http://stackoverflow.com/questions/24764908/why-use-aufs-as-the-default-docker-storage-backend-instead-of-devicemapper

  • The device mapper graphdriver uses the device mapper thin provisioning module (dm-thinp) to implement CoW snapshots. For each devicemapper graph location (typically /var/lib/docker/devicemapper, $graph below) a thin pool is created based on two block devices, one for data and one for metadata.

*** docker info
#+BEGIN_EXAMPLE
[email protected]:# docker info
Containers: 0
Images: 0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 0
Execution Driver: native-0.2
Kernel Version: 3.13.0-24-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 1
Total Memory: 364.1 MiB
Name: default-ubuntu-1404
ID: LPML:YQM3:VTHJ:6B6Y:2Y7X:CRTU:PAKR:GNPV:7MW7:ZACV:OOI4:LZMY
WARNING: No swap limit support
#+END_EXAMPLE
** DONE [#A] setup chef server in docker
CLOSED: [2015-04-20 Mon 08:12]
docker run -it --privileged ubuntu:14.04 /bin/bash

Enable ssh

apt-get install openssh-server
sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

SSH login fix. Otherwise user is kicked off after login

sed 's@session\srequired\spam_loginuid.so@session optional pam_loginuid.so@' -i /etc/pam.d/sshd
service ssh restart
passwd  # markDenny1

sysctl change for docker container

dpkg-divert --local --rename --add /sbin/initctl
ln -sf /bin/true /sbin/initctl
sysctl -w kernel.shmmax=17179869184
echo "kernel.shmmax=17179869184" > /etc/sysctl.d/shmmax.conf

get deb file for chef server

apt-get update

http://downloads.chef.io/chef-server/

cp /var/lib/docker/aufs/mnt/

dpkg -i ./chef-server-core_12.0.8-1_amd64.deb
/opt/opscode/embedded/bin/runsvdir-start &
chef-server-ctl reconfigure

chef-server-ctl user-create admin denny zhang [email protected] dennyMarkfilebat1 --filename /root/admin.pem
chef-server-ctl org-create digitalocean "DigitalOcean, Inc." --association_user admin -f /root/digitalocean-validator.pem

cat > ~/.ssh/knife.rb <<EOF
log_level :info
log_location STDOUT
node_name 'admin'
client_key '/Users/mac/.chef/admin.pem'
validation_client_name 'digitalocean-validator'
validation_key '/Users/mac/.chef/digitalocean-validator.pem'
chef_server_url 'https://104.131.157.119/organizations/digitalocean'
syntax_check_cache_path '/Users/mac/.chef/syntax_check_cache'
ssl_verify_mode :verify_none
EOF

generate image from the container

docker commit -m "Initial version" -a "Denny Zhang[email protected]" 8c0be19ecd87 denny/chefserver:v1

docker run -i -t --privileged -p 3022:22 -p 3443:443 denny/chefserver:v1

docker run -d -t --privileged -p 3022:22 -p 3443:443 denny/chefserver:v1 /usr/sbin/sshd -D
sysctl -w kernel.shmmax=17179869184
/opt/opscode/embedded/bin/runsvdir-start &
chef-server-ctl stop
chef-server-ctl start
chef-server-ctl status
** DONE cgconfig service error when the docker starts
CLOSED: [2015-04-20 Mon 19:44]
http://zgu.me/blog/2014/08/20/cgconfig-service-error-when-the-docker-starts/

#+BEGIN_EXAMPLE
When starting the Docker service, the following error shows up:
Starting cgconfig service: Error: cannot mount cpuset to /cgroup/cpuset: Device or resource busy
/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf [FAILED]
Starting docker: [ OK ]
You can clean this up with the cgclear command.

Afterwards, remember to stop Docker first, then start it again.

cgclear
service docker stop
service docker start
PS. OS = CentOS
#+END_EXAMPLE
** DONE CentOS fail to start docker: yum update -y device-mapper-libs
CLOSED: [2015-04-20 Mon 19:53]
http://stackoverflow.com/questions/27216473/docker-1-3-fails-to-start-on-rhel6-5
http://exceptiontrail.blogspot.com/2014/12/docker-140-fails-to-start-due-to-error.html
#+BEGIN_EXAMPLE
[[email protected] backend]# /usr/bin/docker -d
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] WARNING: You are running linux kernel version 2.6.32-431.17.1.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.8.0.
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
/usr/bin/docker: relocation error: /usr/bin/docker: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference
#+END_EXAMPLE
** DONE [#A] commit customized docker images to supermarket
CLOSED: [2015-02-26 Thu 09:01]
https://docs.docker.com/userguide/dockerimages/#push-an-image-to-docker-hub
#+BEGIN_EXAMPLE
sudo docker push ouruser/sinatra
The push refers to a repository [ouruser/sinatra] (len: 1)
Sending image list
Pushing repository ouruser/sinatra (3 tags)
.
.
.
#+END_EXAMPLE
** DONE commit customized docker images locally
CLOSED: [2015-02-26 Thu 08:44]
https://docs.docker.com/userguide/dockerimages/
#+BEGIN_EXAMPLE

[email protected]:~# docker images
REPOSITORY       TAG                IMAGE ID       CREATED          VIRTUAL SIZE
totvs/buildkit   v1                 7f8520a12f86   14 seconds ago   393.9 MB
ubuntu           14.04              2d24f826cb16   5 days ago       188.3 MB
ubuntu           14.04.2            2d24f826cb16   5 days ago       188.3 MB
ubuntu           latest             2d24f826cb16   5 days ago       188.3 MB
ubuntu           trusty             2d24f826cb16   5 days ago       188.3 MB
ubuntu           trusty-20150218.1  2d24f826cb16   5 days ago       188.3 MB
#+END_EXAMPLE
** DONE docker enforce devicemapper, instead of aufs
CLOSED: [2015-03-08 Sun 21:03]
http://stackoverflow.com/questions/20810555/ensure-that-docker-is-using-device-mapper-storage-backend

/etc/default/docker
#+BEGIN_EXAMPLE
[email protected]:/tmp# cat /etc/default/docker
# Docker Upstart and SysVinit configuration file

# Customize location of Docker binary (especially for development testing).
#DOCKER="/usr/local/bin/docker"

# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"

# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"

# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"

DOCKER_OPTS="-s=devicemapper"
[email protected]:/tmp#
#+END_EXAMPLE

#+BEGIN_EXAMPLE
service docker stop
rm -rf /var/lib/docker

mkdir -p /var/lib/docker/devicemapper/devicemapper

cp -r /tmp/devicemapper/* /var/lib/docker/devicemapper/devicemapper/
echo 'DOCKER_OPTS="-s=devicemapper"' >> /etc/default/docker

tree /var/lib/docker/devicemapper/
export DOCKER_OPTS="-s=devicemapper"
service docker start
sleep 1
docker info
cd /tmp/

dd if=/dev/zero of=/var/lib/docker/devicemapper/devicemapper/data bs=1G count=0 seek=8
dd if=/dev/zero of=/var/lib/docker/devicemapper/devicemapper/metadata bs=500M count=0 seek=1

#+END_EXAMPLE ** DONE Why vagrant pull so huge image file: Doesn't support AUFS, but only devicemapper CLOSED: [2015-03-08 Sun 21:24] #+BEGIN_EXAMPLE macs-air:vagrant mac$ vagrant up Bringing machine 'default' up with 'virtualbox' provider... ==> default: Importing base box 'ubuntu/trusty64'... ==> default: Matching MAC address for NAT networking... ==> default: Checking if box 'ubuntu/trusty64' is up to date... ==> default: Setting the name of the VM: ci-totvs-mdm ==> default: Clearing any previously set forwarded ports... ==> default: Clearing any previously set network interfaces... ==> default: Preparing network interfaces based on configuration... default: Adapter 1: nat ==> default: Forwarding ports... default: 22 => 2222 (adapter 1) ==> default: Running 'pre-boot' VM customizations... ==> default: Booting VM... ==> default: Waiting for machine to boot. This may take a few minutes... default: SSH address: 127.0.0.1:2222 default: SSH username: vagrant default: SSH auth method: private key default: Warning: Connection timeout. Retrying... default: Warning: Remote connection disconnect. Retrying... default: default: Vagrant insecure key detected. Vagrant will automatically replace default: this with a newly generated keypair for better security. default: default: Inserting generated public key within guest... default: Removing insecure key from the guest if its present... default: Key inserted! Disconnecting and reconnecting using new SSH key... ==> default: Machine booted and ready! ==> default: Checking for guest additions in VM... ==> default: Mounting shared folders... default: /vagrant => /Users/mac/vagrant ==> default: Running provisioner: shell... default: Running: inline script ==> default: stdin: is not a tty ==> default: Running provisioner: shell... 
default: Running: inline script ==> default: stdin: is not a tty ==> default: dpkg-preconfigure: unable to re-open stdin: No such file or directory ==> default: Selecting previously unselected package tree. ==> default: (Reading database ... 60969 files and directories currently installed.) ==> default: Preparing to unpack .../tree_1.6.0-1_amd64.deb ... ==> default: Unpacking tree (1.6.0-1) ... ==> default: Processing triggers for man-db (2.6.7.1-1ubuntu1) ... ==> default: Setting up tree (1.6.0-1) ... ==> default: ==> default: ==> default: % ==> default: ==> default: T ==> default: o ==> default: tal % Received % ==> default: Xferd Average Speed Time Time Time Curre ==> default: nt ==> default: Dload Upload Total ==> default: Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0 100 178 100 178 0 0 30 0 0:00:05 0:00:05 --:--:-- 41 0 0 0 0 0 0 0 0 --:--:-- 0:00:07 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:08 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:11 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:11 --:--:-- 0 100 178 100 178 0 0 14 0 0:00:12 0:00:11 0:00:01 49 0 0 0 0 0 0 0 0 --:--:-- 0:00:13 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:14 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:15 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:16 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:17 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:17 --:--:-- 0 ==> default: 1 ==> default: 0 ==> default: 0 ==> default: ==> default: 1 ==> default: Downloading Chef for ubuntu... 
==> default: 8 ==> default: 3 ==> default: 5 ==> default: 8 ==> default: ==> default: ==> default: 1 ==> default: downloading https://www.chef.io/chef/metadata?v=&prerelease=false&nightlies=false&p=ubuntu&pv=14.04&m=x86_64 ==> default: to file /tmp/install.sh.2079/metadata.txt ==> default: 0 ==> default: trying wget... ==> default: 0 18358 0 0 1028 0 0:00:17 0:00:17 --:--:-- 5478 ==> default: url https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/13.04/x86_64/chef_12.1.0-1_amd64.deb ==> default: md5 b86c3dd0171e896ab3fb42f26e688fef ==> default: sha256 9bbde88f2eeb846a862512ab6385dff36278ff2ba8bd2e07a237a23337c4165a ==> default: downloaded metadata file looks valid... ==> default: downloading https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/13.04/x86_64/chef_12.1.0-1_amd64.deb ==> default: to file /tmp/install.sh.2079/chef_12.1.0-1_amd64.deb ==> default: trying wget... ==> default: Comparing checksum with sha256sum... ==> default: Installing Chef ==> default: installing with dpkg... ==> default: (Reading database ... 60976 files and directories currently installed.) ==> default: Preparing to unpack .../chef_12.1.0-1_amd64.deb ... ==> default: * Stopping chef-client chef-client ==> default: ...done. ==> default: Unpacking chef (12.1.0-1) over (11.8.2-2) ... ==> default: dpkg: warning: unable to delete old directory '/var/log/chef': Directory not empty ==> default: dpkg: warning: unable to delete old directory '/etc/chef': Directory not empty ==> default: Setting up chef (12.1.0-1) ... ==> default: Thank you for installing Chef! ==> default: Processing triggers for man-db (2.6.7.1-1ubuntu1) ... 
==> default: ssh start/running, process 1237 ==> default: Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.6f9TRIGLAm --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9 ==> default: gpg: ==> default: requesting key A88D21E9 from hkp server keyserver.ubuntu.com ==> default: gpg: key A88D21E9: public key "Docker Release Tool (releasedocker) [email protected]" imported ==> default: gpg: Total number processed: 1 ==> default: gpg: imported: 1 (RSA: 1) ==> default: Ign http://security.ubuntu.com trusty-security InRelease ==> default: Hit http://security.ubuntu.com trusty-security Release.gpg ==> default: Ign http://archive.ubuntu.com trusty InRelease ==> default: Hit http://security.ubuntu.com trusty-security Release ==> default: Ign http://archive.ubuntu.com trusty-updates InRelease ==> default: Hit http://security.ubuntu.com trusty-security/main Sources ==> default: Hit http://security.ubuntu.com trusty-security/universe Sources ==> default: Hit http://archive.ubuntu.com trusty Release.gpg ==> default: Hit http://security.ubuntu.com trusty-security/main amd64 Packages ==> default: Hit http://security.ubuntu.com trusty-security/universe amd64 Packages ==> default: Hit http://archive.ubuntu.com trusty-updates Release.gpg ==> default: Hit http://security.ubuntu.com trusty-security/main Translation-en ==> default: Hit http://security.ubuntu.com trusty-security/universe Translation-en ==> default: Hit http://archive.ubuntu.com trusty Release ==> default: Hit http://archive.ubuntu.com trusty-updates Release ==> default: Hit http://archive.ubuntu.com trusty/main Sources ==> default: Get:1 https://get.docker.com docker InRelease ==> default: Hit http://archive.ubuntu.com trusty/universe Sources ==> default: Ign https://get.docker.com docker InRelease ==> default: Hit 
http://archive.ubuntu.com trusty/main amd64 Packages ==> default: Hit http://archive.ubuntu.com trusty/universe amd64 Packages ==> default: Hit http://archive.ubuntu.com trusty/main Translation-en ==> default: Hit http://archive.ubuntu.com trusty/universe Translation-en ==> default: Hit http://archive.ubuntu.com trusty-updates/main Sources ==> default: Hit http://archive.ubuntu.com trusty-updates/universe Sources ==> default: Hit http://archive.ubuntu.com trusty-updates/main amd64 Packages ==> default: Get:2 https://get.docker.com docker Release ==> default: Hit http://archive.ubuntu.com trusty-updates/universe amd64 Packages ==> default: Hit http://archive.ubuntu.com trusty-updates/main Translation-en ==> default: Get:3 https://get.docker.com docker/main amd64 Packages ==> default: Hit http://archive.ubuntu.com trusty-updates/universe Translation-en ==> default: Get:4 https://get.docker.com docker/main Translation-en_US ==> default: Ign http://archive.ubuntu.com trusty/main Translation-en_US ==> default: Ign http://archive.ubuntu.com trusty/universe Translation-en_US ==> default: Ign https://get.docker.com docker/main Translation-en_US ==> default: Ign https://get.docker.com docker/main Translation-en ==> default: Fetched 7,590 B in 21s (357 B/s) ==> default: Reading package lists... ==> default: Reading package lists... ==> default: Building dependency tree... ==> default: ==> default: Reading state information... ==> default: The following packages were automatically installed and are no longer required: ==> default: chef-zero erubis ohai ruby-diff-lcs ruby-erubis ruby-hashie ruby-highline ==> default: ruby-ipaddress ruby-mime-types ruby-mixlib-authentication ruby-mixlib-cli ==> default: ruby-mixlib-config ruby-mixlib-log ruby-mixlib-shellout ruby-net-ssh ==> default: ruby-net-ssh-gateway ruby-net-ssh-multi ruby-rack ruby-rest-client ==> default: ruby-sigar ruby-systemu ruby-yajl ==> default: Use 'apt-get autoremove' to remove them. 
==> default: The following extra packages will be installed: ==> default: aufs-tools cgroup-lite git git-man liberror-perl lxc-docker-1.5.0 ==> default: Suggested packages: ==> default: git-daemon-run git-daemon-sysvinit git-doc git-el git-email git-gui gitk ==> default: gitweb git-arch git-bzr git-cvs git-mediawiki git-svn ==> default: The following NEW packages will be installed: ==> default: aufs-tools cgroup-lite git git-man liberror-perl lxc-docker lxc-docker-1.5.0 ==> default: 0 upgraded, 7 newly installed, 0 to remove and 1 not upgraded. ==> default: Need to get 8,077 kB of archives. ==> default: After this operation, 37.1 MB of additional disk space will be used. ==> default: Get:1 http://archive.ubuntu.com/ubuntu/ trusty/universe aufs-tools amd64 1:3.2+20130722-1.1 [92.3 kB] ==> default: Get:2 http://archive.ubuntu.com/ubuntu/ trusty/main liberror-perl all 0.17-1.1 [21.1 kB] ==> default: Get:3 http://archive.ubuntu.com/ubuntu/ trusty-updates/main git-man all 1:1.9.1-1ubuntu0.1 [698 kB] ==> default: Get:4 https://get.docker.com/ubuntu/ docker/main lxc-docker-1.5.0 amd64 1.5.0 [4,632 kB] ==> default: Get:5 http://archive.ubuntu.com/ubuntu/ trusty-updates/main git amd64 1:1.9.1-1ubuntu0.1 [2,627 kB] ==> default: Get:6 https://get.docker.com/ubuntu/ docker/main lxc-docker amd64 1.5.0 [2,092 B] ==> default: Get:7 http://archive.ubuntu.com/ubuntu/ trusty/main cgroup-lite all 1.9 [3,918 B] ==> default: dpkg-preconfigure: unable to re-open stdin: No such file or directory ==> default: Fetched 8,077 kB in 23s (349 kB/s) ==> default: Selecting previously unselected package aufs-tools. ==> default: (Reading database ... 76454 files and directories currently installed.) ==> default: Preparing to unpack .../aufs-tools_1%3a3.2+20130722-1.1_amd64.deb ... ==> default: Unpacking aufs-tools (1:3.2+20130722-1.1) ... ==> default: Selecting previously unselected package liberror-perl. ==> default: Preparing to unpack .../liberror-perl_0.17-1.1_all.deb ... 
==> default: Unpacking liberror-perl (0.17-1.1) ... ==> default: Selecting previously unselected package git-man. ==> default: Preparing to unpack .../git-man_1%3a1.9.1-1ubuntu0.1_all.deb ... ==> default: Unpacking git-man (1:1.9.1-1ubuntu0.1) ... ==> default: Selecting previously unselected package git. ==> default: Preparing to unpack .../git_1%3a1.9.1-1ubuntu0.1_amd64.deb ... ==> default: Unpacking git (1:1.9.1-1ubuntu0.1) ... ==> default: Selecting previously unselected package cgroup-lite. ==> default: Preparing to unpack .../cgroup-lite_1.9_all.deb ... ==> default: Unpacking cgroup-lite (1.9) ... ==> default: Selecting previously unselected package lxc-docker-1.5.0. ==> default: Preparing to unpack .../lxc-docker-1.5.0_1.5.0_amd64.deb ... ==> default: Unpacking lxc-docker-1.5.0 (1.5.0) ... ==> default: Selecting previously unselected package lxc-docker. ==> default: Preparing to unpack .../lxc-docker_1.5.0_amd64.deb ... ==> default: Unpacking lxc-docker (1.5.0) ... ==> default: Processing triggers for man-db (2.6.7.1-1ubuntu1) ... ==> default: Processing triggers for ureadahead (0.100.0-16) ... ==> default: Setting up aufs-tools (1:3.2+20130722-1.1) ... ==> default: Setting up liberror-perl (0.17-1.1) ... ==> default: Setting up git-man (1:1.9.1-1ubuntu0.1) ... ==> default: Setting up git (1:1.9.1-1ubuntu0.1) ... ==> default: Setting up cgroup-lite (1.9) ... ==> default: cgroup-lite start/running ==> default: Setting up lxc-docker-1.5.0 (1.5.0) ... ==> default: docker start/running, process 5186 ==> default: Processing triggers for ureadahead (0.100.0-16) ... ==> default: Setting up lxc-docker (1.5.0) ... ==> default: Processing triggers for libc-bin (2.19-0ubuntu6.6) ... 
==> default: Pulling repository XXX/mdm ==> default: 3e2418a2e608: Pulling image (v1) from XXX/mdm ==> default: 3e2418a2e608: Pulling image (v1) from XXX/mdm, endpoint: https://registry-1.docker.io/v1/ ==> default: 3e2418a2e608: Pulling dependent layers ==> default: 511136ea3c5a: Pulling metadata ==> default: 511136ea3c5a: Pulling fs layer ==> default: 511136ea3c5a: Download complete ==> default: 27d47432a69b: Pulling metadata ==> default: 27d47432a69b: Pulling fs layer ==> default: 27d47432a69b: Download complete ==> default: 5f92234dcf1e: Pulling metadata ==> default: 5f92234dcf1e: ==> default: Pulling fs layer ==> default: 5f92234dcf1e: Download complete ==> default: 51a9c7c1f8bb: Pulling metadata ==> default: 51a9c7c1f8bb: Pulling fs layer ==> default: 51a9c7c1f8bb: Download complete ==> default: 5ba9dab47459: ==> default: Pulling metadata ==> default: 5ba9dab47459: Pulling fs layer ==> default: 5ba9dab47459: Download complete ==> default: a806c63d1e4d: Pulling metadata ==> default: a806c63d1e4d: Pulling fs layer ==> default: a806c63d1e4d: Download complete ==> default: a8328f6f348a: Pulling metadata ==> default: a8328f6f348a: Pulling fs layer ==> default: a8328f6f348a: Download complete ==> default: 54085386062b: Pulling metadata ==> default: 54085386062b: Pulling fs layer ==> default: 54085386062b: Download complete ==> default: f1c759c3a4b5: Pulling metadata ==> default: f1c759c3a4b5: Pulling fs layer ==> default: f1c759c3a4b5: Download complete ==> default: 763e9222a24f: Pulling metadata ==> default: 763e9222a24f: Pulling fs layer ==> default: 763e9222a24f: Download complete ==> default: bd30865240be: Pulling metadata ==> default: bd30865240be: Pulling fs layer ==> default: bd30865240be: Download complete ==> default: 83cec26d130b: Pulling metadata ==> default: 83cec26d130b: Pulling fs layer ==> default: 83cec26d130b: Download complete ==> default: ffe4981287bc: Pulling metadata ==> default: ffe4981287bc: Pulling fs layer ==> default: ffe4981287bc: Download 
complete ==> default: 849953a4d24a: Pulling metadata ==> default: 849953a4d24a: Pulling fs layer ==> default: 849953a4d24a: Download complete ==> default: da3dc9aa88ea: Pulling metadata ==> default: da3dc9aa88ea: Pulling fs layer ==> default: da3dc9aa88ea: Download complete ==> default: a81d5a840c5e: Pulling metadata ==> default: a81d5a840c5e: Pulling fs layer ==> default: a81d5a840c5e: Download complete ==> default: 65a05ff6ff98: Pulling metadata ==> default: 65a05ff6ff98: Pulling fs layer ==> default: 65a05ff6ff98: Download complete ==> default: 679c4ce72e12: Pulling metadata ==> default: 679c4ce72e12: Pulling fs layer ==> default: 679c4ce72e12: Download complete ==> default: 32d35259ec14: Pulling metadata ==> default: 32d35259ec14: Pulling fs layer ==> default: 32d35259ec14: Download complete ==> default: f7c8c8c92180: Pulling metadata ==> default: f7c8c8c92180: Pulling fs layer ==> default: f7c8c8c92180: Download complete ==> default: 6271cc67a45f: Pulling metadata ==> default: 6271cc67a45f: Pulling fs layer ==> default: 6271cc67a45f: Download complete ==> default: 71d449bc0252: Pulling metadata ==> default: 71d449bc0252: Pulling fs layer ==> default: 71d449bc0252: Download complete ==> default: 3e2418a2e608: Pulling metadata ==> default: 3e2418a2e608: Pulling fs layer ==> default: 3e2418a2e608: Download complete ==> default: 3e2418a2e608: Download complete ==> default: Status: Downloaded newer image for XXX/mdm:v1 macs-air:vagrant mac$ vagrant ssh Welcome to Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-46-generic x86_64)

System information as of Mon Mar 9 00:50:26 UTC 2015

System load:  0.83               Processes:           101
Usage of /:   2.8% of 39.34GB    Users logged in:     0
Memory usage: 11%                IP address for eth0: 10.0.2.15
Swap usage:   0%

Graph this data and manage this system at: https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest: http://www.ubuntu.com/business/services/cloud

0 packages can be updated. 0 updates are security updates.

[email protected]:$ sudo su -
[email protected]:# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
[email protected]:# docker images
REPOSITORY   TAG   IMAGE ID       CREATED        VIRTUAL SIZE
XXX/mdm      v1    3e2418a2e608   14 hours ago   625.8 MB
[email protected]:# ls -lth /var/lib/docker/devicemapper/devicemapper
total 1.1G
-rw------- 1 root root 100G Mar 9 01:06 data
-rw------- 1 root root 2.0G Mar 9 01:06 metadata
[email protected]:~# ls -lth /var/lib/docker/
total 44K
-rw------- 1 root root 108 Mar 9 01:06 repositories-devicemapper
drwx------ 26 root root 4.0K Mar 9 01:06 graph
drwx------ 5 root root 4.0K Mar 9 00:54 devicemapper
drwx------ 3 root root 4.0K Mar 9 00:54 execdriver
drwx------ 2 root root 4.0K Mar 9 00:54 init
-rw-r--r-- 1 root root 5.0K Mar 9 00:54 linkgraph.db
drwx------ 2 root root 4.0K Mar 9 00:54 tmp
drwx------ 2 root root 4.0K Mar 9 00:54 trust
drwx------ 2 root root 4.0K Mar 9 00:54 volumes
drwx------ 2 root root 4.0K Mar 9 00:54 containers
#+END_EXAMPLE
** DONE docker do the directory mapping
CLOSED: [2015-03-08 Sun 05:51]
docker run -d -v /root/code:/root/code -p 2200:22 -p 18080:8080 -p 18000:80 XXX/mdm:v1 /usr/sbin/sshd -D
** DONE docker apply chef: jenkins job
CLOSED: [2015-03-08 Sun 06:39]
cat > /root/solo.rb <<EOF
cookbook_path ['/root/code/cookbooks', '/root/code/common_cookbooks']
EOF

cat > /root/node.json <<EOF
{ "run_list": ["recipe[jenkins-mdm]"] }
EOF

chef-solo --config /root/solo.rb --log_level auto --force-formatter --no-color --json-attributes /root/node.json

#+BEGIN_EXAMPLE
[email protected]:/code# ls -lth /code
total 36K
drwxr-xr-x 79 1000 1000 4.0K Mar 8 10:05 common_cookbooks
-rw-r--r-- 1 1000 1000 31 Mar 8 09:29 README.md
drwxr-xr-x 2 1000 1000 4.0K Mar 8 09:29 misc
-rw-r--r-- 1 1000 1000 12K Mar 8 09:29 LICENSE
-rw-r--r-- 1 1000 1000 168 Mar 8 09:29 Makefile
drwxr-xr-x 4 1000 1000 4.0K Mar 8 09:29 image_template
drwxr-xr-x 8 1000 1000 4.0K Mar 8 09:29 cookbooks
#+END_EXAMPLE
** DONE docker run fail: docker ps -a
CLOSED: [2015-03-08 Sun 23:39]
Error response from daemon: Error running DeviceCreate (createSnapDevice) dm_task_run failed
#+BEGIN_EXAMPLE
[email protected]:# docker info
Containers: 7
Images: 23
Storage Driver: devicemapper
 Pool Name: docker-0:34-122-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: nfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 1.126 GB
 Data Space Total: 6.442 GB
 Metadata Space Used: 9.282 MB
 Metadata Space Total: 524.3 MB
 Udev Sync Supported: false
 Data loop file: /root/vagrant/docker/devicemapper/devicemapper/data
 Metadata loop file: /root/vagrant/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Kernel Version: 3.13.0-46-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 2
Total Memory: 993.9 MiB
Name: aio-ubuntu-1404
ID: HHHA:OQPR:35EU:6IUR:QVCJ:ZRRK:L7IH:KJMZ:5DMK:4R26:HB3F:347F
WARNING: No swap limit support
[email protected]:# docker run -d -v /root/vagrant:/root/vagrant -p 2200:22 -p 18080:8080 -p 18000:80 XXX/mdm:v1 /usr/sbin/sshd -D
FATA[0000] Error response from daemon: Error running DeviceCreate (createSnapDevice) dm_task_run failed
[email protected]:#
#+END_EXAMPLE
** DONE [#A] docker run help
CLOSED: [2015-03-13 Fri 22:19]
#+BEGIN_EXAMPLE
macs-MacBook-Air:org_data mac$ docker run help
Unable to find image 'help:latest' locally
Pulling repository help
C-c C-c
macs-MacBook-Air:org_data mac$ docker help run

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

  -a, --attach=[]            Attach to STDIN, STDOUT or STDERR
  --add-host=[]              Add a custom host-to-IP mapping (host:ip)
  -c, --cpu-shares=0         CPU shares (relative weight)
  --cap-add=[]               Add Linux capabilities
  --cap-drop=[]              Drop Linux capabilities
  --cidfile=""               Write the container ID to the file
  --cpuset=""                CPUs in which to allow execution (0-3, 0,1)
  -d, --detach=false         Detached mode: run the container in the background and print the new container ID
  --device=[]                Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)
  --dns=[]                   Set custom DNS servers
  --dns-search=[]            Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)
  -e, --env=[]               Set environment variables
  --entrypoint=""            Overwrite the default ENTRYPOINT of the image
  --env-file=[]              Read in a line delimited file of environment variables
  --expose=[]                Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host
  -h, --hostname=""          Container host name
  -i, --interactive=false    Keep STDIN open even if not attached
  --ipc=""                   Default is to create a private IPC namespace (POSIX SysV IPC) for the container
                               'container:<name|id>': reuses another container shared memory, semaphores and message queues
                               'host': use the host shared memory, semaphores and message queues inside the container.
                               Note: the host mode gives the container full access to local shared memory and is therefore considered insecure.
  --link=[]                  Add link to another container in the form of name:alias
  --lxc-conf=[]              (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
  -m, --memory=""            Memory limit (format: <number><unit>, where unit = b, k, m or g)
  --mac-address=""           Container MAC address (e.g. 92:d0:c6:0a:29:33)
  --name=""                  Assign a name to the container
  --net="bridge"             Set the Network mode for the container
                               'bridge': creates a new network stack for the container on the docker bridge
                               'none': no networking for this container
                               'container:<name|id>': reuses another container network stack
                               'host': use the host network stack inside the container.
                               Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.
  -P, --publish-all=false    Publish all exposed ports to the host interfaces
  -p, --publish=[]           Publish a container's port to the host
                               format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
                               (use 'docker port' to see the actual mapping)
  --privileged=false         Give extended privileges to this container
  --restart=""               Restart policy to apply when a container exits (no, on-failure[:max-retry], always)
  --rm=false                 Automatically remove the container when it exits (incompatible with -d)
  --security-opt=[]          Security Options
  --sig-proxy=true           Proxy received signals to the process (non-TTY mode only). SIGCHLD, SIGSTOP, and SIGKILL are not proxied.
  -t, --tty=false            Allocate a pseudo-TTY
  -u, --user=""              Username or UID
  -v, --volume=[]            Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)
  --volumes-from=[]          Mount volumes from the specified container(s)
  -w, --workdir=""           Working directory inside the container

#+END_EXAMPLE
** DONE Why docker uses so much memory: top inside a container shows the host OS stats, not the container's
CLOSED: [2015-03-14 Sat 21:08]
#+BEGIN_EXAMPLE
root@container:~# top -n 1

top - 02:06:52 up 14 min,  2 users,  load average: 0.08, 0.16, 0.18
Tasks:   6 total,   1 running,   5 sleeping,   0 stopped,   0 zombie
%Cpu(s):  9.9 us,  3.6 sy,  0.2 ni, 84.5 id,  1.0 wa,  0.0 hi,  0.8 si,  0.0 st
KiB Mem:   2049916 total,  1864272 used,   185644 free,     9612 buffers
KiB Swap:        0 total,        0 used,        0 free.   112388 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    1 root      20   0   61360    864    180 S   0.0  0.0   0:00.07 sshd
    7 root      20   0   95084   1240    312 S   0.0  0.1   0:00.09 sshd
   45 root      20   0   18172   1216    724 S   0.0  0.1   0:00.03 bash
  180 root      20   0   95084   3968   3036 S   0.0  0.2   0:00.08 sshd
  209 root      20   0   18184   2044   1544 S   0.0  0.1   0:00.02 bash
  225 root      20   0   19868   1324    984 R   0.0  0.1   0:00.00 top
root@container:~# free -ml
             total       used       free     shared    buffers     cached
Mem:          2001       1820        181          0          9        109
Low:          2001       1820        181
High:            0          0          0
-/+ buffers/cache:       1701        300
Swap:            0          0          0
#+END_EXAMPLE
** DONE [#A] docker history XXX/mdm:v3
CLOSED: [2015-03-16 Mon 09:44]
#+BEGIN_EXAMPLE
macs-air:image_template mac$ docker history XXX/mdm:v3
IMAGE         CREATED             CREATED BY                                      SIZE
19d1769259ad  3 minutes ago       /bin/sh -c #(nop) CMD [/usr/sbin/sshd -D]       0 B
22dddd4f906e  3 minutes ago       /bin/sh -c #(nop) EXPOSE map[22/tcp:{}]         0 B
3b764e97ce3c  4 minutes ago       /bin/sh -c bash -e /root/mdmdevops/misc/updat   846.9 MB
ccaf93baa82c  31 minutes ago      /bin/sh -c apt-get -yqq install git && cd       519.7 kB
90b43b734276  32 minutes ago      /bin/sh -c apt-get -yqq install build-essenti   366 MB
80f943d5fc79  About an hour ago   /bin/sh -c apt-get -yqq update && mkdir -p      220.3 MB
c876f1cd0500  About an hour ago   /bin/sh -c #(nop) MAINTAINER TOTVS Labs <denn   0 B
2103b00b3fdf  5 days ago          /bin/sh -c #(nop) CMD [/bin/bash]               0 B
4faa69f72743  5 days ago          /bin/sh -c sed -i 's/^#\s*(deb.universe)$/      1.895 kB
76b658ecb564  5 days ago          /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic   194.5 kB
f0dde87450ec  5 days ago          /bin/sh -c #(nop) ADD file:a2d97c73fb08b9738c   188.1 MB
511136ea3c5a  21 months ago                                                       0 B
#+END_EXAMPLE
#+BEGIN_EXAMPLE
macs-air:image_template mac$ docker history XXX/mdm:v4
IMAGE         CREATED          CREATED BY                                      SIZE
80ca99f8da09  4 minutes ago    /bin/sh -c #(nop) CMD [/usr/sbin/sshd -D]       0 B
3f4601894390  4 minutes ago    /bin/sh -c #(nop) EXPOSE map[22/tcp:{}]         0 B
a1765548e66b  4 minutes ago    /bin/sh -c bash -e /root/mdmdevops/misc/updat   613.4 MB
491d8f53fc0e  9 minutes ago    /bin/sh -c cd /root && git clone [email protected]      512.9 kB
93608390b054  9 minutes ago    /bin/sh -c gem install berkshelf --no-ri --no   193.8 MB
a14191b7b2cc  19 minutes ago   /bin/sh -c curl -L https://getchef.com/chef/i   154.8 MB
ccfcc9c77afa  19 minutes ago   /bin/sh -c apt-get -yqq update && mkdir -p      204 MB
c876f1cd0500  2 hours ago      /bin/sh -c #(nop) MAINTAINER TOTVS Labs <denn   0 B
2103b00b3fdf  5 days ago       /bin/sh -c #(nop) CMD [/bin/bash]               0 B
4faa69f72743  5 days ago       /bin/sh -c sed -i 's/^#\s(deb.*universe)$/      1.895 kB
76b658ecb564  5 days ago       /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic   194.5 kB
f0dde87450ec  5 days ago       /bin/sh -c #(nop) ADD file:a2d97c73fb08b9738c   188.1 MB
511136ea3c5a  21 months ago                                                    0 B
#+END_EXAMPLE
** DONE Optimizing Docker Images
CLOSED: [2015-03-19 Thu 11:07]
http://www.centurylinklabs.com/optimizing-docker-images/
** DONE docker sysctl change fail
CLOSED: [2015-04-18 Sat 08:15]
http://stackoverflow.com/questions/23840737/how-to-remount-the-proc-filesystem-in-a-docker-as-a-r-w-system
http://tonybai.com/2014/10/14/discussion-on-the-approach-to-modify-system-variables-in-docker/

Docker base images are extremely stripped down; there is not even an init process, so the step that normally applies system variables at OS boot time (sysctl -p) is skipped, and those variables keep the kernel's default values. Take CentOS as an example: after the Linux kernel boots, init runs /etc/rc.d/rc.sysinit, which loads the system variable values from /etc/sysctl.conf. Below is an excerpt of the rc.sysinit code from CentOS 5.6:

In non-privileged mode, system variables inside a Docker container currently cannot be modified at all (I am on Docker 1.2.0). This is different from files such as resolv.conf and hosts, which are mapped to the corresponding files on the host.

You don't. sysctl values are not confined to the container - they affect the whole system, so it's not appropriate for a container to be able to change them.
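A later note, not part of the original answer: Docker 1.12 added a --sysctl flag for sysctls that are namespaced per container (kernel.shm*, kernel.msg*, kernel.sem, fs.mqueue.*, net.*), which sidesteps the read-only /proc/sys problem for those keys. A dry-run sketch that only echoes the command, since actually running it needs a docker daemon; the value is the one from the example below:

```shell
# Dry-run sketch: per-container namespaced sysctl (Docker >= 1.12).
# kernel.shmmax is IPC-namespaced, so it can be set per container.
cmd="docker run -d --sysctl kernel.shmmax=17179869184 ubuntu:14.04 sleep 60"
echo "$cmd"
```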

If you want to set a higher value for this variable, do that from the host system.
#+BEGIN_EXAMPLE
root@container:/# sysctl -w kernel.shmmax=17179869184
sysctl: setting key "kernel.shmmax": Read-only file system
#+END_EXAMPLE
** DONE docker fail to run commands: Error opening terminal: unknown: export TERM=xterm
CLOSED: [2016-05-15 Sun 09:45]
export TERM=xterm

https://andykdocs.de/development/Docker/Fixing+the+Docker+TERM+variable+issue
#+BEGIN_EXAMPLE
root@container:/etc/monit/conf.d# watch monit status
Error opening terminal: unknown.
#+END_EXAMPLE
** DONE docker couchbase fail to start: docker container's ip changed after reboot
CLOSED: [2016-06-08 Wed 20:03]
** DONE docker: start a container with a specific ip
CLOSED: [2016-06-08 Wed 20:17]
http://stackoverflow.com/questions/27937185/assign-static-ip-to-docker-container

docker stop my-test
docker rm my-test

docker network create --subnet=172.18.0.0/16 mynet123
docker run -t -d --privileged -h mytest --name my-test --net=mynet123 --ip 172.18.0.22 denny/sshd:v1 /usr/sbin/sshd -D
docker exec -it my-test ifconfig eth0

docker stop my-test
docker start my-test
docker exec -it my-test ifconfig eth0
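From the host side, docker inspect with a Go-template format string is a handier way than ifconfig to confirm the address survives a restart. A dry-run sketch (echo only; container and network names are the ones used above):

```shell
# Dry-run sketch: print the host-side commands that verify the static IP
# before and after a restart, using docker inspect's -f template flag.
for c in \
  "docker inspect -f '{{.NetworkSettings.Networks.mynet123.IPAddress}}' my-test" \
  "docker restart my-test" \
  "docker inspect -f '{{.NetworkSettings.Networks.mynet123.IPAddress}}' my-test"
do
  echo "$c"
done
```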

#+BEGIN_EXAMPLE
Easy with Docker version 1.10.1, build 9e83765.

First you need to create your own docker network (mynet123)

docker network create --subnet=172.18.0.0/16 mynet123

then simply run the image (I'll take ubuntu as example)

docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash

then in the ubuntu shell

ip addr

Additionally you could use

--hostname to specify a hostname
--add-host to add more entries to /etc/hosts
#+END_EXAMPLE
** DONE docker: use squid to speed up the test by http proxy
CLOSED: [2015-05-12 Tue 09:20]

driver:
  name: docker
driver_config:
  http_proxy: http://10.165.4.67:3128
  https_proxy: https://10.165.4.67:3128
  instance_name: "all-in-one"
  use_sudo: false
  privileged: true
  tls_verify: true
  tls_cacert: /Users/mac/Dropbox/private_data/project/docker/docker_tls_XXX/ca.pem
  tls_cert: /Users/mac/Dropbox/private_data/project/docker/docker_tls_XXX/cert.pem
  tls_key: /Users/mac/Dropbox/private_data/project/docker/docker_tls_XXX/key.pem
  socket: tcp://10.165.4.67:4243
  provision_command: "curl -L https://www.opscode.com/chef/install.sh | bash"

forward:
  - 5022:22

volume: /home/denny/cache:/var/chef/cache/

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-14.04

suites:
  - name: default
    run_list:
      - recipe[apt::default]
      - recipe[all-in-one::default]
    attributes: {os_basic: {enable_firewall: '0'}}
** DONE docker cache directory
CLOSED: [2015-05-12 Tue 09:21]

mkdir /home/denny/cache
#+BEGIN_EXAMPLE

driver:
  name: docker
driver_config:
  http_proxy: http://10.165.4.67:3128
  https_proxy: https://10.165.4.67:3128
  instance_name: "all-in-one"
  use_sudo: false
  privileged: true
  tls_verify: true
  tls_cacert: /Users/mac/Dropbox/private_data/project/docker/docker_tls_XXX/ca.pem
  tls_cert: /Users/mac/Dropbox/private_data/project/docker/docker_tls_XXX/cert.pem
  tls_key: /Users/mac/Dropbox/private_data/project/docker/docker_tls_XXX/key.pem
  socket: tcp://10.165.4.67:4243
  provision_command: "curl -L https://www.opscode.com/chef/install.sh | bash"

forward:
  - 5022:22

volume: /home/denny/cache:/var/chef/cache/
#+END_EXAMPLE
** DONE docker rename the instance name
CLOSED: [2015-05-12 Tue 15:02]
docker run -d -t --privileged --name denny-sandbox -p 7022:22 denny/dennysandbox:latest /usr/sbin/sshd -D

https://groups.google.com/forum/#!topic/docker-dev/8vhmtyjqjME
I would stop the container and run a new container off the same image with a different --name parameter to docker run.
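Worth adding alongside the stop-and-recreate advice: Docker 1.5 introduced a built-in rename subcommand, so changing the name no longer requires recreating the container. A dry-run sketch (echo only; the new name is hypothetical):

```shell
# Dry-run sketch: docker rename (Docker >= 1.5) changes only the container
# name; the container, its data and its port mappings are untouched.
old=denny-sandbox
new=denny-sandbox2   # hypothetical new name
echo "docker rename $old $new"
```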

http://stackoverflow.com/questions/19035358/how-to-copy-and-rename-a-docker-container
** DONE docker container ip: 172.17.42.1
CLOSED: [2015-05-19 Tue 11:02]
** DONE docker: client and server don't have same version
CLOSED: [2015-05-24 Sun 18:36]
boot2docker upgrade
http://blog.zedroot.org/error-response-from-daemon-client-and-server-dont-have-same-version/
#+BEGIN_EXAMPLE
MacPro:~ mac$ docker pull ubuntu:14.04
FATA[0000] Error response from daemon: client and server don't have same version (client : 1.18, server: 1.16)

MacPro:org_data mac$ docker --version
Docker version 1.6.2, build 7c8fca2
MacPro:org_data mac$ boot2docker version
Boot2Docker-cli version: v1.6.2
Git commit: cb2c3bc
#+END_EXAMPLE
** DONE [#A] nfs mount issue to bypass docker issue: mdm fail to start at couchbase
CLOSED: [2015-05-25 Mon 23:58]
[5/25/15, 9:32:15 PM] kungchaowang: for that machine, in order to increase performance I have done this to the fstab file:

/dev/sdb1 /data ext4 noatime,data=writeback,barrier=0 0 0
[5/25/15, 9:32:23 PM] denny: In that docker container, mdm service successfully starts.
[5/25/15, 9:32:29 PM] kungchaowang: so, can you use that /data/ folder in docker for couchbase database
[5/25/15, 9:32:51 PM] denny: Got it. I will try it, when it's up tomorrow.
[5/25/15, 9:32:57 PM] kungchaowang: it's up now
[5/25/15, 9:33:06 PM] kungchaowang: as I went into office to restart it
[5/25/15, 9:33:17 PM] denny: Yes, it's up.
[5/25/15, 9:33:20 PM] kungchaowang: you can ssh into 10.165.4.67
[5/25/15, 9:33:52 PM] kungchaowang: so, please configure couchbase to put database on that host /data folder
[5/25/15, 9:34:02 PM] kungchaowang: see if that error would go away
** DONE [#A] manually start docker
CLOSED: [2015-07-29 Wed 13:59]
export DOCKER_OPTS="-g /data/docker/ --tlsverify --tlscacert=/root/docker/ca.pem --tlscert=/root/docker/server-cert.pem --tlskey=/root/docker/server-key.pem -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"

docker -d
** DONE [#A] setup docker server with daemon tcp port :IMPORTANT:
CLOSED: [2015-05-26 Tue 15:40]
http://docs.docker.com/articles/https/
*** Install docker
wget -qO- https://get.docker.com/ | sh
*** generate SSL certificate for docker
mkdir -p /root/docker/

start TLS certificate for docker

cd /root/docker

openssl genrsa -aes256 -out ca-key.pem 2048

password: password1

openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
#+BEGIN_EXAMPLE
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:MA
Locality Name (eg, city) []:Boston
Organization Name (eg, company) [Internet Widgits Pty Ltd]:OSC
Organizational Unit Name (eg, section) []:cloud
Common Name (e.g. server FQDN or YOUR name) []:www.oscgc.com
Email Address []:[email protected]
#+END_EXAMPLE

openssl genrsa -out server-key.pem 2048
openssl req -subj "/CN=www.oscgc.com" -new -key server-key.pem -out server.csr

TODO: change below to right ip

echo subjectAltName = IP:172.17.42.1,IP:172.17.0.1,IP:172.18.42.1,IP:172.18.0.1,IP:123.57.240.189,IP:127.0.0.1 > extfile.cnf

openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf

client

openssl genrsa -out key.pem 2048
openssl req -subj '/CN=client' -new -key key.pem -out client.csr

echo extendedKeyUsage = clientAuth > extfile2.cnf

Now sign the public key:

openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile2.cnf

chmod -v 0400 ca-key.pem key.pem server-key.pem

scp -r root@XXX:/root/docker/ /Users/mac/Dropbox/private_data/project/docker/docker_tls_oscgc/

*** docker start conf
cat > /etc/default/docker <<EOF
# Docker Upstart and SysVinit configuration file

# Customize location of Docker binary (especially for development testing).
#DOCKER="/usr/local/bin/docker"

# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--tlsverify --tlscacert=/root/docker/ca.pem --tlscert=/root/docker/server-cert.pem --tlskey=/root/docker/server-key.pem -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"

# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"

# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"
EOF

cd /root/docker
docker --tlsverify --tlscacert=/root/docker/ca.pem --tlscert=/root/docker/server-cert.pem --tlskey=/root/docker/server-key.pem -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock

restart docker

service docker restart
ps -ef | grep docker

start docker

/usr/bin/docker --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d

HOST="52.74.24.59"
scp -r denny@$HOST:/home/denny/docker/* ~/docker/
cd ~/docker

docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=$HOST:4243 images

docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=52.74.24.59:4243 images
*** DONE [#A] Docker daemon Socket with HTTPS :IMPORTANT:
CLOSED: [2015-05-11 Mon 14:37]
https://docs.docker.com/engine/security/https/
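Rather than repeating the three tls flags on every client call, the client can be made secure by default through environment variables. A sketch: the host and port are the ones used above, and it assumes the client ca.pem/cert.pem/key.pem were already copied into ~/.docker, the default certificate location.

```shell
# Sketch: make TLS the client-side default so a plain `docker images` works.
mkdir -p ~/.docker
# cp ca.pem cert.pem key.pem ~/.docker/   # copy client credentials first
export DOCKER_HOST=tcp://52.74.24.59:4243
export DOCKER_TLS_VERIFY=1
echo "DOCKER_HOST=$DOCKER_HOST DOCKER_TLS_VERIFY=$DOCKER_TLS_VERIFY"
```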

  • If you need Docker to be reachable via the network in a safe manner, you can enable TLS by specifying the tlsverify flag and pointing Docker's tlscacert flag to a trusted CA certificate.

  • In the daemon mode, it will only allow connections from clients authenticated by a certificate signed by that CA. In the client mode, it will only connect to servers with a certificate signed by that CA.

http://blog.trifork.com/2013/12/24/docker-from-a-distance-the-remote-api/
http://sheerun.net/2014/05/17/remote-access-to-docker-with-tls/

/usr/bin/docker -H tcp://127.0.0.1:4243 -d
/usr/bin/docker -d -H tcp://0.0.0.0:4243 --tlsverify=false

DOCKER_OPTS="--tlsverify -H=unix:///var/run/docker.sock -H=0.0.0.0:4243 --tlscacert=/root/.docker/ca.pem --tlscert=/root/.docker/cert.pem --tlskey=/root/.docker/key.pem"

docker --tlsverify=false -H tcp://www.dennyzhang.com:4243 images
**** DONE fail to start: --tlsverify, and ip port
CLOSED: [2015-05-11 Mon 12:48]
#+BEGIN_EXAMPLE
root@host:~# /usr/bin/docker -d -H tcp://127.0.0.1:4243
INFO[0000] +job serveapi(tcp://127.0.0.1:4243)
INFO[0000] Listening for HTTP on tcp (127.0.0.1:4243)
INFO[0000] /!\ DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\
INFO[0000] +job init_networkdriver()
INFO[0000] -job init_networkdriver() = OK (0)
WARN[0000] Your kernel does not support cgroup swap limit.
INFO[0000] Loading containers: start.
INFO[0000] Loading containers: done.
INFO[0000] docker daemon: 1.6.1 97cd073; execdriver: native-0.2; graphdriver: aufs
INFO[0000] +job acceptconnections()
INFO[0000] -job acceptconnections() = OK (0)
INFO[0000] Daemon has completed initialization
#+END_EXAMPLE
**** fail to start
http://stackoverflow.com/questions/28421391/whats-the-fastest-way-to-migrate-from-boot2docker-to-vagrantnfs-on-mac-os-x
#+BEGIN_EXAMPLE
MacPro:linux-basic mac$ docker --tlsverify=false -H tcp://www.dennyzhang.com:4243 images
FATA[0001] An error occurred trying to connect: Get https://www.dennyzhang.com:4243/v1.16/images/json: tls: oversized record received with length 20527
#+END_EXAMPLE

http://docs.docker.com/articles/https/
https://mistio.zendesk.com/hc/en-us/articles/201544379-Adding-a-Docker-engine
http://blog.tutum.co/2013/11/23/remote-and-secure-use-of-docker-api-with-python-part-ii/
**** tls: oversized record received with length 20527
/usr/bin/docker --tlsverify=false -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d

https://github.com/fnichol/dvm/issues/47

https://github.com/docker/machine/issues/26

https://groups.google.com/forum/#!topic/docker-user/lYl650-Y8ok
http://segmentfault.com/q/1010000000768007

http://rogerhacks.blogspot.com/2015/01/getting-docker-to-work-on-os-x.html
**** [#A] web page: Running Docker with HTTPS - Docker Documentation
http://docs.docker.com/articles/https/
***** webcontent :noexport:
#+begin_example
Location: http://docs.docker.com/articles/https/

Protecting the Docker daemon Socket with HTTPS

By default, Docker runs via a non-networked Unix socket. It can also optionally communicate using a HTTP socket.

If you need Docker to be reachable via the network in a safe manner, you can enable TLS by specifying the tlsverify flag and pointing Docker's tlscacert flag to a trusted CA certificate.

In the daemon mode, it will only allow connections from clients authenticated by a certificate signed by that CA. In the client mode, it will only connect to servers with a certificate signed by that CA.

Warning: Using TLS and managing a CA is an advanced topic. Please familiarize yourself with
OpenSSL, x509 and TLS before using it in production.

Warning: These TLS commands will only generate a working set of certificates on Linux. Mac OS X
comes with a version of OpenSSL that is incompatible with the certificates that Docker
requires.

Create a CA, server and client keys with OpenSSL

Note: replace all instances of $HOST in the following example with the DNS name of your Docker
daemon's host.

First generate CA private and public keys:

$ openssl genrsa -aes256 -out ca-key.pem 2048 Generating RSA private key, 2048 bit long modulus ......+++ ...............+++ e is 65537 (0x10001) Enter pass phrase for ca-key.pem: Verifying - Enter pass phrase for ca-key.pem: $ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem Enter pass phrase for ca-key.pem: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank.

Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]:Queensland Locality Name (eg, city) []:Brisbane Organization Name (eg, company) [Internet Widgits Pty Ltd]:Docker Inc Organizational Unit Name (eg, section) []:Boot2Docker Common Name (e.g. server FQDN or YOUR name) []:$HOST Email Address []:[email protected]

Now that we have a CA, you can create a server key and certificate signing request (CSR). Make sure that "Common Name" (i.e., server FQDN or YOUR name) matches the hostname you will use to connect to Docker:

Note: replace all instances of $HOST in the following example with the DNS name of your Docker
daemon's host.

$ openssl genrsa -out server-key.pem 2048 Generating RSA private key, 2048 bit long modulus ......................................................+++ ............................................+++ e is 65537 (0x10001) $ openssl req -subj "/CN=$HOST" -new -key server-key.pem -out server.csr

Next, we're going to sign the public key with our CA:

Since TLS connections can be made via IP address as well as DNS name, they need to be specified when creating the certificate. For example, to allow connections using 10.10.10.20 and 127.0.0.1:

$ echo subjectAltName = IP:10.10.10.20,IP:127.0.0.1 > extfile.cnf

$ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem
-CAcreateserial -out server-cert.pem -extfile extfile.cnf Signature ok subject=/CN=your.host.com Getting CA Private Key Enter pass phrase for ca-key.pem:

For client authentication, create a client key and certificate signing request:

$ openssl genrsa -out key.pem 2048 Generating RSA private key, 2048 bit long modulus ...............................................+++ ...............................................................+++ e is 65537 (0x10001) $ openssl req -subj '/CN=client' -new -key key.pem -out client.csr

To make the key suitable for client authentication, create an extensions config file:

$ echo extendedKeyUsage = clientAuth > extfile.cnf

Now sign the public key:

$ openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem
-CAcreateserial -out cert.pem -extfile extfile.cnf Signature ok subject=/CN=client Getting CA Private Key Enter pass phrase for ca-key.pem:

After generating cert.pem and server-cert.pem you can safely remove the two certificate signing requests:

$ rm -v client.csr server.csr

With a default umask of 022, your secret keys will be world-readable and writable for you and your group.

In order to protect your keys from accidental damage, you will want to remove their write permissions. To make them only readable by you, change file modes as follows:

$ chmod -v 0400 ca-key.pem key.pem server-key.pem

Certificates can be world-readable, but you might want to remove write access to prevent accidental damage:

$ chmod -v 0444 ca.pem server-cert.pem cert.pem

Now you can make the Docker daemon only accept connections from clients providing a certificate trusted by our CA:

$ docker -d --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem
-H=0.0.0.0:2376

To be able to connect to Docker and validate its certificate, you now need to provide your client keys, certificates and trusted CA:

Note: replace all instances of $HOST in the following example with the DNS name of your Docker
daemon's host.

$ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem
-H=$HOST:2376 version

Note: Docker over TLS should run on TCP port 2376.

Warning: As shown in the example above, you don't have to run the docker client with sudo or
the docker group when you use certificate authentication. That means anyone with the keys can
give any instructions to your Docker daemon, giving them root access to the machine hosting the
daemon. Guard these keys as you would a root password!

Secure by default

If you want to secure your Docker client connections by default, you can move the files to the .docker directory in your home directory -- and set the DOCKER_HOST and DOCKER_TLS_VERIFY variables as well (instead of passing -H=tcp://$HOST:2376 and --tlsverify on every call).

$ mkdir -pv ~/.docker $ cp -v {ca,cert,key}.pem ~/.docker $ export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1

Docker will now connect securely by default:

$ docker ps

Other modes

If you don't want to have complete two-way authentication, you can run Docker in various other modes by mixing the flags.

Daemon modes

  • tlsverify, tlscacert, tlscert, tlskey set: Authenticate clients
  • tls, tlscert, tlskey: Do not authenticate clients

Client modes

  • tls: Authenticate server based on public/default CA pool
  • tlsverify, tlscacert: Authenticate server based on given CA
  • tls, tlscert, tlskey: Authenticate with client certificate, do not authenticate server based on given CA
  • tlsverify, tlscacert, tlscert, tlskey: Authenticate with client certificate and authenticate server based on given CA

If found, the client will send its client certificate, so you just need to drop your keys into ~ /.docker/{ca,cert,key}.pem. Alternatively, if you want to store your keys in another location, you can specify that location using the environment variable DOCKER_CERT_PATH.

$ export DOCKER_CERT_PATH=~/.docker/zone1/ $ docker --tlsverify ps

Connecting to the Secure Docker port using curl

To use curl to make test API requests, you need to use three extra command line flags:

$ curl https://$HOST:2376/images/json
--cert ~/.docker/cert.pem
--key ~/.docker/key.pem
--cacert ~/.docker/ca.pem


#+end_example
**** /etc/default/docker
#+BEGIN_EXAMPLE

Docker Upstart and SysVinit configuration file

Customize location of Docker binary (especially for development testing).

#DOCKER="/usr/local/bin/docker"

Use DOCKER_OPTS to modify the daemon startup options.

DOCKER_OPTS="--tlsverify --tlscacert=/home/denny/docker/ca.pem --tlscert=/home/denny/docker/server-cert.pem --tlskey=/home/denny/docker/server-key.pem -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"

If you need Docker to use an HTTP proxy, it can also be specified here.

#export http_proxy="http://127.0.0.1:3128/"

This is also a handy place to tweak where Docker's temporary files go.

#export TMPDIR="/mnt/bigdrive/docker-tmp" #+END_EXAMPLE

** DONE [#A] docker container start with new NAT port forwarding CLOSED: [2015-05-26 Tue 17:56] http://stackoverflow.com/questions/19897743/exposing-a-port-on-a-live-docker-container

iptables -t nat -A DOCKER -p tcp --dport 8001 -j DNAT --to-destination 172.17.0.19:8000
sudo iptables -t nat -L -n

You cannot do this via Docker, but you can access the container's un-exposed port from the host machine.

If you have a container with something running on its port 8000, you can run:

wget http://container_ip:8000

To get the container's IP address, run these two commands:

docker ps

docker inspect container_name | grep IPAddress

Internally, Docker shells out to call iptables when you run an image, so maybe some variation on this will work.
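A cleaner variant uses `docker inspect --format` to print only the address (classic single-network layout; the container name is hypothetical):

```shell
# Print only the container's IP, via a Go template.
docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_name

# Equivalent text-processing fallback on the grep output:
docker inspect container_name | grep '"IPAddress"' | head -1 | sed 's/[^0-9.]//g'
```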

To expose the container's port 8000 on your localhost's port 8001:

iptables -t nat -A DOCKER -p tcp --dport 8001 -j DNAT --to-destination 172.17.0.19:8000

One way you can work this out is to set up another container with the port mapping you want, and compare the output of the iptables-save command (though I had to remove some of the other options that force traffic to go via the docker proxy).

** DONE [#B] docker check container mapped volume: docker inspect all-in-one-auth CLOSED: [2015-05-26 Tue 22:03]

#+BEGIN_EXAMPLE
[email protected]:/data/docker# docker ps
CONTAINER ID   IMAGE                 COMMAND                CREATED          STATUS          PORTS                                                                      NAMES
b02d20eda076   70bc3690bca8:latest   "/usr/sbin/sshd -D -   11 seconds ago   Up 10 seconds   0.0.0.0:32785->22/tcp                                                      all-in-one
0f4b41488636   70bc3690bca8:latest   "/usr/sbin/sshd -D -   5 minutes ago    Up 5 minutes    0.0.0.0:32784->22/tcp                                                      all-in-one-jenkins
4840a264a7f8   denny/sshd:v1         "/usr/sbin/sshd -D"    2 weeks ago      Up 12 hours     0.0.0.0:18000->18000/tcp, 0.0.0.0:4022->22/tcp, 0.0.0.0:48080->18080/tcp   clever_yonath
#+END_EXAMPLE

** DONE when start docker container, run some commands: docker exec <container_id> echo "Hello from container!" CLOSED: [2015-05-28 Thu 13:48]

In October 2014 the Docker team introduced the docker exec command: https://docs.docker.com/reference/commandline/cli/#exec

So now you can run any command in a running container just by knowing its ID:

docker exec <container_id> echo "Hello from container!"

** DONE Docker images: avoid slow internet issue: docker save/load CLOSED: [2015-06-03 Wed 23:28] http://stackoverflow.com/questions/22381442/pulling-docker-images

save    Save an image to a tar archive
load    Load an image from a tar archive

** DONE docker port forwarding for a range of ports CLOSED: [2015-06-20 Sat 19:28] http://stackoverflow.com/questions/28717464/docker-expose-all-ports-or-range-of-ports-from-7000-to-8000

docker run -d -t --privileged -p 10000-10050:10000-10050 -p 20022:22 --name denny-test denny/osc:latest /usr/sbin/sshd -D

nc -l 32769

** DONE CentOS install docker CLOSED: [2015-07-02 Thu 10:35] http://philipzheng.gitbooks.io/docker_practice/content/install/centos.html

#+BEGIN_EXAMPLE
CentOS 6

For CentOS 6, Docker can be installed from the EPEL repository with the following commands:

$ sudo yum install http://mirrors.yun-idc.com/epel/6/i386/epel-release-6-8.noarch.rpm
$ sudo yum install docker-io

CentOS 7

On CentOS 7, Docker is already included in the CentOS-Extras repository and can be installed directly:

$ sudo yum install docker

After installation, start the Docker service and enable it to start automatically at boot:

$ sudo service docker start
$ sudo chkconfig docker on
#+END_EXAMPLE

** DONE initctl too old upstart check: ERROR: version of /sbin/initctl too old CLOSED: [2015-07-16 Thu 00:17] http://stackoverflow.com/questions/28596795/initctl-too-old-upstart-check

init-checkconf /etc/init/docker-registry.conf

#+BEGIN_EXAMPLE
[email protected]:# ls -lth /sbin/initctl
lrwxrwxrwx 1 root root 9 Jun 30 21:15 /sbin/initctl -> /bin/true
[email protected]:# rm -rf /sbin/initctl && ln -s /sbin/initctl.distrib /sbin/initctl
[email protected]:~# file /sbin/initctl.distrib
/sbin/initctl.distrib: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=4ded3ddfa60029907d7b080e3b87f635fbf5844a, stripped
#+END_EXAMPLE

** DONE [#A] upstart does not work inside of a docker container: they do some magic with the init system. CLOSED: [2015-07-16 Thu 00:55] http://stackoverflow.com/questions/28055715/running-services-upstart-init-d-in-a-container

#+BEGIN_EXAMPLE
Unfortunately, upstart does not work inside of a docker container because they do some magic with the init system.

This issue explains:

If your application uses upstart, this won't fit well in bare docker images, even more so if they divert /sbin/init or /sbin/initctl to something like /bin/true or /dev/null. Your application may use service to start if it has an old-school SysV init script and if the initctl command has not been diverted.

In the case of salt-minion, on ubuntu the packaging uses an upstart job and no classical init script, so it is normal that it won't start in either case. And this one says:

Because Docker replaces the default /sbin/init with its own, there's no way to run the Upstart init inside a Docker container. #+END_EXAMPLE

** DONE docker get current Docker registry server: /root/.docker/config.json, /root/.dockercfg CLOSED: [2015-07-19 Sun 08:26]

#+BEGIN_EXAMPLE
[email protected]:# cat /root/.dockercfg
{
  "https://index.docker.io/v1/": {
    "auth": "ZGVubnk6ZmlsZWJhdDI=",
    "email": "[email protected]"
  },
  "https://www.testdocker.com:8080": {
    "auth": "bXlkb2NrZXI6ZG9ja2VycGFzc3dk",
    "email": ""
  }
}
[email protected]:# cat /root/.docker/config.json
{
  "auths": {
    "https://www.testdocker.com:8080": {
      "auth": "bXlkb2NrZXI6ZG9ja2VycGFzc3dk",
      "email": ""
    }
  }
}
#+END_EXAMPLE

** DONE 阿里云: aliyun can't install docker CLOSED: [2015-07-25 Sat 23:52] http://www.zhihu.com/question/24863856

sudo route del -net 172.16.0.0 netmask 255.240.0.0 cat /etc/network/interfaces cat /etc/network/interface

Because Aliyun adds routes for all private network address ranges by default, Docker cannot find an available IP range. Fixing this only takes one step: edit /etc/network/interfaces to remove the 172.16.0.0/12 route, then delete that route with ip route del.

#+BEGIN_EXAMPLE
[email protected]:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth1
iface eth1 inet static
address 123.57.240.189
netmask 255.255.252.0
up route add -net 0.0.0.0 netmask 0.0.0.0 gw 123.57.243.247 dev eth1

auto eth0
iface eth0 inet static
address 10.51.97.58
netmask 255.255.248.0
up route add -net 11.0.0.0 netmask 255.0.0.0 gw 10.51.103.247 dev eth0
up route add -net 192.168.0.0 netmask 255.255.0.0 gw 10.51.103.247 dev eth0
up route add -net 172.16.0.0 netmask 255.240.0.0 gw 10.51.103.247 dev eth0
up route add -net 100.64.0.0 netmask 255.192.0.0 gw 10.51.103.247 dev eth0
up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.51.103.247 dev eth0

#+END_EXAMPLE

** DONE docker remove none image CLOSED: [2015-07-25 Sat 11:28] https://meta.discourse.org/t/low-on-disk-space-cleaning-up-old-docker-containers/15792/6

docker images --no-trunc | grep none | awk '{print $3}' | xargs -r docker rmi

** DONE customize docker data directory CLOSED: [2015-07-29 Wed 12:16] http://stackoverflow.com/questions/24309526/how-to-change-the-docker-image-installation-directory

Here is an example: DOCKER_OPTS="-g /mnt/somewhere/else/docker/"
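On Debian/Ubuntu the same daemon option can be set in /etc/default/docker; a minimal sketch (the target path is illustrative and must exist before restarting the daemon):

```shell
# /etc/default/docker fragment: relocate Docker's data directory with -g.
# The path below is illustrative.
DOCKER_OPTS="-g /mnt/somewhere/else/docker/"
```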

/etc/sysconfig/docker

** DONE docker persist hostname: docker run -h $hostname CLOSED: [2015-08-11 Tue 11:23]

docker run -h oscaio -t -d --privileged -p 7022:22 denny/sshd:latest /usr/sbin/sshd -D

** [#A] Docker in Mac :IMPORTANT:

| Name                       | Summary                   |
|----------------------------+---------------------------|
| boot2docker start          |                           |
| docker ps                  | list containers           |
| docker images              | list images               |
| docker start               | Start a stopped container |
| docker stop                | Stop a running container  |
| docker attach 8d0f79ccbd2e |                           |

*** [#A] Boot2Docker: a lightweight Linux distribution to run docker on Windows and Mac OS X

It runs completely from RAM, is a small ~24 MB download and boots in ~5s (YMMV). https://github.com/boot2docker/boot2docker

  • Boot2Docker

    | Name                     | Summary    |
    |--------------------------+------------|
    | boot2docker init         | Initialize |
    | boot2docker delete       | Destroy    |
    | boot2docker start        | Start VM   |
    | $(boot2docker shellinit) |            |
    | boot2docker stop         |            |
    | docker run hello-world   |            |
    | boot2docker ip           |            |
    | boot2docker help         |            |

  • NAT networked (Docker 2375->2375 and SSH 22->2022 are forwarded to the host)

**** basic test https://github.com/boot2docker/boot2docker

If you run a container with an exposed port, and then use OS X's open command:

$ boot2docker up
$ $(boot2docker shellinit)
$ docker run --name nginx-test -d -p 80:80 nginx
$ open http://$(boot2docker ip 2>/dev/null)/
$ docker stop nginx-test
$ docker rm nginx-test

**** DONE How to login the boot2Docker VM of virtualbox CLOSED: [2015-02-05 Thu 22:55] https://github.com/boot2docker/boot2docker

  • boot2docker ssh
  • ssh docker@$(boot2docker ip), password: tcuser

**** DONE Since boot2Docker runs completely from RAM, how to persist huge data? CLOSED: [2015-02-05 Thu 22:53]

#+BEGIN_EXAMPLE Boot2Docker is essentially a remote Docker engine with a read only filesystem (other than Docker images, containers and volumes). The most scalable and portable way to share disk space between your local desktop and a Docker container is by creating a volume container and then sharing that to where it's needed. #+END_EXAMPLE

*** DONE Installing Docker on Mac OS X CLOSED: [2015-02-04 Wed 00:04] https://docs.docker.com/installation/mac/ Install virtualbox and boot2docker

*** CANCELED docker pull XXX/mdm:v3: x509: certificate has expired or is not yet valid CLOSED: [2015-03-21 Sat 10:36] https://github.com/docker/docker/issues/4507

ntpdate pool.ntp.org

#+BEGIN_EXAMPLE
[email protected]:~# docker pull XXX/mdm:v3
Pulling repository XXX/mdm
FATA[0000] Get https://index.docker.io/v1/repositories/XXX/mdm/images: x509: certificate has expired or is not yet valid
#+END_EXAMPLE

*** DONE [#A] docker pull error: certificate signed by unknown authority CLOSED: [2015-03-22 Sun 21:12] https://github.com/docker/docker/issues/9752

#+BEGIN_EXAMPLE I'm having the exact same problem as you, and my docker version is the same as yours. My OS is ubuntu 14.04

I found something wrong: the file "/etc/ssl/certs/ca-certificates.crt" is empty. I tried the commands below, and then it worked.

  1. sudo update-ca-certificates
  2. Confirm file of /etc/ssl/certs/ca-certificates.crt is not empty any more.
  3. service docker restart
  4. "docker pull ubuntu:14.04" or "docker login"
  5. Confirm no warning about "x509: certificate signed by unknown authority" @DennyZhang

DennyZhang commented just now: Here are the certificate bundles used on different OSes: https://golang.org/src/crypto/x509/root_unix.go

var certFiles = []string{
    "/etc/ssl/certs/ca-certificates.crt",     // Debian/Ubuntu/Gentoo etc.
    "/etc/pki/tls/certs/ca-bundle.crt",       // Fedora/RHEL
    "/etc/ssl/ca-bundle.pem",                 // OpenSUSE
    "/etc/ssl/cert.pem",                      // OpenBSD
    "/usr/local/share/certs/ca-root-nss.crt", // FreeBSD/DragonFly
    "/etc/pki/tls/cacert.pem",                // OpenELEC
    "/etc/certs/ca-certificates.crt",         // Solaris 11.2+
} #+END_EXAMPLE
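The numbered fix from the quote above as commands (Debian/Ubuntu paths; the `test -s` step just confirms the CA bundle is non-empty):

```shell
# Rebuild the system CA bundle, confirm it is non-empty, then restart Docker.
sudo update-ca-certificates
test -s /etc/ssl/certs/ca-certificates.crt && echo "CA bundle OK"
sudo service docker restart
docker pull ubuntu:14.04   # should no longer warn about unknown authority
```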

ls -lth /usr/local/share/ca-certificates

docker -d --insecure-registry="www.example.com:8080"

DOCKER_OPTS="-H unix:///var/run/docker.sock --insecure-registry index.docker.io"

#+BEGIN_EXAMPLE
[email protected]:~# docker pull XXX/mdm:v3
Pulling repository XXX/mdm
2015/03/15 20:41:39 Get https://index.docker.io/v1/repositories/XXX/mdm/images: x509: certificate signed by unknown authority
#+END_EXAMPLE

**** useful link https://github.com/docker/docker/issues/9752

https://golang.org/src/crypto/x509/root_unix.go

http://stackoverflow.com/questions/24062803/docker-error-x509-certificate-signed-by-unknown-authority
https://github.com/docker/docker/issues/6474
https://groups.google.com/forum/#!topic/docker-user/R0ODeGfP8AQ
http://www.oschina.net/question/1054876_137850
http://dluat.com/docker-on-mac-behind-proxy-that-changes-ssl-certificate/
http://www.tagwith.com/question_67518_x509-certificate-signed-by-unknown-authority-on-docker-1-3-2-rhel-7-host
https://github.com/docker/docker/issues/10150

** DONE docker login without ssh: docker exec -it osc-aio bash CLOSED: [2015-09-06 Sun 16:52]

#+BEGIN_EXAMPLE
[email protected]:# docker ps
CONTAINER ID   IMAGE              COMMAND               CREATED       STATUS       PORTS                                                                                        NAMES
4635852c5b8e   denny/osc:latest   "/usr/sbin/sshd -D"   6 days ago    Up 6 days    0.0.0.0:28000->28000/tcp, 0.0.0.0:28080->28080/tcp, 0.0.0.0:4022->22/tcp                    osc-jenkins
f1a08745fd59   denny/osc:latest   "/usr/sbin/sshd -D"   11 days ago   Up 11 days   0.0.0.0:80->80/tcp, 0.0.0.0:1389->1389/tcp, 0.0.0.0:10001->10001/tcp, 0.0.0.0:6022->22/tcp   osc-aio
[email protected]:# docker exec -it osc-aio bash
[email protected]:/# ls
2  BuildRepoCode_xml  CommonServerCheck_xml  DeployAllInOne_xml  SmokeTest_xml  UpdateJenkinsItself_xml  bin  boot  data  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  sysctl.conf  tmp  usr  var
[email protected]:/# pwd
/
#+END_EXAMPLE

** DONE [#A] docker sshd container CLOSED: [2015-05-06 Wed 18:21]

docker pull denny/sshd:v1

docker run -d -t --privileged -p 5022:22 denny/sshd:v1 /usr/sbin/sshd -D

password: sophia1

** DONE docker fail to push image: not enough disk capacity CLOSED: [2015-10-27 Tue 09:29]

#+BEGIN_EXAMPLE
www.testdocker.com:8080/osc
The push refers to a repository [www.testdocker.com:8080/osc] (len: 1)
Sending image list
Pushing repository www.testdocker.com:8080/osc (1 tags)
Image b7cf8f0d9e82 already pushed, skipping
Image a62a42e77c9c already pushed, skipping
Image 2c014f14d3d9 already pushed, skipping
Image 9b7cfe63e7c7 already pushed, skipping
Image 706766fe1019 already pushed, skipping
Image 2018e7b80612 already pushed, skipping
Image f1453d1367c1 already pushed, skipping
Image f9a89b0f1c4b already pushed, skipping
Image ecd2b1ce62e0 already pushed, skipping
Image 2e8269e3837c already pushed, skipping
Image 2a5643a86fda already pushed, skipping
Image 3676b0d66d7f already pushed, skipping
Image 037428ea93d8 already pushed, skipping
Image ec5ea7361d17 already pushed, skipping
Image 9d7e32bd39c9 already pushed, skipping
Image b0cac752965a already pushed, skipping
3ffdb76f8d81: Pushing 265.7 MB/379 MB
Failed to upload layer: Put https://www.testdocker.com:8080/v1/images/3ffdb76f8d81f4ce509dd21f4a1104da579ef9871e1247dd21ed7fe3a089b461/layer: write tcp 127.0.0.1:8080: broken pipe
#+END_EXAMPLE

#+BEGIN_EXAMPLE
[email protected]:/data# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  1.6G   17G   9% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            3.9G  4.0K  3.9G   1% /dev
tmpfs           799M  468K  798M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.9G  2.7M  3.9G   1% /run/shm
none            100M     0  100M   0% /run/user
/dev/xvdb        30G   28G  490M  99% /data
#+END_EXAMPLE

** DONE docker rename container: docker rename OLD_NAME NEW_NAME CLOSED: [2015-11-10 Tue 18:42] https://docs.docker.com/reference/commandline/rename/ http://stackoverflow.com/questions/19035358/how-to-copy-and-rename-a-docker-container http://stackoverflow.com/questions/25211198/docker-how-to-change-repository-name-or-rename-image

#+BEGIN_EXAMPLE
[email protected]:# docker ps
CONTAINER ID   IMAGE              COMMAND       CREATED       STATUS         PORTS    NAMES
5d1ede415595   denny/osc:latest   "/bin/bash"   5 hours ago   Up 1 seconds   22/tcp   boring_pasteur
[email protected]:# docker rename boring_pasteur test
[email protected]:# docker ps
CONTAINER ID   IMAGE              COMMAND       CREATED       STATUS         PORTS    NAMES
5d1ede415595   denny/osc:latest   "/bin/bash"   5 hours ago   Up 6 seconds   22/tcp   test
#+END_EXAMPLE

** DONE docker container fail to remove port: docker already holds it, even if we change iptables CLOSED: [2015-11-10 Tue 23:31]

** DONE docker fail to enable iptables: missing kernel package for iptables6; disable iptables6 CLOSED: [2015-11-13 Fri 05:17] http://askubuntu.com/questions/459296/could-not-open-moddep-file-lib-modules-3-xx-generic-modules-dep-bin-when-mo

http://askubuntu.com/questions/664668/ufw-not-working-in-an-lxc-container

uname -a

apt-get install --reinstall linux-image-3.13.0-65-generic linux-image-3.13.0-65-lowlatency

https://www.digitalocean.com/community/tutorials/how-to-update-a-digitalocean-server-s-kernel

sed -i 's/IPV6=yes/IPV6=no/g' /etc/default/ufw
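The sed edit can be rehearsed on a copy before touching /etc/default/ufw; a sketch (the temp path is illustrative):

```shell
# Turn off IPv6 in ufw's config; run against a copy first to inspect the result.
cp /etc/default/ufw /tmp/ufw.test
sed -i 's/IPV6=yes/IPV6=no/g' /tmp/ufw.test
grep '^IPV6=' /tmp/ufw.test
```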

#+BEGIN_EXAMPLE [email protected]:/# ufw enable ERROR: initcaps [Errno 2] modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.13.0-57-generic/modules.dep.bin' ip6tables v1.4.21: can't initialize ip6tables table `filter': Table does not exist (do you need to insmod?) Perhaps ip6tables or your kernel needs to be upgraded. #+END_EXAMPLE

#+BEGIN_EXAMPLE
(mdmdevops/cookbooks/firewall/libraries/provider_firewall_ufw.rb line 31)
[2015-11-07T12:19:55+00:00] INFO: template[/etc/default/ufw] backed up to /var/chef/backup/etc/default/ufw.chef-20151107121955.737923
[2015-11-07T12:19:55+00:00] INFO: template[/etc/default/ufw] updated file contents /etc/default/ufw

================================================================================
Error executing action enable on resource 'firewall[default]'
================================================================================

Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of ["ufw", "enable"] ----
STDOUT: Command may disrupt existing ssh connections. Proceed with operation (y|n)?
STDERR: ERROR: initcaps
[Errno 2] modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.13.0-57-generic/modules.dep.bin'
ip6tables v1.4.21: can't initialize ip6tables table `filter': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
---- End output of ["ufw", "enable"] ----
Ran ["ufw", "enable"] returned 1

Cookbook Trace:
---------------
/root/test/MDM-1125-iptables/mdmdevops/cookbooks/firewall/libraries/provider_firewall_ufw.rb:45:in `block in action_enable'
/root/test/MDM-1125-iptables/mdmdevops/cookbooks/firewall/libraries/provider_firewall_ufw.rb:26:in `action_enable'

Resource Declaration:
---------------------
In /root/test/MDM-1125-iptables/mdmdevops/cookbooks/firewall/recipes/default.rb

 20: firewall 'default' do
 21:   action :enable
 22: end
 23:

Compiled Resource:
------------------
Declared in /root/test/MDM-1125-iptables/mdmdevops/cookbooks/firewall/recipes/default.rb:20:in `from_file'

firewall("default") do
  action [:enable]
  retries 0
  retry_delay 2
  default_guard_interpreter :default
  subresources [firewall_rule[allow world to ssh], firewall_rule[XXX], firewall_rule[localhost], firewall_rule[http]]
  declared_type :firewall
  cookbook_name :firewall
  recipe_name "default"
end

[2015-11-07T12:19:56+00:00] INFO: Running queued delayed notifications before re-raising exception
[2015-11-07T12:19:56+00:00] ERROR: Running exception handlers
[2015-11-07T12:19:56+00:00] ERROR: Exception handlers complete
[2015-11-07T12:19:56+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2015-11-07T12:19:56+00:00] ERROR: firewall[default] (firewall::default line 20) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of ["ufw", "enable"] ----
STDOUT: Command may disrupt existing ssh connections. Proceed with operation (y|n)?
STDERR: ERROR: initcaps
[Errno 2] modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.13.0-57-generic/modules.dep.bin'
ip6tables v1.4.21: can't initialize ip6tables table `filter': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
---- End output of ["ufw", "enable"] ----
Ran ["ufw", "enable"] returned 1
[2015-11-07T12:19:56+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
Build step 'Execute shell' marked build as failure
Finished: FAILURE
#+END_EXAMPLE

** DONE docker add a new disk volume to an existing docker container: the only way is recreation CLOSED: [2015-11-14 Sat 22:58] http://jpetazzo.github.io/2015/01/13/docker-mount-dynamic-volumes/ http://stackoverflow.com/questions/28302178/how-can-i-add-a-volume-to-an-existing-docker-container http://crosbymichael.com/advanced-docker-volumes.html

** DONE Fail to start docker: yum install device-mapper-event-libs CLOSED: [2015-11-13 Fri 23:50] http://qicheng0211.blog.51cto.com/3958621/1582909

#+BEGIN_EXAMPLE
docker: relocation error: docker: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference

When this error appears, run yum install device-mapper-event-libs. For details, see http://stackoverflow.com/questions/27216473/docker-1-3-fails-to-start-on-rhel6-5
#+END_EXAMPLE

Without upgrading the kernel, Docker errors out. Here is an excerpt of the error: [info] WARNING: You are running linux kernel version 2.6.32-431.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.8.0. /usr/bin/docker: relocation error: /usr/bin/docker: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference

#+BEGIN_EXAMPLE
[[email protected] ~(keystone_admin)]# tail /var/log/docker
Fri Nov 13 10:43:02 EST 2015
time="2015-11-13T10:43:02.571108787-05:00" level=info msg="Listening for HTTP on unix (/var/run/docker.sock)"
time="2015-11-13T10:43:02.578820070-05:00" level=warning msg="You are running linux kernel version 2.6.32-431.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.10.0."
/usr/bin/docker: relocation error: /usr/bin/docker: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference
Fri Nov 13 10:44:22 EST 2015
time="2015-11-13T10:44:22.435590912-05:00" level=warning msg="You are running linux kernel version 2.6.32-431.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.10.0."
/usr/bin/docker: relocation error: /usr/bin/docker: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference
#+END_EXAMPLE

** TODO docker fail: Error response from daemon: client is newer than server (client API version: 1.21, server API version: 1.19) https://github.com/docker/docker/issues/16059 http://stackoverflow.com/questions/24586573/docker-error-client-and-server-dont-have-same-version https://github.com/docker/docker/issues/14077

#+BEGIN_EXAMPLE
[email protected]:~/code/dockercookbooknoalert/dev/iamdevops/cookbooks/all-in-one-auth$ docker version
Client:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov 3 17:43:42 UTC 2015
 OS/Arch:      linux/amd64
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
#+END_EXAMPLE

** [#A] docker save

*** tar.bz2 http://blog.shantanu.io/2014/01/13/reuse-docker-images-and-save-bandwidth/

export image

mkdir -p /home/denny/
docker_filename="denny_gitlab_latest.tar.bz2"
docker save denny/gitlab:latest | bzip2 > /home/denny/$docker_filename
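After the export step above, the archive can be sanity-checked before copying it anywhere (same path and file name as above):

```shell
# Verify the compressed image archive: size, plus a peek at its tar entries.
docker_filename="denny_gitlab_latest.tar.bz2"
ls -lh /home/denny/$docker_filename
tar -tjf /home/denny/$docker_filename | head
```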

download image

tmux attach -t denny

docker_filename="denny_gitlab_latest.tar.bz2"
cd /home/denny/
scp -P 2702 -i /home/denny/denny [email protected]:/home/denny/$docker_filename /home/denny/$docker_filename

load image

docker_filename="denny_gitlab_latest.tar.bz2"
bunzip2 /home/denny/$docker_filename --stdout | docker load

*** tar.gz

export image

docker_filename="denny_gitlab_latest.tar.gz"
docker save denny/gitlab:latest > /home/denny/$docker_filename

download image

tmux attach -t denny

docker_filename="denny_gitlab_latest.tar.gz"
cd /home/denny/
scp -P 2702 -i /home/denny/denny [email protected]:/home/denny/$docker_filename /home/denny/$docker_filename

load image

docker_filename="denny_gitlab_latest.tar.gz"
docker load -i /home/denny/$docker_filename

** DONE [#A] docker save(image) VS docker export(instance) CLOSED: [2015-11-17 Tue 17:28] http://tuhrig.de/difference-between-save-and-export-in-docker/

https://blog.giantswarm.io/moving-docker-container-images-around/

--8<-------------------------- separator ------------------------>8--

The following commands pair up:

docker save <--> docker load
docker export <--> docker import

docker save

1. export docker image: docker save denny/osc:latest > iam_docker.tar.gz
2. import docker image: docker load -i my_image.tar.gz
3. test imported image: docker run -t -d --privileged denny/osc:latest /bin/bash

docker export

docker export docker-jenkins > /home/denny/docker_jenkins_20160903.tar

docker import /home/denny/docker_jenkins_20160903.tar

#+BEGIN_EXAMPLE Export vs. Save

Docker supports two different types of methods for saving container images to a single tarball:

docker export - saves a container's running or paused instance to a file
docker save - saves a non-running container image to a file
#+END_EXAMPLE

** web page: Difference between save and export in Docker - Thomas Uhrig http://tuhrig.de/difference-between-save-and-export-in-docker/

*** webcontent :noexport:

#+begin_example
Location: http://tuhrig.de/difference-between-save-and-export-in-docker/

Thomas Uhrig in Coding | 26/03/2014

Difference between save and export in Docker

I recently played around with Docker, an application container and virtualization technology for Linux. It was pretty cool and I was able to create Docker images and containers within a couple of minutes. Everything was working right out of the box!

At the end of my day I wanted to persist my work. I stumbled over the Docker commands save and export and wondered what their difference is. So I went to StackOverflow and asked a question which was nicely answered by mbarthelemy. Here is what I found out.

How Docker works (in a nutshell)

Docker is based on so-called images. These images are comparable to virtual machine images and contain files, configurations and installed programs. And just like virtual machine images you can start instances of them. A running instance of an image is called a container. You can make changes to a container (e.g. delete a file), but these changes will not affect the image. However, you can create a new image from a running container (and all its changes) using docker commit.

Let's make an example:

# we pull a base Docker image called busybox
# just like in the official Hello-World-example
sudo docker pull busybox

# let's check which images we have
# we should see the image busybox
sudo docker images

# now we make changes to a container of this image
# in this case we make a new folder
sudo docker run busybox mkdir /home/test

# let's check which containers we now have
# note that the container stops after each command
# we should see a busybox container with our command
sudo docker ps -a

# now we can commit this changed container
# this will create a new image called busybox-1
# you see the <container id> with the command above
sudo docker commit <container id> busybox-1

# let's check which images we have now
# we should see the image busybox and busybox-1
sudo docker images

# to see the difference between both images we
# can use the following check for folders:
sudo docker run busybox [ -d /home/test ] && echo 'Directory found' || echo 'Directory not found'
sudo docker run busybox-1 [ -d /home/test ] && echo 'Directory found' || echo 'Directory not found'

Now we have two different images (busybox and busybox-1) and we have a container made from busybox which also contains the change (the new folder /home/test). Let's see how we can persist our changes.

Export

Export is used to persist a container (not an image). So we need the container id which we can see like this:

sudo docker ps -a

To export a container we simply do:

sudo docker export <container id> > /home/export.tar

The result is a TAR-file which should be around 2.7 MB big (slightly smaller than the one from save).

Save

Save is used to persist an image (not a container). So we need the image name which we can see like this:

sudo docker images

To save an image we simply do:

sudo docker save busybox-1 > /home/save.tar

The result is a TAR-file which should be around 2.8 MB big (slightly bigger than the one from export).

The difference

Now after we created our TAR-files, let's see what we have. First of all we clean up a little bit - we remove all containers and images we have right now:

# first we see which containers we have
sudo docker ps -a

# now we remove all of them
sudo docker rm <container id>

# now we see which images we have
sudo docker images

# and we remove them too
sudo docker rmi busybox-1
sudo docker rmi busybox

We start with our export we did from the container. We can import it like this:

# import the exported tar ball:
cat /home/export.tar | sudo docker import - busybox-1-export:latest

# check the available images
sudo docker images

# and check if a new container made from this image
# contains our folder (it does!)
sudo docker run busybox-1-export [ -d /home/test ] && echo 'Directory found' || echo 'Directory not found'

We can do the same for the saved image:

# load the saved tar ball:
docker load < /home/save.tar

# check the available images
sudo docker images

# and check if a new container made from this image
# contains our folder (it does!)
sudo docker run busybox-1 [ -d /home/test ] && echo 'Directory found' || echo 'Directory not found'

So what's the difference between both? Well, as we saw the exported version is slightly smaller. That is because it is flattened, which means it lost its history and meta-data. We can see this by the following command:

# shows the layers of all images
sudo docker images --tree

If we run the command we will see an output like the following. As you can see there, the exported-imported image has lost all of its history, whereas the saved-loaded image still has its history and layers. This means that you cannot do any rollback to a previous layer if you export-import it, while you can still do this if you save-load the whole (complete) image (you can go back to a previous layer by using docker tag).

[email protected]:~$ sudo docker images --tree
├─f502877df6a1 Virtual Size: 2.489 MB Tags: busybox-1-export:latest
└─511136ea3c5a Virtual Size: 0 B
  └─bf747efa0e2f Virtual Size: 0 B
    └─48e5f45168b9 Virtual Size: 2.489 MB
      └─769b9341d937 Virtual Size: 2.489 MB
        └─227516d93162 Virtual Size: 2.489 MB Tags: busybox-1:latest

Best regards, Thomas

[e7c3430a016]

Thomas Uhrig

Published

26/03/2014

Write a Comment

  • Rajesh Rao

    Thanks for clearing this question, I did see history on exported image is zero, where as on saved image is there

  • Pingback: Flatten a Docker container or image | Thomas Uhrig()

  • Дмитрий

    Do I need to stop container defore export it to tar file?

  • http://www.tuhrig.de/ Thomas Uhrig

    @Дмитрий No, you should also be able to export a running container.

  • Aris Setyawan

    Can I "pause" a computation, and then "resume" it using EXPORT container command? Or is It possible to do, using Docker?




#+end_example

** DONE check whether you're inside a docker container
CLOSED: [2015-12-30 Wed 12:17]
http://stackoverflow.com/questions/20010199/determining-if-a-process-runs-inside-lxc-docker

The most reliable way is to check /proc/1/cgroup. It will tell you the control groups of the init process. When you are not in a container, that will be / for all hierarchies. When you are inside a container, you will see the name of the anchor point, which, with Docker containers, will be something like /lxc/<container-id>.

** TODO [#B] docker image: intermediate container shutdown quickly

** DONE docker ps --format: docker ps --format "{{.ID}}: {{.Command}} {{.Names}}"
CLOSED: [2017-06-06 Tue 21:21]
https://docs.docker.com/engine/reference/commandline/ps/#formatting

| Placeholder | Description                                                                            |
|-------------+----------------------------------------------------------------------------------------|
| .ID         | Container ID                                                                           |
| .Image      | Image ID                                                                               |
| .Command    | Quoted command                                                                         |
| .CreatedAt  | Time when the container was created                                                    |
| .RunningFor | Elapsed time since the container was started                                           |
| .Ports      | Exposed ports                                                                          |
| .Status     | Container status                                                                       |
| .Size       | Container disk size                                                                    |
| .Names      | Container names                                                                        |
| .Labels     | All labels assigned to the container                                                   |
| .Label      | Value of a specific label for this container, e.g. '{{.Label "com.docker.swarm.cpu"}}' |
| .Mounts     | Names of the volumes mounted in this container                                         |
| .Networks   | Names of the networks attached to this container                                       |

When using the --format option, the ps command will either output the data exactly as the template declares or, when using the table directive, include column headers as well.
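The /proc/1/cgroup check described above can be sketched against sample files; the cgroup contents below are illustrative, not captured from a real host.

```shell
# Decide host-vs-container from a cgroup file's contents.
in_container() {
  # on a bare host every hierarchy path is "/"; in a container it is not
  grep -qvE ':/$' "$1"
}

# sample /proc/1/cgroup as seen on a bare-metal host
printf '2:cpu:/\n1:cpuset:/\n' > /tmp/cgroup.host
# sample as seen inside a Docker (lxc driver) container
printf '2:cpu:/lxc/313631c64e82\n1:cpuset:/lxc/313631c64e82\n' > /tmp/cgroup.container

in_container /tmp/cgroup.host      || echo "host"        # -> host
in_container /tmp/cgroup.container && echo "container"   # -> container
```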

The following example uses a template without headers and outputs the ID and Command entries separated by a colon for all running containers:

** DONE [#A] docker: get container hostname by pid
CLOSED: [2016-01-01 Fri 22:18]

http://blog.maxcnunes.net/2014/10/19/finding-out-to-which-docker-container-a-process-belongs-to/

docker ps | awk '{print $1}' | grep -v CONTAINER | xargs docker inspect --format '{{.State.Pid}} {{.Config.Hostname}}' | grep 6641
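The pipeline's last step can be shown in isolation on captured sample output (the pids and hostnames below are illustrative). Matching on the first field with awk avoids the false matches a bare grep can produce against the hostname column.

```shell
# sample output of: docker ps -q | xargs docker inspect \
#   --format '{{.State.Pid}} {{.Config.Hostname}}'
inspect_output='31168 a1b2c3d4e5f6
20509 c9d83954368b
4099 9f8e7d6c5b4a'

# find the container hostname that owns pid 20509
echo "$inspect_output" | awk '$1 == 20509 {print $2}'   # -> c9d83954368b
```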

#+BEGIN_EXAMPLE

[email protected]:/home/denny# ps -ef | grep 31168
root 18809 17447 0 18:41 pts/65 00:00:00 grep --color=auto 31168
devops 31168 20509 1 Jan12 ? 00:35:29 /usr/lib/jvm/java-8-oracle-amd64/bin/java -Djava.util.logging.config.file=/var/lib/tomcat7/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xms3096M -Xmx3096M -Djava.awt.headless=true -XX:+UseConcMarkSweepGC -Djava.endorsed.dirs=/usr/share/tomcat6/lib/endorsed -classpath /usr/share/tomcat7/bin/bootstrap.jar:/usr/share/tomcat7/bin/tomcat-juli.jar -Dcatalina.base=/var/lib/tomcat7 -Dcatalina.home=/usr/share/tomcat7 -Djava.io.tmpdir=/tmp/tomcat7-tmp org.apache.catalina.startup.Bootstrap start
[email protected]:/home/denny# docker ps | awk '{print $1}' | grep -v CONTAINER | xargs docker inspect --format '{{.State.Pid}} {{.Config.Hostname}}' | grep 20509
20509 c9d83954368b
[email protected]:/home/denny# docker ps | grep c9d83954368b
c9d83954368b 7a761525f497 "/usr/sbin/sshd -D -o" 2 days ago Up 2 days 0.0.0.0:32866->22/tcp all-in-one-auth-DeployAllInOne-49-UU
#+END_EXAMPLE

** DONE docker get container pid: docker inspect --format '{{.State.Pid}}' docker-jenkins
CLOSED: [2016-01-01 Fri 22:14]

docker inspect docker-jenkins

** DONE docker get container ip list
CLOSED: [2016-01-01 Fri 22:25]
https://docs.docker.com/engine/reference/commandline/inspect/

docker inspect --format '{{.State.Pid}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker-jenkins

** DONE docker get all port forwarding
CLOSED: [2016-01-01 Fri 22:26]
https://docs.docker.com/engine/reference/commandline/inspect/

docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' docker-jenkins

** DONE docker rm fail, when run docker in docker
CLOSED: [2016-01-03 Sun 12:05]
https://totvslab.atlassian.net/browse/TECH-107
http://jira.jinganiam.com/browse/ID-47

When running docker in docker, we have to stop all containers inside first, then destroy the container. Otherwise docker rm will fail.

The only remaining issue is that the container can't be deleted.

#+BEGIN_EXAMPLE
[email protected]:/etc/apache2# docker rm sandbox-test-DockerDeploySandboxCookbook-5
Error response from daemon: Driver aufs failed to remove root filesystem ed64f5a11dcd4d2116eab0803008f3101d1e6b2370fd8805f7c585a000d13b62: rename /var/lib/docker/aufs/diff/ed64f5a11dcd4d2116eab0803008f3101d1e6b2370fd8805f7c585a000d13b62 /var/lib/docker/aufs/diff/ed64f5a11dcd4d2116eab0803008f3101d1e6b2370fd8805f7c585a000d13b62-removing: device or resource busy
Error: failed to remove containers: [sandbox-test-DockerDeploySandboxCookbook-5]
#+END_EXAMPLE

** DONE docker start fail, after random reboot: could not delete the default bridge network
CLOSED: [2016-01-05 Tue 19:06]
https://github.com/docker/docker/issues/18048
https://github.com/docker/docker/issues/17083

mv /var/lib/docker/network /home/denny/

/usr/bin/docker daemon --tlsverify --tlscacert=/root/docker/ca.pem --tlscert=/root/docker/server-cert.pem --tlskey=/root/docker/server-key.pem -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock --cluster-store=consul://192.168.1.3:8500 --cluster-advertise=em1:2376

#+BEGIN_EXAMPLE
[email protected]:~# /usr/bin/docker daemon --tlsverify --tlscacert=/root/docker/ca.pem --tlscert=/root/docker/server-cert.pem --tlskey=/root/docker/server-key.pem -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock --cluster-store=consul://192.168.1.3:8500 --cluster-advertise=em1:2376
INFO[0000] [graphdriver] using prior storage driver "aufs"
INFO[0000] API listen on [::]:4243
INFO[0000] API listen on /var/run/docker.sock
INFO[0000] Initializing discovery without TLS
INFO[0000] Firewalld running: false
INFO[0000] 2016/01/05 18:52:28 [INFO] serf: EventMemberJoin: jayx 192.168.1.2

INFO[0000] 2016/01/05 18:52:28 [INFO] serf: EventMemberJoin: jayx2 192.168.1.3

FATA[0000] Error starting daemon: Error initializing network controller: could not delete the default bridge network: network bridge has active endpoints
#+END_EXAMPLE

** DONE [#A] generate docker image locally :IMPORTANT:
CLOSED: [2016-01-06 Wed 15:32]

scp -P 2703 ~/Dockerfile [email protected]:/home/denny

ssh inhouse

scp -P 4022 -i /home/denny/denny /home/denny/Dockerfile [email protected]:/var/www/repo/dev/

curl -I http://192.168.1.2:28000/dev/Dockerfile

docker run -t -i denny/osc:latest /bin/bash
docker commit -m "Initial version" -a "Denny Zhang[email protected]" e955748a2634 denny/osc:v1

stop old docker image

docker stop docker-all-in-one docker-jenkins
docker rm docker-all-in-one docker-jenkins

docker build -t denny/osc:v2 --rm=false .
docker run -t -i --privileged denny/osc:v2 /bin/bash
docker commit -m "Initial version" -a "Denny Zhang[email protected]" e955748a2634 denny/osc:v2

*** Dockerfile
#+BEGIN_EXAMPLE
########## How To Build Docker Image #############

Build image from Dockerfile:
docker build -t denny/osc:v2 --rm=false .

Run docker intermediate container:

docker run -t -i --privileged denny/osc:v2 /bin/bash

Commit local image:

docker commit -m "Initial version" -a "Denny Zhang[email protected]" e955748a2634 denny/osc:v2

# Get docker user credential first

docker login

docker push denny/osc:v2

docker history denny/osc:v2

##################################################

########## How To Use Docker Image ###############

Install docker utility

Download docker image: docker pull denny/osc:v2

Boot docker container: docker run -t -P -d denny/osc:v2 /bin/bash

##################################################

FROM denny/osc:v1
MAINTAINER DennyZhang [email protected]

########################################################################################

Berks

RUN /root/iamdevops/misc/berk_update.sh "/root/test" "[email protected]:authright/iamdevops.git" "iamdevops" "ID-484-jenkins" "all-in-one-auth" "no"
RUN /root/iamdevops/misc/berk_update.sh "/root/test" "[email protected]:authright/iamdevops.git" "iamdevops" "ID-484-jenkins" "jenkins-auth" "no"

RUN echo "cookbook_path ["/root/test/ID-484-jenkins/iamdevops/cookbooks",
"/root/test/dev/iamdevops/community_cookbooks"]" > /root/client.rb &&
echo "{"run_list": ["recipe[jenkins-auth]"], "os_basic_auth":{"repo_server":"104.236.159.226:18000"}}" > /root/client.json &&
chef-solo --config /root/client.rb -j /root/client.json &&
rm -rf /usr/lib/jvm/java-7-openjdk-amd64 /usr/lib/jvm/java-1.7.0-openjdk-amd64 &&
rm -rf /tmp/* /var/tmp/* /usr/share/doc && apt-get clean && apt-get autoclean &&
rm -rf /var/chef/cache/jdk-.tar.gz &&
rm -rf /var/chef/cache/
.plugin &&
rm -rf /var/chef/backup/*

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
########################################################################################
#+END_EXAMPLE

** DONE docker remove all containers
CLOSED: [2016-01-21 Thu 18:27]

docker ps -a | grep -v CONTAINER | awk -F' ' '{print $1}' | xargs docker stop
docker ps -a | grep -v CONTAINER | awk -F' ' '{print $1}' | xargs docker rm

** DONE docker daemon log file
CLOSED: [2016-01-21 Thu 21:56]
http://stackoverflow.com/questions/30969435/where-is-the-docker-daemon-log

It depends on your OS. Here are the locations, with commands, for a few operating systems:

  • Ubuntu - /var/log/upstart/docker.log
  • Boot2Docker - /var/log/docker.log
  • Debian GNU/Linux - /var/log/daemon.log
  • CentOS - /var/log/daemon.log | grep docker
  • Fedora - journalctl -u docker.service
  • Red Hat Enterprise Linux Server - /var/log/messages | grep docker

** DONE Daemon storage-driver option: docker daemon -s devicemapper
CLOSED: [2016-01-27 Wed 20:08]
https://docs.docker.com/engine/reference/commandline/daemon/

The Docker daemon has support for several different image layer storage drivers: aufs, devicemapper, btrfs, zfs and overlay.
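As a quick check of which of these drivers a running daemon uses, the Storage Driver line of docker info can be parsed. The excerpt below is sample output, not captured from a real daemon:

```shell
# sample 'docker info' excerpt (illustrative values)
info_output='Containers: 12
Images: 33
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs'

# extract the driver name from the "Storage Driver:" line
echo "$info_output" | awk -F': ' '/^Storage Driver/ {print $2}'   # -> aufs
```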

The aufs driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged into the main kernel. These are also known to cause some serious kernel crashes. However, aufs is also the only storage driver that allows containers to share executable and shared library memory, so is a useful choice when running thousands of containers with the same program or libraries.

The devicemapper driver uses thin provisioning and Copy on Write (CoW) snapshots. For each devicemapper graph location - typically /var/lib/docker/devicemapper - a thin pool is created based on two block devices, one for data and one for metadata. By default, these block devices are created automatically by using loopback mounts of automatically created sparse files. Refer to Storage driver options below for a way to customize this setup. jpetazzo's article "Resizing Docker containers with the Device Mapper plugin" explains how to tune your existing setup without the use of options.

** docker daemon --help
#+BEGIN_EXAMPLE
[email protected]:# docker daemon --help

Usage: docker daemon [OPTIONS]

Enable daemon mode

  --api-cors-header=                Set CORS headers in the remote API
  -b, --bridge=                     Attach containers to a network bridge
  --bip=                            Specify network bridge IP
  --cluster-advertise=              Address or interface name to advertise
  --cluster-store=                  Set the cluster store
  --cluster-store-opt=map[]         Set cluster store options
  -D, --debug=false                 Enable debug mode
  --default-gateway=                Container default gateway IPv4 address
  --default-gateway-v6=             Container default gateway IPv6 address
  --default-ulimit=[]               Set default ulimits for containers
  --disable-legacy-registry=false   Do not contact legacy registries
  --dns=[]                          DNS server to use
  --dns-opt=[]                      DNS options to use
  --dns-search=[]                   DNS search domains to use
  -e, --exec-driver=native          Exec driver to use
  --exec-opt=[]                     Set exec driver options
  --exec-root=/var/run/docker       Root of the Docker execdriver
  --fixed-cidr=                     IPv4 subnet for fixed IPs
  --fixed-cidr-v6=                  IPv6 subnet for fixed IPs
  -G, --group=docker                Group for the unix socket
  -g, --graph=/var/lib/docker       Root of the Docker runtime
  -H, --host=[]                     Daemon socket(s) to connect to
  --help=false                      Print usage
  --icc=true                        Enable inter-container communication
  --insecure-registry=[]            Enable insecure registry communication
  --ip=0.0.0.0                      Default IP when binding container ports
  --ip-forward=true                 Enable net.ipv4.ip_forward
  --ip-masq=true                    Enable IP masquerading
  --iptables=true                   Enable addition of iptables rules
  --ipv6=false                      Enable IPv6 networking
  -l, --log-level=info              Set the logging level
  --label=[]                        Set key=value labels to the daemon
  --log-driver=json-file            Default driver for container logs
  --log-opt=map[]                   Set log driver options
  --mtu=0                           Set the containers network MTU
  -p, --pidfile=/var/run/docker.pid Path to use for daemon PID file
  --registry-mirror=[]              Preferred Docker registry mirror
  -s, --storage-driver=             Storage driver to use
  --selinux-enabled=false           Enable selinux support
  --storage-opt=[]                  Set storage driver options
  --tls=false                       Use TLS; implied by --tlsverify
  --tlscacert=/.docker/ca.pem       Trust certs signed only by this CA
  --tlscert=/.docker/cert.pem       Path to TLS certificate file
  --tlskey=/.docker/key.pem         Path to TLS key file
  --tlsverify=false                 Use TLS and verify the remote
  --userland-proxy=true             Use userland proxy for loopback traffic
#+END_EXAMPLE

** docker run --help
#+BEGIN_EXAMPLE
[email protected]:~# docker run --help

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

  -a, --attach=[]                Attach to STDIN, STDOUT or STDERR
  --add-host=[]                  Add a custom host-to-IP mapping (host:ip)
  --blkio-weight=0               Block IO (relative weight), between 10 and 1000
  --cpu-shares=0                 CPU shares (relative weight)
  --cap-add=[]                   Add Linux capabilities
  --cap-drop=[]                  Drop Linux capabilities
  --cgroup-parent=               Optional parent cgroup for the container
  --cidfile=                     Write the container ID to the file
  --cpu-period=0                 Limit CPU CFS (Completely Fair Scheduler) period
  --cpu-quota=0                  Limit CPU CFS (Completely Fair Scheduler) quota
  --cpuset-cpus=                 CPUs in which to allow execution (0-3, 0,1)
  --cpuset-mems=                 MEMs in which to allow execution (0-3, 0,1)
  -d, --detach=false             Run container in background and print container ID
  --device=[]                    Add a host device to the container
  --disable-content-trust=true   Skip image verification
  --dns=[]                       Set custom DNS servers
  --dns-opt=[]                   Set DNS options
  --dns-search=[]                Set custom DNS search domains
  -e, --env=[]                   Set environment variables
  --entrypoint=                  Overwrite the default ENTRYPOINT of the image
  --env-file=[]                  Read in a file of environment variables
  --expose=[]                    Expose a port or a range of ports
  --group-add=[]                 Add additional groups to join
  -h, --hostname=                Container host name
  --help=false                   Print usage
  -i, --interactive=false        Keep STDIN open even if not attached
  --ipc=                         IPC namespace to use
  --kernel-memory=               Kernel memory limit
  -l, --label=[]                 Set meta data on a container
  --label-file=[]                Read in a line delimited file of labels
  --link=[]                      Add link to another container
  --log-driver=                  Logging driver for container
  --log-opt=[]                   Log driver options
  --lxc-conf=[]                  Add custom lxc options
  -m, --memory=                  Memory limit
  --mac-address=                 Container MAC address (e.g. 92:d0:c6:0a:29:33)
  --memory-reservation=          Memory soft limit
  --memory-swap=                 Total memory (memory + swap), '-1' to disable swap
  --memory-swappiness=-1         Tuning container memory swappiness (0 to 100)
  --name=                        Assign a name to the container
  --net=default                  Set the Network for the container
  --oom-kill-disable=false       Disable OOM Killer
  -P, --publish-all=false        Publish all exposed ports to random ports
  -p, --publish=[]               Publish a container's port(s) to the host
  --pid=                         PID namespace to use
  --privileged=false             Give extended privileges to this container
  --read-only=false              Mount the container's root filesystem as read only
  --restart=no                   Restart policy to apply when a container exits
  --rm=false                     Automatically remove the container when it exits
  --security-opt=[]              Security Options
  --sig-proxy=true               Proxy received signals to the process
  --stop-signal=SIGTERM          Signal to stop a container, SIGTERM by default
  -t, --tty=false                Allocate a pseudo-TTY
  -u, --user=                    Username or UID (format: <name|uid>[:<group|gid>])
  --ulimit=[]                    Ulimit options
  --uts=                         UTS namespace to use
  -v, --volume=[]                Bind mount a volume
  --volume-driver=               Optional volume driver for the container
  --volumes-from=[]              Mount volumes from the specified container(s)
  -w, --workdir=                 Working directory inside the container
#+END_EXAMPLE

** DONE Docker's four network modes: host/container/none/bridge
CLOSED: [2016-02-19 Fri 10:56]
http://blog.opskumu.com/docker.html#docker--12
#+BEGIN_EXAMPLE
6.1 Docker's four network modes

The four network modes are excerpted from "Docker Networking Explained and pipework Source Code Analysis and Practice".

When creating a Docker container with docker run, the --net option selects the container's network mode. Docker has the following four network modes:

host mode, specified with --net=host.
container mode, specified with --net=container:NAME_or_ID.
none mode, specified with --net=none.
bridge mode, specified with --net=bridge; this is the default.

host mode
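The four flags can be sketched as illustrative command lines (the image and container names are placeholders, not from the original post):

```shell
# one example invocation per network mode
docker run --net=host         busybox ip addr   # share the host's network stack
docker run --net=container:db busybox ip addr   # join the namespace of container "db"
docker run --net=none         busybox ip addr   # no networking configured at all
docker run --net=bridge       busybox ip addr   # default: veth pair attached to docker0
```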

If a container is started in host mode, it does not get its own isolated Network Namespace; instead it shares one with the host. The container does not virtualize its own network interface or configure its own IP, but uses the host's IP and ports directly.

For example, if we start a container with a web application on the machine 10.10.101.105/24 in host mode, listening on TCP port 80, then running a command like ifconfig inside the container shows the host's network configuration. External clients reach the application at 10.10.101.105:80 directly, without any NAT, as if the application were running on the host itself. In other respects, however, such as the filesystem and the process list, the container is still isolated from the host.

container mode

This mode makes a newly created container share a Network Namespace with an existing container rather than with the host. The new container does not create its own network interface or configure its own IP; it shares the IP and port range of the specified container. Apart from networking, the two containers remain isolated in other respects such as filesystems and process lists. Processes in the two containers can communicate over the lo device.

none mode

This mode differs from the previous two. In none mode the container gets its own Network Namespace, but Docker performs no network configuration at all: the container has no network interface, IP, or routes. We have to add a network interface and configure an IP for it ourselves.

bridge mode

Figure: veth/bridge setup (from "The Container World | Part 2: Networking")

bridge mode is Docker's default network setting. It assigns each container a Network Namespace, sets up an IP, and connects the containers on a host to a virtual bridge. When the Docker server starts, it creates a virtual bridge named docker0 on the host, and containers started on this host are attached to it. The virtual bridge behaves like a physical switch, so all containers on the host are connected to each other in a layer-2 network through it. Docker then assigns IPs: from the private ranges defined in RFC 1918 it picks an address and subnet, different from the host's, and assigns it to docker0; each container attached to docker0 takes an unused IP from that subnet. Docker normally uses the 172.17.0.0/16 subnet and assigns 172.17.42.1/16 to the docker0 bridge (docker0 is visible with ifconfig on the host; it can be regarded as the bridge's management interface, acting as a virtual network card on the host).
#+END_EXAMPLE

** [#A] remove intermediate container by docker image build

docker ps -a | grep -v CONTAINER | grep /bin/sh | awk -F' ' '{print $1}' | xargs docker rm

** DONE docker tag: docker tag 8656b532008b XXX/mdm_jenkins:latest
CLOSED: [2016-04-06 Wed 15:58]
http://blog.tmtk.net/post/2013-09-16-how_to_remove_tag_on_docker/

docker tag denny/gitlab:v1 denny/gitlab:latest

** web page: Running docker behind a proxy on Ubuntu 14.04
http://nknu.net/running-docker-behind-a-proxy-on-ubuntu-14-04/
*** webcontent :noexport:
#+begin_example
Location: http://nknu.net/running-docker-behind-a-proxy-on-ubuntu-14-04/
nknu.net


Wednesday 10 Sep 2014

Running docker behind a proxy on Ubuntu 14.04

proxy | howto | Ubuntu 14.04 | docker

If you're behind a proxy, chances are that docker is failing to build your containers, as it is not able to pull base images, and commands in the Dockerfile that need to access the internet are failing. Let's see how to fix that.

Edit /etc/default/docker.io and add the following lines:

export http_proxy='http://user:[email protected]:proxy-port'

For those settings to be taken into account, you'll have to restart your docker daemon:

$ sudo service docker.io restart

This should allow docker daemon to pull images from the central registry. However, if you need to configure the proxy in the Dockerfile (ie. if you're using apt-get to install packages), you'll need to declare it there too.

Add the following lines at the top of your Dockerfile:

ENV http_proxy 'http://user:[email protected]:proxy-port'
ENV https_proxy 'http://user:[email protected]:proxy-port'
ENV HTTP_PROXY 'http://user:[email protected]:proxy-port'
ENV HTTPS_PROXY 'http://user:[email protected]:proxy-port'

With those settings, your container should now build, using the proxy to access the outside world.


Adrien Anceau

http://nknu.net


#+end_example

** TODO docker fail to start
#+BEGIN_EXAMPLE
[email protected]:/opt# tail /var/log/docker.log
time="2015-12-17T05:42:25.305050036Z" level=info msg="API listen on /var/run/docker.sock"
time="2015-12-17T05:42:25.305251089Z" level=warning msg="Udev sync is not supported. This will lead to unexpected behavior, data loss and errors. For more information, see https://docs.docker.com/reference/commandline/daemon/#daemon-storage-driver-option"
time="2015-12-17T05:42:25.374726369Z" level=warning msg="Usage of loopback devices is strongly discouraged for production use. Please use --storage-opt dm.thinpooldev or use man docker to refer to dm.thinpooldev section."
time="2015-12-17T05:42:25.375842331Z" level=error msg="[graphdriver] prior storage driver "devicemapper" failed: Base Device UUID and Filesystem verification failed.Error running deviceCreate (ActivateDevice) dm_task_run failed"
time="2015-12-17T05:42:25.375964074Z" level=fatal msg="Error starting daemon: error initializing graphdriver: Base Device UUID and Filesystem verification failed.Error running deviceCreate (ActivateDevice) dm_task_run failed"
time="2015-12-17T05:42:30.827935494Z" level=warning msg="Udev sync is not supported. This will lead to unexpected behavior, data loss and errors. For more information, see https://docs.docker.com/reference/commandline/daemon/#daemon-storage-driver-option"
time="2015-12-17T05:42:30.839653618Z" level=info msg="API listen on /var/run/docker.sock"
time="2015-12-17T05:42:30.948265978Z" level=warning msg="Usage of loopback devices is strongly discouraged for production use. Please use --storage-opt dm.thinpooldev or use man docker to refer to dm.thinpooldev section."
time="2015-12-17T05:42:30.949016814Z" level=error msg="[graphdriver] prior storage driver "devicemapper" failed: Base Device UUID and Filesystem verification failed.Error running deviceCreate (ActivateDevice) dm_task_run failed"
time="2015-12-17T05:42:30.949113351Z" level=fatal msg="Error starting daemon: error initializing graphdriver: Base Device UUID and Filesystem verification failed.Error running deviceCreate (ActivateDevice) dm_task_run failed"
[email protected]:/opt# docker ps
#+END_EXAMPLE

** BYPASS DEVOPS-158: docker image optimization: bake logstash.tar.gz into the default image
CLOSED: [2015-11-09 Mon 01:02]
https://authright.atlassian.net/browse/DEVOPS-158

7022

docker run -t -d --privileged -h aio --name docker-all-in-one -p 6022:22 denny/osc:latest /usr/sbin/sshd -D

docker exec -it docker-all-in-one bash
ls -lth /var/chef/cache/server.tar.gz

/var/chef/backup/var/chef/cache/server.tar.gz.chef-20151104151105.158356

/var/chef/cache/server.tar.gz /tmp/kitchen/cache/server.tar.gz

cd /root/test/dev/iamdevops/cookbooks
docker run -t -d --privileged -p 7022:22 denny/osc:test /usr/sbin/sshd -D
ssh -p 7022 [email protected]

echo "cookbook_path "/root/test/DEVOPS-158-Denny/iamdevops/cookbooks"" > /root/client.rb &&
echo "{"run_list": ["recipe[all-in-one-auth::precache]"]}" > /root/client.json &&
chef-solo --config /root/client.rb -j /root/client.json

| cookbooks                   | Size (MB) |
|-----------------------------+-----------|
| nagios-auth                 |     188.6 |
| os-basic-auth::packages     |     65.95 |
| os-basic-auth::default      |     376.2 |
| os-basic-auth::devkit       |     52.74 |
| os-basic-auth::jenkins-auth |     696.6 |
| all-in-one-auth::basiccache |     378.2 |
| all-in-one-auth::precache   |     386.3 |

https://bitbucket.org/authright/iamdevops/pull-requests/14/devops-141-devops-158-docker-image

curl -I https://download.elasticsearch.org/logstash/logstash/logstash-1.5.3.tar.gz

#+BEGIN_EXAMPLE
[email protected]:/var/chef/cache# stat /var/chef/cache/server.tar.gz
  File '/var/chef/cache/server.tar.gz'
  Size: 45507888  Blocks: 88896  IO Block: 4096  regular file
Device: 31h/49d  Inode: 86  Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 0/ root)  Gid: ( 0/ root)
Access: 2015-11-04 14:17:53.658001104 +0800
Modify: 2015-11-04 13:35:47.000000000 +0800
Change: 2015-11-04 13:36:56.414001104 +0800
 Birth: -

https://download.elasticsearch.org/logstash/logstash/logstash-1.5.3.tar.gz
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 91914390
Content-Type: application/x-gzip
Date: Wed, 04 Nov 2015 06:42:39 GMT
ETag: "2d475a3bcd6a8375fb749685104189c1-2"
Last-Modified: Tue, 21 Jul 2015 18:06:23 GMT
Server: nginx/1.4.6 (Ubuntu)
x-amz-id-2: TPLiEVodx5PmJ+G7F8UsEZYkqbwZwthSmXg6HlRrxgOtsy1aTmU8forNNcpIgpN3WxVJGI/YXY0=
x-amz-request-id: 60380254FB5E0C8F
x-ngx-hostname: www02
Connection: keep-alive
#+END_EXAMPLE

** DONE docker upgrade server from 1.18 to 1.19
CLOSED: [2015-08-19 Wed 00:27]
http://askubuntu.com/questions/472412/how-do-i-upgrade-docker

docker version
service docker stop

curl -sSL https://get.docker.com/ | sudo sh

docker version
docker ps

** TODO docker start is very slow
#+BEGIN_EXAMPLE
[email protected]:# ps -ef | grep 7356
root 7356 7165 0 03:27 ? 00:00:00 bash -xe /root/bootstrap_sandbox.sh denny/osc
root 7480 7356 0 03:40 ? 00:00:00 docker run -d -t --privileged --name docker-all-in-one -p 10000-10050:10000-10050 -p 80:80 -p 443:443 -p 6022:22 denny/osc:latest /usr/sbin/sshd -D
root 7565 7502 0 03:49 pts/2 00:00:00 grep --color=auto 7356
[email protected]:# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cbab593b4994 denny/osc:latest "/usr/sbin/sshd -D" 9 minutes ago docker-all-in-one
39d882b5254f denny/osc:latest "/usr/sbin/sshd -D" 22 minutes ago Up 9 minutes 0.0.0.0:3128->3128/tcp, 0.0.0.0:28000->28000/tcp, 0.0.0.0:28080->28080/tcp, 0.0.0.0:5022->22/tcp docker-jenkins
#+END_EXAMPLE

** TODO [#B] docker start is very slow: start mdm-all-in-one docker in docker
#+BEGIN_EXAMPLE
[email protected]:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1bf423e3e5a8 XXX/mdm:latest "/usr/sbin/sshd -D" 2 minutes ago mdm-all-in-one
226f26484976 XXX/mdm:latest "/usr/sbin/sshd -D" 6 minutes ago Up 2 minutes 0.0.0.0:18000->18000/tcp, 0.0.0.0:18080->18080/tcp, 0.0.0.0:5022->22/tcp mdm-jenkins
[email protected]:/# docker rm mdm-all-in-one

mdm-all-in-one
[email protected]:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
226f26484976 XXX/mdm:latest "/usr/sbin/sshd -D" 7 minutes ago Up 4 minutes 0.0.0.0:18000->18000/tcp, 0.0.0.0:18080->18080/tcp, 0.0.0.0:5022->22/tcp mdm-jenkins
[email protected]:/# docker run -d -t --privileged -v /root/couchbase/:/opt/couchbase/ --name mdm-all-in-one -p 8080:8080 -p 8443:8443 -p 8091:8091 -p 9200:9200 -p 80:80 -p 8081:8081 -p 6022:22 XXX/mdm:latest /usr/sbin/sshd -D

7cf51d57414290f615d734a89a2c4aa6d544eb7522c218d147b2acfeda9433c3

[email protected]:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7cf51d574142 XXX/mdm:latest "/usr/sbin/sshd -D" 2 minutes ago Up 5 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:8080-8081->8080-8081/tcp, 0.0.0.0:8091->8091/tcp, 0.0.0.0:8443->8443/tcp, 0.0.0.0:9200->9200/tcp, 0.0.0.0:6022->22/tcp mdm-all-in-one
226f26484976 XXX/mdm:latest "/usr/sbin/sshd -D" 9 minutes ago Up 6 minutes 0.0.0.0:18000->18000/tcp, 0.0.0.0:18080->18080/tcp, 0.0.0.0:5022->22/tcp mdm-jenkins
#+END_EXAMPLE

** TODO [#B] docker commit stuck
#+BEGIN_EXAMPLE
[email protected]" 22d0cba1cb6c XXX/sandbox-test:v1

#+END_EXAMPLE

** DONE [#B] Docker can't rename file: Run into Device or resource busy
CLOSED: [2015-07-02 Thu 22:17]
https://github.com/jwhitehorn/pi_piper/issues/30
https://github.com/docker/docker/issues/9295

#+BEGIN_EXAMPLE
[email protected]:# cat /tmp/hosts
172.17.0.223 313631c64e82
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
[email protected]:# mv /etc/hosts /etc/hosts.bak
mv: cannot move '/etc/hosts' to '/etc/hosts.bak': Device or resource busy
[email protected]:~#
#+END_EXAMPLE

*** TODO Why hostsfile doesn't work for docker container?
https://supermarket.chef.io/cookbooks/hostsfile

https://github.com/docker/docker/issues/9295

The issue is that even though you can "write" to those files, you can't rename them. This is due to both files being bind mounted. I am using ansible to automatically update /etc/hosts with references to either other machines, or to localhost. I then place lines like the following in the file:
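The usual workaround follows from this: rewrite the bind-mounted file's contents in place instead of renaming over it. A minimal sketch on a stand-in file (inside a container the target would be /etc/hosts itself):

```shell
# Writing through the existing inode works even on a bind mount;
# only rename (mv) fails with "Device or resource busy".
target=/tmp/hosts.demo              # stand-in for the bind-mounted /etc/hosts
printf '172.17.0.223 313631c64e82\n' > "$target"

# append an extra entry through the same inode instead of replacing the file
printf '127.0.0.1 sandbox admin.fluigdata.com\n' >> "$target"

cat "$target"
```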

127.0.0.1 sandbox admin.fluigdata.com app.fluigdata.com XXX.fluigdata.com #+BEGIN_EXAMPLE

https://supermarket.chef.io/cookbooks/hostsfile

# TODO: Doesn't work on docker
hostsfile_entry '127.0.0.1' do
  hostname 'admin.fluigdata.com'
  aliases ['app.fluigdata.com', 'XXX.fluigdata.com']
  unique true
end

#+END_EXAMPLE
** DONE docker stats: docker stats docker-jenkins demo-all-in-one1 dev-all-in-one1
CLOSED: [2016-01-27 Wed 17:14]
#+BEGIN_EXAMPLE
CONTAINER            CPU %    MEM USAGE / LIMIT     MEM %    NET I/O               BLOCK I/O
demo-all-in-one1     30.10%   5.035 GB / 33.49 GB   15.04%   2.663 GB / 453.8 MB   17.73 GB / 30.72 GB
demo93-all-in-one1   3.14%    4.371 GB / 33.49 GB   13.05%   878.4 MB / 109.9 MB   2.403 GB / 15.03 GB
dev-all-in-one1      23.95%   4.226 GB / 33.49 GB   12.62%   8.882 GB / 1.18 GB    43.17 GB / 43.17 GB
docker-jenkins       0.73%    3.144 GB / 33.49 GB   9.39%    432.2 MB / 816.7 MB   8.013 GB / 16.86 GB
fuxin-all-in-one1    24.11%   4.815 GB / 33.49 GB   14.38%   6.48 GB / 132.6 MB    30.59 GB / 37.08 GB
qa-all-in-one1       40.81%   4.09 GB / 33.49 GB    12.21%   3.262 GB / 1.073 GB   27.27 GB / 30.53 GB
#+END_EXAMPLE
** DONE docker resume paused: docker unpause 4840a264a7f8
CLOSED: [2015-05-19 Tue 13:25]
#+BEGIN_EXAMPLE
[email protected]:# docker ps
CONTAINER ID   IMAGE                 COMMAND                CREATED             STATUS               PORTS                   NAMES
7259f6d6d981   6d0d83f7281e:latest   "/usr/sbin/sshd -D -   About an hour ago   Up About an hour     0.0.0.0:33005->22/tcp   jenkins-mdm
f7363c45828c   6d0d83f7281e:latest   "/usr/sbin/sshd -D -   4 hours ago         Up 4 hours           0.0.0.0:33004->22/tcp   all-in-one
4840a264a7f8   denny/sshd:v1         "/usr/sbin/sshd -D"    11 days ago         Up 7 days (Paused)   0.0.0.0:18000->18000/tcp, 0.0.0.0:4022->22/tcp, 0.0.0.0:48080->18080/tcp   clever_yonath
#+END_EXAMPLE
** TODO docker rm fail: Driver aufs failed to remove root filesystem: device or resource busy
#+BEGIN_EXAMPLE
[email protected]:# docker ps -a
CONTAINER ID   IMAGE                       COMMAND                CREATED              STATUS              PORTS                   NAMES
3c896e3a9dfc   db73d42e776e:latest         "/usr/sbin/sshd -D -   About a minute ago   Up About a minute   0.0.0.0:32812->22/tcp   couchbase-mdm
06e2c1fa5680   db73d42e776e:latest         "/usr/sbin/sshd -D -   5 minutes ago        Up 5 minutes        0.0.0.0:32811->22/tcp   nagios-mdm
47e44974beac   db73d42e776e:latest         "/usr/sbin/sshd -D -   6 minutes ago        Up 6 minutes        0.0.0.0:32810->22/tcp   all-in-one
d68d83da1363   denny/dennysandbox:latest   "/usr/sbin/sshd -D"    25 hours ago         Up 25 hours         0.0.0.0:7022->22/tcp    denny-sandbox
c440619a18ee   eb6e152ee835:latest         "/usr/sbin/sshd -D -   31 hours ago         Up 31 hours         0.0.0.0:32796->22/tcp   os-basic
05b3b50c1b9d   eb6e152ee835:latest         "/usr/sbin/sshd -D -   40 hours ago         Dead
79cbc5c7fb60   eb6e152ee835:latest         "/usr/sbin/sshd -D -   42 hours ago         Up 42 hours         0.0.0.0:32782->22/tcp   backup-mdm
4840a264a7f8   denny/sshd:v1               "/usr/sbin/sshd -D"    5 days ago           Up 2 days           0.0.0.0:18000->18000/tcp, 0.0.0.0:4022->22/tcp, 0.0.0.0:48080->18080/tcp   clever_yonath
fae9a07b4839   denny/sshd:latest           "/usr/sbin/sshd -D"    6 days ago           Up 2 days           0.0.0.0:6022->22/tcp    boring_leakey
[email protected]:# docker rm 05b3b50c1b9d
Error response from daemon: Cannot destroy container 05b3b50c1b9d: Driver aufs failed to remove root filesystem 05b3b50c1b9dfb366ee0463d8e2a2f5b2402dc49adb8ba89a2af40713af13c53: rename /var/lib/docker/aufs/diff/05b3b50c1b9dfb366ee0463d8e2a2f5b2402dc49adb8ba89a2af40713af13c53 /var/lib/docker/aufs/diff/05b3b50c1b9dfb366ee0463d8e2a2f5b2402dc49adb8ba89a2af40713af13c53-removing: device or resource busy
FATA[0000] Error: failed to remove one or more containers
#+END_EXAMPLE
** TODO detect whether env is docker container itself
** DONE [#A] docker push sandbox-test:latest fail :IMPORTANT:
CLOSED: [2015-06-26 Fri 16:46]
http://10.165.4.67:48080/view/MustPass/job/KitchenDockerTestAllCookbooks/265/console
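The TODO above on detecting whether the environment is itself a docker container can be sketched as below. This is a heuristic only: `/.dockerenv` and the docker/lxc markers in `/proc/1/cgroup` are implementation details of the runtime, not guaranteed interfaces.

```shell
# Heuristic: print "yes" if this environment looks like a docker container.
in_docker() {
    # docker creates /.dockerenv at the container filesystem root
    if [ -f /.dockerenv ]; then
        return 0
    fi
    # cgroup paths of PID 1 often mention docker or lxc inside a container
    if grep -qE '(docker|lxc)' /proc/1/cgroup 2>/dev/null; then
        return 0
    fi
    return 1
}

if in_docker; then
    verdict=yes
else
    verdict=no
fi
echo "$verdict"
```

Neither check is authoritative; cgroup v2 hosts and newer runtimes may need different markers.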

https://github.com/docker/docker/issues/12237
https://github.com/jboss-dockerfiles/wildfly/issues/4
https://github.com/docker/docker/issues/2461

docker run -t -d --privileged -p 9022:22 XXX/kitchendocker:v2 /usr/sbin/sshd -D
ssh -p 9022 [email protected]

docker commit -m "initial" -a "Denny[email protected]" b823b3ae8f54 XXX/sandbox-test:v1

wget https://raw.githubusercontent.com/TOTVS/mdmpublic/master/test/bootstrap_mdm_sandbox.sh

#+BEGIN_EXAMPLE
[email protected]:~# docker pull XXX/mdm:latest
latest: Pulling from XXX/mdm
9ea0c29266b3: Extracting [=============>              ] 491.5 kB/1.79 MB
9ea0c29266b3: Error downloading dependent layers
274566660d14: Download complete
b6ad1b495e32: Download complete
35b3fe2a3450: Download complete
c058296e31b2: Downloading [========>                   ] 19.46 MB/120.9 MB
c058296e31b2: Downloading [========================>   ] 60.01 MB/120.9 MB
9aa8aacdf83b: Downloading [=======================>    ] 60.55 MB/129.9 MB
93e083ff0c35: Download complete
17011c5f13b5: Downloading [============================>] 20.55 MB/35.5 MB
17011c5f13b5: Download complete
5a40679892c4: Download complete
6ef17911f6a3: Download complete
6ef17911f6a3: Error pulling image (latest) from XXX/mdm, ApplyLayer exit status 1 stdout: stderr: lstat /var/lib/jenkins/plugins/git/.wh..opq: operation not permitted
a62a42e77c9c: Download complete
2c014f14d3d9: Download complete
b7cf8f0d9e82: Download complete
c0e1b2dc3545: Download complete
509bbeda1bf3: Download complete
85e317e21694: Download complete
5a12872f8839: Download complete
01e7ee16f9af: Download complete
18294b13b72b: Download complete
287e252bf5f3: Download complete
3f5ba0eaf957: Download complete
4983ec8ad0a6: Download complete
Error pulling image (latest) from XXX/mdm, ApplyLayer exit status 1 stdout: stderr: lstat /var/lib/jenkins/plugins/git/.wh..opq: operation not permitted
#+END_EXAMPLE
** TODO [#A] docker jenkins container not responding
** DONE docker mysql status: error while loading shared libraries: libaio.so.1
CLOSED: [2016-01-10 Sun 15:06]
Fix: removed --privileged when starting the docker container

https://github.com/docker/docker/issues/7512
http://stackoverflow.com/questions/22473830/docker-and-mysql-libz-so-1-cannot-open-shared-object-file-permission-denied

#+BEGIN_EXAMPLE
[email protected]:/# service mysql status
/usr/sbin/mysqld: error while loading shared libraries: libaio.so.1: cannot open shared object file: permission denied
 * MySQL is stopped.
#+END_EXAMPLE
** DONE docker fail to remove a docker image with a tag: specify the tag name, instead of the image id
CLOSED: [2016-04-12 Tue 18:19]
https://github.com/docker/docker/issues/1530
#+BEGIN_EXAMPLE
[email protected]:~# docker images
REPOSITORY        TAG     IMAGE ID       CREATED      SIZE
denny/devubuntu   denny   ee6f3e4874c1   3 days ago   1.494 GB
denny/devubuntu   v1      ee6f3e4874c1   3 days ago   1.494 GB

[email protected]:# docker rmi ee6f3e4874c1
Failed to remove image (ee6f3e4874c1): Error response from daemon: conflict: unable to delete ee6f3e4874c1 (must be forced) - image is referenced in one or more repositories
[email protected]:# docker rmi denny/devubuntu:denny
Untagged: denny/devubuntu:denny
[email protected]:~# docker rmi ee6f3e4874c1
Untagged: denny/devubuntu:v1
Deleted: sha256:ee6f3e4874c12216b1719c75cb003d1f2eefc24c19d7ba386366aa403459778f
Deleted: sha256:82be248a5e558770bd61fd6469459c4353f0c41d32dbb9632905ea1c8b3ba6c3
Deleted: sha256:eeb709b939fb21f58c4a8f6f3b145246ca32b5090649092b9626099a9b43c8d4
Deleted: sha256:da2a6fcd16f339e0ad1db8586353547cf1a667dd9616d3017cb525b1179db6d8
Deleted: sha256:a83b37baef06701da1fe9a0bda9ce5bd85d5246e74fbf0c5ca42151dab76cdfb
Deleted: sha256:3c5906c5da6e26e01f66a87b0445a0064b7463bd6a0e667ce7ef9ae12b69e0fb
Deleted: sha256:9aeaf099cf304e791194f1ae2ebf334dcce4c9f67dab3720f5737ad44a8b52fb
#+END_EXAMPLE
** DONE lesson learned: start docker container with /etc/hosts and private ip fixed
CLOSED: [2016-04-18 Mon 10:11]
** DONE docker daemon uses Chinese DNS
CLOSED: [2016-04-18 Mon 14:24]
cat /etc/resolv.conf

#+BEGIN_EXAMPLE
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
options timeout:1 attempts:1 rotate
nameserver 10.202.72.116
nameserver 10.202.72.118
#+END_EXAMPLE
** TODO docker aufs: backing file system is unsupported for this graph driver
** DONE [#A] Setup in-house docker hub to speed up the test
CLOSED: [2015-07-18 Sat 15:29]
ls -lth /var/docker-registry/registry/images/
*** DONE docker: how to create a public docker hub registry server
CLOSED: [2015-10-26 Mon 14:12]
https://authright.atlassian.net/wiki/pages/viewpage.action?pageId=9470037
#+BEGIN_EXAMPLE
Create the docker hub registry server
Setup the sandbox solution
Start a docker container to run the private hub registry:
docker run -t -d --privileged -h iamregistry --name iam-registry -p 5022:22 -p 8080:8080 denny/osc:latest /usr/sbin/sshd -D
In Jenkins, run the DeployAllInOne job with the parameters below:

Project_name: docker-hub-registry-auth
chef_json: {"run_list": ["recipe[docker-registry2]"]}
check_command: leave it empty
Note: the first run may fail with the error below. Retry once and it should be OK.
09:12:00 [2015-10-26T09:12:00+08:00] WARN: Ohai::Config[:plugin_path] is set. Ohai::Config[:plugin_path] is deprecated and will be removed in future releases of ohai. Use ohai.plugin_path in your configuration file to configure :plugin_path for ohai.
09:12:00 [2015-10-26T09:12:00+08:00] WARN: Ohai::Config[:plugin_path] is set. Ohai::Config[:plugin_path] is deprecated and will be removed in future releases of ohai. Use ohai.plugin_path in your configuration file to configure :plugin_path for ohai.
09:12:00 [2015-10-26T09:12:00+08:00] ERROR: Encountered error while running plugins: #<Ohai::Exceptions::AttributeNotFound: No such attribute: 'nginx'>
09:12:00 ================================================================================
09:12:00 Error executing action reload on resource 'ohai[reload_nginx]'
09:12:00 ================================================================================
09:12:00
09:12:00 Ohai::Exceptions::AttributeNotFound
09:12:00 -----------------------------------
09:12:00 No such attribute: 'nginx'
09:12:00
09:12:00 Resource Declaration:
09:12:00 ---------------------
09:12:00 # In /root/test/master/iamdevops/cookbooks/nginx/recipes/ohai_plugin.rb
09:12:00
09:12:00  22: ohai 'reload_nginx' do
09:12:00  23:   plugin 'nginx'
09:12:00  24:   action :nothing
09:12:00  25: end
09:12:00  26:
09:12:00
09:12:00 Compiled Resource:
Append to /etc/hosts:
echo "127.0.0.1 www.testdocker.com" >> /etc/hosts
Log into the docker daemon server, then push the existing docker image to the newly created docker hub server:
docker tag -f denny/osc:latest www.testdocker.com:8080/osc:latest
docker login -u mydocker -p dockerpasswd -e ' ' https://www.testdocker.com:8080
docker push www.testdocker.com:8080/osc
How Docker uses a self-hosted docker hub registry server
#+END_EXAMPLE
*** DONE How Docker uses a self-hosted docker hub registry server
CLOSED: [2015-10-26 Mon 14:12]
https://authright.atlassian.net/wiki/pages/viewpage.action?pageId=9863173
#+BEGIN_EXAMPLE
wiki: how to create a public docker hub registry server

Client: use the docker hub registry server
Install docker:
wget -qO- https://get.docker.com/ | sh
Confirm the docker service is up and running:
service docker status
Change /etc/hosts:
echo "101.251.244.78 www.testdocker.com" >> /etc/hosts
Please customize the ip above to that of your docker hub registry server
Install the self-signed certificate:
sudo mkdir /usr/local/share/ca-certificates/docker-dev-cert
cat > /usr/local/share/ca-certificates/docker-dev-cert/devdockerCA.crt <<EOF
-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJAPvIhXM0kD6kMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQwHhcNMTUwNzE4MDU1MjQ1WhcNNDIxMjAzMDU1MjQ1WjBF
MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50
ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAsf1Owc4D073Qlq3f9fp6PKlclXBC0HdLFx0F+6cICCj1/UMGlMECXvAr
+mWQaRfIHTxOumurmgV3wigX1VaoWiXYfYyD1jfTPHAP1fLA6wol9VB2+rR/i03x
z6AoNPNZARUoxeShfDho7SsSjm1b7Hu7y2um6Ed1JEn7THJrpB4dBd38VUg2vQwN
nsIhvE+ubzZelZUn9vrMTavlPkeCJu0xJuhCbSD6WdB0gL1I79XF42bGk2cSUrNO
o4AHwQzmA9bFbpLCQXqTJdkZv4/SyOlljUXTkqR2JBIuv7G+SaTgkrChX9neBy4n
aQ5sZZXqE1CVqDlX0BAXbGqbWWhW/QIDAQABo1AwTjAdBgNVHQ4EFgQUB8fUo0mM
4qJKs8pmDdPwlxubvQEwHwYDVR0jBBgwFoAUB8fUo0mM4qJKs8pmDdPwlxubvQEw
DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAbFMxM02/bCXUqrvwWNWr
Nv5DtPLXiwAEOA2sm7PNRnPemWLrhxpmmAGMfanL9Hj776zj+XMV0nCE3WAG5HTV
j1VdRMfshPmGmo+Jyl4pmQdUdm3FTAQcaTSP0lVSVQUYQ6xogBxVBQMEn/zm0UeL
BHvUltkhgmZN1Iz996pwztOngBBffCGX0ylvUWczySgjULzY/I2Lf2Cu4iQinnLy
8MWXWEzbYKy5zLG9hXO3yorIzrPLFy0jVccqY12SKhKdzlFT8O1b67x9ZFteMHuy
383mAn6tSSq7/u3OvtX7NTxaGAw1HVpWkEc8pp5SZtA3Vi6ihf04/145YY1FNk/M
OA==
-----END CERTIFICATE-----
EOF
sudo update-ca-certificates
Log in to our public docker hub registry server:
docker login -u mydocker -p dockerpasswd -e ' ' https://www.testdocker.com:8080
Pull a docker image:
docker pull www.testdocker.com:8080/osc

#+END_EXAMPLE
*** install
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04

gunicorn --access-logfile /var/log/docker-registry/access.log --error-logfile /var/log/docker-registry/server.log -k gevent --max-requests 100 --graceful-timeout 3600 -t 3600 -b localhost:5000 -w 8 docker_registry.wsgi:application

--8<-------------------------- separator ------------------------>8--

ssh -p 4022 [email protected]

docker server

ssh -i /home/denny/denny [email protected]

docker run -t -d --privileged -p 5022:22 denny/sshd:latest /usr/sbin/sshd -D
ssh -p 5022 [email protected]

Step One - Install Prerequisites

apt-get update
apt-get -y install build-essential python-dev libevent-dev python-pip liblzma-dev
apt-get -y install swig
apt-get -y install openssl
apt-get -y install libssl-dev
sudo apt-get -y install nginx apache2-utils

Step Two - Install and Configure Docker Registry

sudo pip install docker-registry

gunicorn --access-logfile - --debug -k gevent -b 0.0.0.0:5000 -w 1 docker_registry.wsgi:application
cd /usr/local/lib/python2.7/dist-packages/docker_registry/lib/../../config/
sudo cp config_sample.yml config.yml

create a more permanent folder to store our data:

sudo mkdir /var/docker-registry

apt-get install -y vim curl lsof

vim config.yml

www.dennydocker.com
*** # --8<-------------------------- separator ------------------------>8--
*** DONE [#A] setup for private hub :IMPORTANT:
CLOSED: [2015-07-18 Sat 15:28]
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04
**** start a server container
start by the docker-hub-registry-auth cookbook
specify the port mapping from 8080 to 8080 in the kitchen yaml file

On the first run, we may hit an ohai issue loading the nginx attribute
**** In client nodes, change /etc/hosts and add the ssl certificate
echo "172.17.42.1 www.testdocker.com" >> /etc/hosts
sudo mkdir /usr/local/share/ca-certificates/docker-dev-cert
cat > /usr/local/share/ca-certificates/docker-dev-cert/devdockerCA.crt <<EOF
-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJAPvIhXM0kD6kMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQwHhcNMTUwNzE4MDU1MjQ1WhcNNDIxMjAzMDU1MjQ1WjBF
MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50
ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAsf1Owc4D073Qlq3f9fp6PKlclXBC0HdLFx0F+6cICCj1/UMGlMECXvAr
+mWQaRfIHTxOumurmgV3wigX1VaoWiXYfYyD1jfTPHAP1fLA6wol9VB2+rR/i03x
z6AoNPNZARUoxeShfDho7SsSjm1b7Hu7y2um6Ed1JEn7THJrpB4dBd38VUg2vQwN
nsIhvE+ubzZelZUn9vrMTavlPkeCJu0xJuhCbSD6WdB0gL1I79XF42bGk2cSUrNO
o4AHwQzmA9bFbpLCQXqTJdkZv4/SyOlljUXTkqR2JBIuv7G+SaTgkrChX9neBy4n
aQ5sZZXqE1CVqDlX0BAXbGqbWWhW/QIDAQABo1AwTjAdBgNVHQ4EFgQUB8fUo0mM
4qJKs8pmDdPwlxubvQEwHwYDVR0jBBgwFoAUB8fUo0mM4qJKs8pmDdPwlxubvQEw
DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAbFMxM02/bCXUqrvwWNWr
Nv5DtPLXiwAEOA2sm7PNRnPemWLrhxpmmAGMfanL9Hj776zj+XMV0nCE3WAG5HTV
j1VdRMfshPmGmo+Jyl4pmQdUdm3FTAQcaTSP0lVSVQUYQ6xogBxVBQMEn/zm0UeL
BHvUltkhgmZN1Iz996pwztOngBBffCGX0ylvUWczySgjULzY/I2Lf2Cu4iQinnLy
8MWXWEzbYKy5zLG9hXO3yorIzrPLFy0jVccqY12SKhKdzlFT8O1b67x9ZFteMHuy
383mAn6tSSq7/u3OvtX7NTxaGAw1HVpWkEc8pp5SZtA3Vi6ihf04/145YY1FNk/M
OA==
-----END CERTIFICATE-----
EOF
sudo update-ca-certificates

try curl

curl https://mydocker:dockerpasswd@www.testdocker.com:8080
**** docker login
docker login https://www.testdocker.com:8080

docker tag denny/osc:latest www.testdocker.com:8080/osc
docker push www.testdocker.com:8080/osc
*** web page: How To Set Up a Private Docker Registry on Ubuntu 14.04 | DigitalOcean
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04
**** webcontent :noexport:
#+begin_example
Location: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04

By: Nik van der Ploeg, Oct 15, 2014

How To Set Up a Private Docker Registry on Ubuntu 14.04

Tags: Docker, Nginx Distribution: Ubuntu

Introduction

Docker is a great tool for deploying your servers. While docker.io lets you upload your Docker creations to their registry for free, anything you upload is also public. This probably isn't what you want for a non-open source-project.

This guide will show you how to set up and secure your own private Docker registry. By the end of this tutorial you will be able to push a custom Docker image to your private registry, and pull the image securely from a different host.

This tutorial doesn't cover containerizing your own application, but only how to create the registry where you can store your deployments. If you want to learn how to get started with Docker itself (as opposed to the registry), you may want to read the tutorial here.

This tutorial has been tested with all servers (one registry and one client) running Ubuntu 14.04, but may work with other Debian-based distros.

Docker Concepts

If you haven't used Docker before then it's worth taking a minute to go through a few of Docker's key concepts. If you're already using Docker and just want to know how to get started running your own registry, then please skip ahead to the next section.

For a refresher on how to use Docker, take a look at the excellent Docker Cheat Sheet here.

Docker at its core is a way to separate an application and the dependencies needed to run it from the operating system itself. To make this possible Docker uses containers and images. A Docker image is basically a template for a filesystem. When you run a Docker image with the docker run command, an instance of this filesystem is made live, and runs on your system inside a Docker container. By default this container can't touch the original image itself, or the filesystem of the host where docker is running. It's a self-contained environment.

Whatever changes you make in the container are preserved in that container itself, and don't affect the original image. If you decide you want to keep those changes, then you can "commit" a container to a Docker image (via the docker commit command). This means you can then spawn new containers that start with the contents of your old container, without affecting the original container (or image). If you're familiar with git then the workflow should seem quite similar: you can create new branches (images in Docker parlance) from any container. Running an image is a bit like doing a git checkout.

To continue the analogy, running a private Docker registry is like running a private Git repository for your Docker images.

Step One - Install Prerequisites

You should create a user with sudo access on the registry server (and on the clients when you get that far).

The Docker registry is a Python application, so to get it up and running we need to install the Python development utilities and a few libraries:

sudo apt-get update

sudo apt-get -y install build-essential python-dev libevent-dev python-pip liblzma-dev

Step Two - Install and Configure Docker Registry

To install the latest stable release of the Docker registry (0.7.3 at the time of writing) we'll use Python's package management utility pip:

sudo pip install docker-registry

Docker-registry requires a configuration file.

pip by default installs this config file in a rather obscure location, which can differ depending on how your system's Python is installed. So, to find the path, we'll attempt to run the registry and let it complain:

gunicorn --access-logfile - --debug -k gevent -b 0.0.0.0:5000 -w 1 docker_registry.wsgi:application

Since the config file isn't in the right place yet it will fail to start and spit out an error message that contains a FileNotFoundError that looks like this:

FileNotFoundError: Heads-up! File is missing: /usr/local/lib/python2.7/dist-packages/docker_registry/lib/../../config/config.yml

The registry includes a sample config file called config_sample.yml at the same path, so we can use the path it gave us to locate the sample file.

Copy the path from the error message (in this case /usr/local/lib/python2.7/dist-packages/docker_registry/lib/../../config/config.yml), and remove the config.yml portion so we can change to that directory:
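The "remove the config.yml portion" step is just a suffix strip; a tiny sketch in plain sh, using the path from the error message above:

```shell
# Derive the config directory from the path reported in the error message.
errpath='/usr/local/lib/python2.7/dist-packages/docker_registry/lib/../../config/config.yml'
cfgdir=${errpath%config.yml}   # strip the trailing file name, keep the directory
echo "$cfgdir"
```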

cd /usr/local/lib/python2.7/dist-packages/docker_registry/lib/../../config/

Now copy the config_sample.yml file to config.yml:

sudo cp config_sample.yml config.yml

Docker by default saves its data under the /tmp directory, which can lead to unpleasantness since the /tmp folder is cleared on reboot on many flavors of Linux. Let's create a more permanent folder to store our data:

sudo mkdir /var/docker-registry

Now we'll edit the config.yml file to update any references to /tmp to /var/docker-registry. First look for a line near the top of the file that starts with sqlalchemy_index_database:

sqlalchemy_index_database: _env:SQLALCHEMY_INDEX_DATABASE:sqlite:////tmp/docker-registry.db

Change it to point to /var/docker-registry like so:

sqlalchemy_index_database: _env:SQLALCHEMY_INDEX_DATABASE:sqlite:////var/docker-registry/docker-registry.db

Look a bit further down the file for the local: section, and repeat the process, changing this:

local: &local
    storage: local
    storage_path: _env:STORAGE_PATH:/tmp/registry

To this:

local: &local
    storage: local
    storage_path: _env:STORAGE_PATH:/var/docker-registry/registry

The other default values in the sample config are fine, so no need to change anything there. Feel free to look through them. If you want to do something more complex like using external storage for your Docker data, this file is the place to set it up. That's outside the scope of this tutorial though, so you'll have to check the docker-registry documentation if you want to go that route.
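Both /tmp to /var/docker-registry edits above can also be applied non-interactively with sed. A sketch, demonstrated on a miniature stand-in file rather than the real config.yml (back the real file up before trying this):

```shell
# Sketch: apply both config.yml edits with sed, shown on a stand-in file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
sqlalchemy_index_database: _env:SQLALCHEMY_INDEX_DATABASE:sqlite:////tmp/docker-registry.db
local: &local
    storage: local
    storage_path: _env:STORAGE_PATH:/tmp/registry
EOF

# Point the sqlite index and the local storage path at /var/docker-registry
sed -i \
    -e 's|sqlite:////tmp/docker-registry.db|sqlite:////var/docker-registry/docker-registry.db|' \
    -e 's|_env:STORAGE_PATH:/tmp/registry|_env:STORAGE_PATH:/var/docker-registry/registry|' \
    "$cfg"

cat "$cfg"
```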

Now that the config is in the right place let's try to test the server again:

gunicorn --access-logfile - --debug -k gevent -b 0.0.0.0:5000 -w 1 docker_registry.wsgi:application

You should see output that looks like this:

2014-07-27 07:12:24 [29344] [INFO] Starting gunicorn 18.0
2014-07-27 07:12:24 [29344] [INFO] Listening at: http://0.0.0.0:5000 (29344)
2014-07-27 07:12:24 [29344] [INFO] Using worker: gevent
2014-07-27 07:12:24 [29349] [INFO] Booting worker with pid: 29349
2014-07-27 07:12:24,807 DEBUG: Will return docker-registry.drivers.file.Storage

Great! Now we have a Docker registry running. Go ahead and kill it with Ctrl+C.

At this point the registry isn't that useful yet - it won't start unless you type in the above gunicorn command. Also, Docker registry doesn't come with any built-in authentication mechanism, so it's insecure and completely open to the public right now.

Step Three - Start Docker Registry as a Service

Let's set the registry to start on system startup by creating an Upstart script.

First let's create a directory for the log files to live in:

sudo mkdir -p /var/log/docker-registry

Then use your favorite text editor to create an Upstart script:

sudo nano /etc/init/docker-registry.conf

Add the following contents to create the Upstart script:

description "Docker Registry"

start on runlevel [2345]
stop on runlevel [016]

respawn
respawn limit 10 5

script
    exec gunicorn --access-logfile /var/log/docker-registry/access.log --error-logfile /var/log/docker-registry/server.log -k gevent --max-requests 100 --graceful-timeout 3600 -t 3600 -b localhost:5000 -w 8 docker_registry.wsgi:application
end script

For more about Upstart scripts, please read this tutorial.

If you run:

sudo service docker-registry start

You should see something like this:

docker-registry start/running, process 25303

You can verify that the server is running by taking a look at the server.log file like so:

tail /var/log/docker-registry/server.log

If all is well you'll see text similar to the output from our previous gunicorn test above.

Now that the server's running in the background, let's move on to configuring Nginx so the registry is secure.

Step Four - Secure Your Docker Registry with Nginx

The first step is to set up authentication so that not just anybody can log into our server.

Let's install Nginx and the apache2-utils package (which allows us to easily create authentication files that Nginx can read).

sudo apt-get -y install nginx apache2-utils

Now it's time to create our Docker users.

Create the first user as follows:

sudo htpasswd -c /etc/nginx/docker-registry.htpasswd USERNAME

Create a new password for this user when prompted.

If you want to add more users in the future, just re-run the above command without the -c option:

sudo htpasswd /etc/nginx/docker-registry.htpasswd USERNAME_2

At this point we have a docker-registry.htpasswd file with our users set up, and a Docker registry available. You can take a peek at the file at any point if you want to view your users (and remove users if you want to revoke access).
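The htpasswd entries can also be generated non-interactively with openssl, which supports the same apr1 hashing scheme htpasswd uses. A sketch; the file path and the mydocker/nik users here are examples only:

```shell
# Sketch: build an htpasswd-style file without interactive prompts.
passfile=$(mktemp)

add_user() {
    user="$1"; pass="$2"
    # one "user:hash" line per user, as nginx's auth_basic_user_file expects
    printf '%s:%s\n' "$user" "$(openssl passwd -apr1 "$pass")" >> "$passfile"
}

add_user mydocker dockerpasswd
add_user nik test

cat "$passfile"
```

Passing passwords on the command line exposes them to other local users via the process list, so prefer the interactive htpasswd prompt on shared machines.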

Next we need to tell Nginx to use that authentication file, and to forward requests to our Docker registry.

Let's create an Nginx configuration file. Create a new docker-registry file, entering your sudo password if needed:

sudo nano /etc/nginx/sites-available/docker-registry

Add the following content. Comments are in-line. For more about Nginx virtual host configuration files, see this tutorial.

# For versions of Nginx > 1.3.9 that include chunked transfer encoding support

# Replace with appropriate values where necessary

upstream docker-registry { server localhost:5000; }

server {
  listen 8080;
  server_name my.docker.registry.com;

  # ssl on;
  # ssl_certificate /etc/ssl/certs/docker-registry;
  # ssl_certificate_key /etc/ssl/private/docker-registry;

proxy_set_header Host $http_host;        # required for Docker client sake
proxy_set_header X-Real-IP $remote_addr; # pass on real client IP

client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

# required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)

chunked_transfer_encoding on;

location / {
  # let Nginx know about our auth file
  auth_basic "Restricted";
  auth_basic_user_file docker-registry.htpasswd;

  proxy_pass http://docker-registry;
}

location /_ping {
  auth_basic off;
  proxy_pass http://docker-registry;
}

location /v1/_ping {
  auth_basic off;
  proxy_pass http://docker-registry;
}

}

And link it up so that Nginx can use it:

sudo ln -s /etc/nginx/sites-available/docker-registry /etc/nginx/sites-enabled/docker-registry

Then restart Nginx to activate the virtual host configuration:

sudo service nginx restart

Let's make sure everything worked. Our Nginx server is listening on port 8080, while our original docker-registry server is listening on localhost port 5000.

We can use curl to see if everything is working:

curl localhost:5000

You should see something like the following:

"docker-registry server (dev) (v0.8.1)"

Great, so Docker is running. Now to check if Nginx worked:

curl localhost:8080

This time you'll get back the HTML of an unauthorized message:

<title>401 Authorization Required</title>

401 Authorization Required


nginx/1.4.6 (Ubuntu)

It's worthwhile to run these two test commands from a remote machine as well, using the server's IP address instead of localhost, to verify that your ports are set up correctly.

In the Upstart config file we told docker-registry to listen only on localhost, which means it shouldn't be accessible from the outside on port 5000. Nginx, on the other hand, is listening on port 8080 on all interfaces, and should be accessible from the outside. If it isn't then you may need to adjust your firewall permissions.

Good, so authentication is up. Let's try to log in now with one of the usernames you created earlier:

curl USERNAME:[email protected]:8080

If it worked correctly you should now see:

"docker-registry server (dev) (v0.8.1)"

Step Five - Set Up SSL

At this point we have the registry up and running behind Nginx with HTTP basic authentication working. However, the setup is still not very secure since the connections are unencrypted. You might have noticed the commented-out SSL lines in the Nginx config file we made earlier.

Let's enable them. First, open the Nginx configuration file for editing:

sudo nano /etc/nginx/sites-available/docker-registry

Use the arrow keys to move around and look for these lines:

server {
  listen 8080;
  server_name my.docker.registry.com;

  # ssl on;
  # ssl_certificate /etc/ssl/certs/docker-registry;
  # ssl_certificate_key /etc/ssl/private/docker-registry;

Uncomment the SSL lines by removing the # symbols in front of them. If you have a domain name set up for your server, change the server_name to your domain name while you're at it. When you're done the file should look like this:

server {
  listen 8080;
  server_name yourdomain.com;

  ssl on;
  ssl_certificate /etc/ssl/certs/docker-registry;
  ssl_certificate_key /etc/ssl/private/docker-registry;

Save the file. Nginx is now configured to use SSL and will look for the SSL certificate and key files at /etc/ssl/certs/docker-registry and /etc/ssl/private/docker-registry respectively.

If you already have an SSL certificate set up or are planning to buy one, then you can just copy the certificate and key files to the paths listed above (ssl_certificate and ssl_certificate_key).

You could also get a free signed SSL certificate.

Or, use a self-signed SSL certificate. Since Docker currently doesn't allow you to use self-signed SSL certificates this is a bit more complicated than usual, since we'll also have to set up our system to act as our own certificate signing authority.

Signing Your Own Certificate

First let's make a directory to store the new certificates and go there:

mkdir ~/certs
cd ~/certs

Generate a new root key:

openssl genrsa -out devdockerCA.key 2048

Generate a root certificate (enter whatever you'd like at the prompts):

openssl req -x509 -new -nodes -key devdockerCA.key -days 10000 -out devdockerCA.crt

Then generate a key for your server (this is the file we'll later copy to /etc/ssl/private/docker-registry for Nginx to use):

openssl genrsa -out dev-docker-registry.com.key 2048

Now we have to make a certificate signing request.

After you type this command OpenSSL will prompt you to answer a few questions. Write whatever you'd like for the first few, but when OpenSSL prompts you to enter the "Common Name" make sure to type in the domain of your server.

openssl req -new -key dev-docker-registry.com.key -out dev-docker-registry.com.csr

For example, if your Docker registry is going to be running on the domain www.ilovedocker.com, then your input should look like this:

Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (e.g. server FQDN or YOUR name) []:www.ilovedocker.com Email Address []:

Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []:

Do not enter a challenge password. Then we need to sign the certificate request:

openssl x509 -req -in dev-docker-registry.com.csr -CA devdockerCA.crt -CAkey devdockerCA.key -CAcreateserial -out dev-docker-registry.com.crt -days 10000

Now that we've generated all the files we need for our certificate to work, we need to copy them to the correct places.

First copy the certificate and key to the paths where Nginx is expecting them to be:

sudo cp dev-docker-registry.com.crt /etc/ssl/certs/docker-registry
sudo cp dev-docker-registry.com.key /etc/ssl/private/docker-registry

Since the certificates we just generated aren't verified by any known certificate authority (e.g., VeriSign), we need to tell any clients that are going to be using this Docker registry that this is a legitimate certificate. Let's do this locally so that we can use Docker from the Docker registry server itself:

sudo mkdir /usr/local/share/ca-certificates/docker-dev-cert
sudo cp devdockerCA.crt /usr/local/share/ca-certificates/docker-dev-cert
sudo update-ca-certificates

You'll have to repeat this step for every machine that connects to this Docker registry! Otherwise you will get SSL errors and be unable to connect. These steps are shown in the client test section as well.
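The signing steps above can be chained into one non-interactive script by passing -subj instead of answering the prompts. A sketch using the tutorial's file names and the example domain www.ilovedocker.com, run in a scratch directory:

```shell
# Sketch: generate the CA, server key, CSR, and signed cert non-interactively.
dir=$(mktemp -d)
cd "$dir"

# 1. root CA key + self-signed CA certificate (-subj replaces the prompts)
openssl genrsa -out devdockerCA.key 2048
openssl req -x509 -new -nodes -key devdockerCA.key -days 10000 \
    -subj "/C=AU/O=Internet Widgits Pty Ltd" -out devdockerCA.crt

# 2. server key + CSR; the CN must match the registry's domain
openssl genrsa -out dev-docker-registry.com.key 2048
openssl req -new -key dev-docker-registry.com.key \
    -subj "/CN=www.ilovedocker.com" -out dev-docker-registry.com.csr

# 3. sign the CSR with the CA
openssl x509 -req -in dev-docker-registry.com.csr -CA devdockerCA.crt \
    -CAkey devdockerCA.key -CAcreateserial \
    -out dev-docker-registry.com.crt -days 10000

# the signed cert should verify against our own CA
openssl verify -CAfile devdockerCA.crt dev-docker-registry.com.crt
```

The resulting files then go to the Nginx paths from the tutorial (dev-docker-registry.com.crt and .key) and devdockerCA.crt to each client's trust store.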

SSL Test

Let's restart Nginx to reload the configuration and SSL keys:

sudo service nginx restart

Do another curl test (only this time using https) to verify that our SSL setup is working properly. Keep in mind that for SSL to work correctly you will have to use the same domain name you typed into the Common Name field earlier while you were creating your SSL certificate.

curl https://USERNAME:[email protected]:8080

For example, if the user and password you set up were nik and test, and your SSL certificate is for www.ilovedocker.com, then you would type the following:

curl https://nik:test@www.ilovedocker.com:8080

If all went well, you should see the familiar:

"docker-registry server (dev) (v0.8.1)"

If not, recheck the SSL steps and your Nginx configuration file to make sure everything is correct.

Now we have a Docker registry running behind an Nginx server which is providing authentication and encryption via SSL.

Step Six - Access Your Docker Registry from Another Machine

To access your Docker registry, first add the SSL certificate you created earlier to the new client machine. The file you want is located at ~/certs/devdockerCA.crt. You can copy it to the new machine directly or use the below instructions to copy and paste it:

On the registry server, view the certificate:

cat ~/certs/devdockerCA.crt

You'll get output that looks something like this:

-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJANiXy7fHSPrmMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQwHhcNMTQwOTIxMDYwODE2WhcNNDIwMjA2MDYwODE2WjBF
MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50
ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAuK4kNFaY3k/0RdKRK1XLj9+IrpR7WW5lrNaFB0OIiItHV9FjyuSWK2mj
ObR1IWJNrVSqWvfZ/CLGay6Lp9DJvBbpT68dhuS5xbVw3bs3ghB24TntDYhHMAc8
GWor/ZQTzjccHUd1SJxt5mGXalNHUharkLd8mv4fAb7Mh/7AFP32W4X+scPE2bVH
OJ1qH8ACo7pSVl1Ohcri6sMp01GoELyykpXu5azhuCnfXLRyuOvQb7llV5WyKhq+
SjcE3c2C+hCCC5g6IzRcMEg336Ktn5su+kK6c0hoD0PR/W0PtwgH4XlNdpVFqMST
vthEG+Hv6xVGGH+nTszN7F9ugVMxewIDAQABo1AwTjAdBgNVHQ4EFgQULek+WVyK
dJk3JIHoI4iVi0FPtdwwHwYDVR0jBBgwFoAULek+WVyKdJk3JIHoI4iVi0FPtdww
DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAkignESZcgr4dBmVZqDwh
YsrKeWSkj+5p9eW5hCHJ5Eg2X8oGTgItuLaLfyFWPS3MYWWMzggxgKMOQM+9o3+k
oH5sUmraNzI3TmAtkqd/8isXzBUV661BbSV0obAgF/ul5v3Tl5uBbCXObC+NUikM
O0C3fDmmeK799AM/hP5CTDehNaFXABGoVRMSlGYe8hZqap/Jm6AaKThV4g6n4F7M
u5wYtI9YDMsxeVW6OP9ZfvpGZW/n/88MSFjMlBjFfFsorfRd6P5WADhdfA6CBECG
LP83r7/MhqO06EOpsv4n2CJ3yoyqIr1L1+6C7Erl2em/jfOb/24y63dj/ATytt2H
6g==
-----END CERTIFICATE-----

Copy that output to your clipboard and connect to your client machine.
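Alternatively, if you have SSH access between the machines, copying the file directly is less error-prone than copy/paste. A dry-run sketch (the command is printed rather than executed; the client address is a placeholder):

```shell
# Dry-run: print the scp command to copy the CA cert to a client machine.
# CLIENT is a hypothetical placeholder -- substitute your own user@host.
CLIENT="user@client.example.com"
echo "scp ~/certs/devdockerCA.crt $CLIENT:/tmp/devdockerCA.crt"
```

On the client, move the file into /usr/local/share/ca-certificates/docker-dev-cert/ before running update-ca-certificates, as in the steps below.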

On the client server, create the certificate directory:

sudo mkdir /usr/local/share/ca-certificates/docker-dev-cert

Open the certificate file for editing:

nano /usr/local/share/ca-certificates/docker-dev-cert/devdockerCA.crt

Paste the certificate contents.

Verify that the file saved to the client machine correctly by viewing the file:

cat /usr/local/share/ca-certificates/docker-dev-cert/devdockerCA.crt

If everything worked properly you'll see the same text from earlier:

-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJANiXy7fHSPrmMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
...
...
LP83r7/MhqO06EOpsv4n2CJ3yoyqIr1L1+6C7Erl2em/jfOb/24y63dj/ATytt2H
6g==
-----END CERTIFICATE-----

Now update the certificates:

sudo update-ca-certificates

You should get output that looks like the following (note the "1 added"):

Updating certificates in /etc/ssl/certs... 1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....done.

If you don't have Docker installed on the client yet, do so now.

On most versions of Ubuntu you can quickly install a recent version of Docker by running the next few commands. If your client is on a different distro, or you run into issues, see Docker's installation documentation for other ways to install Docker.

Add the repository key:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9

Create a file to list the Docker repository:

sudo nano /etc/apt/sources.list.d/docker.list

Add the following line to the file:

deb https://get.docker.io/ubuntu docker main

Update your package lists:

sudo apt-get update

Install Docker:

sudo apt-get install -y --force-yes lxc-docker

To make working with Docker a little easier, let's add our current user to the Docker group and re-open a new shell:

sudo gpasswd -a ${USER} docker
sudo su -l $USER   # enter your password at the prompt if needed

Restart Docker to make sure it reloads the system's CA certificates:

sudo service docker restart

You should now be able to log in to your Docker registry from the client machine:

docker login https://YOUR-HOSTNAME:8080

Note that you're using https:// and port 8080 here. Enter the username and password you set up earlier (enter whatever you'd like for email if prompted). You should see a Login Succeeded message.

At this point your Docker registry is up and running! Let's make a test image to push to the registry.

Step Seven - Publish to Your Docker Registry

On the client server, create a small empty image to push to our new registry.

docker run -t -i ubuntu /bin/bash

After it finishes downloading you'll be inside a Docker prompt. Let's make a quick change to the filesystem:

touch /SUCCESS

Exit out of the Docker container:

exit

Commit the change:

docker commit $(docker ps -lq) test-image

If you run docker images now, you'll see that you have a new test-image in the image list:

docker images

REPOSITORY    TAG       IMAGE ID        CREATED          VIRTUAL SIZE
test-image    latest    1f3ce8008165    9 seconds ago    192.7 MB
ubuntu        trusty    ba5877dc9bec    11 days ago      192.7 MB

This image only exists locally right now, so let's push it to the new registry we've created.

First, log in to the registry with Docker. Note that you want to use https:// and port 8080:

docker login https://YOUR-DOMAIN:8080

Enter the username and password you set up earlier:

Username: USERNAME
Password: PASSWORD
Email:
Account created. Please see the documentation of the registry http://localhost:5000/v1/ for instructions how to activate it.

Docker has an unusual mechanism for specifying which registry to push to. You have to tag an image with the private registry's location in order to push to it. Let's tag our image to our private registry:

docker tag test-image YOUR-DOMAIN:8080/test-image

Note that you are using the local name of the image first, then the tag you want to add to it. The tag is not using https://, just the domain, port, and image name.
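In other words, the registry's location is encoded in the image name itself. A small sketch of the naming convention, using the example values from this tutorial:

```shell
# Docker decides where to push based on the image name's prefix:
REGISTRY="www.ilovedocker.com:8080"   # domain + port only -- no https://
LOCAL_IMAGE="test-image"
REMOTE_NAME="$REGISTRY/$LOCAL_IMAGE"
echo "docker tag $LOCAL_IMAGE $REMOTE_NAME"
echo "docker push $REMOTE_NAME"
```

An image name with no registry prefix would be pushed to the default public registry instead, which is why the tagging step is mandatory for a private registry.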

Now we can push that image to our registry. This time we're using the tag name only:

docker push YOUR-DOMAIN:8080/test-image

This will take a moment to upload to the registry server. You should see output that includes Image successfully pushed.

Step Eight - Pull from Your Docker Registry

To make sure everything worked let's go back to our original server (where you installed the Docker registry) and pull the image we just pushed from the client. You could also test this from a third server.

If Docker is not installed on your test pull server, go back and follow the installation instructions (and if it's a third server, the SSL instructions) from Step Six.

Log in with the username and password you set up previously.

docker login https://YOUR-DOMAIN:8080

And now pull the image. You want just the "tag" image name, which includes the domain name, port, and image name (but not https://):

docker pull YOUR-DOMAIN:8080/test-image

Docker will do some downloading and return you to the prompt. If you run the image on the new machine you'll see that the SUCCESS file we created earlier is there:

docker run -t -i YOUR-DOMAIN:8080/test-image /bin/bash

List your files:

ls

You should see the SUCCESS file we created earlier:

SUCCESS bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var

Congratulations! You've just used your own private Docker registry to push and pull your first Docker container! Happy Docker-ing!
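The whole publish/consume round trip condenses to a handful of commands. A recap script that only prints them (YOUR-DOMAIN is the same placeholder used throughout; this assumes the CA trust and docker login from the earlier steps are already done):

```shell
# Dry-run recap of the push/pull cycle against the private registry.
REGISTRY="YOUR-DOMAIN:8080"
IMAGE="test-image"
for cmd in \
  "docker tag $IMAGE $REGISTRY/$IMAGE" \
  "docker push $REGISTRY/$IMAGE" \
  "docker pull $REGISTRY/$IMAGE" \
  "docker run -t -i $REGISTRY/$IMAGE /bin/bash"
do
  echo "$cmd"
done
```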

Author: Nik van der Ploeg

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Copyright © 2015 DigitalOcean™ Inc.


#+end_example
*** DONE try docker push (see how fast), and use private docker image for your testing (see how easy)
CLOSED: [2015-07-19 Sun 08:25]
http://50.198.76.249:443/job/DockerDeploySandboxFido/4/console

docker commit -m "initial" -a "Denny[email protected]" ac85c4441129 denny/privatetest:latest

docker tag denny/privatetest:latest www.testdocker.com:8080/privatetest
docker push www.testdocker.com:8080/privatetest

Make sure it's not pushed to the official registry.

On another client, use docker pull:

ssh -p 7022 [email protected]
docker pull www.testdocker.com:8080/privatetest
*** DONE [#A] automatically login to a private docker registry
CLOSED: [2015-07-19 Sun 16:29]

docker login -u mydocker -p dockerpasswd https://www.testdocker.com:8080

https://github.com/GoogleCloudPlatform/kubernetes/issues/4180
** TODO docker use upstart
https://github.com/docker/docker/issues/1024
http://blog.ianholden.com/using-docker-with-upstart/
** DONE [#A] docker rmi none image: docker rmi $(docker images | grep "<none>" | awk -F' ' '{print $3}')
CLOSED: [2016-08-17 Wed 21:34]
** DONE [#A] docker remove orphaned volumes: docker volume rm $(docker volume ls -qf dangling=true)
CLOSED: [2016-08-05 Fri 10:26]
http://stackoverflow.com/questions/27812807/orphaned-docker-mounted-host-volumes/35130945#35130945

List all orphaned volumes:

docker volume ls -qf dangling=true

Eliminate all of them with:

docker volume rm $(docker volume ls -qf dangling=true)
** # --8<-------------------------- separator ------------------------>8--
** DONE docker image: fail to restart sshd: end point is sshd, how to upgrade sshd?
CLOSED: [2016-08-22 Mon 17:42]

  1. openssh-server: 1:6.6p1-2ubuntu2.7 --> 1:6.6p1-2ubuntu2.8
  2. Need to restart openssh; however, docker fails to do that.
** DONE Unlock Jenkins by automation
CLOSED: [2016-08-24 Wed 17:31]
http://stackoverflow.com/questions/35960883/how-to-unlock-jenkins

-Djenkins.install.runSetupWizard=false

echo "JAVA_ARGS=\"$JAVA_ARGS -Dhudson.diyChunking=false -Djenkins.install.runSetupWizard=false\"" >> /etc/default/jenkins
** DONE docker list files in container
CLOSED: [2016-08-27 Sat 12:03]

docker export 9726a2a69a39 | docker run -i --rm ubuntu tar tvf -
** TODO [#A] start docker daemon at a fixed ip address: 172.17.0.1
https://docs.docker.com/v1.8/articles/networking/

How to manually fix?
** DONE [#A] install chef with given version
CLOSED: [2016-09-01 Thu 21:53]
http://stackoverflow.com/questions/27657888/how-to-install-docker-specific-version

chef-solo --version
#+END_EXAMPLE

sudo docker ps -a | grep Exit | cut -d ' ' -f 1 | xargs sudo docker rm
** DONE docker network
CLOSED: [2016-09-05 Mon 13:19]

docker network create --subnet=172.18.0.0/16 longruncluster

docker run -t -d -h kitchen-cluster-node1 --name kitchen-cluster-node1 --privileged -p 5122:22 --net=longruncluster --ip 172.18.0.22 -v /data/docker/longruncluster/couchbase1:/opt/couchbase XXX/mdm:latest /usr/sbin/sshd -D
docker run -t -d -h kitchen-cluster-node2 --name kitchen-cluster-node2 --privileged -p 5123:22 --net=longruncluster --ip 172.18.0.23 -v /data/docker/longruncluster/couchbase2:/opt/couchbase XXX/mdm:latest /usr/sbin/sshd -D
docker run -t -d -h kitchen-cluster-node3 --name kitchen-cluster-node3 --privileged -p 5124:22 --net=longruncluster --ip 172.18.0.24 -v /data/docker/longruncluster/couchbase3:/opt/couchbase XXX/mdm:latest /usr/sbin/sshd -D
** DONE docker-gc: remove old container
CLOSED: [2016-09-06 Tue 20:28]
https://github.com/spotify/docker-gc
http://blog.amosti.net/docker-garbage-collection/

A simple Docker container and image garbage collection script.

  • Containers that exited more than an hour ago are removed.
  • Images that don't belong to any remaining container after that are removed.
** DONE docker login: docker login -u XXXreadonly -p TOTVSChangeMe1 -e [email protected]
CLOSED: [2016-09-10 Sat 17:55]
** DONE docker start container with specific ip
CLOSED: [2016-09-22 Thu 17:12]

docker network create --subnet=172.18.0.0/16 longruncluster

docker stop kitchen-cluster-node1; docker rm kitchen-cluster-node1

docker run -t -d -h kitchen-cluster-node1 --name kitchen-cluster-node1 --privileged -p 5122:22 --net=longruncluster --ip 172.18.0.22 -v /data/docker/longruncluster/couchbase1:/opt/couchbase XXX/mdm:latest /usr/sbin/sshd -D

docker exec -it kitchen-cluster-node1 bash

ping 8.8.8.8

ufw allow in on br-a0e2eab8a571

docker network without iptables
** TODO [#A] networking across docker daemon
http://techcrunch.com/2015/08/17/a-look-at-startup-opportunities-in-the-container-era/

Networking: Similarly, there is a need for container-aware networking services. Weave creates a virtual network that connects Docker containers deployed across multiple hosts and enables their automatic discovery.
** TODO [#A] docker pipework
http://blog.opskumu.com/docker.html#section-5
** TODO [#A] how docker expose ports works beneath :IMPORTANT:
--iptables=false
** TODO docker installation warning: current kernel is not supported by the linux-image-extra-virtual package
#+BEGIN_EXAMPLE
[2016-09-16 06:17:52] Install docker: wget -qO- https://get.docker.com/ | sh

+ '[' -n /var/log/bootstrap_mdm_sandbox.log ']'
+ echo -ne '[2016-09-16 06:17:52] Install docker: wget -qO- https://get.docker.com/ | sh\n'
+ wget -qO- https://get.docker.com/
+ sh
modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-x86_64-linode63/modules.dep.bin'
Warning: current kernel is not supported by the linux-image-extra-virtual package. We have no AUFS support. Consider installing the packages linux-image-virtual kernel and linux-image-extra-virtual for AUFS support.
+ sleep 10
+ sh -c apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.r4fOP9crsQ --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
gpg: requesting key 2C52609D from hkp server ha.pool.sks-keyservers.net
gpg: key 2C52609D: public key "Docker Release Tool (releasedocker) [email protected]" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
#+END_EXAMPLE
** docker mess up iptables
https://www.reddit.com/r/devops/comments/530jde/docker_mess_up_firewall_setting/
** DONE [#A] docker run fail in linode: reboot seems to have solved the problem :IMPORTANT:
CLOSED: [2017-10-30 Mon 12:59]

[email protected]:~# docker run -t -d -h jenkins --name docker-jenkins --privileged -p 51022:9000 -p 51023:22 -p 51081:18000 -p 48080:18080 XXX/mdm_jenkins:latest /usr/sbin/sshd -D

60ef25a7ff7f176ed4dd37b4d0f04a1ca4df913339eed271f87073b3efd32d6d
docker: Error response from daemon: driver failed programming external connectivity on endpoint docker-jenkins (a6fa21dcc3bec1f62fb211b49789eb72458358c6fb210006a804b8094c3f1061): iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.2 --dport 18080 -j ACCEPT: iptables: No chain/target/match by that name. (exit status 1).

#+BEGIN_EXAMPLE Bruno Volpato [12:33 PM] np

Denny Zhang [12:33 PM] I remember I have run into this before.

Let me check and reply afterwards

Bruno Volpato [12:34 PM] I played a few things on it, might be even good to restore original image from Linode too, if you don't want to investigate.. but maybe it's worthy to try to understand what's happening

Denny Zhang [12:51 PM] Checking now

Bruno Volpato [12:51 PM] that on 45.33.48.226

Denny Zhang [12:51 PM] ok

[12:55] Bruno

If I reboot docker daemon, it's fine

[12:59] Looks like when we deleted all the containers, the DOCKER iptables chain was deleted automatically by the docker daemon.

When we try to start mdm-jenkins, it needs to update iptables for port forwarding, but it fails to find the DOCKER chain.

If we restart docker daemon, the DOCKER chain will be created automatically.

I'm suspecting if we upgrade to the latest docker daemon, we won't see this issue. #+END_EXAMPLE
** DONE docker add volume mount for an existing container: so far, the only way is recreation.
CLOSED: [2016-10-17 Mon 21:35]
http://stackoverflow.com/questions/28302178/how-can-i-add-a-volume-to-an-existing-docker-container
** CANCELED Linode Ubuntu 16.04 fail to start docker
CLOSED: [2016-10-25 Tue 07:34]
#+BEGIN_EXAMPLE
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sat 2016-10-22 02:32:11 UTC; 15s ago
     Docs: https://docs.docker.com
  Process: 18711 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
 Main PID: 18711 (code=exited, status=1/FAILURE)

Oct 22 02:32:10 ubuntu systemd[1]: Starting Docker Application Container Engine...
Oct 22 02:32:10 ubuntu dockerd[18711]: time="2016-10-22T02:32:10.977348627Z" level=info msg="libcontainerd: new containerd process, pid: 18716"
Oct 22 02:32:11 ubuntu dockerd[18711]: time="2016-10-22T02:32:11.007605056Z" level=error msg="devmapper: Unable to delete device: devicemapper: C
Oct 22 02:32:11 ubuntu dockerd[18711]: time="2016-10-22T02:32:11.008171447Z" level=warning msg="devmapper: Usage of loopback devices is strongly
Oct 22 02:32:11 ubuntu dockerd[18711]: time="2016-10-22T02:32:11.009150455Z" level=error msg="[graphdriver] prior storage driver "devicemapper"
Oct 22 02:32:11 ubuntu dockerd[18711]: time="2016-10-22T02:32:11.009643465Z" level=fatal msg="Error starting daemon: error initializing graphdriv
Oct 22 02:32:11 ubuntu systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Oct 22 02:32:11 ubuntu systemd[1]: Failed to start Docker Application Container Engine.
Oct 22 02:32:11 ubuntu systemd[1]: docker.service: Unit entered failed state.
Oct 22 02:32:11 ubuntu systemd[1]: docker.service: Failed with result 'exit-code'.
#+END_EXAMPLE

Oct 22 02:39:38 ubuntu dockerd[18861]: time="2016-10-22T02:39:38.414713165Z" level=info msg="libcontainerd: new containerd process, pid: 18869"
Oct 22 02:39:38 ubuntu dockerd[18861]: time="2016-10-22T02:39:38.445351123Z" level=error msg="devmapper: Unable to delete device: devicemapper: Can't set task name /dev/mapper/docker-8:0-133292-pool"
Oct 22 02:39:38 ubuntu dockerd[18861]: time="2016-10-22T02:39:38.446049105Z" level=warning msg="devmapper: Usage of loopback devices is strongly discouraged for production use. Please use --storage-opt dm.thinpooldev or use man docker to refer to dm.thinpooldev se
Oct 22 02:39:38 ubuntu dockerd[18861]: time="2016-10-22T02:39:38.452081921Z" level=error msg="[graphdriver] prior storage driver "devicemapper" failed: devicemapper: Can't set task name /dev/mapper/docker-8:0-133292-pool"
Oct 22 02:39:38 ubuntu dockerd[18861]: time="2016-10-22T02:39:38.452194187Z" level=fatal msg="Error starting daemon: error initializing graphdriver: devicemapper: Can't set task name /dev/mapper/docker-8:0-133292-pool"
Oct 22 02:39:38 ubuntu systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Oct 22 02:39:38 ubuntu systemd[1]: Failed to start Docker Application Container Engine.
** DONE start docker daemon with storage layer as overlay, instead of devicemapper :noexport:
CLOSED: [2016-11-25 Fri 12:37]
https://docs.docker.com/engine/userguide/storagedriver/selectadriver/

#+BEGIN_EXAMPLE
vim /etc/default/docker

service docker stop

cp -r /var/lib/docker /root/

DOCKER_OPTS="--storage-driver=overlay $DOCKER_OPTS"

service docker start

docker info | grep -C 5 Storage
#+END_EXAMPLE

service docker stop

dockerd --storage-driver=devicemapper &
dockerd --storage-driver=overlay &

service docker start

#+BEGIN_EXAMPLE
[email protected]:/home/denny# docker info | grep -C 5 Storage
WARNING: Usage of loopback devices is strongly discouraged for production use. Use --storage-opt dm.thinpooldev to specify a custom block storage device.
 Running: 2
 Paused: 0
 Stopped: 6
Images: 70
Server Version: 1.12.3
WARNING: No swap limit support
WARNING: No kernel memory limit support
Storage Driver: devicemapper
 Pool Name: docker-8:0-125110-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: ext4
#+END_EXAMPLE

#+BEGIN_EXAMPLE
docker stop my-test; docker rm my-test
docker run -t -d --privileged -h mytest --name my-test ubuntu:14.04 /bin/bash
[email protected]:/home/denny# docker exec -it my-test bash
[email protected]:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         378G  174G  186G  49% /
tmpfs            12G     0   12G   0% /dev
tmpfs            12G     0   12G   0% /sys/fs/cgroup
/dev/root       378G  174G  186G  49% /etc/hosts
shm              64M     0   64M   0% /dev/shm
#+END_EXAMPLE
** TODO [#A] docker build image with privileged
https://github.com/docker/docker/issues/1916
https://groups.google.com/forum/#!topic/docker-user/UK8-U4X20ww
*** DONE fail to start docker daemon: Be sure to run the container in privileged mode.
CLOSED: [2016-11-25 Fri 11:31]
http://stackoverflow.com/questions/38808941/failed-to-connect-to-containerd

time="2016-11-25T03:08:08.281520750Z" level=info msg="libcontainerd: new containerd process, pid: 16"
time="2016-11-25T03:08:08.281700083Z" level=fatal msg="Failed to connect to containerd. Please make sure containerd is installed in your PATH or you have specificed the correct address. Got error: write /proc/16/oom_score_adj: permission denied"

#+BEGIN_EXAMPLE
[email protected]:/home/denny# docker run -t -d -h mytest --name my-test XXX/docker:v1.0 /bin/bash
3cd7faa784d75f1f2ff5cf4ebefcd209f2a30b77c78c3574d196cac2e16ff93d
[email protected]:/home/denny# docker exec -it my-test bash
[email protected]:/# /usr/bin/dockerd
INFO[0000] libcontainerd: new containerd process, pid: 36
FATA[0000] Failed to connect to containerd. Please make sure containerd is installed in your PATH or you have specificed the correct address. Got error: write /proc/36/oom_score_adj: permission denied
#+END_EXAMPLE
** DONE Docker hub autobuild: Get alerts, when docker hub autobuild fail
CLOSED: [2017-02-03 Fri 09:29]
https://forums.docker.com/t/docker-hub-webhook-on-build-failure/1166
https://github.com/docker/hub-feedback/issues/435

You can get an email when a build fails. Not as API-friendly as a webhook, but it's something. You should find the setting here: https://registry.hub.docker.com/account/notifications/87

#+BEGIN_EXAMPLE dennyzhang [12:29 AM] Ozgur, when you're available, could you help me with some docker hub setting?

Login to https://hub.docker.com and make some change, or grant me admin access temporarily.

Details: As we all know, we're using auto-build feature of docker hub.

When image build fails, we certainly want timely alerts. Right?

However, by default we won't have it:

  1. The auto build repo has web hooks. However the hooks only trigger, when build succeed.

https://forums.docker.com/t/docker-hub-webhook-on-build-failure/1166/7

  2. People can get email notifications, but by default this feature is not enabled.

Login to Dashboard -> Account Settings -> Notifications Docker Forums Docker Hub Webhook on build failure Desired fields: - build status - size of final image (if success) - build duration - url to a log dump if too large, otherwise the text of the build log - if failure, which instruction did it fail on? Also useful to have (but more work) - listing of dockerfile instructions w/ individual durations + log outputs

(edited)

dennyzhang [12:29 AM] uploaded this image: Enable docker notification Add Comment

dennyzhang [12:31 AM] Once enabled, the email related the account will get alerts.

It's better we bind more email addresses to that account.

Thus we can have more eyes on these failures, and remove SPOF(single point of failure). 🙂 (edited)

ozgur.v.amac [4:54 AM] @dennyzhang You are already in the admin team for all builds I have created. If you do not see the Webhooks tab when you click into each, then maybe it is because I have created the build with my personal docker account and not penroz account. If that is the case, then maybe you can try creating an automated build when switched to penroz under the dropdown. You can go ahead delete which ever build you choose, and recreate it, since we are on a 5 images plan. That is the other thing I wanted to solve, we want to use tags instead of a new build per Dockerfile e.g. instead of penroz/idp:latest and penroz/shib:latest, we can do penroz/idp:jetty and penroz/idp:shib or even more conservative penroz/base:idp, penroz/base:shib, penroz/base:oauth2 etc. for all depending on base and do some other pattern for the rest. Basically, we can start from scratch with automated builds, and I leave it up to you how you build them. About the notifications tab under penroz, I do not see that one either. Maybe we need @brandon.chen to hand over the password to penroz account for that one 😁 Question for you: If we use the notifications setting under each of our accounts, would that have the same effect? To be honest "docker hub" is not well-known territory :thinking_face: , let's learn the best practice here together. Sound good? Thank you again 😊 ✌️ All else fails we will do a hangouts session with you start of next week to figure all this out real-time. #+END_EXAMPLE
** TODO ubuntu docker run iptables No chain/target/match by that name
https://github.com/docker/docker/issues/1871
#+BEGIN_EXAMPLE
[email protected]:~# docker run --name myjenkins -p 48084:8080 -p 50000:50000 -v /root/jenkins:/var/jenkins_home jenkins
docker: Error response from daemon: driver failed programming external connectivity on endpoint myjenkins (ebc6368db5f7888de0454c5b2a36ce1508aee74b35d4ba8499d28d42b2a55178): (iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.2 --dport 50000 -j ACCEPT: iptables: No chain/target/match by that name. (exit status 1)).
#+END_EXAMPLE

iptables -t nat -N DOCKER
iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER
*** sudo iptables -L -n -t nat
#+BEGIN_EXAMPLE
[email protected]:~# sudo iptables -L -n -t nat
sudo: unable to resolve host shibgeek-demo-2722
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.18.0.0/16       0.0.0.0/0
MASQUERADE  all  --  172.17.0.0/16       0.0.0.0/0
MASQUERADE  tcp  --  172.18.0.13         172.18.0.13          tcp dpt:80

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0
RETURN     all  --  0.0.0.0/0            0.0.0.0/0
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.18.0.13:80
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:50000 to:172.17.0.2:50000
#+END_EXAMPLE
** DONE [#A] docker don't use root
CLOSED: [2017-02-20 Mon 17:29]
http://www.projectatomic.io/blog/2015/08/why-we-dont-let-non-root-users-run-docker-in-centos-fedora-or-rhel/
http://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo

Add a new unprivileged user: adduser test-user
Make sure to add this new user to the docker group: sudo usermod -aG docker test-user
Drop your currently privileged vagrant shell and log in to the test-user account: sudo su test-user; cd ~
The following commands were all entered by this new user to build the docker container:

Then run "docker run" or "docker-compose"
** DONE docker remove volumes: docker volume rm XXX
CLOSED: [2017-02-25 Sat 00:02]

ls -lth /var/lib/docker/volumes/ | grep demoenv
https://docs.docker.com/engine/reference/commandline/volume_rm/
** DONE docker run change container entrypoint
CLOSED: [2017-03-11 Sat 10:16]
https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options

docker stop my-test; docker rm my-test
docker run -t -d --privileged -h mytest --name my-test --entrypoint=/bin/sh brozton_idp:latest

docker exec -it my-test sh
** DONE Docker copy VS add: ADD can do more than COPY
CLOSED: [2017-03-10 Fri 18:10]
http://stackoverflow.com/questions/24958140/what-is-the-difference-between-the-copy-and-add-commands-in-a-dockerfile

ADD can do more than COPY:

ADD allows <src> to be a URL.
If the <src> parameter of ADD is an archive in a recognised compression format, it will be unpacked.
** DONE [#A] docker run container change entrypoint
CLOSED: [2017-03-13 Mon 23:20]
https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime

docker stop my-test; docker rm my-test
docker run -ti --privileged -h mytest --name my-test --entrypoint /bin/bash brozton_shib:latest
** DONE docker: Reusing an Image with a Non-root User: http://www.projectatomic.io/docs/docker-image-author-guidance/
CLOSED: [2017-03-15 Wed 15:06]

USER root
RUN yum install -y <packages>
USER swuser
** # --8<-------------------------- separator ------------------------>8--
** DONE docker setup ftp server
CLOSED: [2017-03-22 Wed 19:21]
*** docker-entrypoint.sh
#!/bin/sh

# Quit, if mandatory envs are not set
if [ -z "$USERNAME" ] || [ -z "$PASSWORD" ]; then
  echo "ERROR: mandatory env variables are unset" && env
  exit 1
fi

/launch
*** Dockerfile
########## How To Use Docker Image ###############

# Image Name:
# Git link:
# Docker hub link:
# Description:
##################################################

# Base Docker image: https://github.com/dennyzhang/devops_docker_image/blob/master/ftp/Dockerfile

FROM denny/proftpd:v1

LABEL maintainer "Denny[email protected]"

ADD ./README.md /data/proftpd/ftp/README.md
ADD ./docker-entrypoint.sh /docker-entrypoint.sh

RUN apt-get update -y && apt-get install -y curl && \
    chmod o+x /*.sh

HEALTHCHECK --interval=2m --timeout=3s \
  CMD curl ftp://localhost 2>&1 | grep "curl: (67) Access denied: 530" || exit 1

ENTRYPOINT ["/docker-entrypoint.sh"]
*** docker-compose.yml
version: '2'
services:
  ftp-upload:
    container_name: ftp-upload
    build:
      context: .
    volumes:
      - ftp_volume:/data/proftpd/ftp
    ports:
      - "21:21"
      - "20:20"
    environment:
      USERNAME: ${FTP_USERNAME}
      PASSWORD: ${FTP_PASSWORD}
volumes:
  ftp_volume:
** DONE docker build without cache: docker build --no-cache -f ci/Dockerfile-shellcheck --rm .
CLOSED: [2017-04-09 Sun 22:21]
** DONE install package without recommends, and make image compact: apt-get install -y --no-install-recommends lsof bash
CLOSED: [2017-04-24 Mon 15:46]

apt-get install -y --no-install-recommends lsof bash

#+BEGIN_EXAMPLE
# install selenium python sdk
RUN apt-get -y update && apt-get install -y --no-install-recommends python python-pip && \
    # Download selenium page load test scripts
    pip install selenium==3.4.0 && \
    # Cleanup to make image small
    apt-get -y remove && apt-get -y autoremove && rm -rf /var/cache/apk/* && \
    # Verify docker image
    python --version | grep 2.7.12 && \
    pip --version | grep 8.1.1 && \
    pip list | grep selenium.*3.4.0
#+END_EXAMPLE
** DONE docker stats show container name: docker stats --no-stream $(docker ps | awk '{if(NR>1) print $NF}')
CLOSED: [2017-04-30 Sun 11:31]
http://stackoverflow.com/questions/30732313/is-there-any-way-to-display-container-names-in-docker-stats

docker stats --format "table {{.Name}}\t{{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
** DONE [#A] Measure the RAM usage for each container: docker stats --format "table {{.Name}}\t{{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
CLOSED: [2017-04-30 Sun 11:32]
http://stackoverflow.com/questions/30732313/is-there-any-way-to-display-container-names-in-docker-stats
http://stackoverflow.com/questions/18956063/memory-usage-of-docker-containers

docker stats --no-stream

docker stats --no-stream --format

#+BEGIN_EXAMPLE
[email protected]:/var/lib/docker# docker stats --no-stream
CONTAINER      CPU %   MEM USAGE / LIMIT       MEM %    NET I/O             BLOCK I/O   PIDS
2e3a4bbbfb89   0.00%   6.621 MiB / 7.793 GiB   0.08%    21.7 MB / 30 MB     0 B / 0 B   4
37b41a0197c4   0.21%   990 MiB / 7.793 GiB     12.41%   87.7 MB / 32.6 MB   0 B / 0 B   56
d4c75f75b140   0.30%   196.4 MiB / 7.793 GiB   2.46%    132 MB / 118 MB     0 B / 0 B   57
572d07b25e7d   0.36%   407.6 MiB / 7.793 GiB   5.11%    117 MB / 127 MB     0 B / 0 B   76
68f0d73fef06   0.00%   5.586 MiB / 7.793 GiB   0.07%    13.8 kB / 6.88 kB   0 B / 0 B   3
185a66b58478   0.10%   923.3 MiB / 7.793 GiB   11.57%   40.1 MB / 18.9 MB   0 B / 0 B   33
5540e7e66bd8   0.09%   883.1 MiB / 7.793 GiB   11.07%   24.2 MB / 27.4 MB   0 B / 0 B   34
650f39a078ee   0.20%   1.116 GiB / 7.793 GiB   14.32%   2.04 MB / 3.22 MB   0 B / 0 B   66
db0994357672   0.06%   698.7 MiB / 7.793 GiB   8.76%    13.9 kB / 6.88 kB   0 B / 0 B   26
a15b7711c7d6   0.08%   1.574 MiB / 7.793 GiB   0.02%    13.8 kB / 6.88 kB   0 B / 0 B   3
6fd0ef75a941   0.07%   27.95 MiB / 7.793 GiB   0.35%    21.9 kB / 10.5 kB   0 B / 0 B   17
9bbb6012d12e   0.47%   839.6 MiB / 7.793 GiB   10.52%   13.8 kB / 6.88 kB   0 B / 0 B   59
ac5b5020a611   0.00%   3.348 MiB / 7.793 GiB   0.04%    14.1 kB / 6.88 kB   0 B / 0 B   4
5afb9fc39faf   0.05%   186.5 MiB / 7.793 GiB   2.34%    54.4 MB / 133 MB    0 B / 0 B   58
1775d210d2a5   0.07%   1.166 GiB / 7.793 GiB   14.96%   5.39 MB / 2.18 MB   0 B / 0 B   43
#+END_EXAMPLE
** DONE Embedded Docker DNS server: 127.0.0.11 CLOSED: [2017-04-30 Sun 17:00]
https://docs.docker.com/engine/userguide/networking/
http://stackoverflow.com/questions/35744650/docker-network-nginx-resolver
** BYPASS [#A] docker build image in an interactive way CLOSED: [2017-05-04 Thu 15:42]
https://unix.stackexchange.com/questions/264315/can-i-build-a-docker-container-from-dockerfile-in-an-interactive-way-with-alloca
https://github.com/moby/moby/issues/1669

docker build -f jenkins_v1_2.dockerfile -t XXX/jenkins:v1.2 --rm=false .

docker build -f jenkins_v1_2.dockerfile -t XXX/jenkins:v1.2 --rm=true .
#+BEGIN_EXAMPLE
Docker does not support interactive builds, for good reasons explained in this issue.

If you really need to do this, you can use docker commit like so:

docker build -t thirsty_darwin_base /path/to/Dockerfile
docker run -it --name=thirsty_darwin_changes thirsty_darwin_base /bin/bash

do interactive stuff in the shell, then exit

docker commit thirsty_darwin_changes thirsty_darwin

Now thirsty_darwin has your interactive changes.
#+END_EXAMPLE
** DONE apk add --update curl CLOSED: [2017-05-14 Sun 16:18]
** DONE Docker - An error occurred trying to connect: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json: read unix @->/var/run/docker.sock: read: connection reset by peer CLOSED: [2017-05-17 Wed 22:54]
https://medium.com/@rasheedamir/docker-an-error-occurred-trying-to-connect-get-http-2fvar-2frun-2fdocker-sock-v1-24-680c61991bc2
** DONE [#A] Docker, stop messing with my iptables rules: DOCKER_OPTS --iptables=false CLOSED: [2016-08-10 Wed 17:21]
https://fralef.me/docker-and-iptables.html

echo 'DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --iptables=false"' >> /etc/default/docker

** DONE docker port will be widely open CLOSED: [2017-05-19 Fri 17:28]
#+BEGIN_EXAMPLE
ozgur.v.amac [11:49 AM] @dennyzhang Do you know which ports are open right now to the public on our demo VM?

dennyzhang [11:51 AM]

Status: active

To                         Action      From
--                         ------      ----
22,80,443/tcp              ALLOW       Anywhere
2702/tcp                   ALLOW       Anywhere
Anywhere on docker0        ALLOW       Anywhere
Anywhere                   ALLOW       45.33.87.74
48084/tcp                  ALLOW       Anywhere
8081/tcp                   ALLOW       Anywhere
22,80,443/tcp (v6)         ALLOW       Anywhere (v6)
2702/tcp (v6)              ALLOW       Anywhere (v6)
Anywhere (v6) on docker0   ALLOW       Anywhere (v6)
48084/tcp (v6)             ALLOW       Anywhere (v6)
8081/tcp (v6)              ALLOW       Anywhere (v6)

[11:51]

tcp        0      0 0.0.0.0:2702            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:8081            0.0.0.0:*               LISTEN
tcp6       0      0 :::2702                 :::*                    LISTEN
tcp6       0      0 :::80                   :::*                    LISTEN
tcp6       0      0 :::50000                :::*                    LISTEN
tcp6       0      0 :::48084                :::*                    LISTEN

[11:52] @ozgur.v.amac

Does that answer your question?

ozgur.v.amac [11:56 AM] @dennyzhang Ok. So here is the situation: We have a problem serving / for launch and /piper for piper on one nginx proxy. So I am going add one more proxy just for piper. I need a public port to map it to. This is one issue. Another one is ichannel running on port 9000 needs to be exposed as to public port as well, because piper needs to communicate with it (client to server).

dennyzhang [11:57 AM] What's the problem with /launch and /piper within the nginx proxy?

ozgur.v.amac [11:58 AM] There is no /launch. / serves launch and /piper serves piper content, but we do not have the correct config to serve piper's dynamic content from a subpath like /piper. So far we have always tested it via / path so it works fine when we use /

dennyzhang [12:00 PM] Instead of opening another port, do you think we can improve the nginx configuration to achieve that? (edited)

ozgur.v.amac [12:01 PM] I am thinking www.shibgeek.com:3333 serving piper to temporarily get around it. Either that or we can add a subdomain for piper and use that. Maybe adding a subdomain is better e.g. piper.shibgeek.com

dennyzhang [12:03 PM] Understand.

I think subdomain is better. Looks like it would involve multiple changes.

Maybe you can bypass this, refresh the demo env, then file a ticket. We should be able to fix it in the next round.

ozgur.v.amac [12:04 PM] @dennyzhang Demo VM refresh will not work without these changes.

dennyzhang [12:04 PM] Yes, I mean add an extra port mapping (edited)

[12:05] Probably you already noticed. Let me describe it again.

Docker port mapping is insecure. It allows all internet requests by default.

Take port 50000 for example. We use that port to collect logs. Right?

But anyone from WAN, can access it

We can improve the security.

Change from

- 50000:50000

To

- 172.17.0.1:50000:50000

Or hack in the way how Docker interacts with iptables.

Might need some time to propose a solution. But it's a known vulnerability of Docker expose.
#+END_EXAMPLE
** HALF Running Docker containers as non root
https://blog.csanchez.org/2017/01/31/running-docker-containers-as-non-root/
** DONE docker-compose with extra host binding CLOSED: [2017-06-07 Wed 18:10]
https://docs.docker.com/compose/compose-file/compose-file-v2/

extra_hosts: add hostname mappings. Use the same values as the docker client --add-host parameter.

extra_hosts:

  • "somehost:162.242.195.82"
  • "otherhost:50.31.209.229" ** DONE docker keep container longrun in docker-compose.yml CLOSED: [2017-06-29 Thu 18:45] volumes:
    • $PWD/kitchen/profile_bundle.sh:/etc/profile.d/profile_bundle.sh

    TODO: better way for this

    entrypoint: ["tail", "-f", "/dev/null"]
** DONE docker depends_on: doesn't guarantee the depended-on containers are up and running CLOSED: [2017-07-07 Fri 17:57]
  • docker Controlling startup order in Compose https://docs.docker.com/compose/startup-order/

Compose always starts containers in dependency order, but it does not wait until they are up and ready.
** DONE [#A] switch to root and run some command: docker exec -u root -it $container_name sh :IMPORTANT: CLOSED: [2017-07-17 Mon 15:30]
** DONE [#A] Setup apache with docker :IMPORTANT: CLOSED: [2017-04-04 Tue 23:55]
*** DONE basic setup CLOSED: [2017-03-21 Tue 18:03]
adduser upload

Inject ssh key to /home/upload/.ssh/

http_repo_dir=/data/httpd/repo
mkdir -p ${http_repo_dir}/upload
chown upload:upload "$http_repo_dir"
chmod 777

docker stop http-repo; docker rm http-repo
docker run -dit -h http-repo --name http-repo -p 80:80 \
  -v $http_repo_dir:/usr/local/apache2/htdocs/ denny/httpd:v1

http://138.68.21.243:80

apt-get update -y
apt-get install -y --no-install-recommends lsof
*** HALF enable SSL
cd /data
git clone [email protected]:lrpdevops/mdmdevops-XXX.git

prepare ssl

http_repo_dir=/data/httpd/repo
mkdir -p "$http_repo_dir"
chown upload:upload "$http_repo_dir"

working_dir="/data/mdmdevops-XXX/misc/repo.carol.ai"
conf_directory="$working_dir/conf/"
mkdir -p $conf_directory

vim $conf_directory/upload_ai.conf

ssl_directory="/data/httpd/ssl"
mkdir -p $ssl_directory

vim $ssl_directory/server.crt
vim $ssl_directory/server.key
vim $ssl_directory/server.ca-bundle

docker stop http-repo; docker rm http-repo
http_repo_dir=/data/httpd/repo
working_dir="/data/mdmdevops-XXX/misc/repo.carol.ai"
conf_directory="$working_dir/conf/"
ssl_directory="/data/httpd/ssl"
docker run -dit -h http-repo --name http-repo -p 443:443 -p 80:80 \
  -v $conf_directory/upload_ai.conf:/usr/local/apache2/conf/upload_ai.conf \
  -v $ssl_directory/:/usr/local/apache2/ssl \
  -v $http_repo_dir:/usr/local/apache2/htdocs/ denny/httpd:v1 /bin/bash

docker cp $working_dir/httpd.conf http-repo:/usr/local/apache2/conf/httpd.conf
docker cp $working_dir/httpd-ssl.conf http-repo:/usr/local/apache2/conf/httpd-ssl.conf

docker exec -it http-repo bash

httpd-foreground

/usr/local/apache2/conf/httpd.conf && vim /usr/local/apache2/conf/httpd.conf

exit

docker stop http-repo; docker start http-repo

docker logs http-repo

ls -lth /usr/local/apache2/ssl
ls -lth /usr/local/apache2/conf/
ls -lth /usr/local/apache2/logs
curl http://localhost:80
curl https://localhost:443

curl https://repo.carol.ai:443

lsof -i tcp:443
*** TODO enable multiple vhosts
*** TODO enable password protection
*** TODO enable php
https://github.com/docker-library/php/blob/e573f8f7fda5d7378bae9c6a936a298b850c4076/5.6/apache/Dockerfile
** DONE mac use docker CLOSED: [2017-04-24 Mon 09:43]
https://docs.docker.com/docker-for-mac/install/#download-docker-for-mac
** DONE mac use docker locally CLOSED: [2017-02-14 Tue 10:12]
https://docs.docker.com/docker-for-mac/
** DONE Configure automated builds on Docker Hub CLOSED: [2017-01-29 Sun 10:05]
https://docs.docker.com/docker-hub/builds/

https://forums.docker.com/t/automated-build-doesnt-find-dockerfile/1383/4

Usually the sequence is something like this (let us know where the unexpected delay happened and how long it was):

  1. You create a Github repo
  2. You edit your code and push to Github.
  3. You connect your Github account to your Docker Hub account
  4. You configure your Docker Hub account to make an Automated Build from your Github repo
  5. When you save, Docker Hub puts your build into a queue ("Pending")
  6. Docker Hub pulls your code and looks for a Dockerfile (and grabs your Readme.md too)
  7. Docker Hub builds your code ("Building")
  8. Docker Hub pushes your image to your Docker Hub repo ("Pushing"), along with build logs and your Readme.md
  9. Docker Hub updates your list of tags
  10. Docker Hub updates your Readme/Information tab
  11. Docker Hub marks the push completed ("Finished").
*** there will be uncertainty in image build delay
https://forums.docker.com/t/automated-build-doesnt-find-dockerfile/1383/8
** DONE [#A] integrate bitbucket/github with docker process CLOSED: [2017-01-22 Sun 13:58]
Demo: https://blog.bitbucket.org/2016/05/24/introducing-bitbucket-pipelines-beta-continuous-delivery-built-within-bitbucket/

https://confluence.atlassian.com/bitbucket/get-started-with-bitbucket-cloud-675385635.html

https://ig.nore.me/2016/05/bitbucket-pipelines-a-first-look/

  • automate the process: bitbucket-pipelines.yml
  • reduce effort by reusing existing modules
  • save cost: subscription fee VS extra server

Automate process by scripts -> move scripts to Jenkins -> move scripts to bitbucket pipeline
*** Configure bitbucket-pipelines.yml
https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
*** TODO how much does bitbucket cloud cost
https://bitbucket.org/product/pricing?tab=host-in-the-cloud

  • Free for 5 servers
  • 10 users: $10 per month
*** requirement
#+BEGIN_EXAMPLE
ozgur.v.amac [8:17 AM] @dennyzhang What is your opinion about Bitbucket pipelines? Do you think we can use it to make images available to docker hub somehow? I am asking because I have not had a chance to figure it out.

dennyzhang [8:53 AM] Morning, ozgur. What do you mean by Bitbucket pipelines? Do you mean git hook?

[8:55] Usually I use Jenkins to wrap up the automation. It might give us more control and more powerful. I can give you a demo, if you like. Surely, I'm open-minded to new ideas

ozgur.v.amac [9:40 AM] Yes, Jenkins is a very good option. I was just wondering since we are on bitbucket and already utilizing the pipelines feature, it might be good to learn about it. They do claim docker hub deployment compatibility, but I just don't know how yet. Just a thought to explore.

dennyzhang [10:00 AM] That would be nice!

Yeah, then let's try bitbucket pipelines.

I know for sure that it would be capable and integrate well with docker practice.

[10:01] @ozgur.v.amac How about I do some POC first, then get back to you early next week?

ozgur.v.amac [10:03 AM] @dennyzhang That would be awesome. Thank you for your help 👍

dennyzhang [10:04 AM] Good to learn it together, ozgur!

Could you post me your thought and expectation in advance?

ozgur.v.amac [10:10 AM] Certainly, some background here: Right now we have our images in docker hub built directly from bitbucket repository integration. Problem is the artifacts are mixed with source code. Classic devops problem as you might have guessed. I was hoping to utilize a build system to store artifacts and push and/or pull to/from docker hub to build images. I am open to other suggestions on how to solve this problem as well. Please feel free to explore and let me know if you want me to elaborate the expectation further.

dennyzhang [10:12 AM] Nice, ozgur. Got it. Will update asap.
#+END_EXAMPLE
** DONE Ubuntu 16.04 configure DOCKER_OPTS: /lib/systemd/system/ CLOSED: [2017-07-10 Mon 10:14]
https://github.com/moby/moby/issues/9889
https://github.com/moby/moby/issues/25357
https://docs.docker.com/engine/admin/systemd/#httphttps-proxy

The /etc/default/docker file is only used on systems using "upstart" and "SysVInit", not on systems using systemd.

check /lib/systemd/system/docker.service
#+BEGIN_EXAMPLE
[Service]
EnvironmentFile=/etc/default/docker
ExecStart=/usr/bin/docker -d $DOCKER_OPTS -H fd://
#+END_EXAMPLE
** DONE A Pod is a grouping of one or more containers CLOSED: [2017-07-13 Thu 13:31]
Containers in a Pod share the same IP address, the same localhost, and IPC.
** DONE docker start container with pid configured CLOSED: [2017-07-30 Sun 09:06]
https://medium.com/@rothgar/how-to-debug-a-running-docker-container-from-a-separate-container-983f11740dc6
#+BEGIN_EXAMPLE
docker run -t --pid=container:container1 \
  --net=container:container1 \
  --cap-add sys_admin \
  --cap-add sys_ptrace \
  strace
#+END_EXAMPLE
** DONE docker share volumes across containers: volumes_from CLOSED: [2017-08-30 Wed 16:22]
https://stackoverflow.com/questions/44284484/docker-compose-share-named-volume-between-multiple-containers
#+BEGIN_EXAMPLE
ui:
  container_name: ui
  image: soterianetworks/kumku-u:base_${IMG_TAG_POSTFIX}
  networks:
    - network_application
  volumes:
    - launch_home:/usr/share/nginx/html

proxy:
  container_name: proxy
  image: soterianetworks/devops:proxy_${IMG_TAG_POSTFIX}
  networks:
    - network_application
  depends_on:
    - ui
    - idp
    - oauth2
    - gateway
  ports:
    - "${HTTP_PORT}:80"
  volumes_from:
    - ui:ro
  environment:
    HOST_NAME: ${HOST_NAME}

volumes:
  idp_home:
  db_home:
  cache_home:
  launch_home:
#+END_EXAMPLE
** DONE docker can't ping google iptables: remove --iptables=false CLOSED: [2018-04-03 Tue 10:14]
https://github.com/moby/moby/issues/28241
** DONE dockerfile configure PATH: ENV PATH="/opt/gtk/bin:${PATH}" CLOSED: [2018-04-10 Tue 16:25]
https://stackoverflow.com/questions/27093612/in-a-dockerfile-how-to-update-path-environment-variable
** DONE debian8 install RStudio: conda install -c r r-essentials CLOSED: [2018-04-10 Tue 17:23]
https://www.rstudio.com/products/rstudio/download/
** DONE docker run with different parameters CLOSED: [2018-07-02 Mon 22:41]
docker run -t -d -h "$container_name" --name "$container_name" \
  -v "${PWD}/cmd:/go/cmd" -v "${PWD}/pkg:/go/pkg" \
  -v "${PWD}/tests:/go/tests" \
  golang:1.10.3 bash -c "cd /go && tests/build_code.sh build_code"
** DONE docker run issue: The path is not shared from OS X and is not known to Docker. CLOSED: [2018-07-03 Tue 14:12]
https://github.com/localstack/localstack/issues/480

#+BEGIN_EXAMPLE
bash-3.2$ docker run -it --volume /usr/local/Cellar/go/packages:/go --volume /tmp/302271400:/output --entrypoint=/bin/sh golang:latest
docker: Error response from daemon: Mounts denied: The path /usr/local/Cellar/go/packages is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
#+END_EXAMPLE
** DONE [#A] mac docker volume folder: Where is /var/lib/docker on Mac/OS X CLOSED: [2018-04-05 Thu 14:55]
https://forums.docker.com/t/host-path-of-volume/12277/10
https://stackoverflow.com/questions/38532483/where-is-var-lib-docker-on-mac-os-x

screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty

Then you can run: ls /var/lib/docker/volumes
** HALF docker images command error in Ubuntu OS
#+BEGIN_EXAMPLE
[email protected]:/opt/mdmdevops/env_setup/repo.carol.ai/jenkins.carol.ai# docker images
Error response from daemon: layer does not exist
#+END_EXAMPLE
https://github.com/moby/moby/issues/21215
https://github.com/coreos/bugs/issues/1808
** TODO docker cp: no such directory
https://docs.docker.com/engine/reference/commandline/cp/#extended-description

cd /tmp
docker cp ./fix-mappings-reindex-2.0.jar ci_repo:/var/www/repo/
docker cp ./fix-mappings-reindex-2.0.jar ci_repo:/

#+BEGIN_EXAMPLE
[email protected]:/tmp# ls -lth ./fix-mappings-reindex-2.0.jar
-rw-r--r-- 1 root root 36M Aug 31 13:39 ./fix-mappings-reindex-2.0.jar
[email protected]:/tmp# docker cp ./fix-mappings-reindex-2.0.jar ci_repo:/
no such directory
#+END_EXAMPLE

#+BEGIN_EXAMPLE
[email protected]:/home/denny/mdmdevops/scripts/gui_login/scripts# docker cp /tmp/fix-mappings-reindex-2.0.jar ci_repo:/var/www/repo/
no such directory
[email protected]:/home/denny/mdmdevops/scripts/gui_login/scripts# ls -lth /tmp/fix-mappings-reindex-2.0.jar
-rw-r--r-- 1 root root 36M Aug 31 13:39 /tmp/fix-mappings-reindex-2.0.jar
[email protected]:/home/denny/mdmdevops/scripts/gui_login/scripts# docker exec ci_repo ls -lth /var/www/repo/
total 41580
-rw-r--r-- 1 root root 40.6M Aug 29 16:51 fix-mappings-reindex-2.0.jar
drwxr-xr-x 2 1000 1000 4.0K  Aug 23 20:16 master
drwxr-xr-x 2 1000 1000 4.0K  Aug 22 22:21 1.73
drwxr-xr-x 2 1000 1000 4.0K  Aug 14 21:14 nexus-port-8080
drwxr-xr-x 2 1000 1000 4.0K  Aug  7 21:31 1.72
drwxr-xr-x 2 1000 1000 4.0K  Jul 26 01:29 1.70
drwxr-xr-x 2 1000 1000 4.0K  Jul 25 01:55 1.71
#+END_EXAMPLE
** TODO docker hub trigger autobuild by commit message pattern
** TODO [#A] docker container change port mapping and bring it up and running
https://stackoverflow.com/questions/19335444/how-do-i-assign-a-port-mapping-to-an-existing-docker-container

  1. Change hostconfig.json directly for port mapping at /var/lib/docker/containers/[hash_of_the_container]/hostconfig.json

  2. docker commit

#+BEGIN_EXAMPLE
[email protected]:/home/denny/devops_docker_image/codecheck# docker start docker-jenkins
Error response from daemon: driver failed programming external connectivity on endpoint docker-jenkins (49319c2c320acafdd8930ab99f3410c2b3cd9ccb2635f76abd4a5e2261df6196): Bind for 0.0.0.0:48080 failed: port is already allocated
Error: failed to start containers: docker-jenkins
#+END_EXAMPLE
** TODO Networking with Kubernetes
https://www.youtube.com/watch?v=WwQ62OyCNz4


  • [#B] cAdvisor: basic docker monitoring :noexport:
https://github.com/google/cadvisor
** hello world
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest

http://localhost:8080
** DONE docker stats CLOSED: [2017-08-08 Tue 22:59]
** TODO Get more history timeline
** TODO Get monitoring and alerts: alert when a healthcheck fails or memory is over 500MB
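For the monitoring TODO above, a minimal sketch (the function name and the 500 MiB threshold are assumptions, not part of the original setup): flag any container whose memory usage exceeds a limit by parsing `docker stats` output.

```shell
# mem_alert LIMIT_MB: read "name memusage / limit" lines, as produced by
#   docker stats --no-stream --format '{{.Name}} {{.MemUsage}}'
# and print an ALERT line for each container using more than LIMIT_MB MiB.
mem_alert() {
  limit_mb="$1"
  awk -v limit="$limit_mb" '
    {
      usage = $2            # e.g. "990MiB" or "1.116GiB"
      mb = usage + 0        # numeric prefix of the field
      if (usage ~ /GiB/) mb *= 1024
      if (mb > limit) printf "ALERT: %s uses %s\n", $1, usage
    }'
}

# Usage on a docker host, e.g. from cron:
#   docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' | mem_alert 500
```

Anything printed can then be mailed or posted to a chat webhook by whatever alerting channel is already in place.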

systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld
** docker compose network
https://docs.docker.com/compose/networking/
** restart docker: systemctl restart docker, service docker restart
** misc
#+BEGIN_EXAMPLE
[email protected]:# brctl show
bridge name       bridge id           STP enabled   interfaces
br-2b7b9df1ac12   8000.0242055c1b2a   no
docker0           8000.0242929d2203   no            veth4c6c073
                                                    veth8d276b6
[email protected]:# man brctl
[email protected]:~# docker network ls
NETWORK ID     NAME             DRIVER   SCOPE
455090dd5f58   bridge           bridge   local
e8c3b3da66fe   host             host     local
2b7b9df1ac12   longruncluster   bridge   local
80aadf9b7c6c   none             null     local
#+END_EXAMPLE
** TODO docker iptables issue: docker-compose create network
https://sanenthusiast.com/tag/failed-iptables-no-chaintargetmatch-by-that-name/

https://forums.docker.com/t/getting-error-during-network-create/18989 http://stackoverflow.com/questions/31667160/running-docker-container-iptables-no-chain-target-match-by-that-name

-bash-4.2$ sudo /usr/local/bin/docker-compose up -d
Creating network "aamaco_default" with the default driver
ERROR: Failed to Setup IP tables: Unable to enable SKIP DNAT rule: (iptables failed: iptables --wait -t nat -I DOCKER -i br-97c34c80a837 -j RETURN: iptables: No chain/target/match by that name. (exit status 1))

iptables --wait -t nat -I DOCKER -i br-97c34c80a837 -j RETURN
*** From Brandon
#+BEGIN_EXAMPLE
-----Original Message-----
From: Thurston, Robert
Sent: Tuesday, January 17, 2017 9:17 AM
To: Chen, Brandon; Amac, Ozgur
Cc: Wang, Teddy
Subject: RE: Partner Portal - SSO Hop perf and prod

Firewalld is recommended in RHEL7; I have opened port 8080:

[[email protected] a_amaco]# firewall-cmd --list-ports 443/tcp 80/tcp 28002/tcp 7080/tcp 8080/tcp 2144/tcp 9000/tcp 28001/tcp 10050/tcp 7443/tcp

[[email protected] a_amaco]# iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 INPUT_direct all -- 0.0.0.0/0 0.0.0.0/0 INPUT_ZONES_SOURCE all -- 0.0.0.0/0 0.0.0.0/0 INPUT_ZONES all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 FORWARD_direct all -- 0.0.0.0/0 0.0.0.0/0 FORWARD_IN_ZONES_SOURCE all -- 0.0.0.0/0 0.0.0.0/0 FORWARD_IN_ZONES all -- 0.0.0.0/0 0.0.0.0/0 FORWARD_OUT_ZONES_SOURCE all -- 0.0.0.0/0 0.0.0.0/0 FORWARD_OUT_ZONES all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT) target prot opt source destination OUTPUT_direct all -- 0.0.0.0/0 0.0.0.0/0

Chain FORWARD_IN_ZONES (1 references) target prot opt source destination FWDI_public all -- 0.0.0.0/0 0.0.0.0/0 [goto]

Chain FORWARD_IN_ZONES_SOURCE (1 references) target prot opt source destination

Chain FORWARD_OUT_ZONES (1 references) target prot opt source destination FWDO_public all -- 0.0.0.0/0 0.0.0.0/0 [goto]

Chain FORWARD_OUT_ZONES_SOURCE (1 references) target prot opt source destination

Chain FORWARD_direct (1 references) target prot opt source destination

Chain FWDI_public (1 references) target prot opt source destination FWDI_public_log all -- 0.0.0.0/0 0.0.0.0/0 FWDI_public_deny all -- 0.0.0.0/0 0.0.0.0/0 FWDI_public_allow all -- 0.0.0.0/0 0.0.0.0/0

Chain FWDI_public_allow (1 references) target prot opt source destination

Chain FWDI_public_deny (1 references) target prot opt source destination

Chain FWDI_public_log (1 references) target prot opt source destination

Chain FWDO_public (1 references) target prot opt source destination FWDO_public_log all -- 0.0.0.0/0 0.0.0.0/0 FWDO_public_deny all -- 0.0.0.0/0 0.0.0.0/0 FWDO_public_allow all -- 0.0.0.0/0 0.0.0.0/0

Chain FWDO_public_allow (1 references) target prot opt source destination

Chain FWDO_public_deny (1 references) target prot opt source destination

Chain FWDO_public_log (1 references) target prot opt source destination

Chain INPUT_ZONES (1 references) target prot opt source destination IN_public all -- 0.0.0.0/0 0.0.0.0/0 [goto]

Chain INPUT_ZONES_SOURCE (1 references) target prot opt source destination

Chain INPUT_direct (1 references) target prot opt source destination

Chain IN_public (1 references) target prot opt source destination IN_public_log all -- 0.0.0.0/0 0.0.0.0/0 IN_public_deny all -- 0.0.0.0/0 0.0.0.0/0 IN_public_allow all -- 0.0.0.0/0 0.0.0.0/0

Chain IN_public_allow (1 references) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ctstate NEW ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 ctstate NEW ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 ctstate NEW ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:28002 ctstate NEW ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7080 ctstate NEW ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 ctstate NEW ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:2144 ctstate NEW ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9000 ctstate NEW ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:28001 ctstate NEW ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:10050 ctstate NEW ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7443 ctstate NEW

Chain IN_public_deny (1 references) target prot opt source destination

Chain IN_public_log (1 references) target prot opt source destination

Chain OUTPUT_direct (1 references) target prot opt source destination

-----Original Message----- From: Chen, Brandon Sent: Monday, January 16, 2017 6:56 PM To: Amac, Ozgur; Thurston, Robert Cc: Wang, Teddy Subject: RE: Partner Portal - SSO Hop perf and prod

Should we check if the portl 8080 is open in iptables? I tried below but was denied.

-bash-4.2$ sudo iptables -l

We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things:

#1) Respect the privacy of others. #2) Think before you type. #3) With great power comes great responsibility.

[sudo] password for a_chenb: Sorry, user a_chenb is not allowed to execute '/sbin/iptables -l' as root on sdelldevtst01.

-----Original Message----- From: Amac, Ozgur Sent: Monday, January 16, 2017 6:25 PM To: Thurston, Robert Cc: Wang, Teddy; Chen, Brandon Subject: RE: Partner Portal - SSO Hop perf and prod

Rob,

I suspect it is this issue:

https://github.com/docker/docker/issues/16137

Without root privilege I cannot confirm though.

Can you help us out?

Thanks, Özgür


From: Thurston, Robert Sent: Monday, January 16, 2017 4:28 PM To: Amac, Ozgur Cc: Wang, Teddy; Chen, Brandon Subject: RE: Partner Portal - SSO Hop perf and prod

I get the same error, when I run it as root, so I don't think it is a permission issue.

Rob


-----Original Message----- From: Amac, Ozgur Sent: Monday, January 16, 2017 2:34 PM To: Thurston, Robert Cc: Wang, Teddy; Chen, Brandon Subject: RE: Partner Portal - SSO Hop perf and prod

Alright. We have advanced to next stage:

-bash-4.2$ sudo /usr/local/bin/docker-compose up -d Creating network "aamaco_default" with the default driver ERROR: Failed to Setup IP tables: Unable to enable SKIP DNAT rule: (iptables failed: iptables --wait -t nat -I DOCKER -i br-97c34c80a837 -j RETURN: iptables: No chain/target/match by that name. (exit status 1))

I am guessing this is because my user is not allowed to create an NAT on that VM for the docker network?

Özgür
#+END_EXAMPLE
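A common way out of the "iptables: No chain/target/match by that name" failure in the thread above (a sketch, not from the original thread; requires root): dockerd creates its iptables chains at daemon startup, so if a firewalld or iptables restart wiped them, restarting the daemon recreates them.

```shell
# The DOCKER chains are created by the docker daemon at startup. If firewalld
# or an iptables flush removed them, recreate them by restarting the daemon:
systemctl restart docker      # or: service docker restart

# Verify the nat-table DOCKER chain exists again before retrying docker-compose:
iptables -t nat -L DOCKER -n
```

If firewalld must stay enabled, start it before dockerd so the Docker chains are added last and survive.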

https://github.com/docker/machine
** install
https://docs.docker.com/machine/install-machine/

curl -L https://github.com/docker/machine/releases/download/v0.10.0/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine && \
  chmod +x /tmp/docker-machine && sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
** create linode vm with docker-machine
https://github.com/taoh/docker-machine-linode

cd /home/denny/docker-machine-linode
export GOPATH=/home/denny/docker-machine-linode

go get github.com/taoh/docker-machine-linode

cd $GOPATH/src/github.com/taoh/docker-machine-linode

make
make install

docker-machine create -d linode --linode-api-key="rVwhQwj28Z1LozLQeiVPDg70q192YlDdvwKYEo2JUWEVwagzQErUi93TkUGSU9LT" --linode-root-pass "DevOpsChangeMeDenny1"

  • Dockerize Services :noexport:
** start jenkins via docker-compose
docker-compose up -d

cd /var/lib/docker/volumes/
chmod 777 -R *_volume_backup *_volume_jobs *_volume_workspace
*** > ./docker-compose.yml && vim docker-compose.yml
#+BEGIN_EXAMPLE
version: '2'
services:
  my_jenkins:
    container_name: my_jenkins
    hostname: my_jenkins
    # Base Docker image: https://github.com/dennyzhang/devops_docker_image/blob/tag_v6/jenkins/Dockerfile_1_0
    image: denny/jenkins:1.0
    ports:
      - "8080:8080/tcp"
    environment:
      JENKINS_TIMEZONE: "America/New_York"
      JAVA_OPTS: -Djenkins.install.runSetupWizard=false
    volumes:
      - volume_jobs:/var/jenkins_home/jobs
      - volume_workspace:/var/jenkins_home/workspace
      - volume_backup:/var/jenkins_home/backup

volumes:
  volume_jobs:
  volume_backup:
  volume_workspace:
#+END_EXAMPLE
** DONE run wordpress by docker CLOSED: [2016-10-24 Mon 22:47]
https://www.sitepoint.com/how-to-use-the-official-docker-wordpress-image/
https://hub.docker.com/_/wordpress/
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-and-phpmyadmin-with-docker-compose-on-ubuntu-14-04
https://www.digitalocean.com/community/tutorials/how-to-dockerise-and-deploy-multiple-wordpress-applications-on-ubuntu
http://wade.be/development/2016/05/02/docker.html
https://vexxhost.com/resources/tutorials/how-to-install-wordpress-and-phpmyadmin-with-docker-compose-on-ubuntu-14-04/
https://visible.vc/engineering/docker-environment-for-wordpress/

wget -qO- https://get.docker.com/ | sh
docker pull wordpress
docker pull mysql

export mysql_password=denny123
docker run --name wordpressdb -e MYSQL_ROOT_PASSWORD=$mysql_password -e MYSQL_DATABASE=wordpress -d mysql:5.7
docker run -e WORDPRESS_DB_PASSWORD=$mysql_password -p 80:80 -d --name wordpress --link wordpressdb:mysql wordpress

http://injenkins.fluigdata.com:8080

web:
  image: wordpress
  links:
    - mysql
  environment:
    - WORDPRESS_DB_PASSWORD=password
  ports:
    - "127.0.0.3:8080:80"
mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_DATABASE=wordpress

docker run --name some-wordpress --link some-mysql:mysql -d wordpress
** DONE Use Docker to setup a ftp server: support people to transfer big files CLOSED: [2017-03-21 Tue 13:53]
*** TODO support unicode: Chinese characters
*** TODO [#A] Enhance the data security: different users can't view others' sharing.
http://www.proftpd.org/docs/howto/AuthFiles.html

docker exec -it proftpd_ftp_1 bash

ftp_username="ftp_XXX"
mkdir -p /var/ftp/$ftp_username
ftpasswd --passwd --file=/etc/proftpd/user.d/${ftp_username}.passwd --name=$ftp_username --uid=1001 --home=/var/ftp/$ftp_username --shell=/bin/false

XXX123ChangeMe

ls -lth /etc/proftpd/user.d/
ls -lth /etc/proftpd/conf.d/

#+BEGIN_EXAMPLE
[email protected]:/etc/proftpd# ftpasswd --passwd --name=$ftp_username --uid=1001 --home=/var/ftp/$ftp_username --shell=/bin/false
ftpasswd: --passwd: missing --gid argument: default gid set to uid
ftpasswd: creating passwd entry for user ftp_XXX

ftpasswd: /bin/false is not among the valid system shells. Use of ftpasswd: "RequireValidShell off" may be required, and the PAM ftpasswd: module configuration may need to be adjusted.

Password: Re-type password:

ftpasswd: entry created
#+END_EXAMPLE
*** TODO Common monitoring: disk usage, download link, DNS setting
curl -u ftp_XXX:XXX123ChangeMe ftp://ftp.carol.ai:21/hosts

#+BEGIN_EXAMPLE
If I add a secret file in my first layer, use the secret file in my second layer, then finally remove my secret file in the third layer, and then build with the --squash flag.

Will there be any way now to get the secret file?

Answer: Your image won't have the secret file.

How --squash works:

Once the build is complete, Docker creates a new image loading the diffs from each layer into a single new layer and references all the parent's layers.

In other words: when squashing, Docker will take all the filesystem layers produced by a build and collapse them into a single new layer.

This can simplify the process of creating minimal container images, but may result in slightly higher overhead when images are moved around (because squashed layers can no longer be shared between images). Docker still caches individual layers to make subsequent builds fast.

Please note this feature squashes all the newly built layers into a single layer; it is not squashing to scratch.
#+END_EXAMPLE
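The claim above can be checked directly. A sketch, assuming a daemon with experimental features enabled (`--squash` requires it); the alpine base and image tag are illustrative, not from the source:

```shell
# Dockerfile that adds, uses, then removes a secret across three layers.
cat > Dockerfile.squash <<'EOF'
FROM alpine:3.18
RUN echo "secret" > /tmp/secret.txt
RUN cat /tmp/secret.txt > /dev/null
RUN rm /tmp/secret.txt
EOF

# Only attempt the build when docker is present; --squash collapses the newly
# built layers into one, so the secret never lands in the final image.
if command -v docker >/dev/null 2>&1; then
  docker build --squash -t squash-demo -f Dockerfile.squash .
  docker history squash-demo   # one squashed layer on top of the base layers
fi
```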

  • DONE docker exec into one container :noexport:
CLOSED: [2020-06-08 Mon 10:04]

https://docs.docker.com/engine/reference/commandline/ps/
docker ps --filter name=nginx --format "table {{.ID}}"

docker exec -it $(docker ps --filter name=nginx --format "table {{.ID}}" | tail -1) sh
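The one-liner above can be wrapped in a small helper; the `nginx` filter is just an example. Using the plain `{{.ID}}` format (no `table` prefix) drops the header row, so the `tail -1` trick becomes unnecessary:

```shell
# Print the ID of the first container whose name matches the given filter.
latest_container_id() {
  docker ps --filter "name=$1" --format '{{.ID}}' | head -n 1
}

# Usage (when docker is available):
#   docker exec -it "$(latest_container_id nginx)" sh
```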

  • docker :noexport:

Start docker daemon

docker -d

start a container with an interactive shell

docker run -ti <image_name> /bin/bash

"shell" into a running container (docker-1.3+)

docker exec -ti <container_name> bash

inspect a running container

docker inspect <container_name> (or <container_id>)

Get the process ID for a container

Source: https://github.com/jpetazzo/nsenter

docker inspect --format {{.State.Pid}} <container_name_or_ID>
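A helper around the inspect command above, plus a hedged nsenter invocation (the container name is an example):

```shell
# Print the host PID of a container's init process.
container_pid() {
  docker inspect --format '{{.State.Pid}}' "$1"
}

# Enter that container's namespaces with nsenter (run as root), e.g.:
#   nsenter --target "$(container_pid mycontainer)" --mount --uts --ipc --net --pid sh
```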

List the current mounted volumes for a container (and pretty print)

Source:

http://nathanleclaire.com/blog/2014/07/12/10-docker-tips-and-tricks-that-will-make-you-sing-a-whale-song-of-joy/

docker inspect --format='{{json .Volumes}}' <container_id> | python -mjson.tool
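Note that on newer Docker releases the `.Volumes` field used above no longer exists; the same information lives in `.Mounts`. A sketch, using python3's json.tool instead of the python2 spelling:

```shell
# Pretty-print a container's mount list via the Go template JSON helper.
container_mounts() {
  docker inspect --format '{{json .Mounts}}' "$1" | python3 -m json.tool
}

# Usage: container_mounts <container_id>
```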

Copy files/folders between a container and your host

docker cp foo.txt mycontainer:/foo.txt

  • [#A] docker-compose :noexport:
  • DONE doc: docker compose export udp :noexport: CLOSED: [2018-07-20 Fri 14:16]
  • TODO opensource improvement: CheatSheet: https://github.com/eon01/DockerCheatSheet :noexport:
  • TODO [#A] opensource improvement: docker cheatsheet: https://github.com/wsargent/docker-cheat-sheet :noexport:
  • HALF good to manage docker and docker-compose samples :noexport:
  • Docker: a breakthrough of software delivery :noexport:Tool:
:PROPERTIES:
:type: OpenStack_Cloud
:END:
| Name | Summary |
|------+---------|
| docker pull ubuntu | Download a pre-built image, searching https://index.docker.io |
| docker run -i -t ubuntu /bin/bash | login to a container |
| /var/lib/docker/containers/227.../config.lxc | lxc configuration |
| docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done" | start a new daemon container |
| docker logs e970...96 | Check container log |
| docker attach e970...96 | Attach to a running container |
| docker attach -sig-proxy=false e970...96 | When sig-proxy is false, (Ctrl-C) won't stop the container |
| docker stop e970...96 | stop a container |
| docker run -p 22 -p 80 -t -i /supervisor | Start a container from your own image |
| docker run -d -p 27017 -m="1g" denny/mongodb --noprealloc --smallfiles | |

ps -ef | grep docker: find container id
** basic use
http://www.docker.io/the_whole_story/
#+begin_example
Docker interests me because it allows simple environment isolation and repeatability. I can create a run-time environment once, package it up, then run it again on any other machine. Furthermore, everything that runs in that environment is isolated from the underlying host (much like a virtual machine). And best of all, everything is fast and simple.
#+end_example
** [question] How to exit a docker container without stopping the container
#+begin_example
[[email protected] ~]# sudo docker run -p 22 -p 80 -t -i denny/supervisord
2013-12-11 06:21:34,459 CRIT Supervisor running as root (no user in config file)
2013-12-11 06:21:34,459 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2013-12-11 06:21:34,485 INFO RPC interface 'supervisor' initialized
2013-12-11 06:21:34,485 WARN cElementTree not installed, using slower XML parser for XML-RPC
2013-12-11 06:21:34,485 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2013-12-11 06:21:34,485 INFO supervisord started with pid 1
2013-12-11 06:21:35,490 INFO spawned: 'sshd' with pid 5
2013-12-11 06:21:35,497 INFO spawned: 'apache2' with pid 6
2013-12-11 06:21:36,541 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2013-12-11 06:21:36,541 INFO success: apache2 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
#+end_example
** [#A] [question] why does a Docker container use fewer resources than a VM?
http://www.infoq.com/articles/docker-containers/

http://docs.docker.io/en/latest/faq/

http://stackoverflow.com/questions/16047306/how-is-docker-io-different-from-a-normal-virtual-machine/16048358#16048358

  • A full virtualized system usually takes minutes to start, LXC containers take seconds, and sometimes even less than a second.

  • VMs are best used to allocate chunks of hardware resources. Containers operate at the process level

  • LXC enables different containers to share a lot of the host operating system resources.
*** misc :noexport:
#+begin_example
Docker currently uses LinuX Containers (LXC), which run in the same operating system as its host. This allows it to share a lot of the host operating system resources. It also uses AuFS for the file system, and it manages the networking for you as well.

AuFS is a layered file system, so you can have a read only part, and a write part, and merge those together. So you could have the common parts of the operating system as read only, which are shared amongst all of your containers, and then give each container its own mount for writing.

So let's say you have a container image that is 1GB in size. If you wanted to use a Full VM, you would need to have 1GB times x number of VMs you want. With LXC and AuFS you can share the bulk of the 1GB and if you have 1000 containers you still might only have a little over 1GB of space for the containers OS, assuming they are all running the same OS image.

A full virtualized system gets its own set of resources allocated to it, and does minimal sharing. You get more isolation, but it is much heavier (requires more resources).

With LXC you get less isolation, but they are more lightweight and require less resources. So you could easily run 1000's on a host, and it doesn't even blink. Try doing that with Xen, and unless you have a really big host, I don't think it is possible.

A full virtualized system usually takes minutes to start, LXC containers take seconds, and sometimes even less than a second.

There are pros and cons for each type of virtualized system. If you want full isolation with guaranteed resources then a full VM is the way to go. If you just want to isolate processes from each other and want to run a ton of them on a reasonably sized host, then LXC might be the way to go.

For more information check out this set of blog posts, which does a good job of explaining how LXC works: http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part

I feel foolish for asking, but why is deploying software to a docker image (if that's the right term) easier than simply deploying to a consistent production environment?

Deploying a consistent production environment is easier said than done. Even if you use tools like chef and puppet, there are always OS updates and other things that change between hosts and environments.

What docker does is it gives you the ability to snapshot the OS into a common image, and makes it easy to deploy on other docker hosts. Locally, dev, qa, prod, etc, all the same image. Sure you can do this with other tools, but not as easily or fast.

This is great for unit testing, lets say you have 1000 tests and they need to connect to a database, and in order to not break anything you need to run serially so that the tests don't step on each other (run each test in a transaction and roll back). With Docker you could create an image of your database, and then run all the tests in parallel since you know they will all be running against the same snapshot of the database. Since they are running in parallel and in LXC containers they could run all on the same box at the same time, and your tests will finish much faster. Try doing that with a full VM.

Edit: From comments...

Interesting! I suppose I'm still confused by the notion of "snapshot[ting] the OS". How does one do that without, well, making an image of the OS?

Well, let's see if I can explain. You start with a base image, make your changes, and commit those changes using docker; that creates an image. This image contains only the differences from the base. When you want to run your image, you also need the base; it layers your image on top of the base using a layered file system, in this case AUFS. AUFS merges the different layers together and you get what you want; you just need to run it. You can keep adding more and more images (layers) and it will keep saving only the diffs.
#+end_example
*** filesystem
ls -lt /var/lib/docker/graph/

ls -lt /var/lib/docker/containers

AuFS is a layered file system, so you can have a read only part, and a write part, and merge those together.

A full virtualized system gets its own set of resources allocated to it, and does minimal sharing.
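The layer stack described here can be observed directly with `docker history`; a sketch (the image name is an example):

```shell
# Show the layer stack of an image without truncating the creating commands.
image_layers() {
  docker history --no-trunc "$1"
}

# Usage: image_layers ubuntu
```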

When you docker run an image, AUFS will 'merge' all layers into one usable file system.
*** network
get ip:

grep lxc.network.ipv4 /var/lib/docker/containers/c8f8bde235962390e7d797eddc54da5b55bd5ba47b3625e7c5975c352abb52ad/config.lxc
**** iptables-save
#+begin_example
<8b6a1c6cc89d0d4a6800f7a7694aaf0462388b3d15b6d676]# iptables-save

# Generated by iptables-save v1.4.7 on Wed Dec 11 03:18:18 2013

*nat
:PREROUTING ACCEPT [40:3416]
:POSTROUTING ACCEPT [310:18631]
:OUTPUT ACCEPT [310:18631]
:DOCKER - [0:0]
:neutron-openvswi-OUTPUT - [0:0]
:neutron-openvswi-POSTROUTING - [0:0]
:neutron-openvswi-PREROUTING - [0:0]
:neutron-openvswi-float-snat - [0:0]
:neutron-openvswi-snat - [0:0]
:neutron-postrouting-bottom - [0:0]
-A PREROUTING -j neutron-openvswi-PREROUTING
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -j neutron-openvswi-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A POSTROUTING -s 172.17.0.0/16 ! -d 172.17.0.0/16 -j MASQUERADE
-A OUTPUT -j neutron-openvswi-OUTPUT
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 49160 -j DNAT --to-destination 172.17.0.28:8080
-A neutron-openvswi-snat -j neutron-openvswi-float-snat
-A neutron-postrouting-bottom -j neutron-openvswi-snat
COMMIT

# Completed on Wed Dec 11 03:18:18 2013

# Generated by iptables-save v1.4.7 on Wed Dec 11 03:18:18 2013

*mangle
:PREROUTING ACCEPT [1686528:951517614]
:INPUT ACCEPT [878227:456857332]
:FORWARD ACCEPT [802843:494440141]
:OUTPUT ACCEPT [238768:41617390]
:POSTROUTING ACCEPT [1041611:536057531]
COMMIT

# Completed on Wed Dec 11 03:18:18 2013

# Generated by iptables-save v1.4.7 on Wed Dec 11 03:18:18 2013

*filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [233828:41379304] :neutron-filter-top - [0:0] :neutron-openvswi-FORWARD - [0:0] :neutron-openvswi-INPUT - [0:0] :neutron-openvswi-OUTPUT - [0:0] :neutron-openvswi-i19a9e9d1-9 - [0:0] :neutron-openvswi-ifbf44c00-1 - [0:0] :neutron-openvswi-local - [0:0] :neutron-openvswi-o19a9e9d1-9 - [0:0] :neutron-openvswi-ofbf44c00-1 - [0:0] :neutron-openvswi-s19a9e9d1-9 - [0:0] :neutron-openvswi-sfbf44c00-1 - [0:0] :neutron-openvswi-sg-chain - [0:0] :neutron-openvswi-sg-fallback - [0:0] -A INPUT -s 192.168.209.130/32 -p tcp -m tcp --dport 9696 -j ACCEPT -A INPUT -j neutron-openvswi-INPUT -A INPUT -s 192.168.209.131/32 -p tcp -m multiport --dports 5900:5999 -m comment --comment "001 nova compute incoming 192.168.209.131" -j ACCEPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -i docker0 -o docker0 -j ACCEPT -A FORWARD -i docker0 ! 
-o docker0 -j ACCEPT -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -j neutron-filter-top -A FORWARD -j neutron-openvswi-FORWARD -A FORWARD -j REJECT --reject-with icmp-host-prohibited -A OUTPUT -j neutron-filter-top -A OUTPUT -j neutron-openvswi-OUTPUT -A neutron-filter-top -j neutron-openvswi-local -A neutron-openvswi-FORWARD -m physdev --physdev-out tapfbf44c00-1c --physdev-is-bridged -j neutron-openvswi-sg-chain -A neutron-openvswi-FORWARD -m physdev --physdev-in tapfbf44c00-1c --physdev-is-bridged -j neutron-openvswi-sg-chain -A neutron-openvswi-FORWARD -m physdev --physdev-out tap19a9e9d1-99 --physdev-is-bridged -j neutron-openvswi-sg-chain -A neutron-openvswi-FORWARD -m physdev --physdev-in tap19a9e9d1-99 --physdev-is-bridged -j neutron-openvswi-sg-chain -A neutron-openvswi-INPUT -m physdev --physdev-in tapfbf44c00-1c --physdev-is-bridged -j neutron-openvswi-ofbf44c00-1 -A neutron-openvswi-INPUT -m physdev --physdev-in tap19a9e9d1-99 --physdev-is-bridged -j neutron-openvswi-o19a9e9d1-9 -A neutron-openvswi-i19a9e9d1-9 -m state --state INVALID -j DROP -A neutron-openvswi-i19a9e9d1-9 -m state --state RELATED,ESTABLISHED -j RETURN -A neutron-openvswi-i19a9e9d1-9 -p icmp -j RETURN -A neutron-openvswi-i19a9e9d1-9 -s 10.0.0.3/32 -j RETURN -A neutron-openvswi-i19a9e9d1-9 -p tcp -m tcp --dport 22 -j RETURN -A neutron-openvswi-i19a9e9d1-9 -s 10.0.0.2/32 -p udp -m udp --sport 67 --dport 68 -j RETURN -A neutron-openvswi-i19a9e9d1-9 -j neutron-openvswi-sg-fallback -A neutron-openvswi-ifbf44c00-1 -m state --state INVALID -j DROP -A neutron-openvswi-ifbf44c00-1 -m state --state RELATED,ESTABLISHED -j RETURN -A neutron-openvswi-ifbf44c00-1 -p icmp -j RETURN -A neutron-openvswi-ifbf44c00-1 -s 10.0.0.4/32 -j RETURN -A neutron-openvswi-ifbf44c00-1 -p tcp -m tcp --dport 22 -j RETURN -A neutron-openvswi-ifbf44c00-1 -s 10.0.0.2/32 -p udp -m udp --sport 67 --dport 68 -j RETURN -A neutron-openvswi-ifbf44c00-1 -j neutron-openvswi-sg-fallback -A 
neutron-openvswi-o19a9e9d1-9 -p udp -m udp --sport 68 --dport 67 -j RETURN -A neutron-openvswi-o19a9e9d1-9 -j neutron-openvswi-s19a9e9d1-9 -A neutron-openvswi-o19a9e9d1-9 -p udp -m udp --sport 67 --dport 68 -j DROP -A neutron-openvswi-o19a9e9d1-9 -m state --state INVALID -j DROP -A neutron-openvswi-o19a9e9d1-9 -m state --state RELATED,ESTABLISHED -j RETURN -A neutron-openvswi-o19a9e9d1-9 -j RETURN -A neutron-openvswi-o19a9e9d1-9 -j neutron-openvswi-sg-fallback -A neutron-openvswi-ofbf44c00-1 -p udp -m udp --sport 68 --dport 67 -j RETURN -A neutron-openvswi-ofbf44c00-1 -j neutron-openvswi-sfbf44c00-1 -A neutron-openvswi-ofbf44c00-1 -p udp -m udp --sport 67 --dport 68 -j DROP -A neutron-openvswi-ofbf44c00-1 -m state --state INVALID -j DROP -A neutron-openvswi-ofbf44c00-1 -m state --state RELATED,ESTABLISHED -j RETURN -A neutron-openvswi-ofbf44c00-1 -j RETURN -A neutron-openvswi-ofbf44c00-1 -j neutron-openvswi-sg-fallback -A neutron-openvswi-s19a9e9d1-9 -s 10.0.0.4/32 -m mac --mac-source FA:16:3E:32:8F:60 -j RETURN -A neutron-openvswi-s19a9e9d1-9 -j DROP -A neutron-openvswi-sfbf44c00-1 -s 10.0.0.3/32 -m mac --mac-source FA:16:3E:AE:F8:7C -j RETURN -A neutron-openvswi-sfbf44c00-1 -j DROP -A neutron-openvswi-sg-chain -m physdev --physdev-out tapfbf44c00-1c --physdev-is-bridged -j neutron-openvswi-ifbf44c00-1 -A neutron-openvswi-sg-chain -m physdev --physdev-in tapfbf44c00-1c --physdev-is-bridged -j neutron-openvswi-ofbf44c00-1 -A neutron-openvswi-sg-chain -m physdev --physdev-out tap19a9e9d1-99 --physdev-is-bridged -j neutron-openvswi-i19a9e9d1-9 -A neutron-openvswi-sg-chain -m physdev --physdev-in tap19a9e9d1-99 --physdev-is-bridged -j neutron-openvswi-o19a9e9d1-9 -A neutron-openvswi-sg-chain -j ACCEPT -A neutron-openvswi-sg-fallback -j DROP COMMIT

# Completed on Wed Dec 11 03:18:18 2013

#+end_example
**** ip a
#+begin_example
<8b6a1c6cc89d0d4a6800f7a7694aaf0462388b3d15b6d676]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:a4:54:fd brd ff:ff:ff:ff:ff:ff inet 192.168.209.130/24 brd 192.168.209.255 scope global eth0 inet6 fe80::20c:29ff:fea4:54fd/64 scope link valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:a4:54:07 brd ff:ff:ff:ff:ff:ff inet 172.16.162.130/24 brd 172.16.162.255 scope global eth1 inet6 fe80::20c:29ff:fea4:5407/64 scope link valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 62:87:ad:0a:dd:2a brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 66:62:4c:ea:fd:4a brd ff:ff:ff:ff:ff:ff inet6 fe80::584d:7cff:fe2d:7807/64 scope link valid_lft forever preferred_lft forever
6: br-tun: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 32:ea:70:aa:9f:4f brd ff:ff:ff:ff:ff:ff inet6 fe80::3041:d1ff:fe64:97e8/64 scope link valid_lft forever preferred_lft forever
7: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether b2:1c:42:2f:19:ec brd ff:ff:ff:ff:ff:ff
9: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether fe:aa:44:84:21:cc brd ff:ff:ff:ff:ff:ff inet 172.17.42.1/16 scope global docker0 inet6 fe80::147e:ff:fe31:8ba7/64 scope link valid_lft forever preferred_lft forever
89: vethnRwtyr: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether fe:aa:44:84:21:cc brd ff:ff:ff:ff:ff:ff inet6 fe80::fcaa:44ff:fe84:21cc/64 scope link valid_lft forever preferred_lft forever
#+end_example
**** [#A] Docker leverages NAT networking so containers can talk to the internet
Network namespace: a container gets its own virtual network device and virtual IP (so it can bind to whatever port it likes without taking up its host's ports).
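A hedged sketch of reproducing this NAT behavior: publish a port and read back the nat table. The image (nginx), container name, and ports below are illustrative, not from the source:

```shell
host_port=49160
container_port=8080

# Publishing a port makes dockerd add a DNAT rule to the nat table's DOCKER chain,
# similar to: -A DOCKER ! -i docker0 -p tcp --dport 49160 -j DNAT --to-destination 172.17.0.X:8080
if command -v docker >/dev/null 2>&1; then
  docker run -d --name nat-demo -p "${host_port}:${container_port}" nginx
  sudo iptables -t nat -L DOCKER -n
fi
```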

-A DOCKER ! -i docker0 -p tcp -m tcp --dport 49160 -j DNAT --to-destination 172.17.0.28:8080
***** Start a vnet: 172.17.42.1
#+begin_example
[[email protected] ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:a4:54:fd brd ff:ff:ff:ff:ff:ff inet 192.168.209.130/24 brd 192.168.209.255 scope global eth0 inet6 fe80::20c:29ff:fea4:54fd/64 scope link valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:a4:54:07 brd ff:ff:ff:ff:ff:ff inet 172.16.162.130/24 brd 172.16.162.255 scope global eth1 inet6 fe80::20c:29ff:fea4:5407/64 scope link valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 62:87:ad:0a:dd:2a brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 66:62:4c:ea:fd:4a brd ff:ff:ff:ff:ff:ff inet6 fe80::584d:7cff:fe2d:7807/64 scope link valid_lft forever preferred_lft forever
6: br-tun: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 32:ea:70:aa:9f:4f brd ff:ff:ff:ff:ff:ff inet6 fe80::3041:d1ff:fe64:97e8/64 scope link valid_lft forever preferred_lft forever
7: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether b2:1c:42:2f:19:ec brd ff:ff:ff:ff:ff:ff
9: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether fe:4f:40:06:ff:07 brd ff:ff:ff:ff:ff:ff inet 172.17.42.1/16 scope global docker0 inet6 fe80::147e:ff:fe31:8ba7/64 scope link valid_lft forever preferred_lft forever
11: vethV9wjWn: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether fe:4f:40:06:ff:07 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc4f:40ff:fe06:ff07/64 scope link valid_lft forever preferred_lft forever
[[email protected] ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.209.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.16.162.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
0.0.0.0         192.168.209.2   0.0.0.0         UG    0      0        0 eth0
[[email protected] ~]# ps -ef | grep docker
root      2907     1  6 23:05 pts/0    00:00:40 /usr/bin/docker -d
root      3006  2784  0 23:05 pts/0    00:00:00 docker run -i -t ubuntu /bin/bash
root      3780  2907  0 23:13 pts/2    00:00:00 lxc-start -n 22778cacd971d2a3f695a6bc5792bb9238c97bbfa13599d4272749838e6ec34b -f /var/lib/docker/containers/22778cacd971d2a3f695a6bc5792bb9238c97bbfa13599d4272749838e6ec34b/config.lxc -- /.dockerinit -g 172.17.42.1 -- /bin/bash
root      5303  3620  0 23:16 pts/1    00:00:00 grep docker
#+end_example
***** Docker implements NAT via iptables
http://docs.docker.io/en/latest/examples/python_web_app/
#+begin_example
[[email protected] ~]# ipables-save
-bash: ipables-save: command not found
[[email protected] ~]# iptables-save

# Generated by iptables-save v1.4.7 on Wed Dec 11 01:19:28 2013

*nat
:PREROUTING ACCEPT [1:64]
:POSTROUTING ACCEPT [4:252]
:OUTPUT ACCEPT [4:252]
:DOCKER - [0:0]
:neutron-openvswi-OUTPUT - [0:0]
:neutron-openvswi-POSTROUTING - [0:0]
:neutron-openvswi-PREROUTING - [0:0]
:neutron-openvswi-float-snat - [0:0]
:neutron-openvswi-snat - [0:0]
:neutron-postrouting-bottom - [0:0]
-A PREROUTING -j neutron-openvswi-PREROUTING
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -j neutron-openvswi-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A POSTROUTING -s 172.17.0.0/16 ! -d 172.17.0.0/16 -j MASQUERADE
-A OUTPUT -j neutron-openvswi-OUTPUT
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 49160 -j DNAT --to-destination 172.17.0.28:8080
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 49154 -j DNAT --to-destination 172.17.0.81:8087
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 49155 -j DNAT --to-destination 172.17.0.81:8098
-A neutron-openvswi-snat -j neutron-openvswi-float-snat
-A neutron-postrouting-bottom -j neutron-openvswi-snat
COMMIT

# Completed on Wed Dec 11 01:19:28 2013

# Generated by iptables-save v1.4.7 on Wed Dec 11 01:19:28 2013

*mangle
:PREROUTING ACCEPT [877603:886207084]
:INPUT ACCEPT [369741:436288542]
:FORWARD ACCEPT [502404:449698401]
:OUTPUT ACCEPT [157968:7927848]
:POSTROUTING ACCEPT [660372:457626249]
COMMIT

# Completed on Wed Dec 11 01:19:28 2013

# Generated by iptables-save v1.4.7 on Wed Dec 11 01:19:28 2013

*filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [153028:7689762] :neutron-filter-top - [0:0] :neutron-openvswi-FORWARD - [0:0] :neutron-openvswi-INPUT - [0:0] :neutron-openvswi-OUTPUT - [0:0] :neutron-openvswi-i19a9e9d1-9 - [0:0] :neutron-openvswi-ifbf44c00-1 - [0:0] :neutron-openvswi-local - [0:0] :neutron-openvswi-o19a9e9d1-9 - [0:0] :neutron-openvswi-ofbf44c00-1 - [0:0] :neutron-openvswi-s19a9e9d1-9 - [0:0] :neutron-openvswi-sfbf44c00-1 - [0:0] :neutron-openvswi-sg-chain - [0:0] :neutron-openvswi-sg-fallback - [0:0] -A INPUT -s 192.168.209.130/32 -p tcp -m tcp --dport 9696 -j ACCEPT -A INPUT -j neutron-openvswi-INPUT -A INPUT -s 192.168.209.131/32 -p tcp -m multiport --dports 5900:5999 -m comment --comment "001 nova compute incoming 192.168.209.131" -j ACCEPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -i docker0 -o docker0 -j ACCEPT -A FORWARD -i docker0 ! 
-o docker0 -j ACCEPT -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -j neutron-filter-top -A FORWARD -j neutron-openvswi-FORWARD -A FORWARD -j REJECT --reject-with icmp-host-prohibited -A OUTPUT -j neutron-filter-top -A OUTPUT -j neutron-openvswi-OUTPUT -A neutron-filter-top -j neutron-openvswi-local -A neutron-openvswi-FORWARD -m physdev --physdev-out tapfbf44c00-1c --physdev-is-bridged -j neutron-openvswi-sg-chain -A neutron-openvswi-FORWARD -m physdev --physdev-in tapfbf44c00-1c --physdev-is-bridged -j neutron-openvswi-sg-chain -A neutron-openvswi-FORWARD -m physdev --physdev-out tap19a9e9d1-99 --physdev-is-bridged -j neutron-openvswi-sg-chain -A neutron-openvswi-FORWARD -m physdev --physdev-in tap19a9e9d1-99 --physdev-is-bridged -j neutron-openvswi-sg-chain -A neutron-openvswi-INPUT -m physdev --physdev-in tapfbf44c00-1c --physdev-is-bridged -j neutron-openvswi-ofbf44c00-1 -A neutron-openvswi-INPUT -m physdev --physdev-in tap19a9e9d1-99 --physdev-is-bridged -j neutron-openvswi-o19a9e9d1-9 -A neutron-openvswi-i19a9e9d1-9 -m state --state INVALID -j DROP -A neutron-openvswi-i19a9e9d1-9 -m state --state RELATED,ESTABLISHED -j RETURN -A neutron-openvswi-i19a9e9d1-9 -p icmp -j RETURN -A neutron-openvswi-i19a9e9d1-9 -s 10.0.0.3/32 -j RETURN -A neutron-openvswi-i19a9e9d1-9 -p tcp -m tcp --dport 22 -j RETURN -A neutron-openvswi-i19a9e9d1-9 -s 10.0.0.2/32 -p udp -m udp --sport 67 --dport 68 -j RETURN -A neutron-openvswi-i19a9e9d1-9 -j neutron-openvswi-sg-fallback -A neutron-openvswi-ifbf44c00-1 -m state --state INVALID -j DROP -A neutron-openvswi-ifbf44c00-1 -m state --state RELATED,ESTABLISHED -j RETURN -A neutron-openvswi-ifbf44c00-1 -p icmp -j RETURN -A neutron-openvswi-ifbf44c00-1 -s 10.0.0.4/32 -j RETURN -A neutron-openvswi-ifbf44c00-1 -p tcp -m tcp --dport 22 -j RETURN -A neutron-openvswi-ifbf44c00-1 -s 10.0.0.2/32 -p udp -m udp --sport 67 --dport 68 -j RETURN -A neutron-openvswi-ifbf44c00-1 -j neutron-openvswi-sg-fallback -A 
neutron-openvswi-o19a9e9d1-9 -p udp -m udp --sport 68 --dport 67 -j RETURN -A neutron-openvswi-o19a9e9d1-9 -j neutron-openvswi-s19a9e9d1-9 -A neutron-openvswi-o19a9e9d1-9 -p udp -m udp --sport 67 --dport 68 -j DROP -A neutron-openvswi-o19a9e9d1-9 -m state --state INVALID -j DROP -A neutron-openvswi-o19a9e9d1-9 -m state --state RELATED,ESTABLISHED -j RETURN -A neutron-openvswi-o19a9e9d1-9 -j RETURN -A neutron-openvswi-o19a9e9d1-9 -j neutron-openvswi-sg-fallback -A neutron-openvswi-ofbf44c00-1 -p udp -m udp --sport 68 --dport 67 -j RETURN -A neutron-openvswi-ofbf44c00-1 -j neutron-openvswi-sfbf44c00-1 -A neutron-openvswi-ofbf44c00-1 -p udp -m udp --sport 67 --dport 68 -j DROP -A neutron-openvswi-ofbf44c00-1 -m state --state INVALID -j DROP -A neutron-openvswi-ofbf44c00-1 -m state --state RELATED,ESTABLISHED -j RETURN -A neutron-openvswi-ofbf44c00-1 -j RETURN -A neutron-openvswi-ofbf44c00-1 -j neutron-openvswi-sg-fallback -A neutron-openvswi-s19a9e9d1-9 -s 10.0.0.4/32 -m mac --mac-source FA:16:3E:32:8F:60 -j RETURN -A neutron-openvswi-s19a9e9d1-9 -j DROP -A neutron-openvswi-sfbf44c00-1 -s 10.0.0.3/32 -m mac --mac-source FA:16:3E:AE:F8:7C -j RETURN -A neutron-openvswi-sfbf44c00-1 -j DROP -A neutron-openvswi-sg-chain -m physdev --physdev-out tapfbf44c00-1c --physdev-is-bridged -j neutron-openvswi-ifbf44c00-1 -A neutron-openvswi-sg-chain -m physdev --physdev-in tapfbf44c00-1c --physdev-is-bridged -j neutron-openvswi-ofbf44c00-1 -A neutron-openvswi-sg-chain -m physdev --physdev-out tap19a9e9d1-99 --physdev-is-bridged -j neutron-openvswi-i19a9e9d1-9 -A neutron-openvswi-sg-chain -m physdev --physdev-in tap19a9e9d1-99 --physdev-is-bridged -j neutron-openvswi-o19a9e9d1-9 -A neutron-openvswi-sg-chain -j ACCEPT -A neutron-openvswi-sg-fallback -j DROP COMMIT

# Completed on Wed Dec 11 01:19:28 2013

[[email protected] ~]#
#+end_example
** TODO What LXC is capable of for VM management
** [#A] [question] Find out how many resources a docker container uses
** DONE [#A] How to define resource quotas used by docker, like cpu, memory, disk capacity: have Docker pass the relevant parameters to LXC
CLOSED: [2013-12-12 Thu 22:54]
** [#A] [question] How to log in to a container, to run an extra command?
** # --8<-------------------------- separator ------------------------>8--
** [#A] [question] How to reuse the deploy logic of Docker for VMs, or bare metal servers?
** [#A] [question] What's the difference between Docker and heat?

  • Docker build
#+BEGIN_SRC sh
cd "$dir" && cat "$dockerfile" | docker build -t "$docker_image" -f - .
#+END_SRC

  • Use google docker registry
#+BEGIN_EXAMPLE
docker build -t gcr.io/<MY_REPO>/<MY_IMAGE>:<MY_TAG> .
gcloud docker -- push gcr.io/<MY_REPO>/<MY_IMAGE>:<MY_TAG>
#+END_EXAMPLE

  • Install test kit

[[https://github.com/dennyzhang/cheatsheet-docker-A4/blob/master/container-install-devkit.sh][container-install-devkit.sh]]
#+BEGIN_EXAMPLE
apt-get -y update
apt-get install -y curl netcat

curl -L https://raw.githubusercontent.com/dennyzhang/cheatsheet-docker-A4/master/container-install-devkit.sh | bash
#+END_EXAMPLE

Remove intermediate containers generated during docker build
#+BEGIN_EXAMPLE
docker ps -a | grep "/bin/sh -c" | awk -F' ' '{print $1}' | xargs docker rm
#+END_EXAMPLE

Remove images matching a string
#+BEGIN_EXAMPLE
echo "Remove docker images with string"
if docker images | grep none | tee; then
  docker rmi $(docker images | grep "<none>" | awk -F' ' '{print $3}') | tee
fi
#+END_EXAMPLE
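On Docker 1.13+, the built-in prune subcommands cover the same cleanup as the grep/awk pipelines above; a minimal sketch:

```shell
# Remove all stopped containers and dangling (<none>) images.
cleanup_docker() {
  docker container prune -f
  docker image prune -f
}

# Usage: cleanup_docker
```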

  • TODO docker exec hangs :noexport:
#+BEGIN_EXAMPLE
[[email protected] ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
438d7d70561b denny/devops-blog:kubernetes "/bin/sh -c $WORDP..." 2 days ago Up 2 days (unhealthy) 80/tcp, 0.0.0.0:8087->8087/tcp ecs-task-denny-blog-kubernetes-2-kubernetes-feeffec39e86ed84dd01
6d75be4177eb denny/devops-blog:latest "/bin/sh -c $WORDP..." 3 days ago Up 3 days (unhealthy) 80/tcp, 0.0.0.0:8083->8083/tcp ecs-task-denny-blog-devops-11-blog-devops-feb494ef80e5d4da1000
fe63a88a229d denny/devops-blog:cheatsheet "/bin/sh -c $WORDP..." 5 days ago Up 5 days (unhealthy) 80/tcp, 0.0.0.0:8084->8084/tcp ecs-task-denny-blog-cheatsheet-2-cheatsheet-9abe9de79dea9d97af01
cf6a4f6d7adf denny/devops-blog:nginx-proxy "/bin/sh -c /docke..." 8 days ago Up 8 days (unhealthy) 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp ecs-task-denny-proxy-9-nginx-proxy-98e59ab9dcefece71400
d7d3c2058a44 denny/devops-blog:quiz "/bin/sh -c $WORDP..." 8 days ago Up 8 days (unhealthy) 80/tcp, 0.0.0.0:8086->8086/tcp ecs-task-denny-blog-quiz-2-quiz-f8bee9e9eafde1e67700
277030bf7270 denny/devops-blog:architect "/bin/sh -c $WORDP..." 8 days ago Up 8 days (unhealthy) 80/tcp, 0.0.0.0:8085->8085/tcp ecs-task-denny-blog-architect-1-architect-9e8be4d2a0f3cf81f701
a2cb165947d0 denny/devops-blog:pull-git-cdn "/bin/sh -c 'sh /r..." 8 days ago Up 8 days ecs-task-denny-git-pull-3-pull-git-cdn-b2c2e9c9b5b8c2c19a01
13b0a1738ec5 denny/devops-blog:code "/bin/sh -c $WORDP..." 8 days ago Up 8 days (unhealthy) 80/tcp, 0.0.0.0:8082->8082/tcp ecs-task-denny-blog-code-1-code-ecfad4fcb9b3b0fd8801
0471ce98d0ce denny/slackin:latest "/bin/sh -c './bin..." 8 days ago Up 8 days 0.0.0.0:3000->3000/tcp ecs-task-denny-slackin-2-slackin-d28bf3bf999d93f74e00
68447aa2fe70 amazon/amazon-ecs-agent:latest "/agent" 12 days ago Up 8 days ecs-agent
[[email protected] ~]$ docker exec -it 13b0a1738ec5 sh
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:83: executing setns process caused \"exit status 15\""
#+END_EXAMPLE
  • TODO docker cheatsheet: https://bitbucket.org/devops_sysops/cheatsheetcollection/src/a4b5d9acc0a852254a2eb8719068f9361d99e426/Containers/Docker.md?fileviewer=file-view-default :noexport:
  • --8<-------------------------- separator ------------------------>8-- :noexport:

  • [#A] Dockerfile :noexport:
** Cheatsheet
https://jimmysong.io/cheatsheets/dockerfile
** DONE Dockerfile entrypoint VS CMD
CLOSED: [2017-03-15 Wed 16:40]
http://www.projectatomic.io/docs/docker-image-author-guidance/
  • CMD simply sets a command to run in the image if no arguments are passed to docker run.
  • ENTRYPOINT is meant to make your image behave like a binary.

CMD ["ls", "/" ]

ENTRYPOINT $HOME/docker-entrypoint.sh
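The two directives can also be combined; a minimal sketch (base image and behavior comments reflect standard Docker semantics, not a file from this repo) where ENTRYPOINT fixes the binary and CMD only supplies overridable default arguments:

```dockerfile
FROM ubuntu:14.04
# The binary the container always runs:
ENTRYPOINT ["ls"]
# Default arguments, replaced by anything passed to docker run:
CMD ["/"]
```

`docker run <image>` runs `ls /`; `docker run <image> -l /tmp` replaces only the CMD part and runs `ls -l /tmp`. To get a shell instead, you must override with `docker run --entrypoint /bin/sh -it <image>`.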

Be careful with using ENTRYPOINT; it will make it more difficult to get a shell inside your image.
** dockerfile example
#+BEGIN_EXAMPLE
########## How To Build Docker Image #############

Build image from Dockerfile. docker build -t XXX/mdm:v2 --rm=false .

Run docker intermediate container: docker run -t -i XXX/mdm:v2 /bin/bash

Commit local image: docker commit -m "Initial version" -a "Denny Zhang" 9187d029f904 XXX/mdm:v2

Remove containers (note: this stops and removes ALL containers, not only intermediate ones): docker stop $(docker ps -a -q); docker rm $(docker ps -a -q)

Release to docker hub: docker push XXX/mdm:v2

##################################################

########## How To Use Docker Image ###############

Install docker utility

Download docker image: docker pull XXX/mdm:v2

Boot docker container: docker run -t -P -d XXX/mdm:v2 /bin/bash

##################################################

FROM ubuntu:14.04
MAINTAINER TOTVS Labs [email protected]
RUN apt-get -yqq update

########################################################################################

Enable Chef

RUN apt-get -yqq install curl lsb-release
RUN curl -L https://getchef.com/chef/install.sh | bash

########################################################################################

Start sshd: http://docs.docker.com/examples/running_ssh_service/

RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:TOTVSFoobar1!' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

SSH login fix. Otherwise user is kicked off after login

RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

RUN mkdir -p /root/.ssh
RUN echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGVkT4Ka/Pt6M/xREwYWatYyBqaBgDVS1bCy7CViZ5VGr1z+sNwI2cBoRwWxqHwvOgfAm+Wbzwqs+WNvXW6GDZ1kjayh2YnBN5UBYZjpNQK9tmO8KHQwX29UvOaOJ6HIEWOJB9ylyUoWL+WwNf71arpXULBW6skx9fp9F5rHuB0UmQ+omhJGs6+PRSLAEzWaQvtxmm7CuZ7LgslNKskkqx/6CHlQPq2qchRVN5xvnZPuFWgF6cvWvK7kylAQsv8hQtFGsE9Rw1itjisCBVILzEC2mAjg5SqeEB0i7QwdlRr4jgxaxO5jR9wdKo7PaEl9+bibuZrCIhp6V4Y4eaIzAP [email protected]' >> /root/.ssh/authorized_keys

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

Disable strict host checking for github

RUN echo -e "Host github.com\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

########################################################################################

Apache

RUN apt-get -yqq install apache2

Jenkins

RUN sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
RUN apt-get -yqq update
RUN apt-get -yqq --force-yes install jenkins

Install basic packages.

RUN apt-get -yqq install git update-notifier-common

TODO JDK, maven

########################################################################################
#+END_EXAMPLE
** TODO syntax check for dockerfile
https://access.redhat.com/labsinfo/linterfordockerfile
http://stackoverflow.com/questions/28182047/is-there-a-way-to-lint-the-dockerfile
https://github.com/projectatomic/dockerfile_lint
https://github.com/goern/dockerfile_checker
** DONE dockerfile go to directory: RUN cd /opt && unzip treeio.zip ...
CLOSED: [2015-03-12 Thu 22:19]
http://stackoverflow.com/questions/20632258/docker-change-directory-command

  • every RUN creates a new commit & (currently) an AUFS layer

#+BEGIN_EXAMPLE
You can run a script, or a more complex parameter to the RUN. Here is an example from a Dockerfile I've downloaded to look at previously:

RUN cd /opt && unzip treeio.zip && mv treeio-master treeio && \
    rm -f treeio.zip && cd treeio && pip install -r requirements.pip

Because of the use of '&&', it will only get to the final 'pip install' command if all the previous commands have succeeded.

In fact, since every RUN creates a new commit & (currently) an AUFS layer, if you have too many commands in the Dockerfile, you will use up the limits, so merging the RUNs (when the file is stable) can be a very useful thing to do. #+END_EXAMPLE
** BYPASS Dockerfile fail: echo -e /root/.ssh/config
CLOSED: [2015-03-12 Thu 18:26]

Disable strict host checking for github

RUN echo "Host github.com" >> /root/.ssh/config
RUN echo "    StrictHostKeyChecking no" >> /root/.ssh/config
RUN echo "    IdentityFile /root/.ssh/mdmdevops_id_rsa" >> /root/.ssh/config
RUN echo "    User git" >> /root/.ssh/config
** ubuntu install docker-compose
https://docs.docker.com/compose/install/
https://docs.docker.com/install/linux/docker-ee/ubuntu/

sudo apt-get install \
    linux-image-extra-$(uname -r) \
    linux-image-extra-virtual

wget -qO- https://get.docker.com/ | sh

sudo curl -L "https://github.com/docker/compose/releases/download/1.19.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
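The command substitution in the URL resolves to the host's platform pair, which is how the release asset name is selected; a quick check (exact values vary per machine):

```shell
# On a typical x86_64 Linux host this prints "docker-compose-Linux-x86_64";
# on other platforms the kernel name and architecture differ.
echo "docker-compose-$(uname -s)-$(uname -m)"
```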

sudo chmod +x /usr/local/bin/docker-compose
** DONE docker-compose.yml mem_limit
CLOSED: [2018-03-08 Thu 20:14]
https://docs.docker.com/config/containers/resource_constraints/

mem_limit is specified in bytes: 1,000,000,000 bytes = 1 GB.

#+BEGIN_EXAMPLE
jenkins:
  container_name: mdm-jenkins
  hostname: mdm-jenkins
  # Base Docker image: https://github.com/dennyzhang/devops_docker_image/blob/tag_v7/jenkins/Dockerfile_1_0
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "18080:8080/tcp"
  # 2.5 GB
  mem_limit: 2500000000
  environment:
    JENKINS_TIMEZONE: "America/Los_Angeles"
    JAVA_OPTS: -Djenkins.install.runSetupWizard=false
#+END_EXAMPLE
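Since mem_limit takes raw bytes, converting from gigabytes is a one-liner; a sketch of the arithmetic behind the 2500000000 value above:

```shell
# 2.5 GB expressed as decimal bytes, matching the mem_limit in the compose file.
gb="2.5"
bytes=$(awk -v g="$gb" 'BEGIN { printf "%.0f", g * 1000000000 }')
echo "$bytes"
```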

#+BEGIN_EXAMPLE
Failed to start reboot.target: Connection timed out
See system logs and 'systemctl status reboot.target' for details.

Broadcast message from [email protected] on pts/0 (Fri 2019-08-30 10:42:54 PDT):

The system is going down for reboot NOW!

[[email protected] log]# systemctl --force reboot Failed to execute operation: Connection timed out [[email protected] log]# systemctl --force --force reboot Rebooting. packet_write_wait: Connection to 10.78.198.112 port 22: Broken pipe

#+END_EXAMPLE *** TODO [#A] How to detect whether virtualization issue or process issues inside the VM *** TODO switch to root hang #+BEGIN_EXAMPLE [[email protected] ~]$ sudo su - #+END_EXAMPLE *** TODO cpu load is 16, but no apparent processes are running with it #+BEGIN_EXAMPLE  /Users/zdenny  ssh [email protected]   ✔ 0 Warning: Permanently added '10.78.198.112' (ECDSA) to the list of known hosts. [email protected]'s password: Last login: Fri Aug 30 10:14:08 2019 from 10.166.68.119 [[email protected] ~]$ docker ps Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? [[email protected] ~]$ top top - 10:16:22 up 17:01, 3 users, load average: 16.06, 16.12, 15.22 Tasks: 263 total, 1 running, 224 sleeping, 0 stopped, 38 zombie %Cpu0 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu1 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu2 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu3 : 0.0 us, 0.3 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 5946372 total, 2008448 free, 319780 used, 3618144 buff/cache KiB Swap: 4194300 total, 4172024 free, 22276 used. 5241956 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 5217 root 20 0 0 0 0 S 0.3 0.0 0:00.08 kworker/3:0 12151 worker 20 0 162136 2448 1580 R 0.3 0.0 0:00.03 top 1 root 20 0 191224 2892 1536 D 0.0 0.0 0:04.76 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd 3 root 20 0 0 0 0 S 0.0 0.0 0:02.14 ksoftirqd/0 7 root rt 0 0 0 0 S 0.0 0.0 0:00.63 migration/0 8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh 9 root 20 0 0 0 0 S 0.0 0.0 0:33.03 rcu_sched 10 root rt 0 0 0 0 S 0.0 0.0 0:00.28 watchdog/0 11 root rt 0 0 0 0 S 0.0 0.0 0:00.34 watchdog/1 12 root rt 0 0 0 0 S 0.0 0.0 0:00.44 migration/1 13 root 20 0 0 0 0 S 0.0 0.0 0:02.04 ksoftirqd/1 16 root rt 0 0 0 0 S 0.0 0.0 0:00.30 watchdog/2 17 root rt 0 0 0 0 S 0.0 0.0 0:00.43 migration/2 18 root 20 0 0 0 0 S 0.0 0.0 0:01.82 ksoftirqd/2 21 root rt 0 0 0 0 S 0.0 0.0 0:00.30 watchdog/3 22 root rt 0 0 0 0 S 0.0 0.0 0:00.33 migration/3 23 root 20 0 0 0 0 S 0.0 0.0 0:01.85 ksoftirqd/3 27 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 khelper 28 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs 29 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 netns 30 root 20 0 0 0 0 S 0.0 0.0 0:00.14 khungtaskd 31 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 writeback 32 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kintegrityd 33 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 bioset 34 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kblockd 35 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 md 44 root 20 0 0 0 0 S 0.0 0.0 0:14.61 kswapd0 45 root 25 5 0 0 0 S 0.0 0.0 0:00.00 ksmd 46 root 39 19 0 0 0 S 0.0 0.0 0:01.34 khugepaged 47 root 20 0 0 0 0 S 0.0 0.0 0:00.00 fsnotify_mark 48 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 crypto 56 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kthrotld 58 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kmpath_rdacd 60 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kpsmoused 62 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 ipv6_addrconf 82 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 deferwq 126 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kauditd 296 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 ata_sff 301 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 mpt_poll_0 302 root 0 -20 0 0 0 S 0.0 0.0 
0:00.00 mpt/0 320 root 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_0 #+END_EXAMPLE

#+BEGIN_EXAMPLE [[email protected] ~]$ ps -ef | grep docker root 1512 1 0 08:11 ? 00:00:00 [docker-containe] root 1520 1 0 08:11 ? 00:00:00 [docker-runc] root 1526 1 0 08:11 ? 00:00:00 docker-runc init root 1527 1 0 08:11 ? 00:00:00 docker-runc init root 3403 1 0 08:35 ? 00:00:00 [docker-containe] root 3409 1 0 08:35 ? 00:00:00 [docker-runc] root 3416 1 0 08:35 ? 00:00:00 docker-runc init root 3417 1 0 08:35 ? 00:00:00 docker-runc init root 3841 1 0 08:38 ? 00:00:00 [docker-containe] root 3847 1 0 08:38 ? 00:00:00 [docker-runc] root 3855 1 0 08:38 ? 00:00:00 docker-runc init root 3856 1 0 08:38 ? 00:00:00 docker-runc init root 4710 1 0 08:48 ? 00:00:00 [docker-containe] root 4716 1 0 08:48 ? 00:00:00 [docker-runc] root 4724 1 0 08:48 ? 00:00:00 docker-runc init root 4725 1 0 08:48 ? 00:00:00 docker-runc init root 8776 1 0 09:44 ? 00:00:00 [docker-containe] root 8782 1 0 09:44 ? 00:00:00 [docker-runc] root 8790 1 0 09:44 ? 00:00:00 docker-runc init root 8791 1 0 09:44 ? 00:00:00 docker-runc init root 9062 1 0 09:47 ? 00:00:00 [docker-containe] root 9068 1 0 09:47 ? 00:00:00 [docker-runc] root 9076 1 0 09:47 ? 00:00:00 docker-runc init root 9077 1 0 09:47 ? 00:00:00 docker-runc init root 9225 1 0 09:49 ? 00:00:00 [docker-containe] root 9231 1 0 09:49 ? 00:00:00 [docker-runc] root 9239 1 0 09:49 ? 00:00:00 docker-runc init root 9240 1 0 09:49 ? 00:00:00 docker-runc init root 9780 1 0 09:54 ? 00:00:00 [docker-containe] root 9786 1 0 09:54 ? 00:00:00 [docker-runc] root 9794 1 0 09:54 ? 00:00:00 docker-runc init root 9795 1 0 09:54 ? 00:00:00 docker-runc init root 10200 1 0 09:58 ? 00:00:00 [dockerd] root 10207 1 0 09:58 ? 00:00:01 [docker-containe] worker 12169 12076 0 10:16 pts/3 00:00:00 grep --color=auto docker root 15090 1 0 Aug29 ? 00:00:01 [docker-containe] root 15098 1 0 Aug29 ? 00:00:00 [docker-runc] root 15104 1 0 Aug29 ? 00:00:00 docker-runc init root 15105 1 0 Aug29 ? 00:00:00 docker-runc init root 16635 1 0 Aug29 ? 
00:00:01 [docker-containe] root 16641 1 0 Aug29 ? 00:00:00 [docker-runc] root 16649 1 0 Aug29 ? 00:00:00 docker-runc init root 16650 1 0 Aug29 ? 00:00:00 docker-runc init root 20614 1 0 Aug29 ? 00:00:01 [docker-containe] root 20623 1 0 Aug29 ? 00:00:00 [docker-runc] root 20629 1 0 Aug29 ? 00:00:00 docker-runc init root 20630 1 0 Aug29 ? 00:00:00 docker-runc init root 22743 1 0 Aug29 ? 00:00:00 [docker-containe] root 22751 1 0 Aug29 ? 00:00:00 [docker-runc] root 22757 1 0 Aug29 ? 00:00:00 docker-runc init root 22758 1 0 Aug29 ? 00:00:00 docker-runc init root 27211 1 0 Aug29 ? 00:00:00 [docker-containe] root 27217 1 0 Aug29 ? 00:00:00 [docker-runc] root 27224 1 0 Aug29 ? 00:00:00 docker-runc init root 27225 1 0 Aug29 ? 00:00:00 docker-runc init root 27736 1 0 Aug29 ? 00:00:00 [docker-containe] root 27744 1 0 Aug29 ? 00:00:00 [docker-runc] root 27750 1 0 Aug29 ? 00:00:00 docker-runc init root 27751 1 0 Aug29 ? 00:00:00 docker-runc init #+END_EXAMPLE *** # --8<-------------------------- separator ------------------------>8-- :noexport: *** TODO Can't kill docker-runc process #+BEGIN_EXAMPLE [[email protected] ~]# ps -ef | grep 2775 root 13097 12996 0 10:28 pts/3 00:00:00 grep --color=auto 2775 root 27750 1 0 Aug29 ? 00:00:00 docker-runc init root 27751 1 0 Aug29 ? 00:00:00 docker-runc init [[email protected] ~]# ps -ef | grep 27751 root 13099 12996 0 10:28 pts/3 00:00:00 grep --color=auto 27751 root 27751 1 0 Aug29 ? 00:00:00 docker-runc init [[email protected] ~]# kill 27751 [[email protected] ~]# ps -ef | grep 27751 root 13104 12996 0 10:28 pts/3 00:00:00 grep --color=auto 27751 root 27751 1 0 Aug29 ? 00:00:00 docker-runc init [[email protected] ~]# kill -9 27751 [[email protected] ~]# ps -ef | grep 27751 root 13111 12996 0 10:28 pts/3 00:00:00 grep --color=auto 27751 root 27751 1 0 Aug29 ? 
00:00:00 docker-runc init [[email protected] ~]# lsof -p 27751 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME runc:[1:C 27751 root cwd DIR 0,67 18 2742550 /var/lib/docker/overlay2/c3cd613d8f2d368487911036512802414185033b1005115489841b0f8989b764/merged runc:[1:C 27751 root rtd DIR 253,0 4096 64 / runc:[1:C 27751 root txt REG 253,2 7706688 235072 /usr/bin/docker-runc runc:[1:C 27751 root mem REG 253,2 2173512 2137787 /usr/lib64/libc-2.17.so runc:[1:C 27751 root mem REG 253,2 19776 2158145 /usr/lib64/libdl-2.17.so runc:[1:C 27751 root mem REG 253,2 266680 2238258 /usr/lib64/libseccomp.so.2.3.1 runc:[1:C 27751 root mem REG 253,2 144792 2886551 /usr/lib64/libpthread-2.17.so runc:[1:C 27751 root mem REG 253,2 164240 2137780 /usr/lib64/ld-2.17.so runc:[1:C 27751 root 0r CHR 1,3 0t0 1028 /dev/null runc:[1:C 27751 root 1w CHR 1,3 0t0 1028 /dev/null runc:[1:C 27751 root 2w CHR 1,3 0t0 1028 /dev/null runc:[1:C 27751 root 3u unix 0xffff880093b60800 0t0 2739894 socket runc:[1:C 27751 root 4u unix 0xffff880194b73800 0t0 2739896 socket runc:[1:C 27751 root 5u FIFO 0,19 0t0 2739895 /run/docker/runtime-runc/moby/221922c29e215155f002d9894e07ae5381a430c06b79e943d7a50c5f2ab77e2e/exec.fifo runc:[1:C 27751 root 6u unix 0xffff880194b70800 0t0 2740060 socket runc:[1:C 27751 root 8u unix 0xffff8801adeccc00 0t0 2740062 socket runc:[1:C 27751 root 9u unix 0xffff8801adecf000 0t0 2740063 socket

[[email protected] ~]# strace -p 27751 strace: Process 27751 attached #+END_EXAMPLE *** TODO Lots of docker-runc process #+BEGIN_EXAMPLE [[email protected] ~]$ ps -ef | grep docker root 1512 1 0 08:11 ? 00:00:00 [docker-containe] root 1520 1 0 08:11 ? 00:00:00 [docker-runc] root 1526 1 0 08:11 ? 00:00:00 docker-runc init root 1527 1 0 08:11 ? 00:00:00 docker-runc init root 3403 1 0 08:35 ? 00:00:00 [docker-containe] root 3409 1 0 08:35 ? 00:00:00 [docker-runc] root 3416 1 0 08:35 ? 00:00:00 docker-runc init root 3417 1 0 08:35 ? 00:00:00 docker-runc init root 3841 1 0 08:38 ? 00:00:00 [docker-containe] root 3847 1 0 08:38 ? 00:00:00 [docker-runc] root 3855 1 0 08:38 ? 00:00:00 docker-runc init root 3856 1 0 08:38 ? 00:00:00 docker-runc init root 4710 1 0 08:48 ? 00:00:00 [docker-containe] root 4716 1 0 08:48 ? 00:00:00 [docker-runc] root 4724 1 0 08:48 ? 00:00:00 docker-runc init root 4725 1 0 08:48 ? 00:00:00 docker-runc init root 8776 1 0 09:44 ? 00:00:00 [docker-containe] root 8782 1 0 09:44 ? 00:00:00 [docker-runc] root 8790 1 0 09:44 ? 00:00:00 docker-runc init root 8791 1 0 09:44 ? 00:00:00 docker-runc init root 9062 1 0 09:47 ? 00:00:00 [docker-containe] root 9068 1 0 09:47 ? 00:00:00 [docker-runc] root 9076 1 0 09:47 ? 00:00:00 docker-runc init root 9077 1 0 09:47 ? 00:00:00 docker-runc init root 9225 1 0 09:49 ? 00:00:00 [docker-containe] root 9231 1 0 09:49 ? 00:00:00 [docker-runc] root 9239 1 0 09:49 ? 00:00:00 docker-runc init root 9240 1 0 09:49 ? 00:00:00 docker-runc init root 9780 1 0 09:54 ? 00:00:00 [docker-containe] root 9786 1 0 09:54 ? 00:00:00 [docker-runc] root 9794 1 0 09:54 ? 00:00:00 docker-runc init root 9795 1 0 09:54 ? 00:00:00 docker-runc init root 10200 1 0 09:58 ? 00:00:00 [dockerd] root 10207 1 0 09:58 ? 00:00:01 [docker-containe] worker 12872 12076 0 10:26 pts/3 00:00:00 grep --color=auto docker root 15090 1 0 Aug29 ? 00:00:01 [docker-containe] root 15098 1 0 Aug29 ? 00:00:00 [docker-runc] root 15104 1 0 Aug29 ? 
00:00:00 docker-runc init root 15105 1 0 Aug29 ? 00:00:00 docker-runc init root 16635 1 0 Aug29 ? 00:00:01 [docker-containe] root 16641 1 0 Aug29 ? 00:00:00 [docker-runc] root 16649 1 0 Aug29 ? 00:00:00 docker-runc init root 16650 1 0 Aug29 ? 00:00:00 docker-runc init root 20614 1 0 Aug29 ? 00:00:01 [docker-containe] root 20623 1 0 Aug29 ? 00:00:00 [docker-runc] root 20629 1 0 Aug29 ? 00:00:00 docker-runc init root 20630 1 0 Aug29 ? 00:00:00 docker-runc init root 22743 1 0 Aug29 ? 00:00:00 [docker-containe] root 22751 1 0 Aug29 ? 00:00:00 [docker-runc] root 22757 1 0 Aug29 ? 00:00:00 docker-runc init root 22758 1 0 Aug29 ? 00:00:00 docker-runc init root 27211 1 0 Aug29 ? 00:00:00 [docker-containe] root 27217 1 0 Aug29 ? 00:00:00 [docker-runc] root 27224 1 0 Aug29 ? 00:00:00 docker-runc init root 27225 1 0 Aug29 ? 00:00:00 docker-runc init root 27736 1 0 Aug29 ? 00:00:00 [docker-containe] root 27744 1 0 Aug29 ? 00:00:00 [docker-runc] root 27750 1 0 Aug29 ? 00:00:00 docker-runc init root 27751 1 0 Aug29 ? 00:00:00 docker-runc init [[email protected] ~]$ ps -ef | grep docker | wc -l 59 #+END_EXAMPLE *** # --8<-------------------------- separator ------------------------>8-- :noexport: *** docker rm hang: found stale docker containers 10.193.5.30 #+BEGIN_EXAMPLE [[email protected] ~]$ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e80bdaedf48e 33c7759a5d2428e87c6029e684214becb8893e54 "cat" About an hour ago Created wonderful_curie 058a3d4d83f6 ca2927e8f0b5 "/bin/sh -c 'go get ..." 2 hours ago Created zen_colden 57069fec299f golang:1.11.5 "cat" 3 hours ago Created infallible_pare 5a699e8c4cba 33c7759a5d2428e87c6029e684214becb8893e54 "cat" 7 hours ago Created cocky_sinoussi #+END_EXAMPLE *** TODO Why garbage docker images/containers are left? 
#+BEGIN_EXAMPLE [[email protected] ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE 1df22ad944a1 8 hours ago 81.8MB 44c13ed9b2cf3f6096642b424aa225e569d88329 latest e61b76dfd407 20 hours ago 1.17GB 33c7759a5d2428e87c6029e684214becb8893e54 latest 88fc669b72f8 22 hours ago 1.21GB photon 3.0 5ccb5186b75c 6 days ago 34.2MB kindest/node v1.15.3 8ca0c8463ebe 10 days ago 1.45GB golang 1.12 80bb9c6de3f2 2 weeks ago 814MB ubuntu latest a2a15febcdf3 2 weeks ago 64.2MB kindest/node v1.14.2 9fb4c7da1d9f 3 months ago 1.29GB golang 1.11.5 1454e2b3d01f 5 months ago 816MB #+END_EXAMPLE *** kernel log https://www.blackmoreops.com/2014/09/22/linux-kernel-panic-issue-fix-hung_task_timeout_secs-blocked-120-seconds-problem/

By default Linux uses up to 40% of the available memory for file system caching. After this mark has been reached the file system flushes all outstanding data to disk causing all following IOs going synchronous. For flushing out this data to disk this there is a time limit of 120 seconds by default. In the case here the IO subsystem is not fast enough to flush the data withing 120 seconds. As IO subsystem responds slowly and more requests are served, System Memory gets filled up resulting in the above error, thus serving HTTP requests. #+BEGIN_EXAMPLE Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: INFO: task kworker/u8:0:19212 blocked for more than 120 seconds. Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: kworker/u8:0 D ffffffff81a3d028 0 19212 2 0x00000000 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: Workqueue: netns cleanup_net Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: ffff88008b743c00 0000000000000046 ffff8801afb1af10 ffff88008b743fd8 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: ffff88008b743fd8 ffff88008b743fd8 ffff8801afb1af10 ffffffff81a3d020 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: ffffffff81a3d024 ffff8801afb1af10 00000000ffffffff ffffffff81a3d028 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: Call Trace: Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] schedule_preempt_disabled+0x29/0x70 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] __mutex_lock_slowpath+0xc5/0x1c0 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] mutex_lock+0x1f/0x2f Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] kmem_cache_destroy_memcg_children+0x3e/0xb0 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] kmem_cache_destroy+0x19/0xf0 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] kmem_cache_destroy_memcg_children+0x89/0xb0 Aug 30 05:13:50 
butler-worker.eng.vmware.com kernel: [] kmem_cache_destroy+0x19/0xf0 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] nf_conntrack_cleanup_net_list+0x17b/0x1d0 [nf_conntrack] Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] nf_conntrack_pernet_exit+0x6d/0x80 [nf_conntrack] Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] ops_exit_list.isra.5+0x53/0x60 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] cleanup_net+0x1d0/0x350 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] process_one_work+0x17b/0x470 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] worker_thread+0x126/0x410 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] ? rescuer_thread+0x460/0x460 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] kthread+0xcf/0xe0 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] ? kthread_create_on_node+0x140/0x140 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] ret_from_fork+0x58/0x90 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] ? kthread_create_on_node+0x140/0x140 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: INFO: task runc:[1:CHILD]:25226 blocked for more than 120 seconds. Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: runc:[1:CHILD] D ffffffff81a9e128 0 25226 25223 0x00000000 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: ffff88019cfb7e10 0000000000000086 ffff8801b853af10 ffff88019cfb7fd8 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: ffff88019cfb7fd8 ffff88019cfb7fd8 ffff8801b853af10 ffffffff81a9e120 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: ffffffff81a9e124 ffff8801b853af10 00000000ffffffff ffffffff81a9e128 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: Call Trace: Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] schedule_preempt_disabled+0x29/0x70 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] __mutex_lock_slowpath+0xc5/0x1c0 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] mutex_lock+0x1f/0x2f Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] copy_net_ns+0x71/0x130 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] create_new_namespaces+0xf9/0x180 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] unshare_nsproxy_namespaces+0x5a/0xc0 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] SyS_unshare+0x193/0x300 Aug 30 05:13:50 butler-worker.eng.vmware.com kernel: [] system_call_fastpath+0x16/0x1b ... 
Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] kmem_cache_destroy_memcg_children+0x3e/0xb0 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] kmem_cache_destroy+0x19/0xf0 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] kmem_cache_destroy_memcg_children+0x89/0xb0 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] kmem_cache_destroy+0x19/0xf0 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] nf_conntrack_cleanup_net_list+0x17b/0x1d0 [nf_conntrack] Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] nf_conntrack_pernet_exit+0x6d/0x80 [nf_conntrack] Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] ops_exit_list.isra.5+0x53/0x60 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] cleanup_net+0x1d0/0x350 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] process_one_work+0x17b/0x470 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] worker_thread+0x126/0x410 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] ? rescuer_thread+0x460/0x460 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] kthread+0xcf/0xe0 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] ? kthread_create_on_node+0x140/0x140 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] ret_from_fork+0x58/0x90 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] ? kthread_create_on_node+0x140/0x140 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: INFO: task runc:[1:CHILD]:25226 blocked for more than 120 seconds. Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: runc:[1:CHILD] D ffffffff81a9e128 0 25226 25223 0x00000000 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: ffff88019cfb7e10 0000000000000086 ffff8801b853af10 ffff88019cfb7fd8 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: ffff88019cfb7fd8 ffff88019cfb7fd8 ffff8801b853af10 ffffffff81a9e120 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: ffffffff81a9e124 ffff8801b853af10 00000000ffffffff ffffffff81a9e128 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: Call Trace: Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] schedule_preempt_disabled+0x29/0x70 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] __mutex_lock_slowpath+0xc5/0x1c0 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] mutex_lock+0x1f/0x2f Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] copy_net_ns+0x71/0x130 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] create_new_namespaces+0xf9/0x180 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] unshare_nsproxy_namespaces+0x5a/0xc0 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] SyS_unshare+0x193/0x300 Aug 30 05:15:50 butler-worker.eng.vmware.com kernel: [] system_call_fastpath+0x16/0x1b Aug 30 09:36:51 butler-worker.eng.vmware.com kernel: device veth2ff9c20 entered promiscuous mode Aug 30 09:36:51 butler-worker.eng.vmware.com kernel: IPv6: ADDRCONF(NETDEV_UP): veth2ff9c20: link is not ready Aug 30 09:36:51 butler-worker.eng.vmware.com kernel: docker0: port 2(veth2ff9c20) entered forwarding state Aug 30 09:36:51 butler-worker.eng.vmware.com kernel: docker0: port 2(veth2ff9c20) entered forwarding state Aug 30 09:36:51 butler-worker.eng.vmware.com kernel: docker0: port 2(veth2ff9c20) entered disabled state Aug 30 10:41:58 butler-worker.eng.vmware.com kernel: device vethe8c2f93 entered promiscuous mode Aug 30 10:41:58 butler-worker.eng.vmware.com kernel: IPv6: ADDRCONF(NETDEV_UP): vethe8c2f93: link is not ready Aug 30 10:41:58 
butler-worker.eng.vmware.com kernel: docker0: port 3(vethe8c2f93) entered forwarding state Aug 30 10:41:58 butler-worker.eng.vmware.com kernel: docker0: port 3(vethe8c2f93) entered forwarding state Aug 30 10:41:58 butler-worker.eng.vmware.com kernel: docker0: port 3(vethe8c2f93) entered disabled state Aug 30 11:12:12 butler-worker.eng.vmware.com kernel: device vethbe7734a entered promiscuous mode Aug 30 11:12:12 butler-worker.eng.vmware.com kernel: IPv6: ADDRCONF(NETDEV_UP): vethbe7734a: link is not ready Aug 30 11:12:12 butler-worker.eng.vmware.com kernel: docker0: port 4(vethbe7734a) entered forwarding state Aug 30 11:12:12 butler-worker.eng.vmware.com kernel: docker0: port 4(vethbe7734a) entered forwarding state Aug 30 11:12:12 butler-worker.eng.vmware.com kernel: docker0: port 4(vethbe7734a) entered disabled state #+END_EXAMPLE

  • TODO docker mount :noexport: --network=host args '--network=host -u 0:0 -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro -v $SSH_KEY_FILE:/root/.ssh/id_rsa:ro ' + ' -v /var/run/docker.sock:/var/run/docker.sock' +

  • TODO why in mac, process of docker container can't be found :noexport: #+BEGIN_EXAMPLE  /Users/zdenny  docker top nginx-test   ✔ 0 PID USER TIME COMMAND 13360 root 0:00 nginx: master process nginx -g daemon off; 13411 101 0:00 nginx: worker process

     /Users/zdenny  docker ps   ✔ 0 CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b29b043197d5 nginx "nginx -g 'daemon of…" About an hour ago Up About an hour 0.0.0.0:8080->80/tcp nginx-test

     /Users/zdenny  ps -ef | grep 13360   ✔ 0 503 51973 53247 0 12:13PM ttys000 0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn 13360 #+END_EXAMPLE
