
lxc / Lxcfs

Licence: other
FUSE filesystem for LXC

Programming Languages

c
50402 projects - #5 most used programming language

Projects that are alternatives of or similar to Lxcfs

Lxc Pkg Ubuntu
LXC Ubuntu packaging
Stars: ✭ 11 (-98.17%)
Mutual labels:  containers, lxc
Distrobuilder
System container image builder for LXC and LXD
Stars: ✭ 211 (-64.95%)
Mutual labels:  containers, lxc
Addon Lxdone
Allows OpenNebula to manage Linux Containers via LXD
Stars: ✭ 36 (-94.02%)
Mutual labels:  containers, lxc
Amicontained
Container introspection tool. Find out what container runtime is being used as well as features available.
Stars: ✭ 638 (+5.98%)
Mutual labels:  containers, lxc
Go Lxc
Go bindings for liblxc
Stars: ✭ 336 (-44.19%)
Mutual labels:  containers, lxc
Lxc Ci
LXC continuous integration and build scripts
Stars: ✭ 110 (-81.73%)
Mutual labels:  containers, lxc
Ruby Lxc
ruby bindings for liblxc
Stars: ✭ 115 (-80.9%)
Mutual labels:  containers, lxc
Lxd
Powerful system container and virtual machine manager
Stars: ✭ 3,115 (+417.44%)
Mutual labels:  containers, lxc
Lxc
LXC - Linux Containers
Stars: ✭ 3,583 (+495.18%)
Mutual labels:  containers, lxc
Lxdmosaic
Web interface to manage multiple instances of LXD
Stars: ✭ 270 (-55.15%)
Mutual labels:  containers, lxc
Vas Quod
🚡 Minimal linux container runtime.
Stars: ✭ 404 (-32.89%)
Mutual labels:  containers, lxc
Lxdock
Build and orchestrate your development environments with LXD - a.k.a. Vagrant is Too Heavy™
Stars: ✭ 350 (-41.86%)
Mutual labels:  containers, lxc
Lxdui
LXDUI is a web UI for the native Linux container technology LXD/LXC
Stars: ✭ 443 (-26.41%)
Mutual labels:  containers, lxc
Awesome Cloudrun
👓 ⏩ A curated list of resources about all things Cloud Run
Stars: ✭ 521 (-13.46%)
Mutual labels:  containers
Conprof
Continuous profiling for performance analysis of CPU, memory over time.
Stars: ✭ 571 (-5.15%)
Mutual labels:  containers
Securefs
Filesystem in userspace (FUSE) with transparent authenticated encryption
Stars: ✭ 518 (-13.95%)
Mutual labels:  fuse-filesystem
Kubernetes For Java Developers
A Day in Java Developer’s Life, with a taste of Kubernetes
Stars: ✭ 514 (-14.62%)
Mutual labels:  containers
Go Health
Library for enabling asynchronous health checks in your service
Stars: ✭ 588 (-2.33%)
Mutual labels:  containers
Athenz
Open source platform for X.509 certificate based service authentication and fine grained access control in dynamic infrastructures. Athenz supports provisioning and configuration (centralized authorization) use cases as well as serving/runtime (decentralized authorization) use cases.
Stars: ✭ 570 (-5.32%)
Mutual labels:  containers
Tern
Tern is a software composition analysis tool and Python library that generates a Software Bill of Materials for container images and Dockerfiles. The SBoM that Tern generates will give you a layer-by-layer view of what's inside your container in a variety of formats including human-readable, JSON, HTML, SPDX and more.
Stars: ✭ 505 (-16.11%)
Mutual labels:  containers

lxcfs

Introduction

LXCFS is a small FUSE filesystem written with the intention of making Linux containers feel more like a virtual machine. It started as a side project of LXC but is usable by any runtime.

LXCFS takes care that the information provided by crucial files in procfs, such as:

/proc/cpuinfo
/proc/diskstats
/proc/meminfo
/proc/stat
/proc/swaps
/proc/uptime
/proc/slabinfo
/sys/devices/system/cpu/online

is container aware, so that the values displayed (e.g. in /proc/uptime) really reflect how long the container has been running, not how long the host has.
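For example, inside a container with LXCFS's uptime file bind-mounted over /proc/uptime, reading that file yields the container's uptime. A minimal sketch of parsing its two fields (a hard-coded sample line is used so the snippet runs without LXCFS present):

```shell
# /proc/uptime contains two numbers: seconds since boot and aggregate
# idle time. With LXCFS in place, "boot" means the container's start.
# Sample line used here so the sketch is self-contained:
sample="350735.47 234388.90"
uptime_secs=${sample%% *}          # first field: uptime in seconds
echo "up for ${uptime_secs%.*} seconds"
```

On a real system you would read the line from /proc/uptime instead of the sample variable.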

Prior to the implementation of cgroup namespaces by Serge Hallyn, LXCFS also provided a container-aware cgroupfs tree. It ensured that a container only had access to cgroups underneath its own cgroup and thus provided additional safety. On systems without support for cgroup namespaces LXCFS still provides this feature, but it is mostly considered deprecated.

Upgrading LXCFS without restart

LXCFS is split into a shared library (a libtool module, to be precise), liblxcfs, and a simple binary, lxcfs. When upgrading to a newer version of LXCFS the lxcfs binary is not restarted. Instead, it detects that a new version of the shared library is available and reloads it using dlclose(3) and dlopen(3). This design was chosen so that the FUSE main loop that LXCFS uses does not need to be restarted. If it were, all containers using LXCFS would need to be restarted as well, since they would otherwise be left with broken FUSE mounts.

To force a reload of the shared library at the next opportunity, send SIGUSR1 to the PID of the running LXCFS process. This can be as simple as doing:

kill -s USR1 $(pidof lxcfs)

musl

To achieve smooth upgrades through shared library reloads, LXCFS relies on the fact that destructors are run when dlclose(3) drops the last reference to the shared library, and that constructors are run when dlopen(3) loads it again. While this is true for glibc, it is not true for musl (see the musl documentation section "Unloading libraries"). Users of LXCFS on musl are therefore advised to completely restart LXCFS and all containers making use of it.

Building

Build lxcfs as follows:

yum install fuse fuse-libs fuse-devel
git clone https://github.com/lxc/lxcfs
cd lxcfs
./bootstrap.sh
./configure
make
make install

Usage

The recommended command to run lxcfs is:

sudo mkdir -p /var/lib/lxcfs
sudo lxcfs /var/lib/lxcfs

A container runtime wishing to use LXCFS should then bind mount the appropriate files into the correct places on container startup.

LXC

To use lxcfs with systemd-based containers, you can either use LXC 1.1, in which case it should work automatically, or copy the lxc.mount.hook and lxc.reboot.hook files (once built) from this tree to /usr/share/lxcfs, make sure they are executable, and then add the following lines to your container configuration:

lxc.mount.auto = cgroup:mixed
lxc.autodev = 1
lxc.kmsg = 0
lxc.include = /usr/share/lxc/config/common.conf.d/00-lxcfs.conf

Using with Docker

docker run -it -m 256m --memory-swap 256m \
      -v /var/lib/lxcfs/proc/cpuinfo:/proc/cpuinfo:rw \
      -v /var/lib/lxcfs/proc/diskstats:/proc/diskstats:rw \
      -v /var/lib/lxcfs/proc/meminfo:/proc/meminfo:rw \
      -v /var/lib/lxcfs/proc/stat:/proc/stat:rw \
      -v /var/lib/lxcfs/proc/swaps:/proc/swaps:rw \
      -v /var/lib/lxcfs/proc/uptime:/proc/uptime:rw \
      -v /var/lib/lxcfs/proc/slabinfo:/proc/slabinfo:rw \
      ubuntu:18.04 /bin/bash

On a system with swap enabled, the -u option can be used to set all swap-related values in meminfo to 0:

sudo lxcfs -u /var/lib/lxcfs

Swap handling

If LXCFS is not showing any SWAP in your container despite SWAP being present on your system, please read this section carefully and look up how to enable SWAP accounting for your distribution.

Swap cgroup handling on Linux is very confusing and there just isn't a perfect way for LXCFS to handle it.

Terminology used below:

  • RAM refers to memory.usage_in_bytes and memory.limit_in_bytes
  • RAM+SWAP refers to memory.memsw.usage_in_bytes and memory.memsw.limit_in_bytes

The main issues are:

  • SWAP accounting is often opt-in, requiring a special kernel boot-time option (swapaccount=1) and/or special kernel build options (CONFIG_MEMCG_SWAP).

  • Both a RAM limit and a RAM+SWAP limit can be set. The delta between them, however, isn't the available SWAP space, as the kernel is still free to swap out as much of the RAM as it wants. This makes it impossible to render a SWAP device size from that delta: it wouldn't account for the kernel swapping out additional pages, which would lead to swap usage exceeding swap total.

  • It's impossible to disable SWAP in a given container. The closest that can be done is setting swappiness to 0, which severely limits the risk of swapping pages but doesn't eliminate it.
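To make the second issue concrete, here is a sketch with hypothetical cgroup readings (all values are made up for illustration) showing how swap usage can exceed the delta between the two limits:

```shell
# Hypothetical cgroup values, in bytes (not read from a real system):
ram_limit=$((1024 * 1024 * 1024))        # memory.limit_in_bytes: 1 GiB
memsw_limit=$((1536 * 1024 * 1024))      # memory.memsw.limit_in_bytes: 1.5 GiB
ram_usage=$((256 * 1024 * 1024))         # memory.usage_in_bytes
memsw_usage=$((1024 * 1024 * 1024))      # memory.memsw.usage_in_bytes

delta=$(( (memsw_limit - ram_limit) / 1048576 ))       # naive "swap size"
swap_usage=$(( (memsw_usage - ram_usage) / 1048576 ))  # actual swap in use
echo "limit delta: ${delta} MiB, swap usage: ${swap_usage} MiB"
```

Here the kernel swapped out memory even though RAM usage was well under the RAM limit, so the 768 MiB of swap in use exceeds the 512 MiB that the limit delta would suggest is the swap size.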

As a result, LXCFS had to make some compromises, which go as follows:

  • When SWAP accounting isn't enabled, no SWAP space is reported at all. This is simply because there is no way to know the SWAP consumption. The container may very well be using some SWAP, but there's no way to know how much, and showing a SWAP device would require reporting some kind of SWAP usage. Showing the host value would be completely wrong, and showing a 0 value would be equally wrong.

  • Because SWAP usage for a given container can exceed the delta between RAM and RAM+SWAP, the SWAP size is always reported as the smaller of the RAM+SWAP limit and the size of the host SWAP device. This ensures that SWAP usage will never be allowed to exceed the SWAP size.

  • If the swappiness is set to 0 and there is no SWAP usage, no SWAP is reported. However if there is SWAP usage, then a SWAP device of the size of the usage (100% full) is reported. This provides adequate reporting of the memory consumption while preventing applications from assuming more SWAP is available.
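The reported-size compromise above boils down to a min(): the reported SwapTotal is capped at whichever is smaller, the RAM+SWAP limit or the host swap device. A sketch with hypothetical sizes:

```shell
# Hypothetical sizes in bytes (not read from a real system):
host_swap=$((8 * 1024 * 1024 * 1024))     # 8 GiB swap device on the host
memsw_limit=$((2 * 1024 * 1024 * 1024))   # 2 GiB RAM+SWAP limit

# Report the smaller of the two so usage can never exceed the size:
if [ "$memsw_limit" -lt "$host_swap" ]; then
    reported=$memsw_limit
else
    reported=$host_swap
fi
echo "SwapTotal: $((reported / 1024)) kB"
```

With these numbers the container sees a 2 GiB SwapTotal, since its RAM+SWAP limit is smaller than the host's swap device.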
