openebs / lvm-localpv

License: Apache-2.0
CSI Driver for dynamic provisioning of Persistent Local Volumes for Kubernetes using LVM.

Programming Languages

go, shell, Makefile, Jinja

Projects that are alternatives to or similar to lvm-localpv

carina
Carina: a high-performance and ops-free local storage for Kubernetes
Stars: ✭ 256 (+197.67%)
Mutual labels:  storage, csi
beegfs-csi-driver
The BeeGFS Container Storage Interface (CSI) driver provides high-performing and scalable storage for workloads running in Kubernetes.
Stars: ✭ 32 (-62.79%)
Mutual labels:  storage, csi-driver
moosefs-csi
Container Storage Interface (CSI) for MooseFS
Stars: ✭ 44 (-48.84%)
Mutual labels:  storage, csi
jiva
CAS Data Engine - iSCSI Distributed Block Storage Controller built in Go
Stars: ✭ 121 (+40.7%)
Mutual labels:  storage, csi-driver
synology-csi
Container Storage Interface (CSI) for Synology
Stars: ✭ 136 (+58.14%)
Mutual labels:  storage, csi
linode-blockstorage-csi-driver
Container Storage Interface (CSI) Driver for Linode Block Storage
Stars: ✭ 50 (-41.86%)
Mutual labels:  storage, csi
ibm-spectrum-scale-csi
The IBM Spectrum Scale Container Storage Interface (CSI) project enables container orchestrators, such as Kubernetes and OpenShift, to manage the life-cycle of persistent storage.
Stars: ✭ 41 (-52.33%)
Mutual labels:  storage, csi
storage-box
Intuitive and easy-to-use storage box.
Stars: ✭ 26 (-69.77%)
Mutual labels:  storage
storage
Storage Standard
Stars: ✭ 92 (+6.98%)
Mutual labels:  storage
mconfig
a lightweight distributed configuration center
Stars: ✭ 13 (-84.88%)
Mutual labels:  storage
SuperCoreAPI
The best way to create a Plugin
Stars: ✭ 17 (-80.23%)
Mutual labels:  storage
sdi-mipi-bridge
Antmicro's open hardware 3G SDI into MIPI CSI-2 converter
Stars: ✭ 31 (-63.95%)
Mutual labels:  csi
blobit
BlobIt - a Distributed Large Object Storage
Stars: ✭ 29 (-66.28%)
Mutual labels:  storage
storage
Mongoose-like schema validation, collections and documents on browser (client-side)
Stars: ✭ 17 (-80.23%)
Mutual labels:  storage
vultr-csi
Container Storage Interface (CSI) Driver for Vultr Block Storage
Stars: ✭ 22 (-74.42%)
Mutual labels:  csi-driver
storage-abstraction
Provides an abstraction layer for interacting with a storage; the storage can be local or in the cloud.
Stars: ✭ 36 (-58.14%)
Mutual labels:  storage
iOS-Shared-CoreData-Storage-for-App-Groups
iOS Shared CoreData Storage for App Groups
Stars: ✭ 48 (-44.19%)
Mutual labels:  storage
laravel-ovh
Wrapper for OVH Object Storage integration with laravel
Stars: ✭ 30 (-65.12%)
Mutual labels:  storage
DBMSology
The Paper List on Design and Implementation of System Software
Stars: ✭ 67 (-22.09%)
Mutual labels:  storage
ScopedStorageDemo
medium.com/better-programming/all-you-need-to-know-about-scoped-storage-in-android-10-e621f40bc8b9
Stars: ✭ 44 (-48.84%)
Mutual labels:  storage

OpenEBS LVM CSI Driver

CSI driver for provisioning Local PVs backed by LVM and more.

Project Status

The LVM-LocalPV CSI Driver was declared GA in August 2021, with release version 0.8.0.

Project Tracker

See roadmap.

Usage

Prerequisites

Before installing the LVM driver, please make sure your Kubernetes cluster meets the following prerequisites:

  1. All the nodes must have the lvm2 utils installed and the dm-snapshot kernel module loaded (see the check commands after this list).
  2. A volume group has been set up for provisioning the volumes.
  3. You have access to install RBAC components into the kube-system namespace. The OpenEBS LVM driver components are installed in the kube-system namespace to allow them to be flagged as system-critical components.
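
A quick way to check the first two prerequisites on each node (a sketch; lvmvg is the example volume group name used later in this README):

# verify the lvm2 userspace tools are installed
sudo lvm version

# load the dm-snapshot kernel module and confirm it is present
sudo modprobe dm-snapshot
lsmod | grep dm_snapshot

# list the volume groups available on this node
sudo vgs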

Supported System

K8S : 1.20+

OS : Ubuntu

LVM version : LVM 2

Setup

Find the disk which you want to use for LVM. For testing, you can use a loopback device:

truncate -s 1024G /tmp/disk.img      ## create a 1024G sparse file as the backing store
sudo losetup -f /tmp/disk.img --show ## attach it to the first free loop device and print the device name

Create the volume group on all the nodes; it will be used by the LVM driver for provisioning the volumes:

sudo pvcreate /dev/loop0
sudo vgcreate lvmvg /dev/loop0       ## here lvmvg is the volume group name to be created
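
You can confirm that the volume group was created before moving on:

sudo vgs lvmvg                       ## should list the new volume group and its free space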

Installation

We can install the latest release of the OpenEBS LVM driver by running the following command:

$ kubectl apply -f https://openebs.github.io/charts/lvm-operator.yaml

If you want to fetch a versioned manifest, you can use the manifest for a specific OpenEBS release version, for example:

$ kubectl apply -f https://raw.githubusercontent.com/openebs/charts/gh-pages/versioned/3.0.0/lvm-operator.yaml
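
To confirm which driver images (and therefore which release) got installed, one option is to list the container images of the driver pods; this relies on the role=openebs-lvm label that the operator YAML puts on its pods:

$ kubectl get pods -n kube-system -l role=openebs-lvm \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'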

NOTE: For some Kubernetes distributions, the kubelet directory must be changed at all relevant places in the YAML powering the operator (both the openebs-lvm-controller and openebs-lvm-node).

  • For microk8s, the kubelet directory is /var/snap/microk8s/common/var/lib/kubelet/; replace /var/lib/kubelet/ with /var/snap/microk8s/common/var/lib/kubelet/ at all the places in the operator yaml and then apply it on microk8s (one way to do this substitution is shown after this list).

  • For k0s, the default directory (/var/lib/kubelet) should be changed to /var/lib/k0s/kubelet.

  • For RancherOS, the default directory (/var/lib/kubelet) should be changed to /opt/rke/var/lib/kubelet.
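
As an illustration, the path substitution can be done with sed before applying the manifest (shown here for microk8s; swap in the directory for your distribution):

$ curl -fsSL https://openebs.github.io/charts/lvm-operator.yaml \
    | sed 's|/var/lib/kubelet|/var/snap/microk8s/common/var/lib/kubelet|g' \
    | kubectl apply -f -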

Verify that the LVM driver components are installed and running using the command below:

$ kubectl get pods -n kube-system -l role=openebs-lvm

Depending on the number of nodes, you will see one lvm-controller pod and one lvm-node daemonset pod per node:

NAME                       READY   STATUS    RESTARTS   AGE
openebs-lvm-controller-0   5/5     Running   0          35s
openebs-lvm-node-54slv     2/2     Running   0          35s
openebs-lvm-node-9vg28     2/2     Running   0          35s
openebs-lvm-node-qbv57     2/2     Running   0          35s

Once the LVM driver is successfully installed, we can provision volumes.

Deployment

1. Create a Storage class

$ cat sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
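
Apply the storage class so that PVCs can reference it:

$ kubectl apply -f sc.yaml
$ kubectl get sc openebs-lvmpv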

Check the documentation on storageclasses for all the supported LVM-LocalPV parameters.

VolumeGroup Availability

If the LVM volume group is available only on certain nodes, make use of topology to specify the list of nodes where the volume group is available. As shown in the storage class below, we can use allowedTopologies to describe volume group availability on nodes.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
allowVolumeExpansion: true
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
      - lvmpv-node1
      - lvmpv-node2

The above storage class states that the volume group "lvmvg" is available only on the nodes lvmpv-node1 and lvmpv-node2, so the LVM driver will create volumes only on those nodes.
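
The values listed under allowedTopologies must match the nodes' kubernetes.io/hostname labels, which you can list with:

$ kubectl get nodes -L kubernetes.io/hostname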

Please note that the provisioner name for the LVM driver is "local.csi.openebs.io"; it must be used when creating the storage class so that volume provisioning/deprovisioning requests are routed to the LVM driver.

2. Create the PVC

$ cat pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpv
spec:
  storageClassName: openebs-lvmpv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

Create a PVC using the storage class created for the LVM driver.
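
Apply the PVC and check that it gets bound:

$ kubectl apply -f pvc.yaml
$ kubectl get pvc csi-lvmpv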

3. Deploy the application

Create the deployment YAML using the PVC backed by LVM storage:

$ cat fio.yaml

apiVersion: v1
kind: Pod
metadata:
  name: fio
spec:
  restartPolicy: Never
  containers:
  - name: perfrunner
    image: openebs/tests-fio
    command: ["/bin/bash"]
    args: ["-c", "while true; do sleep 50; done"]
    volumeMounts:
       - mountPath: /datadir
         name: fio-vol
    tty: true
  volumes:
  - name: fio-vol
    persistentVolumeClaim:
      claimName: csi-lvmpv

After the deployment of the application, we can go to the node and see that the LVM volume is being used by the application for reading/writing the data, and that space is consumed from the LVM volume group. Please note that to check the provisioned volumes on the node, we need to run the pvscan --cache command to update the LVM cache; after that we can use lvdisplay and all the other LVM commands on the node.
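
For example, on the node where the pod was scheduled (lvmvg is the volume group created in the setup step):

sudo pvscan --cache                  ## refresh the lvm metadata cache
sudo lvs lvmvg                       ## list the logical volumes in the volume group
sudo lvdisplay                       ## show detailed information for each logical volume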

4. Deprovisioning

To deprovision the volume, delete the application which is using the volume, and then delete the PVC. As part of the PVC deletion, the volume will also be deleted from the volume group and the space will be freed.

$ kubectl delete -f fio.yaml
pod "fio" deleted
$ kubectl delete -f pvc.yaml
persistentvolumeclaim "csi-lvmpv" deleted
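
You can verify on the node that the logical volume is gone and its space has been returned to the volume group:

sudo pvscan --cache
sudo vgs lvmvg                       ## VFree should be back to the full size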

Features

  • Access Modes
    • ReadWriteOnce
    • ReadOnlyMany
    • ReadWriteMany
  • Volume modes
    • Filesystem mode
    • Block mode
  • Supports fsTypes: ext4, btrfs, xfs
  • Volume metrics
  • Topology
  • Snapshot
  • Clone
  • Volume Resize
  • Thin Provision
  • Backup/Restore
  • Ephemeral inline volume
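
As an example of one of these features, a volume can be resized by patching the PVC's storage request, provided the storage class sets allowVolumeExpansion: true (as in the allowedTopologies example above); a sketch using the csi-lvmpv PVC from this README:

$ kubectl patch pvc csi-lvmpv -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'
$ kubectl get pvc csi-lvmpv          ## CAPACITY shows the new size once expansion completes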

Limitation

  • Resizing a volume that has snapshots is not supported

License

This project is licensed under the Apache-2.0 license.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].