I'm encountering odd behavior with an LVM-based KVM hypervisor configuration on a host system after creating a new Linux VPS from an ISO. I've been unable to determine whether this is intended behavior of KVM, unintended behavior of Virtualizor, or just my particular storage configuration. Hoping someone in the community has experienced this before or knows a workaround.
Description: Deleting a VPS that was installed from a Linux ISO with an LVM partition layout during initial setup leaves an orphaned disk in the host system's LVM table, even though the VPS is successfully removed from the Virtual Server list. Attempting to delete the orphaned disk immediately fails with "Error deleting disk(s)", with no associated errors listed in Tasks. To remove the orphaned disk, a reboot of the hardware host is necessary, after which the orphaned disk can be deleted.
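In case it helps with troubleshooting, something like the following should show whether the orphaned LV is still held open by a nested device-mapper mapping before the reboot (these are standard LVM/dmsetup commands, not anything Virtualizor-specific, and the output will use your own VG/LV names):

# List all LVs on the host, including any guest VG that got scanned in
lvs -a -o lv_name,vg_name,attr,devices

# Show device-mapper mappings and their open counts; an open count above 0
# on the orphaned disk's mapping is what would block lvremove
dmsetup info -c

# Show which mappings are stacked on top of each device
dmsetup ls --tree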
LVM configuration is currently a Thin LVM (RAW) storage pool.
Using latest Virtualizor version 3.0.3p7. Kernel 3.10.0-1127.13.1.el7.x86_64
I suspect this is the behavior of LVM and KVM generally, not of Thin LVM or the pool in this particular case. Creating an LVM-partitioned VPS appears to modify the host's LVM tables, creating an associated LVM physical volume, volume group and logical volume. Upon deletion of the VPS, all three storage objects remain until the host is rebooted, at which point the PV and VG objects are removed automatically and the orphaned LV can finally be deleted.
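If that is what's happening, then in theory deactivating the guest's volume group on the host should release the orphaned disk the same way a reboot does, without taking the node down. I haven't confirmed this yet, and the names below are placeholders for whatever shows up in vgs/lvs on your host:

# Deactivate the guest volume group that the host auto-activated
vgchange -an guestvg

# The guest PV/VG metadata lives inside the orphaned LV, so once the
# VPS disk itself is removed they should disappear with it
lvremove /dev/hostvg/vsXXXX-orphaned-disk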
While I'm under the impression that modification of the host's LVM table by KVM for an LVM-based VPS is expected, rebooting a production hardware node each time a client re-installs is problematic, and the orphaned disks that persist in the meantime would ultimately exhaust the storage pool. Ideally I would like a way to avoid this entirely, or at least a workaround that could be used to clean up the storage over time.
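One possible way to avoid the problem entirely might be to stop the host from scanning the contents of VPS disks at all, so guest PVs/VGs are never activated on the node in the first place. A minimal sketch of that, assuming the VPS logical volumes appear under /dev/mapper/hostvg-vs* (path is a placeholder, and I have not tested this on a production node):

# /etc/lvm/lvm.conf, inside the devices { } section:
# reject the mapper paths of the VPS logical volumes, accept everything else
global_filter = [ "r|^/dev/mapper/hostvg-vs.*|", "a|.*|" ]

# refresh the LVM metadata cache so the filter takes effect
pvscan --cache

The filter only needs to exclude the VPS LVs, not the underlying physical disks, so the host's own volume group should be unaffected, but I'd be interested to hear whether anyone has run Virtualizor with a filter like this.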
Thanks