Creating a QRetail cash register container for Proxmox

Container overview

Containers in Proxmox are an interesting thing. A container isn’t a full VM in the sense of inhabiting its own isolated environment all the way down to the BIOS level; instead it’s a Linux system running on the CPU and in the RAM of the host computer, but with its file system and processes isolated (“contained”) from the host’s.

Proxmox stages a container from a template file, which contains the contents of the root file system, by allocating a new LV and unpacking the template into it. Proxmox has several templates available for download. Because the containers use the host’s kernel they have to match the host’s OS: one can’t set up a Windows or BSD container on a Linux system.
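
The stock templates are managed with the pveam tool; here’s a quick sketch, with the exact template file name left as a placeholder since it changes as new builds are published:

root@raven:~# pveam update                      # Refresh the index of downloadable templates
root@raven:~# pveam available --section system  # List what's on offer
root@raven:~# pveam download local centos-7-default_<version>_amd64.tar.xz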

When setting up a container, the user specifies a few attributes for it (see the pct create sketch after this list):

  • The root password
  • An ssh public key to set up in ~root/.ssh
  • The template to use when creating the root file system
  • The amount of disc space to allocate to the root file system
  • The number of CPU cores on the host system to allocate to the container
  • The amount of RAM to allocate to the container
  • Network parameters, including a static IP or DHCP
  • DNS servers for the container to use, or use the host’s servers
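
All of those attributes can also be supplied on the command line with pct create. This is a sketch only: the VMID and hostname are hypothetical, the other values are modelled on the container configuration shown near the end of this article, and the root password and DNS servers have their own options (--password and --nameserver) that I’ve left out here:

root@raven:~# pct create 200 local:vztmpl/centos-7-default_<version>_amd64.tar.xz \
      --hostname test-ct \
      --cores 1 --memory 512 --swap 512 \
      --rootfs local-lvm:4 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp \
      --ssh-public-keys /root/.ssh/id_rsa.pub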

When the container starts for the first time, Proxmox dynamically modifies the container’s setup by doing the following:

  • Modifies the hostname to match the name given to the container
  • Updates the IP address if the user selected a static IP when configuring the container
  • Sets the network parameters
  • Mounts any additional directories or volumes within the container (see the pct set sketch after this list)
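
That last point is worth a concrete example: extra host directories are declared as mount points on the container and bind-mounted into it at start-up. A sketch with a hypothetical host path and the VMID of the test container described later:

root@raven:~# pct set 117 -mp0 /srv/qretail-data,mp=/opt/qretail/INDAT   # Bind-mount a host directory into the container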

Running a cash register as a container

That got me thinking: could I set up a QRetail cash register in a container, then stage, for example, a dozen cash registers all in one go?

An issue I ran into at the outset is that the template file used when staging a container is essentially immutable. The proper way to do this is a two-step process:

  • Stage the container, which contains a base operating system
  • Configure the container for use by running a utility such as Puppet, Chef, or Ansible

The first part is dead simple: Proxmox does that for me. But the second part … not so much. I didn’t want to take a week out of the project to set up a configuration system.

So instead I decided to use the supplied CentOS 7 template, get it ready to run QRetail, then build a new template from the result.

Creating the cash register’s container template

Display the current set of logical volumes

root@raven:~# lvs
  LV            VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve       twi-aotz-- 121.80g             84.55  42.49
  root          pve       -wi-ao----   4.00g
  srs           pve       -wi-a-----   8.00g
  swap          pve       -wi-ao----   7.00g
  vm-100-disk-1 pve       Vwi-a-tz--   6.00g data        16.53
  vm-101-disk-1 pve       Vwi-a-tz--   6.00g data        13.13
  vm-102-disk-1 pve       Vwi-a-tz--   8.00g data        38.67
  vm-103-disk-2 pve       Vwi-a-tz--  29.30g data        16.93
  vm-103-disk-3 pve       Vwi-a-tz--  29.30g data        37.05
  vm-104-disk-1 pve       Vwi-a-tz--  16.00g data        78.17
  vm-105-disk-1 pve       Vwi-a-tz--   8.00g data        38.67
  vm-106-disk-1 pve       Vwi-a-tz--   8.00g data        69.73
  vm-107-disk-1 pve       Vwi-a-tz--   4.00g data        50.36  <-- CentOS-7-QRetail-CR
  vm-108-disk-1 pve       Vwi-a-tz--  16.00g data        79.42
  vm-109-disk-1 pve       Vwi-a-tz--   8.00g data        63.88
  vm-110-disk-1 pve       Vwi-a-tz--  16.00g data        49.10
  vm-111-disk-1 pve       Vwi-a-tz--  12.00g data        71.65
  vm-112-disk-1 pve       Vwi-a-tz--   6.00g data        37.65
  vm-113-disk-1 pve       Vwi-a-tz--  10.00g data        59.70
  vm-114-disk-1 pve       Vwi-a-tz--   4.00g data        44.11
  vm-115-disk-1 pve       Vwi-a-tz--   8.00g data        94.23
  vm-116-disk-1 pve       Vwi-a-tz--   8.00g data        91.28
  projects      pve_extra -wi-ao----  10.00g

Determine the filesystems on the disk

root@raven:~# fdisk -l /dev/pve/vm-107-disk-1
Disk /dev/pve/vm-107-disk-1: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x000095a0

Device                   Boot   Start     End Sectors Size Id Type
/dev/pve/vm-107-disk-1p1 *       2048 2099199 2097152   1G 83 Linux
/dev/pve/vm-107-disk-1p2      2099200 8388607 6289408   3G 8e Linux LVM
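
The LVM partition (type 8e) starts at sector 2099200, and with 512-byte sectors that works out to the byte offset the next step feeds to losetup:

root@raven:~# echo $((512 * 2099200))
1074790400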

Create a loop device pointing at the VM’s LVM partition

root@raven:~# losetup -o $((512*2099200)) /dev/loop0 /dev/pve/vm-107-disk-1
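
On a host with a reasonably recent util-linux, an alternative is to let losetup scan the partition table itself rather than computing the offset by hand; the PV then appears on /dev/loop0p2 and the vgscan below finds it just the same:

root@raven:~# losetup -f -P --show /dev/pve/vm-107-disk-1
/dev/loop0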

Scan for volume groups

root@raven:~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "pve_extra" using metadata type lvm2
  Found volume group "pve" using metadata type lvm2
  Found volume group "centos" using metadata type lvm2

Activate the newly found volume group

root@raven:~# vgchange -a y centos
  2 logical volume(s) in volume group "centos" now active

Display the logical volumes

root@raven:~# lvs | grep centos
  root          centos    -wi-a-----   2.59g
  swap          centos    -wi-a----- 412.00m

Create a mount point and mount the VM’s root logical volume

root@raven:~# mkdir /tmp/ROOT && mount /dev/centos/root /tmp/ROOT

Check space usage

root@raven:~# df -h | grep -e ^Filesystem -e ROOT
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root         2.6G  1.5G  1.2G  58% /tmp/ROOT

Create a tar archive of the file system

root@raven:~# cd /tmp/ROOT; time tar cf /var/tmp/vm-107-root-fs.tar *
real    1m13.614s
user    0m0.325s
sys     0m2.600s

Compress the tar file

root@raven:~# pv /var/tmp/vm-107-root-fs.tar | xz >>/var/lib/vz/template/cache/centos-7-amd64-qretail-CR.tar.xz
1.38GiB 0:07:21 [3.21MiB/s] [========================================>] 100%
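
For what it’s worth, the archive and compression steps can be combined, and tar’s --numeric-owner option is worth having for an archive that will be unpacked on other systems; a sketch of the one-step form (at the cost of pv’s progress display):

root@raven:~# tar --numeric-owner -C /tmp/ROOT -cJf \
    /var/lib/vz/template/cache/centos-7-amd64-qretail-CR.tar.xz .

Using “.” instead of “*” also picks up any dot files at the top of the tree.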

Clean up

cd
rm -f /var/tmp/vm-107-root-fs.tar   # Delete the intermediate tar file
umount /tmp/ROOT                # Unmount the file system
rmdir /tmp/ROOT                 # Remove the mount point
vgchange -a n centos            # Deactivate the volume group
losetup -d /dev/loop0           # Clean up the loop device
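
With the new template sitting in /var/lib/vz/template/cache it shows up next to the stock templates, and the original question (staging a dozen cash registers in one go) comes down to a short loop. A sketch only: the starting VMID, hostnames, and resource sizes here are illustrative, modelled on the test container shown further down:

pveam list local                        # The new template should appear here

vmid=120                                # Hypothetical starting VMID
for suffix in a b c d e f g h i j k l   # cr-vma .. cr-vml
do
    pct create $vmid local:vztmpl/centos-7-amd64-qretail-CR.tar.xz \
        --hostname cr-vm$suffix \
        --cores 1 --memory 512 --swap 512 \
        --rootfs local-lvm:4 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp \
        --ssh-public-keys /root/.ssh/id_rsa.pub
    vmid=$((vmid + 1))
done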

Automatically setting the cash register name

As part of this, I realised I needed to set the name of the cash register, and thus the extension of the data files it creates, based on the name of the system on which it’s running. I did that by adding the following lines to qretail/linux/bin/qr-pos-cr.sh:

# Change VMX in CONFIGUR.SYS to the last three characters of the hostname
if grep -q VMX CONFIGUR.SYS
then
    NEW_ID="${HOSTNAME/.*/}"
    P=$((${#NEW_ID} - 3))
    NEW_ID="$(echo ${NEW_ID:$P} | tr '[a-z]' '[A-Z]')"
    cd /opt/qretail
    LANG=C; sed --in-place "s/VMX/$NEW_ID/g" CONFIGUR.SYS
    cd - &>/dev/null
fi
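
So on a container named cr-vma (as in the configuration dump below) the substitution works out like this:

$ HOSTNAME=cr-vma
$ NEW_ID=${HOSTNAME%%.*}                    # cr-vma
$ P=$(( ${#NEW_ID} - 3 ))                   # 3
$ echo ${NEW_ID:$P} | tr '[a-z]' '[A-Z]'
VMA

and CONFIGUR.SYS ends up with VMA wherever the template had VMX.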

Issue: containers don’t like mounting NFS file systems

The register needs to mount /opt/qretail/INDAT from the server heron over NFS, so I created a test container from the new template to try it out. Here’s the container’s configuration:

cores=1, hostname=cr-vma, memory=512, net0="bridge=vmbr0,name=eth0,ip=dhcp",
nodename=raven, ostemplate=local:vztmpl/centos-7-amd64-qretail.tar.xz,
rootfs=local-lvm:4, ssh-public-keys="ssh-rsa AAAAB3N...", swap=512, vmid=117

When the container tried the mount, AppArmor denied it, and the kernel audit log recorded the failure:

audit:
  type=1400 audit(1560829547.467:44):
  apparmor="DENIED"
  operation="mount"
  info="failed type match"
  error=-13
  profile="lxc-container-default-with-nfs"
  name="/opt/qretail/INDAT/"
  pid=6359
  comm="mount.nfs"
  fstype="nfs"
  srcname="heron:/opt/qretail/INDAT"

Now, if only I had a way of transferring files [using scp instead of NFS file mounts] …
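
A minimal sketch of that idea, assuming the register can reach heron over ssh (the paths come from the audit record above; how and when the copy gets triggered is another question entirely):

scp -q 'heron:/opt/qretail/INDAT/*' /opt/qretail/INDAT/   # Pull the input data instead of mounting it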