KVM headless host on an old AMD box
This has not been the simplest build in my history of builds. Having decided that building a new Ryzen-based mini-ITX server would be nice but just a bit too expensive, I scavenged together various bits from my PC pile and built a half-decent machine on which to run KVM for VMs and Docker for container-based applications. It's satisfying to get an old (10 years) motherboard and CPU going with a bunch of old drives and build something useful. It's fast enough for what I need, but there were one or two hurdles to get over.
First the specs:
- Gigabyte GA-MA790FXT-UD5P
- CPU: AMD Phenom™ II X3 720 (BIOS tricked into running four cores) @3200MHz
- RAM: 8GB in four 2GB modules DDR3 1066MHz
- Ethernet: twin RTL8111/8161/8411 GbE
- Video: NVIDIA GeForce 8400
- Storage (system): 80GB Intel SSD
- Storage (KVM storage pool): 1x WDC 1TB 7200RPM spinning rust machine
- Storage (video and backups): 2x Seagate 500GB 7200RPM spinning rust machines
Not the most powerful machine in the world but more than enough to run a few VMs and a ~Unifi Video server~.
Running headless
The machine will live in a cupboard a long way from a DVI cable, so all configuration is focused on being able to operate over SSH. I may set up remote GUI tools at some point for convenience, but this is a good opportunity to finally get to know KVM properly, so I'll be configuring and managing everything from the shell. Sadly, getting OpenBSD to install at the console is a PITA because virt-install doesn't understand OpenBSD very well, but installation via VNC is not the end of the world.
Base build
The base build is a straightforward installation of Debian. After installation, I did my usual updates to sshd_config (disabling root login, password authentication, etc.), added SSH keys to authorized_keys, updated the system and set up auto-updating. I installed the following (a rough sketch of the commands appears after this list):
- Mosh for long-running processes from workstations that like to go to sleep
- tmux for use when I'm not running it on my workstation or logging in interactively
- cryptsetup for mounting the LUKS-encrypted RAID volume
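Something like this, with unattended-upgrades as my assumption for the auto-updating piece:
$ sudo apt install mosh tmux cryptsetup
$ sudo vi /etc/ssh/sshd_config   # PermitRootLogin no, PasswordAuthentication no
$ sudo systemctl restart ssh
$ sudo apt install unattended-upgrades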
Storage: 1TB WDC drive for KVM pool
After partitioning and formatting, I added the following to /etc/fstab:
/dev/sda1 /opt/kvm ext4 defaults 0 0
The mode and permissions of the volume need to be changed but it's best done after KVM is installed so that the relevant groups are in place.
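For completeness, the partitioning and formatting step at the start of this was roughly the following, assuming the drive appears as /dev/sda as in the fstab line above:
$ sudo parted /dev/sda mklabel gpt
$ sudo parted -a optimal /dev/sda mkpart primary ext4 0% 100%
$ sudo mkfs.ext4 /dev/sda1
$ sudo mkdir -p /opt/kvm
$ sudo mount /opt/kvm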
Storage: 500GB Seagate mirror
The mirror was created in another machine many moons ago and has been brought across to this one. It's straightforward to import:
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ pvscan
This thing is LUKS encrypted so needs to be mounted after boot with:
$ sudo cryptsetup luksOpen /dev/mapper/[devicename] [name_for_enc_volume]
Added the following to /etc/fstab after checking /dev/mapper for the device name:
/dev/mapper/[name_for_enc_volume] /opt/backups ext4 defaults 0 0
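Putting it together, the post-boot mount sequence is just:
$ sudo cryptsetup luksOpen /dev/mapper/[devicename] [name_for_enc_volume]
$ sudo mount /opt/backups
One caveat I believe applies: since the mapper device doesn't exist until luksOpen runs, systemd may stall on the fstab entry at boot; adding noauto to its options avoids that.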
Network: Base configuration
Both interfaces, which show up as enp1s0 and enp2s0, will be connected to my management / server VLAN. The management interface is configured statically, along with DNS settings. The server interface is configured as manual without an IP address, as it will ultimately be added to a bridge. During the build, /etc/network/interfaces was as follows:
# Management interface
auto enp1s0
iface enp1s0 inet static
address [xxx.xxx.xxx.xxx]
netmask [xxx.xxx.xxx.xxx]
gateway [xxx.xxx.xxx.xxx]
dns-nameservers [xxx.xxx.xxx.xxx]
dns-search [domain.tld]
# Server interface
auto enp2s0
iface enp2s0 inet manual
Installing KVM
I would pretend that I knew what I was doing and did all these preparatory steps in advance. That would be a lie. But were I rebuilding the machine, I would indeed do all these things in advance of actually configuring KVM.
1. Install the KVM packages
$ apt install qemu-kvm libvirt-clients libvirt-daemon-system virt-install
This pulls in a stack of dependencies, including bridge-utils.
2. Setup user and permissions
Added my non-root user to the new libvirt and libvirt-qemu groups. In practice a lot of setup and VM creation still requires root but that's something to investigate down the line.
$ chown -R root:libvirt /opt/kvm
$ chmod -R 6770 /opt/kvm
Are those the right permissions? I don't know and should look into it but they seem to work and aren't insanely permissive.
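For the record, the group additions were a single usermod, assuming the user is [user]:
$ sudo usermod -aG libvirt,libvirt-qemu [user]
Log out and back in for the group changes to take effect.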
3. Created some directories
I want an iso directory for storing ISOs. Beyond that, I'll create individual directories for each VM within OS-specific subfolders (a sketch follows below). Check that they get root:libvirt ownership.
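What that looks like, with the subfolder names as my own examples:
$ sudo mkdir -p /opt/kvm/iso /opt/kvm/openbsd
$ sudo chown -R root:libvirt /opt/kvm/iso /opt/kvm/openbsd
$ ls -l /opt/kvm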
4. Setup bridged networking
I don't want NAT in use as I manage my home network through DHCP and DNS. That means I need to use bridged networking for VMs. The bridge has enp2s0 as its real interface, which itself doesn't receive an IP address directly. To create the bridge, I added the following to /etc/network/interfaces:
# KVM server bridge
auto kvmbr0
iface kvmbr0 inet dhcp
bridge_ports enp2s0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
Test with:
$ sudo ifup kvmbr0
and
$ ip addr
NOTE: Need the virsh command to check what interfaces are assigned on the bridge, as they don't show up at the system level with ip addr.
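In the meantime, two commands that I believe cover this: brctl show (from bridge-utils) lists the ports attached to each bridge, and virsh domiflist shows a given domain's interfaces and the bridge they connect to.
$ brctl show kvmbr0
$ sudo virsh domiflist [domain_name]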
Configuring KVM
The core infrastructure pieces are now in place and it's time to configure KVM by assigning these resources to it.
1. Creating a storage pool for VMs
There are three kinds of storage pool in KVM:
- Dedicated storage device pools which give KVM direct access to the storage device from which partitions are allocated directly to VMs.
- Partition-based storage pools which allocate a formatted partition to KVM; KVM still manages the mounting of the device.
- Directory-based storage pools in which the device mount is managed by the OS and VMs are allocated space within a directory structure.
I'm not trying to achieve anything fancy and I may wish to use the 1TB volume for other things so I went with the directory-based option.
To create the storage pool, I first created a storage pool definition, added the directory in which guest images would be stored, started the pool and set it to autostart:
$ sudo virsh pool-define-as [guest_images_dir] dir - - - - "/opt/kvm/[guest_images_dir]"
$ sudo virsh pool-build [guest_images_dir]
$ sudo virsh pool-start [guest_images_dir]
$ sudo virsh pool-autostart [guest_images_dir]
The configuration can be verified with sudo virsh pool-list and sudo virsh pool-info [guest_images_dir]. Detailed configuration information is available with sudo virsh pool-dumpxml [guest_images_dir].
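As an aside, volumes can also be created in the pool by hand rather than letting virt-install allocate them; a sketch, with the volume name and size as my own examples:
$ sudo virsh vol-create-as [guest_images_dir] testvm.qcow2 20G --format qcow2
$ sudo virsh vol-list [guest_images_dir]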
2. Assigning the bridged network to KVM
By default, KVM uses a combination of NAT and dnsmasq to provide network services to virtual machines. I prefer to keep my network management centralized and so don't want additional DHCP / DNS complexity in my network. The solution is to offer KVM a bridge interface on which its virtual switches can be instantiated and to which guest interfaces are connected.
The KVM administration guide does not actually refer to bridged mode as a thing. The options it discusses are NAT, Routed (a separate subnet for VMs, managed by the host and routed to the primary network - possibly a synonym for bridged) and Isolated, in which VMs can only talk to one another and not the wider network.
The way I understand the configuration I've created: I have assigned the network bridge created above to KVM, on which it has created a default virtual switch that I see at the OS level as vnet0.
To allocate the bridge, I created an XML file with the configuration. I did this because it's one way of feeding arguments consistently into virsh and I wanted to test it, rather than because I had to.
$ vi /tmp/[bridge_name].xml
<network>
<name>kvmbr0</name>
<forward mode="bridge"/>
<bridge name="kvmbr0"/>
</network>
It's not a complex configuration. To implement it, I ran sudo virsh net-define /tmp/[bridge_name].xml and checked that the configuration had taken with sudo virsh net-list.
Like storage pools, the new network needs to be started and made to autostart:
$ sudo virsh net-autostart [bridge_name]
$ sudo virsh net-start [bridge_name]
Creating a VM
The KVM installation is now ready for a VM to be created. VM creation from the shell uses the virt-install tool, which doesn't seem to come included by default with KVM and must be installed; I called it out at the top as one of the packages, though, so it should already be present. In this step-by-step I'll create the VM, then create an SSH tunnel from another machine to allow me to VNC into its console. Given that I'm going to be installing OpenBSD on this VM, I'm certain I should be able to avoid the need for the secondary machine altogether and simply manage it directly via its console using virsh console [domain_name]. Once the installation is complete, that actually works fine, but I have yet to figure out how to force the VM to start with console output. TODO.
The VM I'll create first is an OpenBSD snapshot (6.6-beta at the time of writing). The virt-install command allows VMs to be booted from HTTP-accessible ISOs, but I'll keep it simple and download the thing with wget http://ftp.nluug.nl/pub/OpenBSD/snapshots/amd64/install66.iso in /opt/kvm/iso.
To find the correct OS name, you need the osinfo-query tool. It's installed with osinfo-db-tools and libosinfo-bin. This is used for the --os-variant argument.
$ osinfo-query os | grep openbsd
[...]
openbsd6.2 | OpenBSD 6.2 | 6.2 | http://openbsd.org/openbsd/6.2
A domain is what KVM calls a VM.
$ sudo virt-install --virt-type kvm --name obsdtestbuild --cpu Opteron_G1 --vcpus 2 --memory 2000 --cdrom /opt/kvm/iso/install66.iso --disk pool=funky_obsd_images,size=20,sparse --network bridge=kvmbr0 --os-variant openbsd6.2 --serial pty --console pty --graphics vnc,port=5910
The Opteron_G1 is selected because there are issues between OpenBSD and Opteron_G3 (the model that properly matches the Phenom). These cause panics at boot (including during installation) that are not going to be fixed due to the age of my motherboard. I'm not overly concerned about maximizing performance, so I can live with this. I would love to be able to do the installation over the serial console, but this requires passing arguments to the OpenBSD boot prompt, which doesn't appear to be possible with KVM over the serial console. I might solve this with OpenBSD's autoinstall functionality down the line.
After much screwing around with --cdrom (doesn't show output on the console) and --location (doesn't understand the OpenBSD file layout), I found some suggestions that building a custom installation image with /etc/boot.conf configured with set tty com0 would do the trick. That might well be a solution, but using VNC is easy enough and adds little complexity. To do so, I included the --graphics vnc,port=[local_port] argument. This has the VM's VGA console redirect to a VNC instance on 127.0.0.1:[local_port]. To get access to this from another machine, we need to create an SSH tunnel and then connect a VNC client to the local port. So:
$ ssh [user]@[kvmhost.tld] -L [client_port]:127.0.0.1:[host_port]
Where [client_port] is the port the VNC client will connect to on the management machine and [host_port] is the port specified in the virt-install command.
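A worked example with made-up values: if the VM was created with --graphics vnc,port=5910 and the VNC client will connect to local port 5901, the tunnel is:
$ ssh admin@kvmhost.example.org -L 5901:127.0.0.1:5910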
Now connect up the VNC client to 127.0.0.1:[client_port] and complete the installation. You can select to redirect the console during the OpenBSD installation or do it afterwards by creating /etc/boot.conf as above. When you reboot the machine, the virt-install process will finish on the host and you can check the existence of the VM with sudo virsh list.
This results in a VM with the following characteristics, as obtained with sudo virsh dumpxml obsdtestbuild:
<domain type='kvm' id='34'>
<name>obsdtestbuild</name>
<uuid>335ee00f-4e2a-4cd1-acab-33e5980ffeef</uuid>
<memory unit='KiB'>2048000</memory>
<currentMemory unit='KiB'>2048000</currentMemory>
<vcpu placement='static'>2</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-2.8'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact'>
<model fallback='forbid'>Opteron_G1</model>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/opt/kvm/obsdkerncomp/obsdtestbuild-18.qcow2'/>
<backingStore/>
<target dev='hda' bus='ide'/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<backingStore/>
<target dev='hdb' bus='ide'/>
<readonly/>
<alias name='ide0-0-1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<alias name='usb'/>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<alias name='usb'/>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<alias name='usb'/>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:8f:49:c4'/>
<source bridge='kvmbr0'/>
<target dev='vnet1'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/2'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/2'>
<source path='/dev/pts/2'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='mouse' bus='ps2'>
<alias name='input0'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input1'/>
</input>
<graphics type='vnc' port='5910' autoport='no' listen='127.0.0.1'>
<listen type='address' address='127.0.0.1'/>
</graphics>
<video>
<model type='cirrus' vram='16384' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</memballoon>
</devices>
<seclabel type='none' model='none'/>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+64055:+64055</label>
<imagelabel>+64055:+64055</imagelabel>
</seclabel>
</domain>
Odds and ends
Reset the OpenBSD root password in single-user mode (because you mashed the keyboard during installation and forgot that you need to set up doas.conf as root…)
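From memory, the single-user dance looks like this: interrupt the boot at the boot> prompt, take the default shell when asked, remount the filesystems, then set the password.
boot> boot -s
# fsck -p /
# mount -uw /
# mount /usr
# passwd root
# reboot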
Connecting to the console of a domain when console redirection has been configured (i.e. via /etc/boot.conf in OpenBSD) can be performed on the host with virsh console [domain_name], which means that interacting with non-X machines can be done entirely via SSH.
How do I kill / delete / murder / destroy a pool, network or VM when I stuff up the epic command line required to create it (use an XML config, dummy)?
$ sudo virsh destroy [domain_name]
$ sudo virsh undefine [domain_name]
It doesn't seem to matter which order these are executed in, but both are required: destroy stops the running domain and undefine removes its configuration.
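The same stop-then-undefine pattern appears to apply to networks and pools:
$ sudo virsh net-destroy [network_name]
$ sudo virsh net-undefine [network_name]
$ sudo virsh pool-destroy [pool_name]
$ sudo virsh pool-undefine [pool_name]
Note that pool-destroy only stops the pool; virsh pool-delete is the one that actually removes the underlying storage.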
Script to stand up a new VM
I got bored of forgetting how to setup VMs when I needed to create them so wrote a script. It needs error handling.
#!/bin/bash
# Stands up a new VM: creates a dedicated storage pool, then boots the
# installer ISO with the VM console exposed over VNC. TODO: error handling.
vmname="$1"
vncport="$2"
filename="$3"
vmname_pool="${vmname}_pool"
install="/opt/kvm/iso/$filename"
echo "Script takes three arguments: vmname, vncport and the filename of the install ISO"
echo "Connect to the VM to build it using VNC. Establish the port forward with: ssh -p 2411 mjpadmin@funktower.sys.kyomu.co.uk -L $vncport:127.0.0.1:$vncport"
echo "Creates a storage pool named $vmname_pool, a new VM named $vmname, boots it from $filename (stored in /opt/kvm/iso/) and exposes its console over VNC on port $vncport"
# Create a disk pool
echo "Creating the disk pool"
virsh pool-define-as "$vmname_pool" dir - - - - "/opt/kvm/$vmname"
virsh pool-build "$vmname_pool"
virsh pool-start "$vmname_pool"
virsh pool-autostart "$vmname_pool"
# Create the VM
echo "Creating $vmname"
virt-install --virt-type kvm --name "$vmname" --cpu Opteron_G1 --vcpus 2 \
  --memory 4000 --cdrom "$install" --disk pool="$vmname_pool",size=60,sparse \
  --network bridge=kvmbr0,model=virtio --os-variant openbsd6.2 \
  --serial pty --console pty --graphics vnc,port="$vncport"
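Invocation then looks something like this (the script name is whatever you saved it as; the values are examples):
$ sudo ./newvm.sh obsdtestbuild 5910 install66.iso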