echo "

User session recording using log-user-session

Since I needed a good way to track what users do based on their IP/SSH fingerprint, I started looking around and found log-user-session to be a very neat tool. I created an RPM for RHEL 7 and a DEB for Ubuntu 18.04 Bionic. Aside from installing the RPM/DEB, you just need to make sure these 2 lines are present in /etc/ssh/sshd_config and you are good to go.

LogLevel VERBOSE
ForceCommand /usr/bin/log-user-session
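
After editing sshd_config, validate the config and restart sshd (a minimal sketch; the service is called sshd on RHEL 7 and ssh on Ubuntu):

sshd -t
systemctl restart sshd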

For fingerprint pairing, just use the date and IP and look up the matching fingerprint in /var/log/secure (RHEL) or /var/log/auth.log (Ubuntu).
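
For example, a hypothetical lookup for the placeholder IP 192.0.2.10 (with LogLevel VERBOSE the matching sshd log line ends in the key fingerprint):

grep "Accepted publickey" /var/log/auth.log | grep 192.0.2.10
*on RHEL grep /var/log/secure instead*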

The GitHub page of the project

RPM for Red Hat 7

DEB for Ubuntu 18.04

MDADM and a single LVM to boot from (on a Debian based system)

Yesterday I moved a customer's server to new disks. To get some extra features like snapshotting I opted to convert the current static disk layout to LVM on top of MDADM. As a bonus I did the entire sync online to a new disk connected to my laptop which acted as a degraded mirror.

Yes, I know GPT is the way to go, but I started this move at 7 PM to minimise business impact, and since the new disks are still only 500G I stuck with the MBR layout that was already in place. For GPT "RAID" check this answer

This tutorial can also be used to move a non-RAID server to a RAID setup or to move a server to a new machine that will have MDADM RAID.

First create the partition table

parted /dev/sdz 
mklabel msdos
mkpart primary ext2 0% 100%
set 1 lvm on
quit

Create the MDADM mirror and the LVM group + logical volumes (I will only create 2 logical volumes in this tutorial, but you can use this as a guide and build a more elaborate layout for a professional environment)

mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdz1 missing
pvcreate /dev/md0
vgcreate lvm /dev/md0
lvcreate -L 500M lvm -n boot
lvcreate -L 20G lvm -n root
mkfs.ext4 /dev/mapper/lvm-boot
mkfs.ext4 /dev/mapper/lvm-root

 Next mount the new partitions and start cloning the old system

mount /dev/mapper/lvm-root /mnt
mkdir /mnt/boot
mount /dev/mapper/lvm-boot /mnt/boot
rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} user@src:/ /mnt/
*here you should stop all services on the src and rerun the rsync command for a final sync*
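
For example (a sketch; the service names are placeholders for whatever actually runs on your source system):

ssh user@src 'systemctl stop apache2 mysql cron'
rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} user@src:/ /mnt/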

Next we will set up the bootloader and adapt the system files

for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done
sudo cp /etc/resolv.conf /mnt/etc/resolv.conf
sudo chroot /mnt
blkid
*get the IDs of the LVM partitions*
vi /etc/fstab
*replace the IDs of the old /, /boot, ...*
apt-get install lvm2 mdadm
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
*delete the previous lines starting with ARRAY, if any*
vi /etc/mdadm/mdadm.conf
*now update initramfs to make sure it contains MDADM and LVM2 support and install grub to the new disk*
update-initramfs -k all -u -v
grub-install /dev/sdz

At this point you should be able to shut down the src server and boot from the new disk after replacing it (or, if you use this to move a server, by starting the dst server). If you are able to boot successfully and already have the second disk in place, we are now going to "restore" the mirror.

*copy the MBR*
dd if=/dev/sdz of=/dev/sdy bs=512 count=1
partprobe /dev/sdy
*add the disk to the MDADM RAID to start rebuilding*
mdadm --manage /dev/md0 --add /dev/sdy1
*check if the rebuild is started*
cat /proc/mdstat
*just to make sure, reinstall grub on the boot disks; choose both /dev/sdz and /dev/sdy*
dpkg-reconfigure grub-pc

Another reboot after the rebuild is finished can be good to verify everything.

Upgrading Ubuntu to 16.04 on OpenVZ (Proxmox <=3.4)

Stop any service and create a "snapshot". Since my Proxmox doesn't have a snapshot solution yet, I specifically chose to use the local (SSD) storage without compression to get the backup to finish as fast as possible; we can't afford unnecessary downtime, and in the next generation of our servers we will be using file-system-based snapshots (LVM or ZFS), which makes waiting for a backup to finish a thing of the past. I created my snapshot using the web interface of Proxmox 3.4, but this is what happens in the background after stopping the container.

vzdump 113 --remove 0 --mode stop --storage local --node duff-prox-01

I changed the IP of my VPS during the entire upgrade.

sed -i "s/old_ip/temporary_ip/g" /etc/network/interfaces /etc/hosts
/etc/init.d/networking restart

Next upgrade Ubuntu (the do-release-upgrade tool is in the update-manager-core package)

do-release-upgrade

You will get a warning, and we will address this by replacing systemd with the legacy Upstart before rebooting into Ubuntu 16.04

apt-get install upstart-sysv

After the reboot, check with netstat if everything started fine, change the IP back and reboot to start using a fully up-to-date Ubuntu in a dated VPS environment.

netstat -atnp 
apt-get -y upgrade
apt-get -y dist-upgrade
sed -i "s/temporary_ip/old_ip/g" /etc/network/interfaces /etc/hosts
reboot

Zimbra to Zimbra migration using Zextras and SSHFS

This is only an extension to a normal installation. So you should have your new Zimbra environment ready to move the data of your current Zimbra environment to it.

In the past we did an in-place upgrade of our Zimbra and the Ubuntu it was residing on. The last upgrade from 8.5.1 on Ubuntu 14.04 to 8.7.1 on Ubuntu 16.04 was such a drama that I started looking for an easier solution that would give me a fresh Zimbra with all my settings, accounts, mails, calendar items ... without too much fuss.

To prevent inconsistency please stop your old Zimbra and afterwards make a snapshot and/or backup before you continue.

/etc/init.d/zimbra stop

OR

su - zimbra -c "zmcontrol stop"

Also stop and disable fetchmail if you're using it

/etc/init.d/fetchmail stop 
sed -i "s/START_DAEMON=yes/START_DAEMON=no/g" /etc/default/fetchmail

Next change the IP, because Zimbra will need to be started again to do the export

sed -i "s/old_ip/temporary_ip/g" /etc/hosts /etc/network/interfaces
reboot

Next, on both Zimbras, create folders to store the migration data in

su - zimbra -c "mkdir -p /opt/zimbra/backup/exports /opt/zimbra/backup/zextras"

Next mount this folder as the user zimbra on the new Zimbra (don't set a password on the zimbra user; just create a key pair and share the SSH public key). This saves you from having to copy the exported data over afterwards.
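
A minimal sketch of the key setup, run on the new Zimbra (zimbra@temporary_ip is the zimbra user on the old server):

su - zimbra
ssh-keygen -t rsa
*append the new ~/.ssh/id_rsa.pub to /opt/zimbra/.ssh/authorized_keys on the old Zimbra (as root, since the zimbra user has no password)*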

 sshfs zimbra@temporary_ip:/opt/zimbra/backup/exports/ /opt/zimbra/backup/exports/

Next install the migration tool on the old Zimbra

cd /tmp && wget https://download.zextras.com/zextras_migration_tool-latest.tgz
tar xvzf zextras_migration_tool-*.tgz
cd zextras_migration_tool-*/
./install all

Now open the Zimbra admin page (https://temporary_ip:7071) and start exporting the domains and users you want to keep, through ZeXtras -> ZxMig -> Start Migration


 Change the folder to the newly created /opt/zimbra/backup/exports


Select the domains you want to move


Under ZxNotifications check for any error during the export


When this finishes you can start importing on the new Zimbra. First install the Zextras suite (it is free for 30 days, and we will remove it once we are done to get rid of licence warnings afterwards)

cd /tmp && wget http://download.zextras.com/zextras_suite-latest.tgz 
tar xvzf zextras_suite-*.tgz
cd zextras_suite-*/
./install all

Open the Zimbra admin page (https://new_ip:7071) and go to ZeXtras -> ZxBackup -> Import Backup


Change the path to /opt/zimbra/backup/exports


Select the domains you want to import


Select the accounts you want to import


And check if the restore started without warnings


After this finishes you can clean up and give your new Zimbra the IP of the old one to start sending and receiving mail again.

If you had an SSL certificate installed now is the time to move that as well. Do this as user zimbra!

On the old zimbra

rsync -au /opt/zimbra/ssl/ zimbra@new_ip:/tmp/ssl/

On the new Zimbra

cp /tmp/ssl/zimbra/commercial/commercial.key /opt/zimbra/ssl/zimbra/commercial/ 
./zmcertmgr deploycrt comm /tmp/ssl/zimbra/commercial/commercial.crt /tmp/ssl/zimbra/commercial/commercial_ca.crt

In any case I would also enable HTTP -> HTTPS redirection

zmprov ms "FQDN" zimbraReverseProxyMailMode redirect

Let's finish:

FIRST: shut down the old Zimbra machine and make sure it won't boot by itself. Before you do, don't forget to copy over any backup settings, fetchmail config, cron jobs, ...
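
For example, run from the new Zimbra while the old one is still reachable on temporary_ip (a sketch assuming root SSH access between the two machines; adapt it to what you actually had configured):

ssh root@temporary_ip "crontab -u zimbra -l" | crontab -u zimbra -
rsync -au root@temporary_ip:/etc/default/fetchmail /etc/default/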

On the new Zimbra

cd /tmp/zextras_suite-*/ 
./install -u all
sed -i "s/new_ip/old_ip/g" /etc/hosts /etc/network/interfaces
reboot

Now your new Zimbra should be available with all the settings, accounts and mails just like it was when you stopped your old Zimbra.

Upgrading an existing Zimbra environment from Ubuntu 14.04 (ZCS 8.6.0) to Ubuntu 16.04 (ZCS 8.7.1)

I got some complaints about the current environment; I checked, and according to the changelog these issues should be fixed by now. So I'm going to perform an in-place upgrade of my current Zimbra 8.6.0 on Ubuntu 14.04 to Zimbra 8.7.1 on Ubuntu 16.04. The system runs in an OpenVZ container, which gives us some extra flexibility.


First of all create a backup or a snapshot after you stopped the Zimbra services.

/etc/init.d/zimbra stop

Next create a "snapshot". Since this hypervisor doesn't have a snapshot solution, I specifically chose to use the local (SSD) storage without compression to get the backup to finish as fast as possible; we can't afford unnecessary downtime, and in the next generation of our servers we will be using file-system-based snapshots (LVM or ZFS), which makes waiting for a backup to finish a thing of the past. I created my snapshot using the web interface of Proxmox 3.4, but this is what happens in the background after stopping the container.

vzdump 113 --remove 0 --mode stop --storage local --node duff-prox-01


To prevent Zimbra from accepting mail, I changed the IP of my VPS during the entire upgrade. Next restart Zimbra and make sure it still runs fine.

sed -i "s/172.16.4.10/172.16.4.99/g" /etc/network/interfaces /etc/hosts
service networking restart
/etc/init.d/zimbra restart


Next install and enable the Zimbra proxy and memcached if you didn't have them configured in the past. See https://wiki.zimbra.com/wiki/Enabling_Zimbra_Proxy_and_memcached

cd /tmp/ && wget https://files.zimbra.com/downloads/8.6.0_GA/zcs-8.6.0_GA_1153.UBUNTU14_64.20141215151116.tgz
tar xvzf zcs-8.6.0_GA_1153.UBUNTU14_64.20141215151116.tgz
cd zcs-8.6.0_GA_1153.UBUNTU14_64.20141215151116/packages
dpkg -i zimbra-proxy_8.6.0.GA.1153.UBUNTU14.64_amd64.deb zimbra-memcached_8.6.0.GA.1153.UBUNTU14.64_amd64.deb
su - zimbra
./libexec/zmproxyconfig -e -w -o -a 8080:80:8443:443 -x both -H zimbra.ampersant
./libexec/zmproxyconfig -e -m -o -i 7143:143:7993:993 -p 7110:110:7995:995 -H zimbra.ampersant
zmprov ms zimbra.ampersant zimbraMailReferMode reverse-proxied
zmmailboxdctl restart
zmprov ms zimbra.ampersant +zimbraServiceEnabled memcached
zmcontrol restart


Now upgrade Zimbra to the latest edition available for Ubuntu 14.04

Note! Zimbra introduced their own repository, so before starting the upgrade make sure you can fetch GPG keys from keyserver.ubuntu.com:11371 and that your server can reach the web (through the proxy, if any). And just to be sure, rebuild the apt cache:

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9BE6ED79
sudo apt-get clean
cd /var/lib/apt
sudo mv lists lists.old
sudo mkdir -p lists/partial
sudo apt-get clean
sudo apt-get update
cd /tmp/ && wget https://files.zimbra.com/downloads/8.7.1_GA/zcs-8.7.1_GA_1670.UBUNTU14_64.20161025045105.tgz
tar xvzf zcs-8.7.1_GA_1670.UBUNTU14_64.20161025045105.tgz
cd zcs-8.7.1_GA_1670.UBUNTU14_64.20161025045105
./install.sh

If you run into any other issues please first check the installation log

tail -f /tmp/install.log.*

If you successfully finish the upgrade, stop Zimbra before you upgrade Ubuntu

/etc/init.d/zimbra stop


Next upgrade Ubuntu (the do-release-upgrade tool is in the update-manager-core package)

do-release-upgrade


Only if you are running a very old kernel! Revert back to Upstart

apt-get install upstart-sysv


Get the latest build of Zimbra for Ubuntu 16.04

cd /tmp/ && wget https://files.zimbra.com/downloads/8.7.1_GA/zcs-8.7.1_GA_1670.UBUNTU16_64.20161025045114.tgz


Next extract the installation files

tar -xvzf zcs-8.7.1_GA_1670.UBUNTU16_64.20161025045114.tgz


And start repairing the old install so you can do an in-place upgrade. You will get some alerts, but we work around them anyway

cd zcs-8.7.1_GA_1670.UBUNTU16_64.20161025045114
echo "deb [arch=amd64] https://repo.zimbra.com/apt/87 xenial zimbra
deb-src [arch=amd64] https://repo.zimbra.com/apt/87 xenial zimbra" > /etc/apt/sources.list.d/zimbra.list
apt-get update
apt-get install zimbra-core-components
apt-get -f install
dpkg -i packages/*.deb
apt-get -f install
./install.sh --softwareonly #DO NOT CHECK DATABASE INTEGRITY!

Started 1:10 -> finished

Benchmarking Linux bonding with cheap NICs

For a couple of years I have been using bonding to achieve load balancing, higher bandwidth and redundancy on Linux (and BSD, Windows, Solaris, ...). At home this can be rather challenging, since lots of consumer switches don't understand all the bonding methods and Realtek adapters have the unpleasant habit of keeping the spoofed MAC address after a reboot, which messes things up.

For this setup I will use a bunch of cheap Realtek and Intel NICs to connect 2 PCs, with 4 NICs on each side, and benchmark the actual speed.

I already mentioned that my Realtek adapters tend to remember the spoofed MAC address across a reboot, which causes udev to give the adapters a new name and renders your config worthless. To prevent this I added some lines to /etc/network/interfaces that start and stop the bond and load and unload the bonding module, so the NICs get their factory MAC address back before rebooting.

This is my basic config using bonding mode 0 (round-robin). For the other benchmarks I only changed mode=, and for mode=2 and mode=4 with L3+L4 hashing I uncommented the xmit_hash_policy variable. All benchmarked modes give fault tolerance and load balancing (albeit not always visible in the results).

# The primary network interface
auto bond0
iface bond0 inet static
pre-up modprobe bonding mode=0 miimon=100 #xmit_hash_policy=layer3+4
pre-up ip link set enp4s0 master bond0
pre-up ip link set enp6s0 master bond0
pre-up ip link set enp8s0 master bond0
pre-up ip link set enp9s0 master bond0
mtu 9000
address 10.0.0.1
netmask 255.255.255.0
up /bin/true
down /bin/true
post-down ip link set enp4s0 nomaster
post-down ip link set enp6s0 nomaster
post-down ip link set enp8s0 nomaster
post-down ip link set enp9s0 nomaster
post-down ip link set dev bond0 down
post-down rmmod bonding

Mode=0 is only good for connecting 2 systems with multiple NICs directly to each other to achieve a single connection with a higher bandwidth. In switched environments, or even worse in a bridge, this method will really mess up your connections. In a switched connection you will see random packet loss and out-of-order packets. In a bridge you are lucky if you even get some packets over.

To get the most out of this I set the MTU to 9000 (jumbo frames) and connected 2 systems directly to each other. My NICs are all auto sensing so I didn't have to use crossed cables.

I used these scripts to run multiple instances of iperf (network benchmark tool) in parallel https://sandilands.info/sgordon/multiple-iperf-instances
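
If you don't want to use those scripts, a hypothetical by-hand equivalent looks like this (four iperf servers on one host, four clients on the other; 10.0.0.1 is the bond address from the config above):

*on the receiving side*
for p in 5001 5002 5003 5004; do iperf -s -p $p & done
*on the sending side*
for p in 5001 5002 5003 5004; do iperf -c 10.0.0.1 -p $p -t 30 & done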

mode 0 (Round Robin): single connection 3.14 Gbit/s, 4 simultaneous connections 3.157 Gbit/s total, ethtool advertised speed 4000Mb/s. Where to use: inter-system connection only.

mode 2 (XOR): single connection 719 Mbit/s, 4 simultaneous connections 722 Mbit/s total, ethtool advertised speed 4000Mb/s. Where to use: inter-system connection OR on a switch that only supports static LAGs.

mode 2 (XOR L3+L4): single connection 661 Mbit/s, 4 simultaneous connections 2.107 Gbit/s total, ethtool advertised speed 4000Mb/s. Where to use: inter-system connection OR on an L3 switch that only supports static LAGs (...).

mode 4 (LACP L2): single connection 730 Mbit/s, 4 simultaneous connections 735 Mbit/s total, ethtool advertised speed 4000Mb/s. Where to use: inter-system connection OR on an L2 managed switch that supports LACP.

mode 4 (LACP L3+L4): single connection 725 Mbit/s, 4 simultaneous connections 1.484 Gbit/s total, ethtool advertised speed 4000Mb/s. Where to use: inter-system connection OR on an L3 switch that supports LACP.

As you can see mode=0 wins the benchmark, but as I already said that comes with a price. Netgear, for instance, recommends mode=2 for unmanaged switches or switches that can only handle static LAGs, and mode=4 for switches that support LACP.

But in real life you should always use LACP, since it is the most robust method out there; if you really need higher single-connection speed you will have to invest in 10, 25, 40 or 100 Gbit connections.

LACP can be combined with any form of hashing: the MAC address (L2), the IP address (L3) or the UDP/TCP session (L4). This isn't only the case for Linux; Solaris and derivatives like SmartOS also give you the option to combine any of these 3 hashing methods in your LACP aggregate. In Solaris L4 hashing is the default, in Linux it is L2.

You can see in my results that running multiple iperf sessions on different ports really does make a difference with L4 hashing, since we have 4 different TCP sessions. In the real world LACP will rarely be used for single-connection configurations. If you use LACP underneath a bridge for your hypervisor, for instance, all your VMs and containers will have their own MAC address, IP address and UDP/TCP sessions, so all your physical connections will actually be used and you will get a higher total bandwidth (albeit hard to benchmark), but you will never get more than 1 Gbit per session.
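
To check which bonding mode and xmit hash policy are actually active on a running bond (and where the advertised 4000Mb/s in the results above comes from):

cat /proc/net/bonding/bond0
ethtool bond0 | grep Speed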

To finish, one mode that doesn't have anything to do with load balancing but can be nice is mode=1 (active-backup). I used this in the past to set up a fiber connection as the primary link with a WiFi link as backup. In case the fiber stops working, traffic is sent over the WiFi link. Of course this kind of behaviour can be achieved by using STP as well if you have a managed switch on both ends.
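
A minimal sketch of such an active-backup config, assuming enp4s0 is the fiber NIC and wlan0 the WiFi backup (wireless interfaces often need extra work before they can be enslaved); the rest of the stanza stays the same as above:

pre-up modprobe bonding mode=1 miimon=100 primary=enp4s0
pre-up ip link set enp4s0 master bond0
pre-up ip link set wlan0 master bond0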

Fully encrypted ZFS root on Linux using LUKS (Ubuntu 16.04)

Since I wanted the joy of compression, fast resilvering, caches, ... on my workstation, I started looking at ZFS with LZ4 compression on top of a bunch of LUKS devices. I used 6 * 128GB MLC SSDs and put them in this great IcyDock MB996SP-6SB backplane.

Plain ZFS with an eCryptfs home folder on top wasn't a good solution, because that would render the LZ4 compression useless: if you can compress encrypted data, your encryption method is useless...

So these are the steps I took to get it working:

Get an Ubuntu desktop live USB/CD and boot it. Next, install the necessary packages

sudo apt-get update
sudo apt-get install cryptsetup debootstrap zfsutils-linux mdadm

Get started by making a DOS-type partition table on /dev/sda with an 80MB primary partition (this will later hold the /boot RAID) and a second primary partition that uses all that is left; a minimal parted sketch follows below. Afterwards you can copy the partition table to the other disks using a simple dd command that copies only the first sector.
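
For example (an assumed layout; adjust the sizes to taste):

parted /dev/sda
mklabel msdos
mkpart primary ext2 1MiB 80MiB
mkpart primary 80MiB 100%
set 1 raid on
quit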

sudo dd if=/dev/sda of=/dev/sdb bs=512 count=1

Please notice that this only works for a DOS table with primary partitions. For logical partitions and GPT you will have to use something more advanced:

Backup

sfdisk -d /dev/sda > part_table

Restore

sfdisk /dev/sda < part_table

But this is what I did, since I only used primary partitions anyway.

dd if=/dev/sda of=/dev/sdb bs=512 count=1
dd if=/dev/sda of=/dev/sdc bs=512 count=1
dd if=/dev/sda of=/dev/sdd bs=512 count=1
dd if=/dev/sda of=/dev/sdf bs=512 count=1
dd if=/dev/sda of=/dev/sdg bs=512 count=1

Next encrypt your second primary partitions using luksFormat. By default LUKS will use AES with a 256-bit key. You can benchmark to see which cipher is the fastest on your hardware. Either way, make sure your CPU has hardware AES support (AES-NI) and that it is enabled in the BIOS/EFI, otherwise you will have a lot of overhead!

Benchmark

cryptsetup benchmark

Check if the AES instructions are available (works on AMD and Intel CPUs)

grep -m1 -o aes /proc/cpuinfo

The actual formatting with the default settings (in my case the ones with the best performance)

cryptsetup luksFormat /dev/sda2
cryptsetup luksFormat /dev/sdb2
cryptsetup luksFormat /dev/sdc2
cryptsetup luksFormat /dev/sdd2
cryptsetup luksFormat /dev/sdf2
cryptsetup luksFormat /dev/sdg2

Now Open (decrypt) to use them

cryptsetup luksOpen /dev/sdg2 crypt_sdg2
cryptsetup luksOpen /dev/sde2 crypt_sde2
cryptsetup luksOpen /dev/sdf2 crypt_sdf2
cryptsetup luksOpen /dev/sdd2 crypt_sdd2
cryptsetup luksOpen /dev/sdc2 crypt_sdc2
cryptsetup luksOpen /dev/sdb2 crypt_sdb2
cryptsetup luksOpen /dev/sda2 crypt_sda2

And create a zpool with 3 mirrors (VDEVs) to get a RAID10. I know you lose a lot of space this way, but LZ4 makes up a little for that, and the increased speed, reliability and, most importantly, resilvering time make mirrors preferable over RAIDZ[1-3].

As an aside, I see a lot of tutorials that ask you to set the ashift (sector alignment) to 12 (4K native), but since all my SSDs report a physical and logical sector size of 512 (ashift=9) I don't see why you would want to do that. Anyway, ZFS should detect the sector size and align automatically.
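
To check what your drives actually report:

lsblk -o NAME,PHY-SEC,LOG-SEC
*if you disagree with the autodetection, you can still force it by adding -o ashift=9 (or 12) to the zpool create command below*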

zpool create rpool mirror crypt_sda2 crypt_sdb2 mirror crypt_sdc2 crypt_sde2 mirror crypt_sdd2 crypt_sdg2

Next we will enable LZ4 compression on the pool (the best/fastest algorithm available; we set it explicitly because compression=on falls back to the older lzjb default on this ZFS version)

zfs set compression=lz4 rpool

Now create a dataset for /, make it bootable, make sure that the pool itself isn't mounted, and export (stop) the entire pool

zfs create rpool/ROOT
zpool set bootfs=rpool/ROOT rpool
zfs set mountpoint=none rpool
zfs set mountpoint=/ rpool/ROOT
zpool export rpool

Next create a RAID10 of the first primary partitions to use as /boot

mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sdf1 /dev/sdg1
mkfs.ext4 /dev/md0

Next import (start) your pool and redirect all mount points to /mnt

zpool import -R /mnt rpool

And prepare a basic Ubuntu 16.04

debootstrap xenial /mnt

Get the UUIDs

blkid | grep LUKS

Next put the UUIDs of all your LUKS containers in /etc/crypttab inside your target directory.

echo "crypt_sda2 UUID=b86435dd-71cd-45cf-abde-ee373554915b none luks" >> /mnt/etc/crypttab
echo "crypt_sdb2 UUID=8d370731-8b6c-4789-9d15-68b5c6a8d74f none luks" >> /mnt/etc/crypttab
echo "crypt_sdc2 UUID=260bb228-a1b8-4739-8ce7-a4671b4d723b none luks" >> /mnt/etc/crypttab
echo "crypt_sdd2 UUID=9e35fc89-bd1c-4db6-b9fc-15d311652f0b none luks" >> /mnt/etc/crypttab
echo "crypt_sde2 UUID=35129e92-3fb6-4118-aada-5dc2be628c05 none luks" >> /mnt/etc/crypttab
echo "crypt_sdg2 UUID=3ef442d5-ed6e-4a4a-bcce-84f3c31acf32 none luks" >> /mnt/etc/crypttab

Set your hostname

echo "SoloTheatre" > /mnt/etc/hostname
echo "127.0.1.1 SoloTheatre" >> /mnt/etc/hosts

Prepare a chroot environment and enter it

mount /dev/md0 /mnt/boot
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt /bin/bash --login
hostname SoloTheatre

Force initramfs to be cryptsetup aware

echo "export CRYPTSETUP=y" >> /usr/share/initramfs-tools/conf-hooks.d/forcecryptsetup

Now add all the LUKS containers to an initramfs config to make sure they are picked up and presented for decryption when booting

echo "target=crypt_sda2,source=UUID=b86435dd-71cd-45cf-abde-ee373554915b,key=none,rootdev,discard" >> /etc/initramfs-tools/conf.d/cryptroot
echo "target=crypt_sdb2,source=UUID=8d370731-8b6c-4789-9d15-68b5c6a8d74f,key=none,rootdev,discard" >> /etc/initramfs-tools/conf.d/cryptroot
echo "target=crypt_sdc2,source=UUID=260bb228-a1b8-4739-8ce7-a4671b4d723b,key=none,rootdev,discard" >> /etc/initramfs-tools/conf.d/cryptroot
echo "target=crypt_sdd2,source=UUID=9e35fc89-bd1c-4db6-b9fc-15d311652f0b,key=none,rootdev,discard" >> /etc/initramfs-tools/conf.d/cryptroot
echo "target=crypt_sdf2,source=UUID=35129e92-3fb6-4118-aada-5dc2be628c05,key=none,rootdev,discard" >> /etc/initramfs-tools/conf.d/cryptroot
echo "target=crypt_sdg2,source=UUID=3ef442d5-ed6e-4a4a-bcce-84f3c31acf32,key=none,rootdev,discard" >> /etc/initramfs-tools/conf.d/cryptroot

Link all the LUKS containers, since update-grub doesn't look in the default /dev/mapper/ directory

ln -sf /dev/mapper/crypt_sda2 /dev/crypt_sda2
ln -sf /dev/mapper/crypt_sdb2 /dev/crypt_sdb2
ln -sf /dev/mapper/crypt_sdc2 /dev/crypt_sdc2
ln -sf /dev/mapper/crypt_sdd2 /dev/crypt_sdd2
ln -sf /dev/mapper/crypt_sdf2 /dev/crypt_sdf2
ln -sf /dev/mapper/crypt_sdg2 /dev/crypt_sdg2

Next set up apt repositories

echo "deb http://be.archive.ubuntu.com/ubuntu/ xenial main universe restricted multiverse
deb http://security.ubuntu.com/ubuntu/ xenial-security universe multiverse main restricted
deb http://be.archive.ubuntu.com/ubuntu/ xenial-updates universe multiverse main restricted
" > /etc/apt/sources.list

And install the bare necessities to get started. Replace ubuntu-minimal with ubuntu-desktop if you are planning to use this system as a desktop computer.

apt-get update
apt-get install mdadm zfsutils-linux zfs-initramfs grub-pc linux-image-generic ubuntu-minimal cryptsetup
#install grub to all the disks you used for the /boot mdadm RAID
apt-get upgrade
apt-get dist-upgrade

When you see the ncurses window for grub-pc, make sure grub uses ZFS as root and select all your physical disks to install grub on (/dev/sda, /dev/sdb, /dev/sdc, /dev/sdd, /dev/sdf, /dev/sdg). If you didn't get this window you can run:

sudo dpkg-reconfigure grub-pc


Set the UUID of your md raid device and all the LUKS containers you used for ZFS in /etc/fstab

UUID=c6c15ae8-2453-4e7e-8013-d5ce88d97800 /boot auto defaults 0 0
UUID=bff05b3e-bbec-4aba-a4d3-9d6f8b6f28c9 / zfs defaults 0 0
...
...
...

Force initramfs and grub update

update-initramfs -k all -c
update-grub

Set swap (4G is sufficient on most modern systems)

zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false rpool/swap
mkswap -f /dev/zvol/rpool/swap
echo /dev/zvol/rpool/swap none swap defaults 0 0 >> /etc/fstab

Create a sudo user, then exit the chroot and unmount everything before rebooting into your new environment

adduser USERNAME
usermod -a -G adm,cdrom,sudo,dip,plugdev,lpadmin,sambashare,libvirtd USERNAME
exit
umount /mnt/boot
umount /mnt/dev/pts
umount /mnt/dev
umount /mnt/proc
umount /mnt/sys
zfs umount -a
zpool export rpool
reboot


Getting started with ProjectFiFo inside KVM on Linux (Ubuntu 16.04)

This is a step-by-step guide on how to set up Project FiFo in a set of VMs using KVM as the hypervisor. Since SmartOS as a bare-metal hypervisor uses KVM too, this gives us the following situation:


Before getting started:

Make sure the Google DNS servers (8.8.8.8, 8.8.4.4) are accessible from your network, since we will use them as the resolvers to keep things easy.
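
A quick sanity check from the host (dig is in the dnsutils package):

ping -c 1 8.8.8.8
dig @8.8.8.8 google.com +short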

Getting started:

I will run the VMs on a standard Ubuntu 16.04 LTS desktop setup. You will need to install KVM and the very easy virt-manager before we can start.

sudo apt install qemu-kvm libvirt-bin virt-manager
sudo adduser $USER libvirtd

Now open up virt-manager and get familiar with the interface

To get the latest SmartOS VMware image to boot from, do this; after extracting, we convert the vmdk (VMware disk) to a KVM-compatible qcow2 disk.

cd /tmp
wget https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/smartos-latest.vmwarevm.tar.bz2
tar -xjf smartos-latest.vmwarevm.tar.bz2
cd SmartOS.vmwarevm/
qemu-img convert -c -p -O qcow2 SmartOS.vmdk SmartOS.qcow2
sudo mv *.qcow2 /var/lib/libvirt/images/

Now create a first node VM in the virt-manager

Make sure you give every node more than one CPU, since LeoFS has a known issue with single-core machines. If you can only give one core, check https://github.com/leo-project/leofs/issues/477

For selecting the fixed IPs for my zones I refer to the KVM DHCP range

...
<ip address="192.168.122.1" netmask="255.255.255.0">
  <dhcp>
    <range start="192.168.122.128" end="192.168.122.254" />
  </dhcp>
</ip>
<route address="192.168.222.0" prefix="24" gateway="192.168.122.2" />
<ip family="ipv6" address="2001:db8:ca2:2::1" prefix="64" />
<route family="ipv6" address="2001:db8:ca2:3::" prefix="64" gateway="2001:db8:ca2:2::2"/>
<route family="ipv6" address="2001:db9:4:1::" prefix="64" gateway="2001:db8:ca2:2::3" metric='2'>
</route>
...
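
You can inspect or adjust this definition of libvirt's default network with virsh and then restart the network to apply the change (a quick sketch):

sudo virsh net-dumpxml default
sudo virsh net-edit default
sudo virsh net-destroy default && sudo virsh net-start default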

I will use 192.168.122.[2-127] as the fixed IP pool

Since I had to partially start over, you will see my node1 is 192.168.122.5 and node2 is 192.168.122.6. You can choose your own IPs, but just make sure you keep track of the changes you make and avoid IP conflicts!


After you click Finish the machine will automatically boot; you won't be able to type anything, since you will get ghost characters all the time. If you get an alert, this probably means that virtualization isn't enabled in the BIOS. Either way, SmartOS only works on Intel CPUs, and FiFo adds an extra requirement (AVX), which means your CPU needs to be Sandy Bridge or newer.


Now stop the VM. To get rid of this we switch the display to VNC instead of Spice, and we will also add an extra disk to install our zones on. Also make sure you select "Copy host CPU configuration" so you can actually use KVM inside your KVM...
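
To confirm the host actually allows nested virtualization for this (shown for Intel; it should print Y or 1):

cat /sys/module/kvm_intel/parameters/nested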


Now start the machine and fill in the configuration questions.


After pressing y + enter one last time, the machine will do the configuration and boot into Triton SmartOS. Please repeat these steps for the 2nd node.


Now log in on both nodes from a single set of SSH terminals to proceed with the FiFo manual https://docs.project-fifo.net/v0.8.3/docs


We'll start with setting up the LeoFS zones (one on every node) https://docs.project-fifo.net/docs/installing-leofs#section-step-1-create-zones

Do this on both nodes

We are now importing a basic image (container) that we will use for our storage and management zones

imgadm update
imgadm import 1bd84670-055a-11e5-aaa2-0346bb21d5a1
imgadm list | grep 1bd84670-055a-11e5-aaa2-0346bb21d5a1


leo-zone1.json (for node1)

{
  "autoboot": true,
  "brand": "joyent",
  "image_uuid": "1bd84670-055a-11e5-aaa2-0346bb21d5a1",
  "max_physical_memory": 3072,
  "cpu_cap": 100,
  "alias": "1.leofs",
  "quota": "80",
  "resolvers": [
    "8.8.8.8",
    "8.8.4.4"
  ],
  "nics": [
    {
      "interface": "net0",
      "nic_tag": "admin",
      "ip": "192.168.122.2",
      "gateway": "192.168.122.1",
      "netmask": "255.255.255.0"
    }
  ]
}

leo-zone2.json (for node2)

{
  "autoboot": true,
  "brand": "joyent",
  "image_uuid": "1bd84670-055a-11e5-aaa2-0346bb21d5a1",
  "max_physical_memory": 512,
  "cpu_cap": 100,
  "alias": "2.leofs",
  "quota": "20",
  "resolvers": [
    "8.8.8.8",
    "8.8.4.4"
  ],
  "nics": [
    {
      "interface": "net0",
      "nic_tag": "admin",
      "ip": "192.168.122.3",
      "gateway": "192.168.122.1",
      "netmask": "255.255.255.0"
    }
  ]
}

Now on node1 do this (you will have to paste the leo-zone1.json from above). If you don't know vi, just press "i", paste your content (select and middle mouse button to paste), then type ":wq" + enter.

cd /opt
vi leo-zone1.json
vmadm create -f leo-zone1.json

Now on node2 do this (you will have to paste the leo-zone2.json from above)

cd /opt
vi leo-zone2.json
vmadm create -f leo-zone2.json


Use/save the UUID that follows "successfully created VM"; you can retrieve it again later with vmadm list

on node 1 enter the LeoFS-zone1

zlogin 59871103-cb76-c653-e089-b08bc25503ae
curl -O https://project-fifo.net/fifo.gpg
gpg --primary-keyring /opt/local/etc/gnupg/pkgsrc.gpg --import < fifo.gpg
gpg --keyring /opt/local/etc/gnupg/pkgsrc.gpg --fingerprint
VERSION=rel
cp /opt/local/etc/pkgin/repositories.conf /opt/local/etc/pkgin/repositories.conf.original
echo "http://release.project-fifo.net/pkg/${VERSION}" >> /opt/local/etc/pkgin/repositories.conf
pkgin -fy up

The following 2 commands will ask for confirmation; just press y + enter

pkgin install coreutils sudo gawk gsed
pkgin install leo_manager leo_gateway leo_storage

 Now repeat this on node2 for LeoFS-zone2 but replace the last command so you only install the leo_manager

pkgin install leo_manager

Next we are going to configure our LeoFS-zones

vi /opt/local/leo_manager/etc/leo_manager.conf

I only list the lines that need to be changed; please don't remove or alter the other lines in the config files

This should contain

nodename = manager_0@192.168.122.2
distributed_cookie = bUq8z5aEDCVMEU3W
manager.partner = manager_1@192.168.122.3

where the IP in nodename is the IP you chose in leo-zone1.json, the IP in manager.partner is the one you chose in leo-zone2.json, and the distributed_cookie is the result of

openssl rand -base64 32 | fold -w16 | head -n1

for node2 this becomes

nodename = manager_1@192.168.122.3
manager.mode = slave
distributed_cookie = bUq8z5aEDCVMEU3W
manager.partner = manager_0@192.168.122.2

Now configure the gateway on node1

vi /opt/local/leo_gateway/etc/leo_gateway.conf
## Name of Manager node(s)
managers = [manager_0@192.168.122.2, manager_1@192.168.122.3]

And the storage on node1

vi /opt/local/leo_storage/etc/leo_storage.conf
## Name of Manager node(s)
managers = [manager_0@192.168.122.2, manager_1@192.168.122.3]
## Cookie for distributed node communication. All nodes in the same cluster
## should use the same cookie or they will not be able to communicate.
distributed_cookie = bUq8z5aEDCVMEU3W

Check the cookies on both nodes to make sure they are the same in every config file

grep cookie /opt/local/leo_*/etc/leo_*.conf

And now let's start the services, first in leo-zone1 on node1 and then repeat this in leo-zone2 on node2

svcadm enable epmd
svcadm enable leofs/manager

If everything went fine, issue this command and you should get the following result

leofs-adm status


Next we will enable the storage service on node1

svcadm enable leofs/storage

And verify if this started correctly

leofs-adm status


Next start the storage on node1

leofs-adm start


Next start the gateway (still on node 1 in the leoFS-zone1)

svcadm enable leofs/gateway
leofs-adm status


OK you can exit the LeoFS zones on both nodes to continue and install the FiFo manager https://docs.project-fifo.net/v0.8.3/docs/fifo-overview

exit

On node 1 we are going to set up the FiFo zone

in /opt create the json file

cd /opt
vi setupfifo.json

setupfifo.json

{
  "autoboot": true,
  "brand": "joyent",
  "image_uuid": "1bd84670-055a-11e5-aaa2-0346bb21d5a1",
  "delegate_dataset": true,
  "indestructible_delegated": true,
  "max_physical_memory": 3072,
  "cpu_cap": 100,
  "alias": "fifo",
  "quota": "40",
  "resolvers": [
    "8.8.8.8",
    "8.8.4.4"
  ],
  "nics": [
    {
      "interface": "net0",
      "nic_tag": "admin",
      "ip": "192.168.122.4",
      "gateway": "192.168.122.1",
      "netmask": "255.255.255.0"
    }
  ]
}
vmadm create -f setupfifo.json


Now login to the new FiFo zone

zlogin b8c7c39e-3ede-cf20-cdbc-b9f92fbd4a7d

First we need to configure the delegated dataset to be mounted on /data; we can do this from within the zone with the following command:

zfs set mountpoint=/data zones/$(zonename)/data

Now install the packages

cd /data
curl -O https://project-fifo.net/fifo.gpg
gpg --primary-keyring /opt/local/etc/gnupg/pkgsrc.gpg --import < fifo.gpg
gpg --keyring /opt/local/etc/gnupg/pkgsrc.gpg --fingerprint
echo "http://release.project-fifo.net/pkg/rel" >> /opt/local/etc/pkgin/repositories.conf
pkgin -fy up
pkgin install fifo-snarl fifo-sniffle fifo-howl fifo-cerberus

Now you can enable and start the just installed services

svcadm enable epmd
svcadm enable snarl
svcadm enable sniffle
svcadm enable howl
svcs epmd snarl sniffle howl


The last step is to create an admin user and organisation; this can be done with one simple command:

# snarl-admin init <realm> <org> <role> <user> <pass>
snarl-admin init default test Users admin ******


Now let your FiFo zone connect to the previously created LeoFS zones

sniffle-admin init-leofs 192.168.122.2.xip.io


exit

Next we are installing the zlogin and chunter services https://docs.project-fifo.net/docs/chunter

Make sure you are logged out of any zones on both nodes

Chunter is Project FiFo's hypervisor interaction service. It runs on each hypervisor controlled by Project-FiFo and interacts with SmartOS to create, update, and destroy VMs. Chunter also collects VM and performance data to report back to Howl.

On both SmartOS nodes run the following commands to install FiFo's zlogin (zdoor) service

VERSION=rel
cd /opt
curl -O http://release.project-fifo.net/gz/${VERSION}/fifo_zlogin-latest.gz
gunzip fifo_zlogin-latest.gz
sh fifo_zlogin-latest


Next on both nodes install chunter

VERSION=rel
cd /opt
curl -O http://release.project-fifo.net/gz/${VERSION}/chunter-latest.gz
gunzip chunter-latest.gz
sh chunter-latest


Now on both SmartOS nodes start the just installed services

svcadm enable epmd
svcs epmd
svcadm enable fifo/zlogin
svcs fifo/zlogin
svcadm enable chunter
svcs chunter


Once the service is running, FiFo will auto-discover the node, and after about a minute the SmartOS node will appear in the FiFo web interface to be managed.

Now you can go and check out the web interface by browsing to the FiFo zone IP (192.168.122.4)


Under hypervisors you should see both SmartOS nodes (and see that we already over provisioned node1)


Under datasets you can download some pre-built base images. I opted for Ubuntu 16.04 in LX (container) and KVM (VM).

Clicking a dataset marks it for download. Don't select too many or you will have to wait for all of them to download.


Now set up some basic things to be able to create a container or VM. Start by adding a basic package, ip range and network.

Make sure to select admin as the network tag since this is the only network we have created so far


Next connect the range to the newly created network


Next we will create a container to test with


This was a basic tutorial on how to get started with SmartOS and project FiFo inside KVM on Linux. I hope you enjoyed it!

For more details on how to use the web interface refer to https://docs.project-fifo.net/docs/cerberus-general

For statistics on your machines set up a DalmatinerDB zone https://docs.project-fifo.net/v0.8.3/docs/ddb-installation
