This is a step-by-step guide on how to set up Project FiFo in a set of VMs using KVM as the hypervisor. Since SmartOS as a bare-metal hypervisor uses KVM as well, this gives us the following situation:
Before getting started:
Make sure the Google DNS servers (8.8.8.8 and 8.8.4.4) are accessible from your network, since we will use them for DNS resolution to keep things easy.
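A quick way to check this from the Ubuntu host before you start (assuming ping and dig, from the dnsutils package, are available):
ping -c 1 8.8.8.8
dig +short google.com @8.8.8.8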
Getting started:
I will run the VMs on a standard Ubuntu 16.04 LTS desktop setup. You will need to install KVM and the easy-to-use virt-manager before we can start.
sudo apt install qemu-kvm libvirt-bin virt-manager
sudo adduser $USER libvirtd
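The group change only takes effect in new sessions, so either log out and back in, or start a shell with the new group before opening virt-manager:
newgrp libvirtd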
Now open up virt-manager and get familiar with the interface
To get the latest SmartOS VMware image to boot from, run the following; after extracting we convert the vmdk (VMware disk) to a KVM-compatible qcow2 disk.
cd /tmp
wget https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/smartos-latest.vmwarevm.tar.bz2
tar -xjf smartos-latest.vmwarevm.tar.bz2
cd SmartOS.vmwarevm/
qemu-img convert -c -p -O qcow2 SmartOS.vmdk SmartOS.qcow2
sudo mv *.qcow2 /var/lib/libvirt/images/
Now create the first node VM in virt-manager.
Make sure you give every node more than one CPU, since LeoFS has a known issue with single-core machines. If you can only give one core, check https://github.com/leo-project/leofs/issues/477
For selecting the fixed IPs for my zones I check the DHCP range of the default KVM/libvirt network:
...
<ip address="192.168.122.1" netmask="255.255.255.0">
  <dhcp>
    <range start="192.168.122.128" end="192.168.122.254" />
  </dhcp>
</ip>
<route address="192.168.222.0" prefix="24" gateway="192.168.122.2" />
<ip family="ipv6" address="2001:db8:ca2:2::1" prefix="64" />
<route family="ipv6" address="2001:db8:ca2:3::" prefix="64" gateway="2001:db8:ca2:2::2" />
<route family="ipv6" address="2001:db9:4:1::" prefix="64" gateway="2001:db8:ca2:2::3" metric="2" />
...
I will use 192.168.122.[2-127] as the fixed IP pool.
Since I had to partially start over, you will see my node1 is 192.168.122.5 and node2 is 192.168.122.6. You can choose your own IPs, but make sure you keep track of the changes you make and avoid IP conflicts!
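If you want to inspect or change the default network definition shown above, virsh can dump and edit it (standard libvirt commands, just a hint rather than a required step):
virsh net-dumpxml default
virsh net-edit default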

After you click Finish the machine will boot automatically, but you won't be able to type anything since you will get ghost characters all the time. If you get an alert, this probably means that virtualization isn't enabled in the BIOS. Either way, SmartOS only works on Intel CPUs, and FiFo adds an extra requirement (AVX), which means your CPU needs to be Sandy Bridge or newer.
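You can verify these requirements on the Linux host before going any further (kvm-ok comes from the cpu-checker package):
egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means the CPU supports hardware virtualization
grep -m1 -o avx /proc/cpuinfo        # prints "avx" if the CPU supports AVX
kvm-ok                               # confirms KVM acceleration can actually be used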

Now stop the VM. To get rid of the ghost characters we switch the display from Spice to VNC, and we also add an extra disk to install our zones on. Make sure you select "Copy host CPU configuration" so you can actually use KVM inside your KVM...
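If you prefer the command line over the virt-manager dialogs, the extra zones disk can also be created and attached with qemu-img and virsh; a rough sketch, assuming the VM is named node1 and 40G is enough for your tests:
qemu-img create -f qcow2 /var/lib/libvirt/images/node1-zones.qcow2 40G
virsh attach-disk node1 /var/lib/libvirt/images/node1-zones.qcow2 vdb --driver qemu --subdriver qcow2 --config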

Now start the machine and fill in the configuration questions.

After pressing y + enter one last time, the machine will apply the configuration and boot into Triton SmartOS. Please repeat these steps for the 2nd node.

Now log in to both nodes over SSH (one terminal per node) to proceed with the FiFo manual: https://docs.project-fifo.net/v0.8.3/docs
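With the IPs used in this guide that boils down to the following (adjust to your own addresses; the password is the root password you set during the SmartOS configuration):
ssh root@192.168.122.5
ssh root@192.168.122.6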
We'll start with setting up the LeoFS zones (one on every node) https://docs.project-fifo.net/docs/installing-leofs#section-step-1-create-zones
Do this on both nodes
We now import a base (container) image that we will use for our storage and management zones.
imgadm update
imgadm import 1bd84670-055a-11e5-aaa2-0346bb21d5a1
imgadm list | grep 1bd84670-055a-11e5-aaa2-0346bb21d5a1
leo-zone1.json (for node1)
{
"autoboot": true,
"brand": "joyent",
"image_uuid": "1bd84670-055a-11e5-aaa2-0346bb21d5a1",
"max_physical_memory": 3072,
"cpu_cap": 100,
"alias": "1.leofs",
"quota": "80",
"resolvers": [
"8.8.8.8",
"8.8.4.4"
],
"nics": [
{
"interface": "net0",
"nic_tag": "admin",
"ip": "192.168.122.2",
"gateway": "192.168.122.1",
"netmask": "255.255.255.0"
}
]
}
leo-zone2.json (for node2)
{
"autoboot": true,
"brand": "joyent",
"image_uuid": "1bd84670-055a-11e5-aaa2-0346bb21d5a1",
"max_physical_memory": 512,
"cpu_cap": 100,
"alias": "2.leofs",
"quota": "20",
"resolvers": [
"8.8.8.8",
"8.8.4.4"
],
"nics": [
{
"interface": "net0",
"nic_tag": "admin",
"ip": "192.168.122.3",
"gateway": "192.168.122.1",
"netmask": "255.255.255.0"
}
]
}
Now on node1 do this (you will have to paste the leo-zone1.json from above). If you don't know vi: press "i" to enter insert mode, paste your content (select and middle mouse button to paste), then press Esc and type ":wq" + enter to save and quit.
cd /opt
vi leo-zone1.json
vmadm create -f leo-zone1.json
Now on node2 do this (you will have to paste the leo-zone2.json from above)
cd /opt
vi leo-zone2.json
vmadm create -f leo-zone2.json
Save the UUID that follows "Successfully created VM" in the output; you can always retrieve it again by issuing vmadm list.
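If you only want the UUID of a specific zone you can also look it up by its alias, for example (using the alias from leo-zone1.json):
vmadm lookup -1 alias=1.leofs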
On node1, enter LeoFS-zone1 (using the UUID you saved):
zlogin 59871103-cb76-c653-e089-b08bc25503ae
curl -O https://project-fifo.net/fifo.gpg
gpg --primary-keyring /opt/local/etc/gnupg/pkgsrc.gpg --import < fifo.gpg
gpg --keyring /opt/local/etc/gnupg/pkgsrc.gpg --fingerprint
VERSION=rel
cp /opt/local/etc/pkgin/repositories.conf /opt/local/etc/pkgin/repositories.conf.original
echo "http://release.project-fifo.net/pkg/${VERSION}" >> /opt/local/etc/pkgin/repositories.conf
pkgin -fy up
The following two commands will ask for confirmation; just press y + enter.
pkgin install coreutils sudo gawk gsed
pkgin install leo_manager leo_gateway leo_storage
Now repeat this on node2 for LeoFS-zone2, but replace the last command so you only install leo_manager:
pkgin install leo_manager
Next we are going to configure our LeoFS-zones
vi /opt/local/leo_manager/etc/leo_manager.conf
I only list the lines that need to be changed; please don't remove or alter the other lines in the config files.
On node1 this should contain:
nodename = manager_0@192.168.122.2
distributed_cookie = bUq8z5aEDCVMEU3W
manager.partner = manager_1@192.168.122.3
where the IP in nodename is the IP you chose in leo-zone1.json, the IP in manager.partner is the one you chose in leo-zone2.json, and the distributed_cookie is the result of
openssl rand -base64 32 | fold -w16 | head -n1
For node2 this becomes
nodename = manager_1@192.168.122.3
manager.mode = slave
distributed_cookie = bUq8z5aEDCVMEU3W
manager.partner = manager_0@192.168.122.2
Now configure the gateway on node1
vi /opt/local/leo_gateway/etc/leo_gateway.conf
## Name of Manager node(s)
managers = [manager_0@192.168.122.2, manager_1@192.168.122.3]
And the storage on node1
vi /opt/local/leo_storage/etc/leo_storage.conf
## Name of Manager node(s)
managers = [manager_0@192.168.122.2, manager_1@192.168.122.3]
## Cookie for distributed node communication. All nodes in the same cluster
## should use the same cookie or they will not be able to communicate.
distributed_cookie = bUq8z5aEDCVMEU3W
Check the cookies on both nodes to make sure the same cookie is set in every config file:
grep cookie /opt/local/leo_*/etc/leo_*.conf
And now let's start the services, first in leo-zone1 on node1 and then repeat this in leo-zone2 on node2
svcadm enable epmd
svcadm enable leofs/manager
If everything went fine, you should be able to issue this command and get a status overview:
leofs-adm status
Next we will enable the gateway and storage on node1
svcadm enable leofs/storage
And verify that this started correctly:
leofs-adm status
Next start the storage on node1
leofs-adm start
Next start the gateway (still on node1 in LeoFS-zone1)
svcadm enable leofs/gateway
leofs-adm status
OK, you can now exit the LeoFS zones on both nodes to continue and install the FiFo manager: https://docs.project-fifo.net/v0.8.3/docs/fifo-overview
exit
On node1 we are going to set up the FiFo zone.
In /opt create the JSON file:
cd /opt
vi setupfifo.json
setupfifo.json
{
"autoboot": true,
"brand": "joyent",
"image_uuid": "1bd84670-055a-11e5-aaa2-0346bb21d5a1",
"delegate_dataset": true,
"indestructible_delegated": true,
"max_physical_memory": 3072,
"cpu_cap": 100,
"alias": "fifo",
"quota": "40",
"resolvers": [
"8.8.8.8",
"8.8.4.4"
],
"nics": [
{
"interface": "net0",
"nic_tag": "admin",
"ip": "192.168.122.4",
"gateway": "192.168.122.1",
"netmask": "255.255.255.0"
}
]
}
vmadm create -f setupfifo.json
Now log in to the new FiFo zone (again using your own UUID):
zlogin b8c7c39e-3ede-cf20-cdbc-b9f92fbd4a7d
First we need to configure the delegated dataset to be mounted at /data; we can do this from within the zone with the following command:
zfs set mountpoint=/data zones/$(zonename)/data
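You can quickly confirm the dataset is mounted where we expect it:
zfs get mounted,mountpoint zones/$(zonename)/data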
Now install the packages
cd /data
curl -O https://project-fifo.net/fifo.gpg
gpg --primary-keyring /opt/local/etc/gnupg/pkgsrc.gpg --import < fifo.gpg
gpg --keyring /opt/local/etc/gnupg/pkgsrc.gpg --fingerprint
echo "http://release.project-fifo.net/pkg/rel" >> /opt/local/etc/pkgin/repositories.conf
pkgin -fy up
pkgin install fifo-snarl fifo-sniffle fifo-howl fifo-cerberus
Now you can enable and start the services we just installed:
svcadm enable epmd
svcadm enable snarl
svcadm enable sniffle
svcadm enable howl
svcs epmd snarl sniffle howl
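All four services should report a STATE of online; if one ends up in maintenance, the standard SMF troubleshooting command will tell you why and which log file to look at:
svcs -xv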
The last step is to create an admin user and organisation; this can be done with one simple command:
# snarl-admin init <realm> <org> <role> <user> <pass>
snarl-admin init default test Users admin ******
Now let your FiFo zone connect to the previously created LeoFS zones
sniffle-admin init-leofs 192.168.122.2.xip.io
exit
Next we are installing the zlogin and chunter services https://docs.project-fifo.net/docs/chunter
Make sure you are logged out of any zones on both nodes.
Chunter is Project FiFo's hypervisor interaction service. It runs on each hypervisor controlled by Project FiFo, interacts with SmartOS to create, update, and destroy VMs, and collects VM and performance data to report back to Howl.
On both SmartOS nodes, run the following commands to install FiFo's zlogin (zdoor) service
VERSION=rel
cd /opt
curl -O http://release.project-fifo.net/gz/${VERSION}/fifo_zlogin-latest.gz
gunzip fifo_zlogin-latest.gz
sh fifo_zlogin-latest

Next on both nodes install chunter
VERSION=rel
cd /opt
curl -O http://release.project-fifo.net/gz/${VERSION}/chunter-latest.gz
gunzip chunter-latest.gz
sh chunter-latest
Now on both SmartOS nodes start the services we just installed
svcadm enable epmd
svcs epmd
svcadm enable fifo/zlogin
svcs fifo/zlogin
svcadm enable chunter
svcs chunter
Once the service is running, FiFo will auto-discover the node, and after about a minute the SmartOS node will appear in the FiFo web interface, ready to be managed.
Now you can go and check out the web interface by browsing to the FiFo zone IP (192.168.122.4)
Under Hypervisors you should see both SmartOS nodes (and notice that we have already overprovisioned node1).
Under Datasets you can download some pre-built base images. I opted for Ubuntu 16.04 in both LX (container) and KVM (VM) flavours.
Clicking a dataset marks it for download. Don't select too many or you will have to wait for all of them to download.
Now set up some basic things to be able to create a container or VM. Start by adding a basic package, an IP range, and a network.
Make sure to select admin as the network tag, since this is the only network we have created so far.

Next connect the range to the newly created network

Next we will create a container to test with

This was a basic tutorial on how to get started with SmartOS and Project FiFo inside KVM on Linux. I hope you enjoyed it!
For more details on how to use the web interface, refer to https://docs.project-fifo.net/docs/cerberus-general
For statistics on your machines, set up a DalmatinerDB zone: https://docs.project-fifo.net/v0.8.3/docs/ddb-installation