echo "

Simple script to get the IP of a list of hostnames (update)

I made a file called to_find_ip with one hostname per line, and a simple bash script to process the file and print the matching IP for each host.

The script is called get_ip_for_list_of_hostnames.sh (use getent ahostsv4 instead of getent hosts if you only need IPv4 addresses)

#!/bin/bash
# Read hostnames (one per line) from the file given as the first argument
# and print the matching IP for each.
while read -r p; do
  getent hosts "$p" | cut -f1 -d ' '
done < "$1"

To run the script:

bash /tmp/get_ip_for_list_of_hostnames.sh /tmp/to_find_ip
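
If you only need IPv4 addresses, a variant of the loop could use getent ahostsv4 instead (a minimal sketch; ahostsv4 prints several lines per host, so only the first one is kept):

#!/bin/bash
# Print the first IPv4 address for each hostname in the input file
while read -r p; do
  getent ahostsv4 "$p" | awk '{print $1; exit}'
done < "$1"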

Upgrading an existing Zimbra environment from Ubuntu 14.04 (ZCS 8.6.0) to Ubuntu 16.04 (ZCS 8.7.1)

I received some complaints about the current environment; I checked them, and according to the changelog these issues should be fixed by now. So I'm going to perform an in-place upgrade of my current Zimbra 8.6.0 on Ubuntu 14.04 to Zimbra 8.7.1 on Ubuntu 16.04. The system runs in an OpenVZ container, which gives us some extra flexibility.

 

First of all, create a backup or a snapshot after you have stopped the Zimbra services.

/etc/init.d/zimbra stop

Next create a "snapshot". Since this hypervisor doesn't have a snapshot solution I specifically choose to use the local (SSD) storage without compression to get the backup to finish as fast as possible, we can't afford unnecessary downtime and in the next generation of our servers we will be using file system based snasphots (LVM or ZFS) which makes waiting for a backup to finish something of the past. I created my snapshot using the web interface of Proxmox 3.4 but this is what happens in the background after stopping the container.

vzdump 113 --remove 0 --mode stop --storage local --node duff-prox-01

 

To prevent Zimbra from accepting mail, I changed the IP of my VPS for the duration of the upgrade. Next, restart Zimbra and make sure it still runs fine.

sed -i "s/172.16.4.10/172.16.4.99/g" /etc/network/interfaces /etc/hosts
service networking restart
/etc/init.d/zimbra restart
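
Once the upgrade is completely finished, the same trick in reverse restores the original address (a sketch mirroring the command above):

sed -i "s/172.16.4.99/172.16.4.10/g" /etc/network/interfaces /etc/hosts
service networking restart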

 

Next install and enable the Zimbra proxy and memcached if you didn't have them configured in the past. See https://wiki.zimbra.com/wiki/Enabling_Zimbra_Proxy_and_memcached

cd /tmp/ && wget https://files.zimbra.com/downloads/8.6.0_GA/zcs-8.6.0_GA_1153.UBUNTU14_64.20141215151116.tgz
tar xvzf zcs-8.6.0_GA_1153.UBUNTU14_64.20141215151116.tgz
cd zcs-8.6.0_GA_1153.UBUNTU14_64.20141215151116/packages
dpkg -i zimbra-proxy_8.6.0.GA.1153.UBUNTU14.64_amd64.deb zimbra-memcached_8.6.0.GA.1153.UBUNTU14.64_amd64.deb
su - zimbra
./libexec/zmproxyconfig -e -w -o -a 8080:80:8443:443 -x both -H zimbra.ampersant
./libexec/zmproxyconfig -e -m -o -i 7143:143:7993:993 -p 7110:110:7995:995 -H zimbra.ampersant
zmprov ms zimbra.ampersant zimbraMailReferMode reverse-proxied
zmmailboxdctl restart
zmprov ms zimbra.ampersant +zimbraServiceEnabled memcached
zmcontrol restart
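
To double-check that the proxy and memcached actually came up, you can list the running services (still as the zimbra user):

zmcontrol status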

 

Now upgrade Zimbra to the latest edition available for Ubuntu 14.04

Note! Zimbra introduced their own package repository, so before starting the upgrade make sure you can fetch GPG keys from keyserver.ubuntu.com (port 11371) and that your server can reach the web (through the proxy, if any). And just to be sure, rebuild the apt cache:

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9BE6ED79
sudo apt-get clean
cd /var/lib/apt
sudo mv lists lists.old
sudo mkdir -p lists/partial
sudo apt-get clean
sudo apt-get update
cd /tmp/ && wget https://files.zimbra.com/downloads/8.7.1_GA/zcs-8.7.1_GA_1670.UBUNTU14_64.20161025045105.tgz
tar xvzf zcs-8.7.1_GA_1670.UBUNTU14_64.20161025045105.tgz
cd zcs-8.7.1_GA_1670.UBUNTU14_64.20161025045105
./install.sh

If you run into any issues, please first check the installation log

tail -f /tmp/install.log.*

If you successfully finish the upgrade, stop Zimbra before you upgrade Ubuntu

/etc/init.d/zimbra stop

 

Next upgrade Ubuntu (the do-release-upgrade tool is in the update-manager-core package)

do-release-upgrade
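
If do-release-upgrade is not present on a minimal container image, install its package first (as noted above, it ships in update-manager-core):

apt-get install update-manager-core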

 

Only if you are running a very old kernel! In that case, revert to upstart

apt-get install upstart-sysv

 

Get the latest build of Zimbra for Ubuntu 16.04

cd /tmp/ && wget https://files.zimbra.com/downloads/8.7.1_GA/zcs-8.7.1_GA_1670.UBUNTU16_64.20161025045114.tgz

 

Next extract the installation files

tar -xvzf zcs-8.7.1_GA_1670.UBUNTU16_64.20161025045114.tgz

 

Then start the repair of the old install so you can do an in-place upgrade. You will get some alerts, but we work around them anyway.

cd zcs-8.7.1_GA_1670.UBUNTU16_64.20161025045114
echo "deb [arch=amd64] https://repo.zimbra.com/apt/87 xenial zimbra
deb-src [arch=amd64] https://repo.zimbra.com/apt/87 xenial zimbra" > /etc/apt/sources.list.d/zimbra.list
apt-get update
apt-get install zimbra-core-components
apt-get -f install
dpkg -i packages/*.deb
apt-get -f install
./install.sh --softwareonly #DO NOT CHECK DATABASE INTEGRITY!

Started 1:10 -> finished

Benchmarking Linux bonding with cheap NICs

For a couple of years now I have been using bonding to achieve load balancing, higher bandwidth and redundancy on Linux (and BSD, Windows, Solaris, ...). At home this can be rather challenging, since lots of consumer switches don't understand all the bonding methods and Realtek adapters have the unpleasant habit of keeping the spoofed MAC address after a reboot, which messes things up.

For this setup I will use a bunch of cheap Realtek and Intel NICs to connect 2 PCs, using 4 NICs on each side, to benchmark the actual speed.

I already mentioned that my Realtek adapters tend to remember the spoofed MAC address across a reboot, which causes udev to give the adapters a new name and renders your config worthless. To prevent this I added some lines inside /etc/network/interfaces that start and stop the bond and load and unload the bonding module, so the NIC gets its factory MAC address back before rebooting.

This is my basic config using bonding mode 0 (round-robin). Between runs I only changed the mode= value and, for mode=2 and mode=4 with L3+L4 hashing, uncommented the xmit_hash_policy option. All benchmarked modes give fault tolerance and load balancing (albeit not always visible in the results).

# The primary network interface
auto bond0
iface bond0 inet static
    pre-up modprobe bonding mode=0 miimon=100 #xmit_hash_policy=layer3+4
    pre-up ip link set enp4s0 master bond0
    pre-up ip link set enp6s0 master bond0
    pre-up ip link set enp8s0 master bond0
    pre-up ip link set enp9s0 master bond0
    mtu 9000
    address 10.0.0.1
    netmask 255.255.255.0
    up /bin/true
    down /bin/true
    post-down ip link set enp4s0 nomaster
    post-down ip link set enp6s0 nomaster
    post-down ip link set enp8s0 nomaster
    post-down ip link set enp9s0 nomaster
    post-down ip link set dev bond0 down
    post-down rmmod bonding
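
For reference, the mode=4 run with L3+L4 hashing only differed in that first module line (shown here on its own, derived from the config above):

    pre-up modprobe bonding mode=4 miimon=100 xmit_hash_policy=layer3+4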

Mode=0 is only good for connecting 2 systems with multiple NICs directly to each other to achieve a single connection with a higher bandwidth. In switched environments, or even worse in a bridge, this method will really mess up your connections. In a switched connection you will see random packet loss and out-of-order packets. In a bridge you are lucky if you even get some packets across.

To get the most out of this I set the MTU to 9000 (jumbo frames) and connected the 2 systems directly to each other. My NICs are all auto-sensing, so I didn't have to use crossover cables.

I used these scripts to run multiple instances of iperf (a network benchmark tool) in parallel: https://sandilands.info/sgordon/multiple-iperf-instances
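
The idea boils down to something like this (a minimal sketch, assuming iperf 2.x on both machines; the ports are arbitrary and 10.0.0.1 is the bond address from the config above):

# on the receiving system
for port in 5001 5002 5003 5004; do iperf -s -p $port & done

# on the sending system
for port in 5001 5002 5003 5004; do iperf -c 10.0.0.1 -p $port -t 30 & done
wait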

Results per bonding method (all with an ethtool advertised speed of 4000Mb/s):

mode 0 (Round Robin)
  single connection speed: 3.14 Gbits/sec
  total speed of 4 simultaneous connections: 3.157 Gbits/sec
  where to use: inter-system connection only

mode 2 (XOR)
  single connection speed: 719 Mbits/sec
  total speed of 4 simultaneous connections: 722 Mbits/sec
  where to use: inter-system connection OR on a switch that only supports static LAGs

mode 2 (XOR L3+L4)
  single connection speed: 661 Mbits/sec
  total speed of 4 simultaneous connections: 2.107 Gbits/sec
  where to use: inter-system connection OR on a L3 switch that only supports static LAGs (...)

mode 4 (LACP L2)
  single connection speed: 730 Mbits/sec
  total speed of 4 simultaneous connections: 735 Mbits/sec
  where to use: inter-system connection OR on a L2 managed switch that supports LACP

mode 4 (LACP L3+L4)
  single connection speed: 725 Mbits/sec
  total speed of 4 simultaneous connections: 1.484 Gbits/sec
  where to use: inter-system connection OR on a L3 switch that supports LACP

As you can see, mode=0 wins the benchmark, but as I already said that comes with a price. Netgear, for instance, recommends mode=2 for unmanaged switches or switches that can only handle static LAGs, and mode=4 for switches that support LACP.

But in real life you should always use LACP, since it is the most robust method out there, and if you really need higher single-connection speed you will have to invest in 10, 25, 40 or 100Gbit connections.

LACP can be combined with any form of hashing: you can hash on the MAC address (L2), the IP address (L3) or the UDP/TCP session (L4), or combinations of those. This isn't only the case for Linux; Solaris and derivatives like SmartOS also give you the option to combine any of these 3 hashing methods in your LACP aggregate. In Solaris L4 hashing is the default, in Linux it is L2.
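
On Linux you can check which hash policy a bond is actually using (assuming the bond is called bond0, as above):

cat /sys/class/net/bond0/bonding/xmit_hash_policy
grep -i "hash policy" /proc/net/bonding/bond0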

You can see in my results that running multiple iperf sessions on different ports really does make a difference with L4 hashing, since there are 4 different TCP sessions. In the real world LACP will rarely be used for single-connection setups. If you use LACP underneath a bridge for your hypervisor, for instance, all your VMs and containers have their own MAC address, IP address and UDP/TCP sessions, so all your physical links actually get used and you get a higher total bandwidth (albeit hard to benchmark), but you will never get more than 1 Gbit per session.
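
For the hypervisor case, such a setup can look roughly like this with ifupdown and bridge-utils (a sketch; br0 and the address are examples, and bond0 would then be declared as "iface bond0 inet manual" instead of carrying the address itself):

auto br0
iface br0 inet static
    bridge_ports bond0
    address 10.0.0.1
    netmask 255.255.255.0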

To finish, one mode that doesn't have anything to do with load balancing but can be nice is mode=1 (active-backup). I used this in the past to set up a fiber connection as the primary link with a WiFi link as backup: if the fiber stops working, traffic is sent over the WiFi link. Of course this kind of behavior can also be achieved with STP if you have a managed switch on both ends. Only the module and slave lines differ from the round-robin config above, as shown in the sketch below.
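
A minimal active-backup sketch (the interface names are placeholders; primary= pins the fiber NIC as the preferred link):

    pre-up modprobe bonding mode=1 miimon=100 primary=enp4s0
    pre-up ip link set enp4s0 master bond0
    pre-up ip link set wlan0 master bond0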
