In the previous post I already hinted at the possibility of using SimpleHTTPServer as a basic file server for your mirror. You can use it to publish any folder, and by combining a few tricks I ended up with this SSL-terminated SimpleHTTPServer. It is a lot simpler than Apache and a good solution if your only goal is a simple file server.
The actual web server (simple-https-server.py)
# Python 2 (RHEL 7's default /usr/bin/python)
import BaseHTTPServer, SimpleHTTPServer
import ssl

# Serve the current working directory over HTTPS on port 8443
httpd = BaseHTTPServer.HTTPServer(('', 8443), SimpleHTTPServer.SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket, certfile='../mirror.pem', keyfile='../mirror.key', server_side=True)
httpd.serve_forever()
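If you don't have a certificate yet, a self-signed pair for testing can be generated with openssl in /opt/data/ (one level above the working directory used below, so the relative paths in the script resolve):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj '/CN=mirror' -keyout mirror.key -out mirror.pem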
The systemd service (/etc/systemd/system/simplehttp.service). Make sure the simplehttp user exists and disable its shell in /etc/passwd. systemd tracks and stops the process itself, so no trailing & or ExecStop is needed.
[Unit]
Description=Job that runs the python SimpleHTTPServer daemon
Documentation=https://docs.python.org/2/library/simplehttpserver.html
[Service]
Type=simple
User=simplehttp
WorkingDirectory=/opt/data/mirror/
ExecStart=/usr/bin/python /opt/data/simple-https-server.py
[Install]
WantedBy=multi-user.target
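After creating or changing the unit file, reload systemd so it picks it up:
systemctl daemon-reload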
And of course, enable and start the service and create the right firewall entries. In this example port 80 is redirected to HTTPS (and 443 forwarded to 8443) as well.
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-masquerade --permanent
firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=443 --permanent
firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=8443 --permanent
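Rules added with --permanent only become active after a reload:
firewall-cmd --reload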
systemctl enable simplehttp
systemctl start simplehttp
I created this script to build a local repository of RPM packages based on the repositories available to the system (this is important: reposync can only mirror repositories the system is actually subscribed to, otherwise it won't work). To automate the initial and subsequent syncs I'm simply using cron.
The machine is a basic system that is used as a web server (Apache, nginx or python SimpleHTTPServer).
Before running the script I created a directory for RHEL 7 (named "7"; do this for every version you want to mirror), started python SimpleHTTPServer in /var/www/html/ and opened port 80 in firewalld. This is just a proof of concept, so nothing fancy.
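For reference, that preparation boils down to something like this (a sketch; SimpleHTTPServer is Python 2 and needs root to bind to port 80):
mkdir -p /var/www/html/redhat/7
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --reload
cd /var/www/html && python -m SimpleHTTPServer 80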
This is the script:
#!/bin/bash
# Sync every "VERSION REPO" pair listed in the file passed as $1
BASEDIRECTORY="/var/www/html/redhat"
while read VERSION REPO; do
    reposync --gpgcheck -l --repoid="$REPO" --download_path="$BASEDIRECTORY/$VERSION/"
    # First sync: create the repo metadata; afterwards just update it
    if [ ! -d "$BASEDIRECTORY/$VERSION/$REPO/repodata" ]; then
        createrepo -v "$BASEDIRECTORY/$VERSION/$REPO/"
    else
        createrepo --update -v "$BASEDIRECTORY/$VERSION/$REPO/"
    fi
done <"$1"
This is the repos file:
7 rhel-7-server-extras-rpms
7 rhel-7-server-optional-rpms
7 rhel-7-server-rh-common-rpms
7 rhel-7-server-rpms
7 rhel-7-server-satellite-tools-6.3-rpms
7 rhel-server-rhscl-7-rpms
And to run it, just do:
sh /root/syncrepos.sh /var/www/repos
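To automate the syncs with cron, as mentioned above, a nightly entry in /etc/cron.d along these lines would do (the schedule is just an example):
0 3 * * * root sh /root/syncrepos.sh /var/www/repos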
For older RHEL repositories, you should add them to the content view (when using Satellite) or otherwise make sure you can access them. Since they won't automagically appear in your yum repolist, you will have to create a repo file yourself. Copy and adapt the snippet below for all repositories, and to keep things clean, create a separate repo file for every Red Hat (or CentOS, ...) version. The SSL certificate entries are simply the same as the ones from a working RHEL 7 entry.
/etc/yum.repos.d/rhel6.repo
[rhel-6-server-rpms]
name = Red Hat Enterprise Linux 6 Server (RPMs)
baseurl = https://prhsv401.belgianrail.be/pulp/repos/YPTO/Library/Mirror/content/dist/rhel/server/6/6Server/$basearch/os
enabled = 1
metadata_expire = 1
ui_repoid_vars = releasever basearch
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/katello-server-ca.pem
sslclientcert = /etc/pki/entitlement/6369168190531272611.pem
sslclientkey = /etc/pki/entitlement/6369168190531272611-key.pem
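Then create the matching version directory and add the repository to the repos file used by the sync script above:
mkdir -p /var/www/html/redhat/6
echo "6 rhel-6-server-rpms" >> /var/www/repos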
Yesterday I had a machine that stopped booting after an update. What I didn't know was that during issues with yum update the LVM package had been removed, and after that all of the initramfs images were regenerated, which resulted in a machine that wasn't able to boot anymore.
The errors (a list of search terms rather than exact messages, since no log survived):
dracut-initqueue timeout
root does not exist
Starting dracut emergency shell
Entering emergency mode
dracut-initqueue[259]: Warning: dracut-initqueue timeout
dracut-initqueue[279] Warning: Could not boot
dracut-initqueue[279] Warning: /dev/mapper/rhel_...-root does not exist
in rescue mode
job timeout
Timed out waiting for dev-mapper-VG\LV.device
unable to mount logic volumes
vgchange missing
lvchange missing
This is how I solved it:
I booted a live CD, configured the network, chrooted into the machine and ran the following (check your kernel version and the actual initramfs file name):
lsinitrd -m /boot/initramfs-3.10.0-693.11.1.el7.x86_64.img | grep lvm (returned nothing)
yum install lvm2
dracut -f /boot/initramfs-3.10.0-693.11.1.el7.x86_64.img 3.10.0-693.11.1.el7.x86_64
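The same lsinitrd check should now list the lvm dracut module before you reboot:
lsinitrd -m /boot/initramfs-3.10.0-693.11.1.el7.x86_64.img | grep lvm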
Since this isn't standard behavior, you should check all services and make sure that your packages are consistent:
yum check all
Today I created a crontab entry to automate the backup of Satellite using katello-backup (invoked below as satellite-backup). We had this in the past but it was a bit rough. Now, as an example, we keep biweekly fulls and daily incrementals, and clean up after one month. Make sure the backup doesn't run at the same time as, for example, your OpenSCAP reports, since all Satellite services are down during the backup.
#katello backup, biweekly full + daily incremental
#The expr term computes the parity of the current week number (seconds since the epoch divided by 604800 seconds per week), so the full backup only runs every other Sunday
0 2 * * 0 root expr `date +\%s` / 604800 \% 2 >/dev/null || (/usr/sbin/satellite-backup --assumeyes /backup/ && ls -td -- /backup/satellite-backup-* | head -n 1 > /backup/latest_full; find /backup/ -type d -ctime +30 -exec rm -rf {} \;)
0 2 * * 2-6 root /usr/sbin/satellite-backup --assumeyes /backup/ --incremental "$(cat /backup/latest_full | head -n1)"
#This checks whether the latest full backup is more than 15 days old (meaning the backup failed) and cleans up anyway to free up space
0 6 * * 0 root if [[ $(find "$(cat /backup/latest_full)" -mtime +15 -print) ]]; then find /backup/ -type d -ctime +30 -exec rm -rf {} \;; fi
This was tested on RHEL7 but should work on any recent Linux.
In case a disk is too large, or you want to roll back an expansion because, for instance, the extra disk space is no longer needed, you can follow this guide.
! Always make sure you have a backup; these are dangerous commands that can lead to data corruption.
There are a lot of tricks you can use, but these ones are tested. Shifting the data to a fresh disk and swapping should be the safest way, but Red Hat's implementation of GRUB and the initramfs has some drawbacks which during my tests led to an unbootable machine. This article describes how I did such a move (over the network if you like) on a Debian-based setup.
First things first: resize the actual file system(s). If you can't unmount the partition, boot from a rescue disk (gparted-live, for instance) for this part and return to the installed system afterwards. Take a margin of 10% to make sure we don't run into any conversion issues resulting in data loss. (Repeat this for all partitions.)
If you are on XFS or another file system that doesn't allow shrinking, you can use fstransform (actually, you can always use this). fstransform is in the EPEL repository, but you can also just download the RPM and install it with yum install.
If you want to keep XFS:
fstransform /dev/[volumegroup]/[logicalvolume] --new-size=[size] xfs
If you would rather have ext4:
fstransform /dev/[volumegroup]/[logicalvolume] --new-size=[size] ext4
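example (hypothetical volume group and logical volume names):
fstransform /dev/vg_system/lv_tmp --new-size=900M ext4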
If the file system is already ext4, you can shrink it directly:
e2fsck -f /dev/[volumegroup]/[logicalvolume]
resize2fs -p /dev/[volumegroup]/[logicalvolume] [size]
example:
resize2fs -p /dev/mapper/vg_system_lv_tmp 900M
Next we are going to reduce the LVM logical volume (if you can't manage the downtime, this part can be done with the FS mounted). Repeat this for all logical volumes.
lvreduce -L [newlvsize] /dev/[volumegroup]/[logicalvolume]
example:
lvreduce -L 900M /dev/mapper/vg_system_lv_tmp
Next, if you want to drop an entire disk, now is the moment to clean it out. (If your LVM lives on a single disk, you can skip this part.)
pvmove /dev/[the disk you want to remove]
vgreduce [volumegroup] /dev/[the disk you want to remove]
pvremove /dev/[the disk you want to remove]
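example (assuming the disk to be removed is /dev/sdb and the volume group is vg_system):
pvmove /dev/sdb
vgreduce vg_system /dev/sdb
pvremove /dev/sdb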
Now you can drop/reuse this disk.
To shrink the LVM part on a disk with a partition table, you should first check the used and total extents and ask LVM to move all used extents to the front of the LVM partition:
vgdisplay
pvmove --alloc anywhere /dev/[partition]:[alloc PE + 1]-[total PE - 1]
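example (hypothetical numbers, with vgdisplay reporting Alloc PE 4999 and Total PE 10240):
pvmove --alloc anywhere /dev/sda2:5000-10239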
Now check whether all used extents are at the front of the disk and the only free portion is at the end. If not, repeat the previous pvmove command until you get the desired result.
pvs -v --segments
All used segments should be listed at the start of the device, with a single free segment at the end.
And now you can shrink the physical volume to match the wanted disk size:
pvresize --setphysicalvolumesize [size] /dev/[lvm partition]
example:
pvresize --setphysicalvolumesize 38G /dev/sda2
Next you can shrink the partition so it fits the target disk size, and then shrink the virtual disk itself. Check the extents of another virtual machine with the wanted disk size if you want to push it to the limit; otherwise keep a margin. I used 40G as an example.
fdisk /dev/sda
*d* -> delete partition 2 if that is the LVM one
*n* -> create a new primary partition 2 with the default suggested start sector, but for the end sector select 83886079 (for a 40G total disk with a 500M boot partition)
*t* -> set the type of partition 2 back to 8e (Linux LVM)
*w* -> write the changes to disk
Now resize the VM disk to the same size (40G in this example). Next you can use lvresize (or lvextend) and resize2fs to fill up the margin again; the exact commands follow below.
Since it seems impossible to shrink a disk in Hyper-V (Gen 1), I added a 40G disk, started the live CD again and ran:
dd if=/dev/sda of=/dev/sdb
Next I removed the first disk (if you want to be safe, press remove but select "no" when Hyper-V asks whether to also delete the disk from the host), made the new disk the primary disk, marked it as "contains the operating system for the virtual machine", and it booted fine. Afterwards, grow the physical volume, logical volume and file system back to reclaim the margin:
pvresize /dev/[lvm partition]
lvresize -L [oldlvsize] /dev/[volumegroup]/[logicalvolume]
resize2fs /dev/[volumegroup]/[logicalvolume]
And finally, delete the old disk from the host if you kept it. Go to the host running the VM, open the directory you find in the disk settings of the VM, and delete the old disk file with shift+delete.
I made a file called to_find_ip with one hostname on every line, and a simple bash script to process the file and print the matching IP for each.
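For example, to_find_ip could look like this (hypothetical hostnames):
host1.example.com
host2.example.com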
The script is called get_ip_for_list_of_hostnames.sh (use getent ahostsv4 instead of hosts if you only want IPv4 addresses):
#!/bin/bash
# Resolve every hostname in the input file and print the first field (the IP address)
while read p; do
    getent hosts "$p" | cut -f1 -d ' '
done <"$1"
To run the script:
sh /tmp/get_ip_for_list_of_hostnames.sh /tmp/to_find_ip