echo "

User session recording using log-user-session

Since I needed a good way to track what users do based on their IP/SSH fingerprint, I started looking around and found log-user-session to be a very neat tool. I created an RPM for RHEL7 and a DEB for Ubuntu 18.04 Bionic. Aside from installing the RPM/DEB, you just need to make sure these two lines are present in /etc/ssh/sshd_config and you are good to go.

LogLevel VERBOSE
ForceCommand /usr/bin/log-user-session

For fingerprint pairing, just use the date and IP and get the fingerprint out of the secure.log/auth.log.
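A minimal sketch of that pairing, assuming the RHEL7 log location and a hypothetical date and client IP:

# hypothetical date and IP; on Ubuntu grep /var/log/auth.log instead
grep "Accepted publickey" /var/log/secure | grep "192.0.2.10" | grep "^Jun  5"
# the matching line ends with the key fingerprint (e.g. "... ssh2: RSA SHA256:...")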

The GitHub page of the project

RPM for Red Hat 7

DEB for Ubuntu 18.04

Addendum: SimpleHTTP(S) or how to get an SSL terminated file server with 5 lines of Python code...

So in the previous post I already hinted at the possibility of using SimpleHTTPServer as a basic file server for your mirror. You can use this to publish any folder, and I combined some tricks to get this SSL-terminated SimpleHTTP server. It is a lot simpler than Apache and a good solution if your only goal is a simple file server.

 

The actual web server (simple-https-server.py)

# Python 2: serves the current working directory over HTTPS on port 8443
import BaseHTTPServer, SimpleHTTPServer
import ssl

httpd = BaseHTTPServer.HTTPServer(('', 8443), SimpleHTTPServer.SimpleHTTPRequestHandler)
# wrap the listening socket with TLS using the mirror certificate and key
httpd.socket = ssl.wrap_socket(httpd.socket, certfile='../mirror.pem', keyfile='../mirror.key', server_side=True)
httpd.serve_forever()
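If you don't have a certificate pair yet, a self-signed one matching the file names above can be generated like this (a sketch; the CN is a made-up hostname, adjust it and the paths to your setup):

openssl req -x509 -newkey rsa:4096 -nodes -keyout mirror.key -out mirror.pem -days 365 -subj "/CN=mirror.example.com"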

The systemd service (/etc/systemd/system/simplehttp.service). Make sure the user exists and disable the shell for the simplehttp user (see the useradd example below the unit file).

[Unit]
Description=Job that runs the python SimpleHTTPServer daemon
Documentation=man:SimpleHTTPServer(1)

[Service]
Type=simple
User=simplehttp
WorkingDirectory=/opt/data/mirror/
ExecStart=/usr/bin/python /opt/data/simple-https-server.py
ExecStop=/bin/kill $MAINPID

[Install]
WantedBy=multi-user.target
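Creating that user with a disabled shell could look like this (a sketch for RHEL7; -r makes it a system account and the home directory matches the WorkingDirectory above):

useradd -r -s /sbin/nologin -d /opt/data/mirror simplehttp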

And of course, enable and start the service and create the right firewall entries. In this example port 80 is forwarded to 443 and 443 to the Python listener on 8443 as well.

firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-masquerade --permanent
firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=443 --permanent
firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=8443 --permanent
firewall-cmd --reload
systemctl enable simplehttp
systemctl start simplehttp

 

dracut-initqueue timeout and root LV missing, some LVs missing in rescue mode

Yesterday I had a machine that stopped booting after an update. What I didn't know was that during issues with yum update the LVM package was removed, and after that the initramfs images (all of them) were regenerated, which resulted in a machine that wasn't able to boot anymore.

The errors (a list of search terms, not the actual messages, since no log exists):

dracut-initqueue timeout
root does not exist
Starting dracut emergency shell
Entering emergency mode
dracut-initqueue[259]: Warning: dracut-initqueue timeout
dracut-initqueue[279] Warning: Could not boot
dracut-initqueue[279] Warning: /dev/mapper/rhel_...-root does not exist
in rescue mode
job timeout
Timed out waiting for dev-mapper-VG\LV.device
unable to mount logical volumes
vgchange missing
lvchange missing

This is how I solved it:

I booted a live CD, configured the network, chrooted into the machine and ran (check the kernel version and the actual initramfs file name):

lsinitrd -m -k /boot/initramfs-3.10.0-693.11.1.el7.x86_64.img | grep lvm   # returned nothing
yum install lvm2
dracut -f /boot/initramfs-3.10.0-693.11.1.el7.x86_64.img 3.10.0-693.11.1.el7.x86_64
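After the rebuild, the same lsinitrd check should now list the lvm dracut module (exact output depends on your dracut version):

lsinitrd -m -k /boot/initramfs-3.10.0-693.11.1.el7.x86_64.img | grep lvm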

Since this isn't standard behavior, you should check all services and make sure that your packages are consistent.

yum check all

Fully automated backup of Satellite

Today I created a crontab entry to automate the backup of Satellite using katello-backup. We had this in the past, but it was a bit crude. Now we keep biweekly fulls and daily incrementals, and clean up after one month (as an example). Make sure that the backup doesn't run while, for example, your OpenSCAP reports run, since all services are down during the backup.

#katello backup, biweekly full + daily incremental
0 2 * * 0 root expr `date +\%s` / 604800 \% 2 >/dev/null || (/usr/sbin/satellite-backup --assumeyes /backup/ && ls -td -- /backup/satellite-backup-* | head -n 1 > /backup/latest_full; find /backup/ -type d -ctime +30 -exec rm -rf {} \;)
0 2 * * 2-6 root /usr/sbin/satellite-backup --assumeyes /backup/ --incremental "$(cat /backup/latest_full | head -n1)"
#this checks if the latest backup failed and cleans up anyway to free up space
0 6 * * 0 root if [[ $(find "$(cat /backup/latest_full)" -mtime +15 -print) ]]; then find /backup/ -type d -ctime +30 -exec rm -rf {} \;; fi
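For reference, the biweekly trick in the first line works because the epoch time divided by 604800 (one week) flips between even and odd every week; expr exits non-zero when the result is 0, which triggers the || branch. You can check which kind of Sunday you are in with:

expr $(date +%s) / 604800 % 2   # prints 0 on the Sundays the full backup runs, 1 on the others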

Shrink a disk

This was tested on RHEL7 but should work on any recent Linux.

In case you have a disk that is too large, or you want to roll back an expansion because, for instance, the extra disk space isn't needed anymore, you can follow this guide.

! Always make sure you have a backup; these are dangerous commands that can lead to data corruption.

There are a lot of tricks you can do, but these ones are tested. Shifting data to a new disk and swapping it in should be the safest way, but Red Hat's implementation of Grub and the initramfs has some drawbacks which during my tests led to an unbootable machine. This article describes how I did such a move (over the network if you like) on a Debian-based setup.

First things first: resize the actual file system(s). If you can't unmount the partition, boot from a rescue disk (gparted-live for instance) to do this part and return to the installed system afterwards. Take a margin of 10% to make sure we don't have any conversion issues resulting in data loss. (Repeat this for all partitions.)

If you are on XFS or another filesystem that doesn't allow shrinking, you can use fstransform (actually, you can always use it). fstransform is in the EPEL repository, but you can also just download the RPM and install it with yum install.
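On RHEL7 that could look like this (a sketch; the URL is the generic EPEL 7 release package, adjust if you mirror EPEL internally):

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install fstransform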

If you want to keep xfs

fstransform /dev/[volumegroup]/[logicalvolume] --new-size=[size] xfs

If you would rather have ext4

fstransform /dev/[volumegroup]/[logicalvolume] --new-size=[size] ext4

for ext4

e2fsck -f /dev/[volumegroup]/[logicalvolume]
resize2fs -p /dev/[volumegroup]/[logicalvolume] [size]
 
example:
resize2fs -p /dev/mapper/vg_system_lv_tmp 900M

Next we are going to reduce the LVM volume. (If you can't manage the downtime, you can do this part with a mounted FS.) (Repeat this for all partitions.)

lvreduce -L [newlvsize] /dev/[volumegroup]/[logicalvolume]
 
example:
lvreduce -L 900M /dev/mapper/vg_system_lv_tmp

Next, if you want to drop a disk, now is the moment to clean it out. (If your LVM is on one disk, you can skip this part.)

pvmove /dev/[the disk you want to remove]
vgreduce [volumegroup] /dev/[the disk you want to remove]
pvremove /dev/[the disk you want to remove]
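For example, with a hypothetical second disk /dev/sdb in a volume group called vg_system:

pvmove /dev/sdb                  # migrate all extents off the disk
vgreduce vg_system /dev/sdb      # take the disk out of the volume group
pvremove /dev/sdb                # wipe the LVM label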

Now you can drop/reuse this disk.

To shrink the LVM part on a disk with a partition table, you should first check the used and total extents and ask LVM to move all extents to the front of the LVM partition.

vgdisplay
pvmove --alloc anywhere /dev/[partition]:[alloc PE + 1]-[total PE - 1]

Now check if all used extents are at the front of the disk and the only free portion is at the end. If not, repeat the previous pvmove command until you get the desired result.

pvs -v --segments

This is how it should look

(screenshot of the pvs -v --segments output: one contiguous allocated segment at the start of the PV and a single free segment at the end)

And now you can shrink the physical volume to match the wanted disk size

pvresize --setphysicalvolumesize [size] /dev/[lvm partition]
 
example:
pvresize --setphysicalvolumesize 38G /dev/sda2

Next you can shrink the partition so it fits the target size of your disk, and after that you can shrink the virtual disk. Check the extents of another virtual machine with the wanted disk size if you want to push it to the limit; otherwise keep a margin. I used 40G as an example.

fdisk /dev/sda
*d* -> delete partition 2 if that is the LVM one
*n* -> create a new primary partition 2 with the default suggested start sector, but for the end sector enter 83886079 (for a 40G total disk with a 500M boot partition)
*w* -> write the partition table and exit
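The 83886079 figure is simply the last 512-byte sector of a 40G disk; for a different target size you can recompute it the same way:

echo $(( 40 * 1024 * 1024 * 1024 / 512 - 1 ))   # 83886080 sectors in 40 GiB, so the last sector is 83886079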

Now resize the VM disk to the same size (40G in this example). Next you can use lvextend and resize2fs to fill up the margin again to get the desired result.

Since it seems impossible to shrink a disk in Hyper-V (Gen 1), I added a 40G disk, booted the live CD again and did:

dd if=/dev/sda of=/dev/sdb

Next I removed the first disk (if you want to be safe, press remove but select "no" when Hyper-V asks if you want to remove the disk from the host too), made the new disk the primary disk, marked it as "contains the operating system for the virtual machine", and it booted fine.

pvresize /dev/[lvm partition]
lvresize -L [oldlvsize] /dev/[volumegroup]/[logicalvolume]
resize2fs /dev/[volumegroup]/[logicalvolume]

And finally, delete the old disk from the host if you kept it. Go to the host running the VM, open the directory you find in the disk settings of the VM and delete the old disk with shift+delete.

Setting a Grub password using Ansible (update)

After a Red Hat upgrade the template for the password changed and uses a variable that OpenSCAP doesn't read, which makes our test fail. On top of that, the test checks for the use of common administrator account names like root, admin or administrator. This update solves the issue, and from now on we use a dedicated user instead of root for Grub2.

Today I had to set and verify a boot password for Grub on all machines. We needed to do this to comply with the Certified Cloud Service Provider OpenSCAP benchmark.

This only prevents a person with physical access from booting into single-user mode! The machine can still be booted normally without a password.

RHEL6 machines all use legacy boot, while on RHEL7 we also distinguish between EFI and non-EFI machines.

 

First generate the hashes (on a RHEL6 and on a RHEL7 node)

RHEL6

grub-crypt --sha-512

RHEL7

grub2-mkpasswd-pbkdf2

And to finish... Here are the Ansible lines:

playbook lines:

#GRUB
- name: "grub v1 | add password"
  lineinfile: dest=/etc/grub.conf regexp='^password ' state=present line='password --encrypted {{ grub_password_v1_passwd }}' insertafter='^timeout'
  when: rhel6
  tags: grub-password

- stat: path=/sys/firmware/efi/efivars/
  register: grub_efi
  when: rhel7
  tags: grub-password

- name: remove unwanted grub.cfg on EFI systems
  file:
    state: absent
    path: /boot/grub2/grub.cfg
  when: rhel7 and grub_efi.stat.exists == True
  tags: grub-password

- name: Install user template to make sure grub2-mkconfig doesn't mess up the config
  template:
    src: 01_users.j2
    dest: /etc/grub.d/01_users
    owner: root
    group: root
    mode: '0700'
  notify:
     - grub2-mkconfig EFI
     - grub2-mkconfig MBR
  when: rhel7
  tags: grub-password

- name: "grub v2 EFI | add password"
  lineinfile: dest=/etc/grub2-efi.cfg regexp="^password_pbkdf2 {{ grub_user }} " state=present insertafter=EOF line='password_pbkdf2 {{ grub_user }} {{ grub_password_v2_passwd }}'
  when: rhel7 and grub_efi.stat.exists == True
  tags: grub-password

- name: "grub v2 MBR | add password"
  lineinfile: dest=/etc/grub2.cfg regexp="^password_pbkdf2 {{ grub_user }} " state=present insertafter=EOF line='password_pbkdf2 {{ grub_user }} {{ grub_password_v2_passwd }}'
  when: rhel7 and grub_efi.stat.exists == False

vars:

grub_password_v1_passwd: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
grub_password_v2_passwd: grub.pbkdf2.sha512.10000.xxxxxxxxxxxxxxxxxxx
grub_user: loginuser

Handlers:

- name: grub2-mkconfig EFI
  command: grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
  when: grub_efi.stat.exists == True

- name: grub2-mkconfig MBR
  command: grub2-mkconfig -o /boot/grub2/grub.cfg
  when: grub_efi.stat.exists == False

01_users.j2:

#!/bin/sh -e

cat << "EOF"
set superusers="{{ grub_user }}"
export superusers
password_pbkdf2 {{ grub_user }} {{ grub_password_v2_passwd }}
EOF
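After the handlers have run you can quickly verify that the entry ended up in the generated config (path shown for the MBR case; on EFI systems check /boot/efi/EFI/redhat/grub.cfg instead):

grep password_pbkdf2 /boot/grub2/grub.cfg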

Shushing a syslog spammer

Today I had a machine whose /var/log/messages was drowning in DEBUG and TRACE messages from the Spring framework used by Tomcat. To prevent the partition from filling up, I made a temporary workaround until our programmer disabled the extensive debugging. I found lines like these in /var/log/messages:

May 21 03:38:54 tomcatserver current: 03:38:54.431 [XNIO-3 I/O-3] TRACE org.xnio.nio.selector - Selected on sun.nio.ch.EPollSelectorImpl@15e3ed9
May 21 03:38:54 tomcatserver current: 03:38:54.431 [XNIO-3 task-15] TRACE org.xnio.safe-close - Closing resource io.undertow.servlet.core.ServletBlockingHttpExchange@3168bcd7
May 21 03:38:54 tomcatserver current: 03:38:54.431 [XNIO-3 I/O-3] TRACE org.xnio.nio.selector - Beginning select on sun.nio.ch.EPollSelectorImpl@15e3ed9 (with timeout)
May 21 03:38:54 tomcatserver current: 03:38:54.433 [XNIO-3 I/O-3] TRACE org.xnio.nio.selector - Selected on sun.nio.ch.EPollSelectorImpl@15e3ed9
May 21 03:38:54 tomcatserver current: 03:38:54.434 [XNIO-3 I/O-3] TRACE org.xnio.nio.selector - Selected key sun.nio.ch.SelectionKeyImpl@e68ade3 for java.nio.channels.SocketChannel[connected oshut local=/164.35.83.148:9000 remote=/10.68.64.38:45856]
May 21 03:38:54 tomcatserver current: 03:38:54.434 [XNIO-3 I/O-3] TRACE org.xnio.listener - Invoking listener io.undertow.util.ConnectionUtils$4@1d0ccab7 on channel org.xnio.conduits.ConduitStreamSourceChannel@56407229
May 21 03:38:54 tomcatserver current: 03:38:54.434 [XNIO-3 I/O-3] TRACE org.xnio.nio - Cancelling key sun.nio.ch.SelectionKeyImpl@e68ade3 of java.nio.channels.SocketChannel[connected oshut local=/164.35.83.148:9000 remote=/10.68.64.38:45856] (same thread)
May 21 03:38:54 tomcatserver current: 03:38:54.434 [XNIO-3 I/O-3] TRACE org.xnio.listener - Invoking listener io.undertow.server.AbstractServerConnection$CloseSetter@788b3120 on channel org.xnio.nio.NioSocketStreamConnection@5ab7faac

/etc/rsyslog.d/tomcat_silencer.conf

if $programname == 'current' then ~

Where current is the name of the application that is spamming. You could also use a filter for DEBUG and TRACE messages if you like.

:msg, contains, "DEBUG" ~
:msg, contains, "TRACE" ~

Either way, after you adapt the rsyslog config, restart it and optionally force a logrotate:

service rsyslog restart
logrotate --force /etc/logrotate.d/syslog
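To confirm the silencer works, you can send a test message with the matching program name (here the 'current' tag from the example above) and check that it no longer lands in /var/log/messages:

logger -t current "TRACE test message"
grep "TRACE test message" /var/log/messages || echo "message was dropped"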

Script to clean up all the ARF/OpenScap compliance reports in Satellite

Since we only need to know the last compliance check I made a script to clean up all the previous reports before the next compliance check runs.

#!/bin/bash
#this script removes all the arf reports from the satellite server
###

#settings
USER=ronly
PASS=xxxxxxxxxxx
URI=https://localhost

#check amount of reports
while [ $(curl -k -u $USER:$PASS $URI/api/v2/compliance/arf_reports/ | python -m json.tool | grep \"\total\": | cut -f2 -d":" | cut -f1 -d"," | sed "s/ //g") -gt 0 ]; do
        #fetch reports
        for i in $(curl -k -u $USER:$PASS $URI/api/v2/compliance/arf_reports/ | python -m json.tool | grep \"\id\": | cut -f2 -d":" | cut -f1 -d"," | sed "s/ //g")
        #delete reports
        do
                curl -k -u $USER:$PASS -i -H "Content-Type: application/json" -X DELETE $URI/api/v2/compliance/arf_reports/$i
        done
done
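To check how many reports are left before or after a run, the while condition from the script can be used on its own (same read-only user and URI as in the settings block):

curl -k -u ronly:xxxxxxxxxxx https://localhost/api/v2/compliance/arf_reports/ | python -m json.tool | grep '"total":'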

To manually rerun the benchmark on all machines I use the following Ansible command:

ansible all -m shell -a 'eval $(grep foreman_scap_client /var/spool/cron/root | cut -f6-7 -d" " | sed '/^$/d')'

Update: Red Hat published my script: https://access.redhat.com/solutions/3040861

Update: Since Satellite 6.3 the location of the cron rule has changed to /etc/cron.d/foreman_scap_client_cron

Force update of certificate in Satellite

I needed to update our certificate for Satellite and capsule to include a SAN field to get around issues in Chrome 58. In this tutorial we use a single certificate for both the Satellite and capsule server(s).

This is what I did:

First create a CSR that contains the SAN field

san.cnf  (you only have to adapt the DNS.? fields)

[ req ]
default_bits       = 2048
distinguished_name = req_distinguished_name
req_extensions     = req_ext
[ req_distinguished_name ]
countryName                 = Country Name (2 letter code)
stateOrProvinceName         = State or Province Name (full name)
localityName               = Locality Name (eg, city)
organizationName           = Organization Name (eg, company)
commonName                 = Common Name (e.g. server FQDN or YOUR name)
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1   = satellite.company.com
DNS.2   = capsule.dmz.com

Next generate the CSR using this config

openssl req -out satellite.csr -newkey rsa:4096 -nodes -keyout satellite.key -config san.cnf

Next validate your CSR against your CA or SSL provider.
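Before sending the CSR off you can double-check that the SAN entries actually made it in:

openssl req -noout -text -in satellite.csr | grep -A1 "Subject Alternative Name"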

And now we are going to update the Satellite (as root on the satellite server)

satellite-installer --certs-server-cert /home/koen/satellite.cer --certs-server-cert-req /home/koen/satellite.csr --certs-server-key /home/koen/satellite.key --certs-server-ca-cert /etc/pki/ca-trust/source/anchors/CA01.crt --certs-update-server --certs-update-server-ca

And optionally create the package for the capsule server using the same certificate (since we have both FQDNs in the SAN field) (still as root on the Satellite server):

capsule-certs-generate --capsule-fqdn "capsule.dmz.com" --certs-tar /root/sat_cert/capsule.dmz.com-certs.tar --server-cert /home/koen/satellite.cer --server-cert-req /home/koen/satellite.csr --server-key /home/koen/satellite.key --server-ca-cert /etc/pki/ca-trust/source/anchors/CA01.crt --regenerate --regenerate-ca --certs-update-server

The output will tell you how to install it on the capsule server. Next you can restart the katello-service on both machines and check that everything has been updated with the new cert.

katello-service restart

 

Using cvmanager to automate promotion of content views in Red Hat Satellite 6.2

In the past this was done using a bash script consisting of hammer commands run once a month. Since this stopped working in Satellite 6.2 and wasn't compatible with composite content views, we switched to katello-cvmanager → https://github.com/RedHatSatellite/katello-cvmanager/

The publish.yaml file should contain all the content views in use (NOT the composite ones)

DEV.yaml, TEST.yaml, UAT.yaml and PROD.yaml should contain all the content views and composite content views used in the corresponding environment.

And once a month the monthly_updates.sh script is triggered through root's cron. This publishes a new content view version for every repo that received updates underneath, and afterwards promotes the (composite) content views for the environments. Unused content view versions are cleaned up, keeping one.

We ran into an issue with repos that have never been synced (own content) and opened a ticket for that: https://github.com/RedHatSatellite/katello-cvmanager/issues/25

For now this is fixed with the following line replacement:

-if repo.has_key?('last_sync') and repo['last_sync'].has_key?('ended_at') and repo['last_sync']['ended_at']
+if repo.has_key?('last_sync') and repo['last_sync'].is_a?(::Hash) and repo['last_sync'].has_key?('ended_at') and repo['last_sync']['ended_at']

This is the cron job that I run to publish, promote and clean the content views once every month.

30 05 * * 0 [ $(date +\%d) -le 07 ] && cd /opt/satellite6_scripts/katello-cvmanager/ && /opt/satellite6_scripts/katello-cvmanager/monthly_updates.sh | mail -E -s "Satellite Monthly report: Content view updates" systems@company.com
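The date test in that cron line is what turns the weekly Sunday schedule into "first Sunday of the month": the job only continues when the day of the month is 7 or lower. You can verify the test by hand:

[ $(date +%d) -le 07 ] && echo "first week of the month" || echo "would be skipped"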

This is the script that I use to trigger all the actions

==> monthly_updates.sh <==

#!/bin/sh
set -e
#Publish all content views
./cvmanager --config=publish.yaml --wait publish

#Update content views for DEV
./cvmanager --config=DEV.yaml --wait update
./cvmanager --config=DEV.yaml --wait promote

#Update content views for TEST
./cvmanager --config=TEST.yaml --wait update
./cvmanager --config=TEST.yaml --wait promote

#Update content views for UAT
./cvmanager --config=UAT.yaml --wait update
./cvmanager --config=UAT.yaml --wait promote

#Update content views for PROD
./cvmanager --config=PROD.yaml --wait update
./cvmanager --config=PROD.yaml --wait promote

#clean up unused content views
./cvmanager --config=publish.yaml --wait clean

==> PROD.yaml <==

---
:settings:
  :user: read_only_user
  :pass: *changme*
  :uri: https://localhost
  :timeout: 300
  :org: 1
  :lifecycle: 4
  :keep: 1
  :promote_cvs: true
  :checkrepos: true
:cv:
  rhel-7-server-x86_64: latest
  rhel-6-server-x86_64: latest
  capsule-7-x86_64: latest
:promote:
  - rhel-7-server-x86_64
  - rhel-6-server-x86_64
  - capsule-7-x86_64

==> TEST.yaml <==

---
:settings:
  :user: read_only_user
  :pass: *changme*
  :uri: https://localhost
  :timeout: 300
  :org: 1
  :lifecycle: 2
  :keep: 1
  :promote_cvs: true
  :checkrepos: true
:cv:
  rhel-7-server-x86_64: latest
  rhel-6-server-x86_64: latest
  capsule-7-x86_64: latest
  cv-repo-remi: latest
:promote:
  - rhel-7-server-x86_64
  - rhel-6-server-x86_64
  - capsule-7-x86_64
  - cv-repo-remi

==> UAT.yaml <==

---
:settings:
  :user: read_only_user
  :pass: *changme*
  :uri: https://localhost
  :timeout: 300
  :org: 1
  :lifecycle: 10
  :keep: 1
  :promote_cvs: true
  :checkrepos: true
:cv:
  cv-repo-remi: latest
:ccv:
  cv-RHEL7-app-php7:
    cv-repo-remi: latest
    rhel-7-server-x86_64: latest
  cv-RHEL6-app-php7:
    cv-repo-remi: latest
    rhel-6-server-x86_64: latest
:promote:
  - cv-repo-remi
  - cv-RHEL7-app-php7
  - cv-RHEL6-app-php7

==> DEV.yaml <==

---
:settings:
  :user: read_only_user
  :pass: *changme*
  :uri: https://localhost
  :timeout: 300
  :org: 1
  :lifecycle: 9
  :keep: 1
  :promote_cvs: true
  :checkrepos: true
:cv:
  rhel-7-server-x86_64: latest
  rhel-6-server-x86_64: latest
  cv-repo-remi: latest
:ccv:
  cv-RHEL7-app-php7:
    cv-repo-remi: latest
    rhel-7-server-x86_64: latest
  cv-RHEL6-app-php7:
    cv-repo-remi: latest
    rhel-6-server-x86_64: latest
:promote:
  - rhel-7-server-x86_64
  - rhel-6-server-x86_64
  - cv-repo-remi
  - cv-RHEL7-app-php7
  - cv-RHEL6-app-php7

==> publish.yaml <==

---
:settings:
  :user: read_only_user
  :pass: *changme*
  :uri: https://localhost
  :timeout: 300
  :org: 1
  :lifecycle: 1
  :keep: 1
  :promote_cvs: true
  :checkrepos: true
:publish:
  - rhel-7-server-x86_64
  - rhel-6-server-x86_64
  - capsule-7-x86_64
  - cv-repo-remi
  - cv-RHEL7-app-php7
  - cv-RHEL6-app-php7

 

Use Ansible to report which systems need to reboot

I created this cronjob with an Ansible playbook to see if any of the Ansible-managed hosts have been up too long or have a newer kernel installed than the one running. This gives us a monthly overview of which machines should be rebooted. The script is only valid for RPM/Red Hat based systems. Adapt root at the end of the cron line to the e-mail address you want to send your report to, and make sure the system mail service is configured correctly.

/etc/cron.d/uptime-and-kernel-upgrade-report

# Send a report mail every month with the ansible managed hosts that have an uptime equal or higher than 300 days OR have a newer kernel installed than the one running
0 0 1 * * sysauto /usr/bin/flock -x -n /opt/systems/ansible -c 'cd /opt/systems/ansible/ ; ansible-playbook playbooks/systems/check_uptime_and_kernel_upgrade.yml | grep " has " | sed -e "s/^[ \t]*//" | mail -E -s "Monthly report: Systems that need a reboot" root'

check_uptime_and_kernel_upgrade.yml

- hosts: all
  tasks:
    - name: "Check for machines that have an uptime that exceeds 300 days"
      shell: echo "$(hostname) has been up for $(uptime | cut -d ',' -f 1 | cut -d ' ' -f 4) days"
      when: ansible_uptime_seconds > 25920000
      register: uptime_exceeded
    - name: "Check for machines that aren't running the latest installed kernel"
      shell: LAST_KERNEL=$(rpm -q --last kernel | perl -pe 's/^kernel-(\S+).*/$1/' | head -1);CURRENT_KERNEL=$(uname -r);test $LAST_KERNEL = $CURRENT_KERNEL || echo "$(hostname) has a newer kernel installed than the one running"
      ignore_errors: true
      register: reboot_hint
    - debug: var=uptime_exceeded.stdout_lines
    - debug: var=reboot_hint.stdout_lines
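To test the report outside cron, the same filter from the cron entry can be run by hand (paths as above):

cd /opt/systems/ansible/
ansible-playbook playbooks/systems/check_uptime_and_kernel_upgrade.yml | grep " has " | sed -e "s/^[ \t]*//"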

 
