Red Hat Enterprise Linux 8.1

RHEL 8

Just as I published the last Unix Tutorial Digest on November 5th, the RHEL 8.1 release shipped – I think this is a great incremental release, bringing a number of key improvements to Red Hat Enterprise Linux 8.

RHEL 8 Release Cadence

Red Hat announced that, going forward, Red Hat Enterprise Linux will receive regular updates every 6 months. Since RHEL 8 was released in May 2019, this RHEL 8.1 update is right on time, 6 months later.

RHEL 8.1 Improvements I Want To Try

There are a number of great improvements in this release:

  • Live Kernel Patching with kpatch – see the sketch right after this list
  • SELinux profiles for containers and tbolt for Thunderbolt devices – will be cool to try on my RHEL 8 PC
  • Perhaps try RHEL 7.6 in-place upgrade to RHEL 8.1
  • Review rhel-system-roles and specifically the new storage role added in RHEL 8.1
  • LUKS2 online re-encryption
  • RHEL 8 Web Console
    • firewall zones management
    • Virtual Machines configuration
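
For the kpatch item specifically, here is a minimal sketch of the live patching workflow as I understand it from the RHEL 8.1 release notes – treat the package spec as an assumption until you confirm it against your own subscription:

yum install -y kpatch
yum install -y "kpatch-patch = $(uname -r)"
kpatch list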

I also want to try the Red Hat Universal Base Image for RHEL 8 – it’s been around since the initial release in May, I just never got the chance to have a look.





Run Ansible Tasks Based on OS Distribution

Skipping Tasks Based on Ansible Conditionals

I’m actively refreshing my Ansible setup for both servers and desktops, running mostly Red Hat Enterprise Linux and CentOS Linux. Today’s quick tip is about the functionality that Ansible has for precise control of configuration management in such closely related distros.

How To Run an Ansible Task on a Specific OS Distribution

Ansible collects quite a few facts about each managed system; they are usually gathered at the very start of any playbook run (unless you decide to skip gathering facts).

A whole group of Ansible facts describes the OS distribution.

Here is what they look like, as confirmed from a freshly deployed CentOS 8 Stream VM:

greys@maverick:~/proj/ansible $ ansible stream -m setup| grep distribution
"ansible_distribution": "CentOS",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/redhat-release",
"ansible_distribution_file_variety": "RedHat",
"ansible_distribution_major_version": "8",
"ansible_distribution_release": "Core",
"ansible_distribution_version": "8.0",

We can use the very first one, called ansible_distribution. For CentOS, it says “CentOS”, but for Red Hat Enterprise Linux, it will be “RedHat”.
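
By the way, if you don’t want to grep through the whole setup output, the setup module accepts a filter parameter with a wildcard pattern – a quick sketch against the same “stream” host:

ansible stream -m setup -a "filter=ansible_distribution*"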

Example of using ansible_distribution

I have the following task below. It activates the RHEL 8 subscription using my account, but obviously it should only do this on Red Hat systems. For CentOS, I simply want to skip the task.

That’s why I’m checking ansible_distribution, and as per below – the code will only run if and when the distribution is specifically RedHat, and not CentOS like my “stream” VM:

- name: Register RHEL 8 against Tech Stack
  shell: "subscription-manager register --activationkey=rhel8 --org=100XXXXX --force"
  tags: 
    - rhel
  when:
    ansible_distribution == "RedHat" 
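
If you also want to guard against older major releases, when accepts a list of conditions (they are ANDed together) – a minimal sketch reusing the facts shown above:

  when:
    - ansible_distribution == "RedHat"
    - ansible_distribution_major_version == "8"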

That’s it! Even if I specify the --tags=rhel filter, ansible-playbook will skip this task based on the collected facts (stream is the VM hostname):

greys@maverick:~/proj/ansible $ ansible-playbook --tags=rhel vm.yaml
PLAY [Tech Stack baseline] ***
TASK [Gathering Facts] *
ok: [stream]
TASK [techstack : Register RHEL 8 against Tech Stack] 
skipping: [stream]
PLAY RECAP *
stream                     : ok=1    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0





CentOS 8 and CentOS Stream Released

Great news: CentOS 8 has been released. Even better – there’s now a step in between Fedora and RHEL, called CentOS Stream.

Have you tried them yet? I’ll be upgrading to CentOS 8 this week and am also thinking of downloading and installing CentOS Stream in a KVM VM.

Let me know what you think!





Attach Interface to Specific Firewall Zone in RHEL 8

RHEL 8

One of the first things I had to do on my recently built RHEL 8 PC was to move the primary network interface from the public (default) zone to the home zone – to make sure any firewall ports I open stay private enough.



How To List Which Zones and Interfaces are Active

Using the --get-active-zones option of the firewall-cmd command, it’s possible to confirm which zone the eno1 interface is in at the moment. It’s already in the home zone because I made the update earlier:

root@redhat:~ # firewall-cmd --get-active-zones
home
  interfaces: eno1
libvirt
  interfaces: virbr0
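
If you’re not sure which zones are even available on the system, or which one is the default, firewall-cmd can report that too – a quick sketch:

firewall-cmd --get-zones
firewall-cmd --get-default-zone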

Attach Interface to a Firewall Zone

Here’s how to move the specified interface into the zone we want:

root@redhat:~ # firewall-cmd --zone=home --change-interface=eno1
success

Just to show how it works, I’m going to move eno1 into the public zone and then back to the home one:

root@redhat:~ # firewall-cmd --zone=public --change-interface=eno1
success
root@redhat:~ # firewall-cmd --get-active-zones
libvirt
  interfaces: virbr0
public
  interfaces: eno1

Making Sure Firewall Changes Are Permanent

Don’t forget that after confirming a working firewall configuration, you need to re-run the same command with the --permanent option – this updates the necessary files so that your firewall changes survive a reboot:

root@redhat:~ # firewall-cmd --zone=home --change-interface=eno1 --permanent
The interface is under control of NetworkManager, setting zone to 'home'.
success
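
There’s also a dedicated query to double-check which zone an interface ended up in, both in the runtime and in the permanent configuration – a quick sketch:

firewall-cmd --get-zone-of-interface=eno1
firewall-cmd --permanent --zone=home --list-interfaces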

That’s it for today. I’m really enjoying RHEL 8 configuration and still have the feeling I’ve barely scratched the surface of all the new improvements this Red Hat Enterprise Linux release brings.





Hello, World in podman

RHEL 8

Turns out it’s not that easy to install Docker CE in RHEL 8 just yet. Well, maybe there’s no immediate need since RHEL 8 comes with its own containerization stack based on podman?



Hello, World in podman

podman provides comprehensive compatibility with the docker command – most options that aren’t Docker-specific are supported.

If you are familiar with the docker command syntax, give it a try by simply replacing docker with podman.
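
In fact, a common trick is to alias one command to the other so that existing muscle memory and scripts keep working – a tiny sketch:

alias docker=podman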

Let’s do the hello world exercise:

greys@redhat:~ $ podman run hello-world
Trying to pull registry.redhat.io/hello-world:latest…Failed
Trying to pull quay.io/hello-world:latest…Failed
Trying to pull docker.io/hello-world:latest…Getting image source signatures
Copying blob 1b930d010525: 977 B / 977 B [==================================] 0s
Copying config fce289e99eb9: 1.47 KiB / 1.47 KiB [==========================] 0s
Writing manifest to image destination
Storing signatures

Hello from Docker!

This message shows that your installation appears to be working correctly.

To try something more ambitious, you can run an Ubuntu container with:

$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:

https://hub.docker.com/

For more examples and ideas, visit:

https://docs.docker.com/get-started/

As you can see, podman searches the Red Hat and Quay image registries before moving on to the Docker registry, where it finally finds the hello-world image.

Run Ubuntu image in podman

And if we want to follow Docker’s advice and try running the Ubuntu Docker image, we’ll replace

docker run -it ubuntu bash

with

podman run -it ubuntu bash

… It just works:

greys@redhat:~ $ podman run -it ubuntu bash
Trying to pull registry.redhat.io/ubuntu:latest…Failed
Trying to pull quay.io/ubuntu:latest…Failed
Trying to pull docker.io/ubuntu:latest…Getting image source signatures
Copying blob 5667fdb72017: 25.45 MiB / 25.45 MiB [==========================] 3s
Copying blob d83811f270d5: 34.53 KiB / 34.53 KiB [==========================] 3s
Copying blob ee671aafb583: 850 B / 850 B [==================================] 3s
Copying blob 7fc152dfb3a6: 163 B / 163 B [==================================] 3s
Copying config 2ca708c1c9cc: 3.33 KiB / 3.33 KiB [==========================] 0s
Writing manifest to image destination
Storing signatures
root@686f0d85b4ad:/# uname -a
Linux 686f0d85b4ad 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
root@686f0d85b4ad:/# cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
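
Once you exit the container, the rest of the familiar docker-style commands work the same way – a quick sketch (the container ID below is just the one from my session above):

podman ps -a              # list all containers, including stopped ones
podman images             # list locally stored images
podman rm 686f0d85b4ad    # remove the stopped container by ID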

I think it’s pretty cool. I’ll definitely read up and post more about podman and containerization in Red Hat in the coming weeks.





Confirm Firewall Configuration in RHEL 8

List Firewall Rules in RHEL 8

I’m fascinated by the improvements and new features in RHEL 8, plus it’s a primary distro used in most corporate environments – so expect quite a number of posts related to it in the near future.

The default interface for managing the firewall in RHEL 8 is firewalld, and specifically the firewall-cmd command.

Show Active Zones in RHEL 8

There’s a concept of zones – security domains – in RHEL 8 firewalls. You assign each of the available network interfaces on your Red Hat Enterprise Linux system to one of these zones.

That’s why the first step is to confirm these zones, to see which ones are actively managing access for each network device:

root@rhel8:~ # firewall-cmd --get-active-zones
home
  interfaces: enp2s0
libvirt
  interfaces: virbr0

List All Rules for Firewall Zone in RHEL 8

I’m interested in the primary physical network interface – enp2s0. It belongs to the home zone as per the previous command, so that’s the zone we’ll list all the rules for:

root@rhel8:~ # firewall-cmd --list-all --zone=home
home (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp2s0
  sources: 
  services: cockpit dhcpv6-client mdns samba-client ssh
  ports: 5901/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

In the output above, note the additionally enabled port – 5901 is the one for VNC, which allows me to access the graphical desktop session on my RHEL 8 PC remotely.
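
For reference, opening a port like that is a single firewall-cmd call; run it a second time with --permanent to persist it – a minimal sketch:

firewall-cmd --zone=home --add-port=5901/tcp
firewall-cmd --zone=home --add-port=5901/tcp --permanent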

That’s it for today! Thanks for stopping by and talk soon!





Skip Gathering Facts in Ansible

Red Hat Ansible

There are Ansible playbooks which depend on the most up-to-date information found on each node – that’s where fact gathering is a much-needed help. But there are also simpler, predefined playbooks which don’t need fact gathering and can therefore gain performance if no facts are collected.

Why Fact Gathering in Ansible Takes Time

Fact gathering means Ansible runs a number of commands to confirm the most recent values for important indicators and parameters.

Run against my freshly installed RHEL 8 based PC, this takes roughly 4 seconds. Part of that may be down to how the RHEL setup is configured (and that it’s still a work in progress), but part of it is simply the sheer number of facts: more than 1000!

Typical Facts Collected By Ansible

This is not a complete list; I’m just giving examples to show why fact collection may be time-consuming:

  • hardware parameters of remote system
  • storage devices (types, models, sizes, capabilities)
  • filesystems and logical volume managers (objects, types, sizes)
  • OS distro information
  • network devices and full list of their capabilities
  • environment variables

Disable Fact Gathering in Ansible

Since I don’t really need to re-establish the hardware specs or logical volume layout of my RHEL 8 desktop every time I run some Ansible post-configuration, I decided to disable fact gathering and shave 4-5 seconds off the start of each playbook run.

Simply specify this at the top of your Ansible playbook:

gather_facts: no

In one of my playbooks, this is how it looks:

---
- name: Baseline
  hosts: desktops
  gather_facts: no
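
And if a play occasionally still needs facts, they can be gathered on demand with the setup module as an explicit task, instead of at every run – a minimal sketch of such a task:

  tasks:
    - name: Gather facts only when this play actually needs them
      setup: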

This really made a noticeable difference. Have fun!





Specify User per Task in Ansible

become_user per task in Ansible

Turns out, the become_user directive can be used not only for privilege escalation (running Ansible playbooks as root), but also for becoming any other user when you want certain tasks to run as that user instead of root.

Default Ansible Behavior for Running Tasks

I had the following piece of code, running the /home/greys/.dotfiles/install script. It didn’t run as intended, creating symlinks in the /root directory (because root is the user Ansible was running the task as):

- name: Create symlinks for dotfiles
  shell: /home/greys/.dotfiles/install
  register: dotfiles_result
  ignore_errors: yes
  tags: 
    - dotfiles

Specify User for an Ansible Task

The become_user parameter can be specified per task or per play, apparently. Here’s how you specify it per task – in my example, to run the Create symlinks for dotfiles task as my user greys:

- name: Create symlinks for dotfiles
  shell: /home/greys/.dotfiles/install
  register: dotfiles_result
  ignore_errors: yes
  become: yes
  become_user: greys
  tags: 
    - dotfiles
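
And if every task in a play should run as the same user, the same two directives can be set at the play level instead – a minimal sketch (the play name and hosts group are placeholders):

- name: Dotfiles for my desktops
  hosts: desktops
  become: yes
  become_user: greys
  tasks:
    - name: Create symlinks for dotfiles
      shell: /home/greys/.dotfiles/install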





How To Troubleshoot SELinux with Audit Logs

Audit Logs with SELinux Messages

I’m post-configuring a new RHEL 8 setup on my old PC and want to share some useful SELinux troubleshooting techniques.

How To Check Audit Logs for SELinux

I had a problem with SSH not accepting keys for login. Specifically, I wanted the keys to be in a non-standard /var/ssh/greys/authorized_keys location (instead of my home directory), but the SSH daemon would just ignore this file.

I double-checked permissions, restarted sshd and eventually realised that the issue must have been due to SELinux. So I went to inspect the audit logs.

Red Hat Enterprise Linux puts audit logs into the /var/log/audit directory. If you’re looking for SELinux issues, just grep for denied – it will show you everything that has recently been blocked:

root@rhel8:~ # grep denied /var/log/audit/*
type=AVC msg=audit(1567799177.932:3031): avc:  denied  { read } for  pid=24527 comm="sshd" name="authorized_keys" dev="dm-11" ino=26047253 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_t:s0 tclass=file permissive=0
 type=AVC msg=audit(1567799177.943:3033): avc:  denied  { read } for  pid=24527 comm="sshd" name="authorized_keys" dev="dm-11" ino=26047253 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_t:s0 tclass=file permissive=0
 type=AVC msg=audit(1567799177.956:3035): avc:  denied  { read } for  pid=24527 comm="sshd" name="authorized_keys" dev="dm-11" ino=26047253 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_t:s0 tclass=file permissive=0
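
The same AVC records can also be pulled with the ausearch tool from the audit package, which saves you from grepping raw log files – a quick sketch:

ausearch -m AVC -ts recent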

The likely problem is visible in the grep output above: the SSH daemon runs under the sshd_t context, but files in the /var/ssh/ directory inherited the standard var_t context.

Just to be sure, I checked the context on the default /home/greys/.ssh/authorized_keys file:

root@rhel8:~ # ls -alZ /home/greys/.ssh/authorized_keys 
 -rw-------. 1 greys greys unconfined_u:object_r:ssh_home_t:s0 95 Sep  6 20:28 /home/greys/.ssh/authorized_keys

That’s the answer! We need to change the /var/ssh/greys/authorized_keys file to the ssh_home_t context.

Updating SELinux context for a file

First, let’s change the SELinux context:

root@rhel8:~ # semanage fcontext -a -t ssh_home_t /var/ssh/greys/authorized_keys

… and now we relabel the actual file:

root@rhel8:~ # restorecon -Rv /var/ssh/greys/authorized_keys
Relabeled /var/ssh/greys/authorized_keys from system_u:object_r:var_t:s0 to system_u:object_r:ssh_home_t:s0
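
To double-check, the same ls -alZ command from earlier confirms the new context on the file:

ls -alZ /var/ssh/greys/authorized_keys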

That’s it – after that my logins using SSH keys started working just fine. Hope you find this example useful!





yum – List Installed Packages


CentOS and Red Hat Linux still make up the majority of my Linux servers, so now and then I have a Red Hat specific question to investigate. This time around, I’ve explored getting the list of installed packages using the yum command.

yum list installed

As hard as it may be to believe, the actual command I needed is this:

[greys@rhel8 ~]$ yum list installed

That’s right – type it word for word and yum will report the full list of packages installed on your system, along with package versions and the repository each package came from.
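
If you only care about one package, the same command accepts a package name or a glob as an argument – a quick sketch:

yum list installed bash
yum list installed 'NetworkManager*'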

Here’s what the full listing on my Red Hat Enterprise Linux 8 beta VM reports:

[greys@rhel8 ~]$ yum list installed | more
Not root, Subscription Management repositories not updated
2018-10-28 13:33:38,137 [WARNING] yum:31323:MainThread @logutil.py:141 - logging already initialized
Not root, Subscription Management repositories not updated
Installed Packages
GConf2.x86_64 3.2.6-22.el8 @rhel-8-for-x86_64-appstream-beta-rpms
ModemManager.x86_64 1.8.0-1.el8 @rhel-8-for-x86_64-baseos-beta-rpms
ModemManager-glib.x86_64 1.8.0-1.el8 @rhel-8-for-x86_64-baseos-beta-rpms
NetworkManager.x86_64 1:1.14.0-5.el8 @anaconda
NetworkManager-adsl.x86_64 1:1.14.0-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
NetworkManager-bluetooth.x86_64 1:1.14.0-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
NetworkManager-libnm.x86_64 1:1.14.0-5.el8 @anaconda
NetworkManager-ovs.x86_64 1:1.14.0-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
NetworkManager-team.x86_64 1:1.14.0-5.el8 @anaconda
NetworkManager-tui.x86_64 1:1.14.0-5.el8 @anaconda
NetworkManager-wifi.x86_64 1:1.14.0-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
NetworkManager-wwan.x86_64 1:1.14.0-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
PackageKit.x86_64 1.1.10-6.el8 @rhel-8-for-x86_64-appstream-beta-rpms
PackageKit-command-not-found.x86_64 1.1.10-6.el8 @rhel-8-for-x86_64-appstream-beta-rpms
PackageKit-glib.x86_64 1.1.10-6.el8 @rhel-8-for-x86_64-appstream-beta-rpms
PackageKit-gstreamer-plugin.x86_64 1.1.10-6.el8 @rhel-8-for-x86_64-appstream-beta-rpms
PackageKit-gtk3-module.x86_64 1.1.10-6.el8 @rhel-8-for-x86_64-appstream-beta-rpms
abattis-cantarell-fonts.noarch 0.0.25-4.el8 @rhel-8-for-x86_64-appstream-beta-rpms
accountsservice.x86_64 0.6.50-5.el8 @rhel-8-for-x86_64-appstream-beta-rpms
accountsservice-libs.x86_64 0.6.50-5.el8 @rhel-8-for-x86_64-appstream-beta-rpms
acl.x86_64 2.2.53-1.el8 @anaconda
adcli.x86_64 0.8.2-2.el8 @rhel-8-for-x86_64-baseos-beta-rpms
adobe-mappings-cmap.noarch 20171205-3.el8 @rhel-8-for-x86_64-appstream-beta-rpms
adobe-mappings-cmap-deprecated.noarch 20171205-3.el8 @rhel-8-for-x86_64-appstream-beta-rpms
...

Grep yum list installed Output by Repository

The output makes it very easy to grep for packages that were installed from the same repository, like rhel-8-for-x86_64-baseos-beta-rpms in this example:

[greys@rhel8 ~]$ yum list installed | grep rhel-8-for-x86_64-baseos-beta-rpms | more
2018-10-28 13:40:14,740 [WARNING] yum:31405:MainThread @logutil.py:141 - logging already initialized
ModemManager.x86_64 1.8.0-1.el8 @rhel-8-for-x86_64-baseos-beta-rpms
ModemManager-glib.x86_64 1.8.0-1.el8 @rhel-8-for-x86_64-baseos-beta-rpms
NetworkManager-adsl.x86_64 1:1.14.0-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
NetworkManager-bluetooth.x86_64 1:1.14.0-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
NetworkManager-ovs.x86_64 1:1.14.0-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
NetworkManager-wifi.x86_64 1:1.14.0-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
NetworkManager-wwan.x86_64 1:1.14.0-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
adcli.x86_64 0.8.2-2.el8 @rhel-8-for-x86_64-baseos-beta-rpms
at.x86_64 3.1.20-11.el8 @rhel-8-for-x86_64-baseos-beta-rpms
attr.x86_64 2.4.48-3.el8 @rhel-8-for-x86_64-baseos-beta-rpms
augeas-libs.x86_64 1.10.1-3.el8 @rhel-8-for-x86_64-baseos-beta-rpms
avahi.x86_64 0.7-18.el8 @rhel-8-for-x86_64-baseos-beta-rpms
avahi-glib.x86_64 0.7-18.el8 @rhel-8-for-x86_64-baseos-beta-rpms
avahi-libs.x86_64 0.7-18.el8 @rhel-8-for-x86_64-baseos-beta-rpms
bash-completion.noarch 1:2.7-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
bc.x86_64 1.07.1-5.el8 @rhel-8-for-x86_64-baseos-beta-rpms
binutils.x86_64 2.30-49.el8 @rhel-8-for-x86_64-baseos-beta-rpms
blktrace.x86_64 1.2.0-9.el8 @rhel-8-for-x86_64-baseos-beta-rpms
bluez.x86_64 5.50-1.el8 @rhel-8-for-x86_64-baseos-beta-rpms
bluez-libs.x86_64 5.50-1.el8 @rhel-8-for-x86_64-baseos-beta-rpms
bluez-obexd.x86_64 5.50-1.el8 @rhel-8-for-x86_64-baseos-beta-rpms
bolt.x86_64 0.4-1.el8 @rhel-8-for-x86_64-baseos-beta-rpms
bpftool.x86_64 4.18.0-32.el8 @rhel-8-for-x86_64-baseos-beta-rpms
bubblewrap.x86_64 0.3.0-1.el8 @rhel-8-for-x86_64-baseos-beta-rpms
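
For scripting, the same information is also available straight from the RPM database, without yum’s repository metadata or warning headers – a minimal sketch:

rpm -qa | sort | grep -i networkmanager
rpm -q bash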

That’s it for today!
