Tower 4.0 for Mac

Tower 4.0 is out!

Great news! Tower (I still keep calling it Git Tower) recently released version 4.0, with a number of bug fixes and a great new feature.

I’m not a software developer, but I find myself writing and managing more and more code in recent years – mostly infrastructure as code, but also Python scripts for my mini-projects.

I’ve been a user of Tower for the past 2 years and must say it’s a pleasure to use. I tried other GUI solutions for working with git, but on macOS Tower is so good there’s no competition.

Improvements in Tower 4.0

The biggest thing is the massively improved undo function – Cmd+Z now helps with undoing quite a few accidental changes in your git environment.

For my (fairly basic) level of git knowledge, here are the common mistakes that Cmd+Z will help with (rough plain-git equivalents are sketched after the list):

  • Undo deleting a branch or tag
  • Undo deleting files
  • Undo committing changes
  • Undo publishing a branch on a remote
  • Undo staging/unstaging changes
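
For reference, here’s roughly what undoing each of these would look like in plain git – generic command sketches (branch and file names are placeholders), not what Tower actually runs under the hood:

git reflog                            # find the commit a deleted branch or tag pointed at
git branch feature-x <commit-sha>     # recreate a deleted branch at that commit
git checkout -- path/to/file          # restore a deleted or modified file
git reset --soft HEAD~1               # undo the last commit, keeping its changes staged
git push origin --delete feature-x    # un-publish a branch from a remote
git restore --staged path/to/file     # unstage a file (git 2.23+)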





Use visudo to Check SUDO Config Syntax

syntax check with visudo -c

I’m working on a longer post about editing sudoers with visudo (or editing /etc/sudoers directly – you should avoid this if possible), but for now here’s just a quick note on a piece of visudo functionality that I find really useful.

IMPORTANT: if possible, edit sudoers files from an interactive root shell – meaning you are already root, so there’s a chance to troubleshoot if something goes wrong.

Two Main Ways of Using visudo

The primary usage of visudo is interactive: you run the command and it helps you edit the /etc/sudoers file.

The secondary usage is a syntax check of all the sudoers config – that’s what I’m going to show today.

Use visudo to Check Config Syntax

Run visudo with the -c option to have it check all the sudo config files – the /etc/sudoers file and any includes from the /etc/sudoers.d directory:

greys@becky:~ # visudo -c
/etc/sudoers: parsed OK
/etc/sudoers.d/010_at-export: parsed OK
/etc/sudoers.d/010_pi-nopasswd: parsed OK
/etc/sudoers.d/README: parsed OK

How Broken Syntax is Reported by visudo

root@becky:~ # visudo -c
/etc/sudoers: syntax error near line 10 <<<
       parse error in /etc/sudoers near line 10

As noted above, I’m running visudo from an interactive root shell – so even though sudoers is broken in this example, I can still fix it by editing the file directly (because I’m still root).

In the example above, I need to vi /etc/sudoers and check line 10 of the file.

IMPORTANT: once changes are made, re-run visudo -c to make sure the configs are correct now. Do NOT leave your root session yet – log into the same server separately and try sudo commands to confirm everything works.
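
By the way, visudo can also check just one file rather than the whole config – handy when you’re working on a single include. Here’s what that looks like, using one of the files from the earlier listing:

root@becky:~ # visudo -cf /etc/sudoers.d/010_pi-nopasswd
/etc/sudoers.d/010_pi-nopasswd: parsed OK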





SSH Reference Mindmap

I haven’t shared this on Unix Tutorial before, but I’m a huge fan of mindmaps. I create and use them all the time for any process or knowledge that I am working with.

It seems to me that all the Unix Reference pages I’m creating will greatly benefit from mindmaps, so here’s an example of what I mean:

SSH Reference Mindmap

Why I Like Mindmaps

  • it’s a great way to mentally place certain pieces of knowledge where they belong – so that you understand how certain topics relate to each other, which means you’ll remember them better
  • a mindmap is an awesome and very quick way to refresh your knowledge about something – even without specifics like port numbers and command lines you can still learn, remember and recall a lot
  • a mindmap is invaluable for highlighting gaps in knowledge – as you place broad topics onto a mindmap branch, you often realise that you need to learn more before that section is complete
  • it’s a great learning tool – I can just add questions about things I don’t know and come back later to update them with newly learned pieces of information

Let me know what you think!

PS: I’m preparing my Unix Tutorial Patreon page this month – so check it out and let me know how I can make Unix Tutorial better to help you learn and succeed. It will mean a lot if you decide to become a patron!





Basic Kubernetes Cluster with 2 VMs

I’m researching for my next Unix Tutorial Project, so today I’m spinning up two new Ubuntu 19.10 VMs and configuring them as Kubernetes Master and Kubernetes Node, accessible from my macOS desktop.

This is just a bunch of notes on the topic, the project will have the full procedure (and a YouTube video, if everything works out as I expect).

NOTE: I’m using the install instructions supplied by Docker and Google for their packages. Apparently, there are a few more ways and packaging systems to get similar results on Ubuntu specifically.

Step 1: Spin Up the 2 VMs

I chose Ubuntu, but any Linux capable of running Docker should be fine.

The plan is to have one VM as Kubernetes Master (k8master) and another VM as Kubernetes Node (k8node1) – that’s where our containers will actually run.

I’m using Parallels on macOS – it has an express install feature, meaning you just point it at an ISO and it does an unattended Ubuntu install. Really cool!

Step 2: Configure Host-Only Networking

Now we need to shut down both VMs and add a host-only network adapter to each – this will be used for cluster communication and for managing k8s remotely.

IMPORTANT: this is an additional interface. The primary one is separate and used for Internet access (because we need it to install software).
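
Once both VMs are back up, it’s easy to confirm the new adapter and its address from inside each VM (interface names will vary depending on your hypervisor):

greys@k8master:~$ ip -4 addr show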

Step 3: Install Docker and Kubernetes on both VMs

Pretty standard procedure, following official install guides. This needs to be done on both Ubuntu 19.10 VMs.

The only catch is that Ubuntu 19.10 (codename eoan in repo URLs) is not properly supported yet, so I ended up using the previous distro codename (disco) in the Docker URL:

$ echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu disco stable" | sudo tee -a /etc/apt/sources.list.d/docker.list
$ echo "deb https://apt.kubernetes.io/ kubernetes-eoan main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
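
For completeness: the official guides also have you import each repository’s GPG key (ideally before the apt-get update above) – at the time of writing, these were the commands:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -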

Install Docker

$ sudo apt-get install docker-ce
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
aufs-tools cgroupfs-mount containerd.io docker-ce-cli git git-man liberror-perl pigz
Suggested packages:
git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-cvs git-mediawiki git-svn
The following NEW packages will be installed:
aufs-tools cgroupfs-mount containerd.io docker-ce docker-ce-cli git git-man liberror-perl pigz
0 upgraded, 9 newly installed, 0 to remove and 175 not upgraded.
Need to get 90.5 MB of archives.
After this operation, 418 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

Install k8s (Kubernetes)

$ sudo apt-get install -y kubelet kubeadm kubectl
 Reading package lists… Done
 Building dependency tree
 Reading state information… Done
 The following additional packages will be installed:
   conntrack cri-tools ebtables ethtool kubernetes-cni socat
 Suggested packages:
   nftables
 The following NEW packages will be installed:
   conntrack cri-tools ebtables ethtool kubeadm kubectl kubelet kubernetes-cni socat
 0 upgraded, 9 newly installed, 0 to remove and 175 not upgraded.
 Need to get 51.8 MB of archives.
 After this operation, 273 MB of additional disk space will be used.

Step 4: Setup Kubernetes Cluster

This means starting a master node:

greys@k8master:~$ sudo kubeadm init --pod-network-cidr=10.37.0.0/16 --apiserver-advertise-address=10.37.129.3

and then, using the command from the output of the previous step, joining Kubernetes node 1 to the cluster:

root@k8node1:/# kubeadm join 10.37.129.3:6443 --token 2cqf5d.ykwwsripsqe0s530     --discovery-token-ca-cert-hash sha256:418c2ef3a98d73cddaea8b3470d7b710c384f1644b87785e7e907e8ef44a2193
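
kubeadm init also prints a few follow-up commands for the master node – to run kubectl as a regular user (like I do below), you copy the admin kubeconfig into your home directory, roughly like this:

greys@k8master:~$ mkdir -p $HOME/.kube
greys@k8master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
greys@k8master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config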

This is how we’d verify status:

greys@k8master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-r567t           0/1     Pending   0          6m38s
kube-system   coredns-6955765f44-wj4vw           0/1     Pending   0          6m38s
kube-system   etcd-k8master                      1/1     Running   0          6m50s
kube-system   kube-apiserver-k8master            1/1     Running   0          6m50s
kube-system   kube-controller-manager-k8master   1/1     Running   0          6m50s
kube-system   kube-proxy-g5ntg                   1/1     Running   0          5m4s
kube-system   kube-proxy-m24gq                   1/1     Running   0          6m38s
kube-system   kube-scheduler-k8master            1/1     Running   0          6m50s

Confirm status of nodes:

greys@k8master:~$ kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
k8master   NotReady   master   12m   v1.17.3
k8node1    NotReady   <none>   10m   v1.17.3
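
Both nodes report NotReady and the coredns pods stay Pending because no pod network add-on is installed yet – that’s the next thing kubeadm expects. Applying a network plugin like Flannel from the master should sort it (URL was correct at the time of writing; worth double-checking against the Flannel docs):

greys@k8master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml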

Step 5: Connect to Kubernetes Master from my desktop

This step will need some consideration, because I already have a local Kubernetes setup on my desktop (via docker-desktop) which I’d like to keep. So I suspect I’ll need to add another configuration.
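
kubectl supports multiple config files via the KUBECONFIG variable, so one likely approach is something like this – a sketch, where k8master.conf is a hypothetical name for a copy of the cluster’s admin.conf, and kubernetes-admin@kubernetes is kubeadm’s default context name:

greys@maverick:~ $ export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/k8master.conf
greys@maverick:~ $ kubectl config get-contexts
greys@maverick:~ $ kubectl config use-context kubernetes-admin@kubernetes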

That’s all for now!





Debian 10.3 Released

Debian Linux

Pretty cool! I almost missed that Debian 10.3 was released last week. This is a corrective release, meaning it’s about improving stability and security rather than introducing major new features.

Upgrade Debian 10.2 to 10.3

I only have one dedicated server running Debian 10, and will possibly reinstall even that – turns out I’m much more used to CentOS servers than anything else.

BUT this server is still there, so why not upgrade it?

Step 1: Update Debian repositories

First, we run apt-get update. I had never noticed it before, but apparently this command is clever enough to recognize that the InRelease version changed from 10.2 to 10.3 (see the last line of the output):

root@srv:~ # apt-get update
 Get:1 http://mirrors.online.net/debian buster InRelease [122 kB]
 Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
 Get:3 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
 Ign:4 http://mirrors.online.net/debian buster/non-free Sources
 Ign:5 http://mirrors.online.net/debian buster/main Sources
 Ign:6 http://mirrors.online.net/debian buster/main amd64 Packages
 Ign:7 http://mirrors.online.net/debian buster/main Translation-en
 Ign:8 http://mirrors.online.net/debian buster/non-free amd64 Packages
 Ign:9 http://mirrors.online.net/debian buster/non-free Translation-en
 Get:4 http://mirrors.online.net/debian buster/non-free Sources [86.3 kB]
 Get:5 http://mirrors.online.net/debian buster/main Sources [7,832 kB]
 Get:10 http://security.debian.org/debian-security buster/updates/main Sources [102 kB]
 Get:11 http://security.debian.org/debian-security buster/updates/main amd64 Packages [176 kB]
 Get:12 http://security.debian.org/debian-security buster/updates/main Translation-en [92.8 kB]
 Get:6 http://mirrors.online.net/debian buster/main amd64 Packages [7,907 kB]
 Get:7 http://mirrors.online.net/debian buster/main Translation-en [5,970 kB]
 Get:8 http://mirrors.online.net/debian buster/non-free amd64 Packages [88.0 kB]
 Get:9 http://mirrors.online.net/debian buster/non-free Translation-en [88.7 kB]
 Fetched 22.6 MB in 3s (6,828 kB/s)
 Reading package lists… Done
 N: Repository 'http://mirrors.online.net/debian buster InRelease' changed its 'Version' value from '10.2' to '10.3'

Step 2: Upgrade packages and Debian distro

apt-get dist-upgrade brings all the packages up to the current release of your Debian/Ubuntu distro. In my case, that meant 44 upgraded packages plus a new kernel:

root@srv:~ # apt-get dist-upgrade
 Reading package lists… Done
 Building dependency tree
 Reading state information… Done
 Calculating upgrade… Done
 The following NEW packages will be installed:
   linux-headers-4.19.0-8-amd64 linux-headers-4.19.0-8-common linux-image-4.19.0-8-amd64
 The following packages will be upgraded:
   base-files e2fsprogs git-man libboost-iostreams1.67.0 libboost-system1.67.0 libcom-err2 libcups2 libcupsimage2 libext2fs2 libgnutls30 libidn2-0
   libnss-systemd libopenjp2-7 libpam-systemd libpython3.7 libpython3.7-dev libpython3.7-minimal libpython3.7-stdlib libsasl2-2 libsasl2-modules
   libsasl2-modules-db libss2 libsystemd0 libtiff5 libtimedate-perl libudev1 linux-compiler-gcc-8-x86 linux-headers-amd64 linux-image-amd64 linux-kbuild-4.19
   linux-libc-dev openssh-client openssh-server openssh-sftp-server python-apt python-apt-common python3-apt python3.7 python3.7-dev python3.7-minimal sudo
   systemd systemd-sysv udev
 44 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
 Need to get 129 MB of archives.
 After this operation, 325 MB of additional disk space will be used.
 Do you want to continue? [Y/n] y
...

Step 3: Reboot (when convenient)

You don’t have to reboot immediately. The biggest reason to do it is to start using the new version of the Linux kernel, but there’s rarely anything in a minor kernel upgrade that justifies immediate downtime.

Here’s the kernel version before reboot:

root@srv:~ # uname -a
Linux srv.ts.fm 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64 GNU/Linux

When possible, you should do a graceful reboot:

root@srv:~ # shutdown -r now

After the system is back online, we can see that it’s running the Debian 10.3 kernel now:

greys@srv:~ $ uname -a
Linux srv.ts.fm 4.19.0-8-amd64 #1 SMP Debian 4.19.98-1 (2020-01-26) x86_64 GNU/Linux
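
The Debian release itself can be confirmed directly, too:

greys@srv:~ $ cat /etc/debian_version
10.3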





Systemd Startup Times with systemd-analyze

systemd-analyze

Systemd had the goal of optimising startup times from the very beginning, so it’s no surprise that it comes with a tool for precise startup timing and analysis: systemd-analyze.

Systemd Startup Analysis

Run without command line parameters, systemd-analyze shows a summary of how quickly your system booted Linux last time:

greys@srv:~ $ systemd-analyze
Startup finished in 2.743s (kernel) + 2.654s (userspace) = 5.397s
graphical.target reached after 2.641s in userspace

See Which Systemd Service Took Longest to Start

It gets better! You can specify the blame parameter, and systemd-analyze will show you exactly how long each of the startup tasks took – really useful for troubleshooting:

greys@srv:~ $ systemd-analyze blame
1.983s docker.service
277ms certbot.service
276ms man-db.service
223ms dev-md0.device
210ms apt-daily-upgrade.service
208ms apt-daily.service
194ms logrotate.service
101ms systemd-fsck@dev-disk-by\x2duuid-cea13f85\x2d61fa\x2d414f\x2d9c2f\x2d1e48ec41ad25.service
69ms chrony.service
66ms systemd-journald.service
61ms ssh.service
59ms systemd-remount-fs.service
57ms systemd-fsck@dev-disk-by\x2duuid-cfceff76\x2df739\x2d49e2\x2da4d1\x2d02472e5457f9.service
46ms systemd-udev-trigger.service
41ms keyboard-setup.service
41ms systemd-logind.service
40ms containerd.service
36ms networking.service
31ms apparmor.service
27ms user@1000.service
21ms systemd-tmpfiles-setup.service
21ms rsyslog.service
19ms systemd-update-utmp.service
15ms storage.mount
14ms var-log.mount
14ms systemd-udevd.service
12ms dev-disk-by\x2duuid-799ad160\x2d8a59\x2d4c80\x2db78a\x2d7e3986876964.swap
11ms systemd-user-sessions.service
10ms dev-disk-by\x2duuid-261bd6ac\x2d2f4c\x2d475b\x2da4e4\x2db2548368e0fa.swap
10ms systemd-update-utmp-runlevel.service
9ms systemd-journal-flush.service
9ms polkit.service
9ms dev-disk-by\x2duuid-d45708da\x2d06f4\x2d41d6\x2daabe\x2decd87fb5edbe.swap
8ms systemd-tmpfiles-clean.service
8ms user-runtime-dir@1000.service
7ms dev-mqueue.mount
7ms sys-kernel-debug.mount
7ms systemd-tmpfiles-setup-dev.service
6ms systemd-modules-load.service
6ms console-setup.service
6ms systemd-sysusers.service
4ms systemd-sysctl.service
4ms kmod-static-nodes.service
4ms dev-hugepages.mount
4ms systemd-random-seed.service
2ms ifupdown-pre.service
1ms docker.socket
greys@srv:~ $
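
Two more systemd-analyze verbs worth knowing: critical-chain prints the chain of units that gated reaching the default target, and plot renders the whole startup sequence as an SVG timeline:

greys@srv:~ $ systemd-analyze critical-chain
greys@srv:~ $ systemd-analyze plot > bootup.svg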





Project: Password Protect a Website with nginx

nginx logo

I needed to keep a few older websites online for a little while longer, but didn’t want to leave them wide open in case their older CMS systems were vulnerable – so I decided to protect them with a password.

What is nginx?

nginx (pronounced Engine-X) is a web server, reverse proxy and caching solution powering a massive portion of the Internet’s websites today. It’s a lightweight web server with a non-blocking implementation, meaning it can serve impressive amounts of traffic with humble resource requirements.

nginx was acquired by F5 in 2019.

I’ll be writing a lot more about nginx in 2020, simply because I’m finally catching up with my dedicated hosts infrastructure and will be getting the time to document my setup and best practices.

Password Protecting in nginx

There are a few steps to protecting a website using nginx (the steps are similar in the Apache web server, just implemented differently):

  1. Create or update the passwords file
  2. Decide on the username and password
  3. Generate the password hash and add an entry to the passwords file
  4. Update the webserver configuration to enable password protection

Because websites are configured as directory locations, you have a choice of protecting the whole website like www.unixtutorial.org or just a part (subdirectory) of it, like www.unixtutorial.org/images.

INTERESTING: even though it’s commonly referred to as password protecting a website, what actually happens is you protect it with a username and a password. So when you try to open a protected website, you get a prompt like this, right there in your browser:

Password protection prompt

Password file and username/password Configuration

Most of the time, website access is controlled by files named htpasswd. You either create a default password file at /etc/nginx/htpasswd, or a website-specific one like /etc/nginx/unixtutorial.htpasswd.

You can create a file using touch command:

# touch /etc/nginx/unixtutorial.htpasswd

Or better yet, use the htpasswd command to do it. Because htpasswd is part of the Apache tools, you may have to install it first:

$ sudo yum install httpd-tools
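
That’s the CentOS/RHEL way; on Debian/Ubuntu the same tool ships in the apache2-utils package:

$ sudo apt-get install apache2-utils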

When you run the htpasswd command, you specify two parameters: the password file name and the username you’ll use for access.

If the password file is missing, you’ll be notified like this:

$ sudo htpasswd /etc/nginx/htpasswd unixtutorial 
htpasswd: cannot modify file /etc/nginx/htpasswd; use '-c' to create it.

And yes, adding the -c option will get the file created:

$ sudo htpasswd -c /etc/nginx/htpasswd unixtutorial
New password:
Re-type new password:
Adding password for user unixtutorial

Now, if we cat the file, it will show the unixtutorial user and the password hash for it:

$ cat /etc/nginx/htpasswd
unixtutorial:$apr1$bExTryjo/$uxRop/uv5UwXvWl4EM5gv0

IMPORTANT: although this file doesn’t contain actual passwords, only their hashes, it can still be used to guess your passwords given enough computing power – so take the usual measures to protect access to this file.

Update Website Configuration with Password Protection

I’ve got the following setup for this old website in my example:

server {
    listen      *:80;
    server_name forum.reys.net;
    keepalive_timeout    60;

    access_log /var/log/nginx/forum.reys.net/access.log multi_vhost;
    error_log /var/log/nginx/forum.reys.net/error.log;

    location / {
        include "/etc/nginx/includes/gzip.conf";
        proxy_pass  http://172.31.31.47:80;

        include "/etc/nginx/includes/proxy.conf";
        include "/etc/nginx/includes/headers.conf";
    }
}

Protection is done on the location level. In this example, location / means my whole website is protected.

So right in front of the proxy_pass entry, I’ll add my password protection part:

auth_basic "Restricted";
auth_basic_user_file /etc/nginx/htpasswd;

As you can see, we’re referring to the password file we created earlier. The auth_basic “Restricted” part lets you configure the specific message (instead of the word Restricted) that will be shown with the username/password prompt.

That’s how the password protected part will look:

location / {
    include "/etc/nginx/includes/gzip.conf";
    proxy_pass  http://172.31.31.47:80;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;

    include "/etc/nginx/includes/proxy.conf";
    include "/etc/nginx/includes/headers.conf";
}

Save the file and restart nginx:

$ sudo service nginx restart
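
Before (or instead of) a full restart, it’s worth validating the configuration and doing a graceful reload:

$ sudo nginx -t
$ sudo nginx -s reload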

Now the website https://forum.reys.net is password protected!





Host Key Verification Failed

Host key verification failed

When reinstalling servers with a new version of the operating system, or simply reprovisioning VMs under the same hostname, you’ll eventually run into this Host Key Verification Failed scenario. It’s easy enough to fix, once you’re positive it’s a valid infrastructure change.

Host Key Verification

Host key verification happens when you attempt to access a remote server over SSH. Before verifying whether you have a user on the remote server and whether your password or SSH key match that remote user, the SSH client must do basic sanity checks at a lower level.

Specifically, the SSH client checks whether you attempted connecting to the remote server before, and whether anything changed since last time (it shouldn’t have).

Server (host) keys must not change during the normal life cycle of a server – they are generated at server/VM build stage (when OpenSSH starts up for the first time) and remain the same: they are the server’s identity.

This means that if your SSH client has one fingerprint stored for a particular server and then suddenly detects a different one, it’s flagged as an issue: at best, you’re looking at a new, legit server replacement with the same hostname; at worst, someone’s trying to intercept your connection and/or pretend to be your server.



Host Key Verification Failed

Here’s how I get this error on my MacBook (s1.unixtutorial.org doesn’t really exist, it’s just a hostname I’m using as an example):

greys@maverick:~ $ ssh s1.unixtutorial.org
Warning: the ECDSA host key for 's1.unixtutorial.org' differs from the key for the IP address '51.159.18.142'
Offending key for IP in /Users/greys/.ssh/known_hosts:590
Matching host key in /Users/greys/.ssh/known_hosts:592
Are you sure you want to continue connecting (yes/no)?

At this stage your default answer should always be “no”, followed by an inspection of the known_hosts file to confirm what happened and why the identity appears to have changed.

If you answer no, you’ll get the Host Key Verification Failed error:

greys@maverick:~ $ ssh s1.unixtutorial.org
Warning: the ECDSA host key for 's1.unixtutorial.org' differs from the key for the IP address '51.159.18.142'
Offending key for IP in /Users/greys/.ssh/known_hosts:590
Matching host key in /Users/greys/.ssh/known_hosts:592
Are you sure you want to continue connecting (yes/no)? no
Host key verification failed.

How To Solve Host Key Verification Errors

The output above actually tells you what to do: inspect the known_hosts file and look at lines 590 and 592 specifically. One of them is likely obsolete, and if you remove it the issue will go away.

Specifically, if you (like me) just reinstalled a dedicated server or VM with a new OS but kept the original hostname, then the issue is expected (the new server definitely generated a new host key), so the solution is indeed to remove the old key from the known_hosts file and re-attempt the connection.

First, I edited the /Users/greys/.ssh/known_hosts file and removed line 590, which looked something like the example below. You simply find the line with the given number, or search for the hostname you just tried to ssh into (s1.unixtutorial.org in my case):

s1.unixtutorial.org,51.159.xx.yy ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlkzdHAyNTYAAAAxyzAgBPbBCXCL5w8
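
Alternatively, instead of editing known_hosts by hand, ssh-keygen can remove all keys recorded for a given host (and, separately, its IP address) in one go:

greys@maverick:~ $ ssh-keygen -R s1.unixtutorial.org
greys@maverick:~ $ ssh-keygen -R 51.159.xx.yy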

We can try reconnecting now, answer yes and connect to the server:

greys@maverick:~ $ ssh s1.unixtutorial.org
The authenticity of host 's1.unixtutorial.org (51.159.xx.yy)' can't be established.
ECDSA key fingerprint is SHA256:tviW39xN2M+4eZOUGi8UFvBZoHKaLaijBA581Nrhjac.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 's1.unixtutorial.org,51.159.xx.yy' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Fri Feb  7 21:18:35 2020 from unixtutorial.org
[greys@s1 ~]$

As you can see, the output now makes a lot more sense: our SSH client can’t establish the authenticity of the remote server s1.unixtutorial.org – because we removed any mention of that server from our known_hosts file in the previous step. Answering yes re-adds the info about s1.unixtutorial.org, so any later SSH sessions will work just fine:

greys@maverick:~ $ ssh s1.unixtutorial.org
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Sat Feb  8 18:31:39 2020 from 93.107.36.193
[greys@s1 ~]$

Copying Host Keys to New Server

I should note that in some cases your setup or organisation will require the same host keys to be kept even after a server reinstall. In that case, you’ll need the last known backup of the old server to grab the SSH host keys from, so they can be re-deployed onto the new server – I’ll show/explain this in one of the future posts.





Resume Downloads with curl

Resuming downloads with curl

If you’re on an unstable WiFi hotspot, or simply forgot about a curl download and shut down your laptop, there’s a really cool thing to try: resuming the download from where you left off.

How To Download a File with curl

I’m downloading the Solus 4.1 release – a 1.7GB ISO image. Here’s the command line that makes curl download the file and save it to a local file (note the -O option):

greys@xps:~/Downloads/try $ curl -O http://solus.veatnet.de/iso/images/4.1/Solus-4.1-Budgie.iso  

How To Stop Download with curl

Now, if we press Ctrl+C to stop the download, we’ll end up with a partially downloaded file:

greys@xps:~/Downloads $ curl -O http://solus.veatnet.de/iso/images/4.1/Solus-4.1-Budgie.iso
   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current                                         
                                  Dload  Upload   Total   Spent    Left  Speed                                           
   3 1692M    3 58.8M    0     0  2161k      0  0:13:21  0:00:27  0:12:54 4638k^C  

greys@xps:~/Downloads $ ls -la Solus*iso
-rw-r--r-- 1 greys 102318080 Feb 7 22:58 Solus-4.1-Budgie.iso
greys@xps:~/Downloads $ du -sh Solus-4.1-Budgie.iso
98M Solus-4.1-Budgie.iso

How To Resume Download with curl

To resume the download (rather than restart and download the whole file again), simply use the -C option. It’s meant to take a specific offset in bytes, but it also works if you specify “-”, in which case curl looks at the existing file and works out the offset automatically.

That’s it, you can let the download complete now:

greys@xps:~/Downloads $ curl -C - -O http://solus.veatnet.de/iso/images/4.1/Solus-4.1-Budgie.iso                        

** Resuming transfer from byte position 102318080                                                                       

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current                                         

                                 Dload  Upload   Total   Spent    Left  Speed                                           

100 1595M  100 1595M    0     0  2868k      0  0:09:29  0:09:29 --:--:-- 1766k         

Here’s my resulting file:

-rw-r--r-- 1 greys 1774911488 Feb  7 23:08 Solus-4.1-Budgie.iso
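
For reference, -C also accepts an explicit byte offset instead of “-” – for example, the size of the partial file shown by ls earlier:

greys@xps:~/Downloads $ curl -C 102318080 -O http://solus.veatnet.de/iso/images/4.1/Solus-4.1-Budgie.iso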

Don’t Forget to Compare File Checksum!

Checking the checksum of a newly downloaded ISO image is always good practice, but it becomes a must when you’re resuming downloads: in addition to ensuring you got the same ISO image the software distributors intended, you get assurance that your resumed download is intact and fully usable.

I downloaded the Solus 4.1 SHA256 checksum from the same Solus Downloads page and will use the sha256sum command to generate the checksum of the ISO file. Obviously, both checksums must match:

greys@xps:~/Downloads $ cat Solus-4.1-Budgie.iso.sha256sum 
4bf00f2f50e7024a71058c50c24a5706f9c18f618c013b8a819db33482577d17  Solus-4.1-Budgie.iso
greys@xps:~/Downloads $ sha256sum Solus-4.1-Budgie.iso
4bf00f2f50e7024a71058c50c24a5706f9c18f618c013b8a819db33482577d17  Solus-4.1-Budgie.iso

That’s it for today! I can’t wait to try Solus 4.1 – will post about it shortly.





Run Ansible Tasks for Specific OS Release Version

Red Hat Ansible

I already know (and have explained before) how easy it is to make an Ansible task run only on a specific Linux distro family (Debian/Ubuntu or RedHat/CentOS), but recently I needed to go even further: limit certain Ansible tasks to specific RHEL releases.

How To Run Ansible Tasks for RedHat or Debian

First, let me remind you how to specify that a task should be run for a certain distro family and not the others:

- name: Disable IPv6
  template:
    src: templates/disable-ipv6.conf.j2
    dest: /etc/sysctl.d/disable-ipv6.conf
    mode: 0660
    backup: yes
  notify:
    - disable ipv6
  tags:
    - fix
    - sysctl
    - noipv6
  when: ansible_os_family == 'RedHat'

- name: Debian | blacklist ipv6 in modprobe
  lineinfile: "dest=/etc/modprobe.d/blacklist.conf line='blacklist ipv6' create=yes"
  tags:
    - fix
    - sysctl
    - noipv6
  when: ansible_os_family == 'Debian'

In this example, the first task (Disable IPv6) will only deploy the sysctl config element on RedHat (CentOS, Fedora) systems.

The second task will only run on Debian/Ubuntu family of Operating Systems.

How To Run Ansible for Specific Release of RedHat

Now to the issue at hand: while it was possible to disable IPv6 in RHEL 6 and RHEL 7, the IPv6 module appears to be built into the kernel in RHEL 8/CentOS 8 distros – so there’s no point in trying to disable it this way.

Here’s how I created a task that would only run on RedHat/CentOS releases older than RHEL 8/CentOS 8:

- name: Disable IPv6
  template:
    src: templates/disable-ipv6.conf.j2
    dest: /etc/sysctl.d/disable-ipv6.conf
    mode: 0660
    backup: yes
  notify:
    - disable ipv6
  tags:
    - fix
    - sysctl
    - noipv6
  when: ansible_os_family == 'RedHat' and ansible_distribution_major_version|int <= 7
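
If you’re not sure what these facts contain on a given host, the setup module will print them – myhost is a placeholder for an inventory hostname:

$ ansible myhost -m setup -a 'filter=ansible_distribution*'
$ ansible myhost -m setup -a 'filter=ansible_os_family'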

That’s it for today!
