Show Next Few Lines with grep

Now and then I have to find some information in text files – logs, config files, etc. – with a unique enough twist: I have a hostname or parameter name to search by, but the actual information I’m interested in is not on the same line. Simple grep just won’t do, so I’m using one of the really useful command line options for the grep command.

Default grep Behaviour

By default, grep shows you just the lines of text files that match the parameter you specify.

I’m looking for the IP address of the home office server called “server” in .ssh/config file.

Here’s what default grep shows me:

greys@maverick:~ $ grep server .ssh/config
Host server

So this is useful in the sense that it confirms I indeed have a server profile defined in my .ssh/config file, but it doesn’t show any more lines of the config, so I can’t really see the IP address of my “server”.

Show Next Few Lines with grep

This is where the -A option (A – After) for grep becomes really useful: specify the number of lines to show in addition to the line that matches the search pattern and enjoy the result!

I’m asking grep to show me 2 more lines after the “server” line:

greys@maverick:~ $ grep -A2 server .ssh/config
Host server
     ForwardAgent yes

Super simple but very powerful way of using grep. I hope you like it!
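Besides -A, grep also has -B (show lines Before the match) and -C (Context, lines on both sides). Here’s a quick sketch using a throwaway sample file – the file contents below are made up for illustration, including the address:

```shell
# create a small sample config to search (hypothetical contents)
printf 'Host server\n    HostName\n    ForwardAgent yes\n' > /tmp/ssh_config_demo

# -A2: the matching line plus 2 lines after it
grep -A2 'Host server' /tmp/ssh_config_demo

# -B1: the matching line plus 1 line before it
grep -B1 'ForwardAgent' /tmp/ssh_config_demo

# -C1: 1 line of context on each side of the match
grep -C1 'HostName' /tmp/ssh_config_demo
```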

See Also

How To Confirm Symlink Destination

Today I’ll show you how to confirm a symlink’s destination using the readlink command.

Back to basics day: a very simple tip on working with one of the most useful things available in Unix and Linux filesystems: symbolic links (symlinks).

How Symlinks Look in ls command

When using long-form ls output, symlinks will be shown like this:

greys@mcfly:~/proj/unixtutorial $ ls -la
total 435736
-rwxr-xr-x 1 greys staff 819 14 Dec 2018 .gitignore
drwxr-xr-x 3 greys staff 96 20 Nov 09:27 github
drwxr-xr-x 4 greys staff 128 20 Nov 10:58 scripts
lrwxr-xr-x 1 greys staff 30 10 Dec 20:40 try -> /Users/greys/proj/unixtutorial
drwxr-xr-x 44 greys staff 1408 20 Nov 10:54 wpengine

Show Symlink Destination with readlink

As with pretty much everything in Unix, there’s a simple command (readlink) that reads a symbolic link and shows you the destination it’s pointing to. Very handy for scripting:

greys@mcfly:~/proj/unixtutorial $ readlink try
/Users/greys/proj/unixtutorial

It has a very basic syntax: you just specify a file or directory name, and if it’s a symlink you’ll get the full path to the destination as the result.

If readlink returns nothing, this means the object you’re inspecting isn’t a symlink at all. Based on the outputs above, if I check readlink for the regular scripts directory, I won’t get anything back:

greys@mcfly:~/proj/unixtutorial $ readlink scripts
greys@mcfly:~/proj/unixtutorial $
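One related option worth knowing: on Linux with GNU coreutils, readlink -f resolves a symlink (including relative targets and nested links) into a canonical absolute path. Note that -f is a GNU extension – the stock readlink on some systems (older macOS, for example) doesn’t have it. A sketch with throwaway files:

```shell
# set up a file and a relative symlink pointing at it
mkdir -p /tmp/readlink_demo
touch /tmp/readlink_demo/target.txt
ln -sf target.txt /tmp/readlink_demo/link.txt

# plain readlink shows the raw link text, exactly as stored
readlink /tmp/readlink_demo/link.txt

# readlink -f resolves it into a canonical absolute path
readlink -f /tmp/readlink_demo/link.txt
```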

See Also

Get Last Field with awk

awk with $NF

The new Live Chat feature I launched at Unix Tutorial over the weekend is great! Even the few hours I’m online each morning and each evening give me plenty of opportunities to meet visitors and to help with Unix questions. Today I discussed the awk field delimiter with someone, specifically the part about showing the last field in a string.

NF Variable in awk

awk has a number of built-in variables. NF is one of them – it holds the total number of fields in a given string that awk has just finished parsing. Used with the $ macro, NF becomes $NF and gives you awk field number NF – which is the last field in an awk string.

If we have a DIRNAME variable with a typical full path, like this:

greys@mcfly:~ $ DIRNAME=/Users/greys/proj/unixtutorial
greys@mcfly:~ $ echo $DIRNAME
/Users/greys/proj/unixtutorial

… then it’s possible to get the name of the last subdirectory in DIRNAME using awk and $NF:

greys@mcfly:~ $ echo $DIRNAME | awk -F\/ '{print $NF}'
unixtutorial

That’s it – as simple as that!
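To make the NF / $NF distinction clearer, here’s a quick sketch: NF on its own is the field count, $NF is the last field, and $(NF-1) is the one before it:

```shell
# NF is the number of fields; $NF is the last field; $(NF-1) is second-to-last
echo "one two three four" | awk '{print NF}'        # 4
echo "one two three four" | awk '{print $NF}'       # four
echo "one two three four" | awk '{print $(NF-1)}'   # three

# same idea with / as the delimiter, as in the DIRNAME example above
echo "/Users/greys/proj/unixtutorial" | awk -F/ '{print $NF}'   # unixtutorial
```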

See Also

Convert Epoch Time with Python

Convert Unix Epoch with Python

I’m slowly improving my Python skills, mostly by Googling and combining multiple answers to code solutions to my systems administration tasks. Today I decided to write a simple converter that takes Epoch time as a parameter and returns the time and date it corresponds to.

datetime and timezone Modules in Python

Most of the functionality is done using the fromtimestamp function of the datetime module. But because I also like seeing time in UTC, I decided to use the timezone module as well.


import sys
from datetime import datetime, timezone

if len(sys.argv) > 1:
    print("This is the Epoch time: ", sys.argv[1])

    try:
        timestamp = int(sys.argv[1])

        if timestamp > 0:
            timedate = datetime.fromtimestamp(timestamp)
            timedate_utc = datetime.fromtimestamp(timestamp, timezone.utc)

            print("Time/date: ", format(timedate))
            print("Time/date in UTC: ", format(timedate_utc))
    except ValueError:
        print("Timestamp should be a positive integer, please.")
else:
    print("Usage: <EPOCH-TIMESTAMP>")

FIXME: I’ll revisit this to re-publish script directly from GitHub.

Here’s how you can use the script:

greys@maverick:~/proj/python $ ./ 1566672346
 This is the Epoch time:  1566672346
 Time/date:  2019-08-24 19:45:46
 Time/date in UTC:  2019-08-24 18:45:46+00:00

I implemented basic checks:

  • script won’t run if no command line parameters are passed
  • an error message will be shown if command line parameter isn’t a number (and therefore can’t be a timestamp)
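The core of the conversion boils down to two fromtimestamp calls; here’s a minimal sketch using the same example timestamp as above:

```python
from datetime import datetime, timezone

timestamp = 1566672346  # the example Epoch timestamp from above

# naive local time (depends on the system timezone) vs explicit UTC
local_time = datetime.fromtimestamp(timestamp)
utc_time = datetime.fromtimestamp(timestamp, timezone.utc)

print("Time/date: ", local_time)
print("Time/date in UTC: ", utc_time)  # 2019-08-24 18:45:46+00:00
```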

Do you see anything that should be changed or can be improved? Let me know!

See Also

Advice: Safely Removing a File or Directory in Unix

Removing files and directories is a very common task; in some environments support engineers or automation scripts delete hundreds of files per day. That’s why I think it’s important to be familiar with the different ways and safety mechanisms you should use when it comes to removing Unix directories. This article offers a number of principles that should help you make your day-to-day file and directory operations safer.

DISCLAIMER: one can never be too careful when using the command line, and removing files or directories in Unix is no exception. Please take extra care and spend time planning and understanding commands and command line options before executing them on production data. I’m sharing my own advice and my approach, but DO YOUR OWN RESEARCH as I accept no responsibility for any possible loss caused by direct or indirect use of the suggested commands.

Safely Removing Files and Directories

Advice in this article is equally applicable to commands you type and to the automation solutions you create. Be it a single command line or a complex Ansible playbook – safety mindset should be applied whenever you’re creating an actionable plan for working with important data.

If you can think of any more advice related to this topic, please let me know!

1. Double-check Directories Before Removing

I wouldn’t call this out if it hadn’t saved me so many times. No matter who made the request, no matter how urgent the task is, no matter how basic and obvious the directory name seems, always double-check directories before removing!

The basic approach is: replace the rm/rmdir command with ls -l.

So instead of

$ rm -rf /etc /bin

you type ls command and review the output:

$ ls -l /etc /bin

Things you’re checking for are:

  1. Is this a user/task specific directory or a global directory?
  2. Does it seem to be part of the core OS?
  3. Will removing these files break any functionality you can think of?
  4. Does the directory contain any files?
  5. Does the number of files seem different from what you expected?

For instance: you’re asked to delete an empty directory. Do a quick ls, and if it has files – double-check if they should be deleted as well. Also, check if it’s one of the common core OS directories like /etc or /bin or /var – it could be that you got the name by mistake but removing directory without checking would become an even bigger mistake.

2. Consider Moving Instead of Removing

In troubleshooting, many requests are made so that you free up a directory or tidy up filesystem structure. But the issues are mostly around file and directory names, rather than the space they take up.

So if filesystem space is not an issue right now, consider moving the directory instead of removing (deleting) it completely:

$ mv /home/greys/dir1 /home/greys/dir1.old

The end result will be that /home/greys/dir1 directory is gone, but you still have time to review and recover files from /home/greys/dir1.old if necessary.
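If you want the moved-aside copy to be easy to spot and date later, a timestamp suffix helps. A quick sketch with a throwaway directory (dir1 here is just an example name, as above):

```shell
# set up a throwaway directory standing in for the real one
mkdir -p /tmp/mv_demo/dir1
touch /tmp/mv_demo/dir1/data.txt

# move it aside with a date suffix instead of deleting outright
mv /tmp/mv_demo/dir1 /tmp/mv_demo/dir1.$(date +%Y%m%d)

ls /tmp/mv_demo
```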

3. Use root Privilege Wisely

Hint: don’t use root unless you absolutely have to. If the request is to remove a subdirectory in some application path – find out what user the application is running as and become that user before removing the directory.

For instance:

# su - javauser
# rm -rf /opt/java/logs/debug

Run as the root user, this will let you become the javauser and attempt to remove the /opt/java/logs/debug directory (the debug subdirectory in the /opt/java/logs directory).

If there’s an issue (like getting permissions denied error) – review and find out what the problem is instead of just becoming root and removing the directory or files anyway. Specifically: permission denied suggests that files belong to another user or group, meaning they are potentially used and needed for something else and not just the application you’re working on.

4. Double-check Any Masks or Variables

If you’re dealing with expanding filename masks or variables, double-check that they have correct, non-empty values.

Consider this:

$ rm -rf /${SOMEDIR}

If you’re not careful validating it, it’s quite possible that $SOMEDIR is not initialised (or is initialised in some other user session), resulting in a vastly different command with catastrophic results (yes, I know: this exact example below is NOT that bad, because as a regular user it simply won’t work. But run as root it will result in OS self-destruct):

$ rm -rf /

Similarly, if there are filenames to be expanded, verify that the expansion works as intended. A very important thing to realise is that your filename masks will be expanded as your current user.

$ <SUDO> rm /$(ls /root)
ls: cannot access '/root/': Permission denied
rm: cannot remove '/': Is a directory

This example above is using shell expansion: it runs the ls /root command, which would return valid values if you had enough permissions. But run as a regular user, it gives an error and also alters the path used for the rm command. It would be as if you tried to run the following with sudo privileges:

$ rm /

Again, I’m not giving you the full commands as it’s all too easy to break your Unix-like OS beyond repair when you run full commands as root without double-checking.

5. echo Each Command Before Running

The last principle I find very useful is to prepend any potentially dangerous command with echo. This means your shell will attempt expanding any command line parameters and substitutions, but then show the resulting command line instead of actually executing it:

greys@becky:~ $ echo rm -rf /opt/java/logs/${HOSTNAME}
rm -rf /opt/java/logs/becky

See how it expanded ${HOSTNAME} variable and replaced it with the actual hostname, becky?

Use echo just to be super sure about what you think the Unix shell will execute.

That’s it for today, hope you like this collection of safety principles. Let me know if you want more articles of this kind!

See Also

Projects: Automatic Keyboard Backlight for Dell XPS in Linux



Last night I finished a fun mini project as part of Unix Tutorial Projects. I have written a basic enough script that can be added as a root cronjob for automatically controlling the keyboard backlight on my Dell XPS 9380.

Bash Script for Keyboard Backlight Control

As I’ve written just a couple of days ago, it’s actually quite easy to turn keyboard backlight on or off on a Dell XPS in Linux (and this probably works with other Dell laptops).

Armed with that knowledge, I’ve written the following script:


#!/bin/bash

# NOTE: structure reconstructed; WORKDIR/LOGFILE/LOCKFILE values are taken from
# the description below, brightness values (3 = on, 0 = off) from the log output
WORKDIR=/home/greys/scripts/backlight
LOGFILE=${WORKDIR}/backlight.log
LOCKFILE=backlight.kbd

KBDBACKLIGHT=`cat /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness`

HOUR=`date +%H`

echo "---------------->" | tee -a $LOGFILE
date | tee -a $LOGFILE

if [ $HOUR -lt 4 -o $HOUR -gt 21 ]; then
    echo "HOUR $HOUR is rather late! Must turn on backlight" | tee -a $LOGFILE
    BACKLIGHT=3
else
    echo "HOUR $HOUR is not too late, must turn off the backlight" | tee -a $LOGFILE
    BACKLIGHT=0
fi

if [ $KBDBACKLIGHT -ne $BACKLIGHT ]; then
    echo "Current backlight $KBDBACKLIGHT is different from desired backlight $BACKLIGHT" | tee -a $LOGFILE

    # only act if there's no lock file younger than 24 hours
    FILE=`find ${WORKDIR} -mmin -1440 -name ${LOCKFILE}`

    echo "FILE: -$FILE-"

    if [ -z "$FILE" ]; then
        echo "No lock file! Updating keyboard backlight" | tee -a $LOGFILE

        # record the update (assumed: the lock file must be created somewhere)
        touch ${WORKDIR}/${LOCKFILE}
        echo $BACKLIGHT > /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness
    else
        echo "Lockfile $FILE found, skipping action..." | tee -a $LOGFILE
    fi
else
    echo "Current backlight $KBDBACKLIGHT is the same as desired... No action needed" | tee -a $LOGFILE
fi

How My Dell Keyboard Backlight Script Works

This is what my script does when you run it as root (it won’t work if you run it as a regular user):

  • it determines the WORKDIR (I defined it as /home/greys/scripts/backlight)
  • it starts writing log file backlight.log in that $WORKDIR
  • it checks for lock file backlight.kbd in the same $WORKDIR
  • it confirms current hour and checks if it’s a rather late hour (when it must be dark). For now I’ve set it between 21 (9pm) and 4 (4am, that is)
  • it checks the current keyboard backlight status ($KBDBACKLIGHT variable)
  • it compares this status to the desired state (based on which hour that is)
  • if we need to update keyboard backlight setting, we check for lockfile.
    • If a recent enough file exists, we skip updates
    • Otherwise, we set the backlight to new value
  • all actions are added to the $WORKDIR/backlight.log file

Log file looks like this:

greys@xps:~/scripts $ tail backlight/backlight.log 
Tue May 28 00:10:00 BST 2019
HOUR 00 is rather late! Must turn on backlight
Current backlight 2 is different from desired backlight 3
Lockfile /home/greys/scripts/backlight/backlight.kbd found, skipping action...
Tue May 28 00:15:00 BST 2019
HOUR 00 is rather late! Must turn on backlight
Current backlight 2 is different from desired backlight 3
Lockfile /home/greys/scripts/backlight/backlight.kbd found, skipping action...

How To Activate Keyboard Backlight cronjob

I have added this script to the root user’s cronjob. In Ubuntu 19.04 running on my XPS laptop, this is how it was done:

greys@xps:~/scripts $ sudo crontab -e
[sudo] password for greys:

I then added the following line:

*/5 * * * * /home/greys/scripts/

Depending on where you place a similar script, you’ll need to update the full path to it from /home/greys/scripts, and then update the WORKDIR variable in the script itself.

Keyboard Backlight Unix Tutorial Project Follow Up

Here are just a few things I plan to improve:

  • see if I can access Ubuntu’s Night Light settings instead of hardcoding hours into the script
  • fix the timezone – should be IST and not BST for my Dublin, Ireland location
  • just for fun, try logging output into one of the system journals for journalctl

See Also


How To: Show Colour Numbers in Unix Terminal

I’m decorating my tmux setup and needed to confirm colour numbers for some elements of the interface. Turns out, it’s simple enough to show all the possible colours with a 1-liner in your favourite Unix shell – bash shell in my case.

Using ESC sequences For Using Colours

I’ll explain how this works in full detail sometime in a separate post, but for now will just give you an example and show how it works:

echo -e "Here is \e[38;5;75mcolour 75\e[0m in action"

So, in this example, this is how we achieve colorized text output:

  1. echo command uses -e option to support ESC sequences
  2. \e[38;5;75m is the ESC sequence specifying color number 75.
  3. \e[38;5; is just a special way of telling terminal that we want to use 256-color style

List 256 Terminal Colours with Bash

Here’s how we get the colours now: we create a loop from 1 until 255 (0 would be black) and then use the ESC syntax to change the colour to the $COLOR variable’s value. We then output the $COLOR value itself, which will be a number:

for COLOR in {1..255}; do echo -en "\e[38;5;${COLOR}m${COLOR} "; done; echo;

Here’s how running this will look in a properly configured 256-color terminal:


Bash Script to Show 256 Terminal Colours

Here’s the same 1-liner converted into proper script for better portability and readability:


#!/bin/bash

for COLOR in {1..255}; do
    echo -en "\e[38;5;${COLOR}m"
    echo -n "${COLOR} "
done

echo


If you save this as a script and chmod a+rx it, you can run it every time you want to refresh your memory or pick different colours for some use.

See Also

Check For Available Updates with YUM

If you’re using CentOS, Fedora or Red Hat Linux, you are probably familiar with the yum package manager. One of the really useful options for yum is checking whether there are any available updates to be installed.

Check For Updates with YUM

If you use the check-update parameter with yum, it will show you the list of any available updates:

root@centos:~ # yum check-update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base:
* epel:
* extras:
* updates:

ansible.noarch 2.7.8-1.el7 epel
datadog-agent.x86_64 1:6.10.1-1 datadog
libgudev1.x86_64 219-62.el7_6.5 updates
nginx.x86_64 1:1.15.9-1.el7_4.ngx nginx
oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 extras
polkit.x86_64 0.112-18.el7_6.1 updates
systemd.x86_64 219-62.el7_6.5 updates
systemd-libs.i686 219-62.el7_6.5 updates
systemd-libs.x86_64 219-62.el7_6.5 updates
systemd-python.x86_64 219-62.el7_6.5 updates
systemd-sysv.x86_64 219-62.el7_6.5 updates

Using yum check-update in Shell Scripts

One thing that I didn’t know and am very happy to discover is that yum check-update is actually meant for shell scripting. It returns a specific code after running, and you can use that value to decide what to do next.

As usual, return value 0 means everything is fully up to date, so no updates are available (and no action is needed). A value of 100 means you have updates available.

All we need to do is check the return value variable $? for its value in something like this:


#!/bin/bash

yum check-update

if [ $? == 100 ]; then
    echo "You've got updates available!"
else
    echo "Great stuff! No updates pending..."
fi

Here is how running this script would look:

root@s2:~ # ./
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base:
* epel:
* extras:
* updates:

ansible.noarch 2.7.8-1.el7 epel
datadog-agent.x86_64 1:6.10.1-1 datadog
libgudev1.x86_64 219-62.el7_6.5 updates
nginx.x86_64 1:1.15.9-1.el7_4.ngx nginx
oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 extras
polkit.x86_64 0.112-18.el7_6.1 updates
systemd.x86_64 219-62.el7_6.5 updates
systemd-libs.i686 219-62.el7_6.5 updates
systemd-libs.x86_64 219-62.el7_6.5 updates
systemd-python.x86_64 219-62.el7_6.5 updates
systemd-sysv.x86_64 219-62.el7_6.5 updates
You've got updates available!

I’ll revisit this post soon to show you a few more things that can be done with yum check-update functionality.

See Also


pwd command and PWD variable


The pwd command, as you probably know, reports the current working directory in your Unix shell. pwd stands for Print Working Directory. In addition to the pwd command, there’s also a special variable – one of the user environment variables – called PWD that you can use.

pwd command

Just to remind you, pwd command is a super simple way of confirming where you are in the filesystem tree.

Usually, your shell session starts in your home directory. For me and my username greys, this means /home/greys in most distributions:

greys@xps:~$ pwd
/home/greys

If I then use cd command to visit some other directory, pwd command will help me confirm it:

greys@xps:~$ cd /tmp
greys@xps:/tmp$ pwd
/tmp

PWD environment variable

Most Unix shells have PWD as a variable. It reports the same information as the pwd command, but saves child processes the trouble of running the pwd command or the getcwd() system call just to confirm the working directory of their parent process.

So, you can just do this to confirm $PWD value:

greys@xps:/tmp$ echo $PWD
/tmp

… which really helps in shell scripting, because you can do something like this:

#!/bin/bash

echo "Home directory: $HOME"
echo "Current directory: $PWD"

if [ $HOME != $PWD ]; then
    echo "You MUST run this from your home directory!"
else
    echo "Thank you for running this script from your home directory."
fi

When we run this, the script will compare the standard $HOME variable (your user’s homedir) to the $PWD variable and will behave differently depending on whether they match.

I’ve created this script and saved it in my projects directory for bash scripts, /home/greys/proj/bash:

greys@xps:~/proj/bash$ ./ 
Home directory: /home/greys
Current directory: /home/greys/proj/bash
You MUST run this from your home directory!

If I now change back to my home directory:

greys@xps:~/proj/bash$ cd /home/greys/

… the script will thank me for it:

greys@xps:~$ proj/bash/
Home directory: /home/greys
Current directory: /home/greys
Thank you for running this script from your home directory.

Have fun using pwd command and $PWD variable in your work and shell scripting!

See Also


awk delimiter

Since awk field separator seems to be a rather popular search term on this blog, I’d like to expand on the topic of using awk delimiters (field separators).

Two ways of separating fields in awk

There’s actually more than one way of separating awk fields: the commonly used -F option (specified as a parameter of the awk command) and the field separator variable FS (specified inside the awk script code).

awk -F field separator

As you may have seen from the awk field separator post, the easiest and quickest way to use one is by specifying the -F command line option for awk (in the example we’re extracting the last octet of the IPv4 address):

greys@maverick:/ $ ifconfig en0 | grep "inet " | awk '{print $2}'
greys@maverick:/ $ echo | awk -F. '{print $4}'

Field Separator (FS) variable in awk

As your awk scripting gets better and more complex, you’ll probably recognise that it’s best to put such options inside the awk script instead of passing them as command line options. The benefit is, of course, that you don’t risk getting different (wrong!) results just because you forgot to specify a command line option – everything is contained in your script, so you run it with minimal set of parameters and always get the same result.

So, the second way of separating fields in awk script is by using the FS (field separator) variable, like this:

greys@maverick:~ $ echo | awk 'BEGIN { FS = "." } ; {print $4}' 

See Also