Convert Epoch Time with Python

I’m slowly improving my Python skills, mostly by Googling and combining multiple answers to solve my systems administration tasks. Today I decided to write a simple converter that takes Epoch time as a parameter and returns the date and time it corresponds to.

datetime and timezone Modules in Python

Most of the functionality comes from the fromtimestamp() method of the datetime class. Because I also like seeing time in UTC, I use the timezone class from the same datetime module as well.

epoch.py script

#!/usr/bin/env python3

import sys
from datetime import datetime, timezone

if len(sys.argv) > 1:
    print("This is the Epoch time: ", sys.argv[1])

    try:
        timestamp = int(sys.argv[1])
        if timestamp <= 0:
            raise ValueError

        timedate = datetime.fromtimestamp(timestamp)
        timedate_utc = datetime.fromtimestamp(timestamp, timezone.utc)

        print("Time/date: ", timedate)
        print("Time/date in UTC: ", timedate_utc)
    except ValueError:
        print("Timestamp should be a positive integer, please.")
else:
    print("Usage: epoch.py <EPOCH-TIMESTAMP>")

FIXME: I’ll revisit this post to publish the script directly from GitHub.

Here’s how you can use the script:

greys@maverick:~/proj/python $ ./epoch.py 1566672346
 This is the Epoch time:  1566672346
 Time/date:  2019-08-24 19:45:46
 Time/date in UTC:  2019-08-24 18:45:46+00:00
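
By the way, if you just need a one-off conversion and happen to be on Linux, GNU date can do the same thing (the -d @N syntax is GNU-specific; BSD/macOS date uses different flags):

```shell
# convert an epoch timestamp with GNU date: local time, then UTC
date -d @1566672346
date -u -d @1566672346
```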

I implemented basic checks:

  • the script won’t run if no command-line parameters are passed
  • an error message is shown if the command-line parameter isn’t a positive number (and therefore can’t be a timestamp)

Do you see anything that should be changed or can be improved? Let me know!



Advice: Safely Removing a File or Directory in Unix

Unix Tutorial

Removing files and directories is a very common task: in some environments, support engineers and automation scripts delete hundreds of files per day. That’s why I think it’s important to be familiar with the different approaches and safety mechanisms for removing Unix directories. This article collects a number of principles that should make your day-to-day file and directory operations safer.



DISCLAIMER: one can never be too careful when using a command line, and removing files or directories in Unix is no exception. Please take extra care and spend time planning and understanding commands and command-line options before executing them on production data. I’m sharing my own advice and my approach, but DO YOUR OWN RESEARCH, as I accept no responsibility for any loss caused by direct or indirect use of the suggested commands.

Safely Removing Files and Directories

Advice in this article is equally applicable to commands you type and to the automation solutions you create. Be it a single command line or a complex Ansible playbook – a safety mindset should be applied whenever you’re creating an actionable plan for working with important data.

If you can think of any more advice related to this topic, please let me know!

1. Double-check Directories Before Removing

I wouldn’t call this out if it hadn’t saved me so many times. No matter who made the request, no matter how urgent the task is, no matter how basic and obvious the directory name seems, always double-check directories before removing!

The basic approach is: replace rm/rmdir with the ls -l command.

So instead of

$ rm -rf /etc /bin

you type ls command and review the output:

$ ls -l /etc /bin

Things you’re checking for are:

  1. Is this a user/task specific directory or a global directory?
  2. Does it seem to be part of the core OS?
  3. Will removing these files break any functionality you can think of?
  4. Does the directory contain any files?
  5. Does the number of files seem different from what you expected?

For instance: you’re asked to delete an empty directory. Do a quick ls, and if it has files, double-check whether they should be deleted as well. Also check whether it’s one of the common core OS directories like /etc, /bin or /var – it could be that you were given the name by mistake, but removing the directory without checking would be an even bigger mistake.
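
To make the checklist concrete, here’s a minimal sketch of such a pre-flight check; the directory name and its files are hypothetical, created only so the example runs anywhere:

```shell
# hypothetical pre-flight check before acting on a removal request
dir=/tmp/cleanup-candidate
mkdir -p "$dir"
touch "$dir/a.log" "$dir/b.log"   # stand-in contents

ls -ld "$dir"                  # is it the directory you think it is?
find "$dir" -type f | wc -l    # how many files would a removal actually delete?
```

If the file count surprises you, stop and re-confirm the request before reaching for rm.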

2. Consider Moving Instead of Removing

In troubleshooting, many requests are really about freeing up a directory or tidying up the filesystem structure. The issue is usually the file and directory names, rather than the space they take up.

So if filesystem space is not an issue right now, consider moving the directory instead of removing (deleting) it completely:

$ mv /home/greys/dir1 /home/greys/dir1.old

The end result will be that /home/greys/dir1 directory is gone, but you still have time to review and recover files from /home/greys/dir1.old if necessary.
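
A small refinement I’d suggest is adding a date suffix, so repeated clean-ups never collide; the paths here are hypothetical, with mkdir included only to make the sketch runnable:

```shell
# move a directory aside with today's date instead of deleting it
dir=/tmp/dir1
mkdir -p "$dir"
mv "$dir" "${dir}.$(date +%Y%m%d)"
ls -d "${dir}".*                    # the original name is now free
```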

3. Use root Privilege Wisely

Hint: don’t use root unless you absolutely have to. If the request is to remove a subdirectory in some application path – find out what user the application is running as and become that user before removing the directory.

For instance:

# su - javauser
# rm -rf /opt/java/logs/debug

Run as the root user, this lets you become javauser and attempt to remove the /opt/java/logs/debug directory (the debug subdirectory of /opt/java/logs).

If there’s an issue (like a permission denied error) – review and find out what the problem is instead of just becoming root and removing the directory or files anyway. Specifically: permission denied suggests the files belong to another user or group, meaning they are potentially used and needed by something other than the application you’re working on.

4. Double-check Any Masks or Variables

If you’re dealing with variables or expanding filename masks, double-check that they have correct, non-empty values.

Consider this:

$ rm -rf /${SOMEDIR}

if you’re not careful validating it, it’s quite possible that $SOMEDIR is not initialised (or is initialised in some other user’s session), resulting in a vastly different command with catastrophic results (yes, I know: this exact example below is not that bad, because as a regular user it simply won’t work, and modern GNU rm refuses to operate on / without --no-preserve-root; but run as root on an older system it would result in OS self-destruct):

$ rm -rf /

Similarly, if there are filenames to be expanded, verify that expansion works as intended. Very important thing to realise is that your filename masks will be expanded as your current user.

$ <SUDO> rm /$(ls /root) 
ls: cannot access '/root/': Permission denied
 rm: cannot remove '/': Is a directory

The example above uses shell expansion: it runs the ls /root command, which returns valid values only if you have sufficient permissions. Run as a regular user, it gives an error and also alters the path passed to the rm command. It would be as if you tried to run the following with sudo privileges:

$ rm /

Again, I’m not giving you the full commands, as it’s all too easy to break your Unix-like OS beyond repair when you run full commands as root without double-checking.
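
One defensive bash habit that addresses the uninitialised-variable scenario above is the ${VAR:?} expansion: if the variable is unset or empty, the shell aborts the command with an error instead of silently expanding to nothing. A sketch (SOMEDIR is the same hypothetical variable as above; the subshell and || are only there to demonstrate the failure without stopping the script):

```shell
# ${SOMEDIR:?message} aborts the command if SOMEDIR is unset or empty,
# so the rm below never runs with a mangled path
unset SOMEDIR
( rm -rf "/tmp/${SOMEDIR:?SOMEDIR is not set, refusing to remove}" ) \
  || echo "blocked: rm never executed"
```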

5. echo Each Command Before Running

The last principle I find very useful is to prepend any potentially dangerous command with echo. This means your shell will attempt expanding any command line parameters and substitutions, but then show the resulting command line instead of actually executing it:

greys@becky:~ $ echo rm -rf /opt/java/logs/${HOSTNAME}
rm -rf /opt/java/logs/becky

See how it expanded ${HOSTNAME} variable and replaced it with the actual hostname, becky?

Use echo just to be super sure about what you think the Unix shell will execute.

That’s it for today, hope you like this collection of safety principles. Let me know if you want more articles of this kind!



Projects: Automatic Keyboard Backlight for Dell XPS in Linux

 

Last night I finished a fun mini project as part of Unix Tutorial Projects. I have written a basic enough script that can be added as a root cronjob to automatically control the keyboard backlight on my Dell XPS 9380.

Bash Script for Keyboard Backlight Control

As I’ve written just a couple of days ago, it’s actually quite easy to turn keyboard backlight on or off on a Dell XPS in Linux (and this probably works with other Dell laptops).

Armed with that knowledge, I’ve written the following script:

#!/bin/bash

WORKDIR=/home/greys/scripts/backlight
LOCKFILE=backlight.kbd
LOGFILE=${WORKDIR}/backlight.log
KBDBACKLIGHT=$(cat /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness)

HOUR=$(date +%H)

echo "---------------->" | tee -a "$LOGFILE"
date | tee -a "$LOGFILE"

if [ "$HOUR" -lt 4 -o "$HOUR" -gt 21 ]; then
    echo "HOUR $HOUR is rather late! Must turn on backlight" | tee -a "$LOGFILE"
    BACKLIGHT=3
else
    echo "HOUR $HOUR is not too late, must turn off the backlight" | tee -a "$LOGFILE"
    BACKLIGHT=0
fi

if [ "$KBDBACKLIGHT" -ne "$BACKLIGHT" ]; then
    echo "Current backlight $KBDBACKLIGHT is different from desired backlight $BACKLIGHT" | tee -a "$LOGFILE"

    FILE=$(find "${WORKDIR}" -mmin -1440 -name "${LOCKFILE}")

    echo "FILE: -$FILE-"

    if [ -z "$FILE" ]; then
        echo "No lock file! Updating keyboard backlight" | tee -a "$LOGFILE"

        echo "$BACKLIGHT" > /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness
        touch "${WORKDIR}/${LOCKFILE}"
    else
        echo "Lockfile $FILE found, skipping action..." | tee -a "$LOGFILE"
    fi
else
    echo "Current backlight $KBDBACKLIGHT is the same as desired... No action needed" | tee -a "$LOGFILE"
fi

How My Dell Keyboard Backlight Script Works

This is what my script does when you run it as root (it won’t work if you run it as a regular user):

  • it determines the WORKDIR (I defined it as /home/greys/scripts/backlight)
  • it starts writing the log file backlight.log in that $WORKDIR
  • it checks for the lock file backlight.kbd in the same $WORKDIR
  • it confirms the current hour and checks whether it’s late enough to be dark; for now I’ve set that window between 21:00 (9pm) and 04:00 (4am)
  • it checks the current keyboard backlight status (the $KBDBACKLIGHT variable)
  • it compares this status to the desired state (based on the current hour)
  • if the keyboard backlight setting needs updating, it checks for the lockfile:
    • if a recent enough lockfile exists, it skips the update
    • otherwise, it sets the backlight to the new value
  • all actions are logged to the $WORKDIR/backlight.log file

Log file looks like this:

greys@xps:~/scripts $ tail backlight/backlight.log 
---------------->
Tue May 28 00:10:00 BST 2019
HOUR 00 is rather late! Must turn on backlight
Current backlight 2 is different from desired backlight 3
Lockfile /home/greys/scripts/backlight/backlight.kbd found, skipping action...
---------------->
Tue May 28 00:15:00 BST 2019
HOUR 00 is rather late! Must turn on backlight
Current backlight 2 is different from desired backlight 3
Lockfile /home/greys/scripts/backlight/backlight.kbd found, skipping action...

How To Activate Keyboard Backlight cronjob

I have added this script to the root user’s cronjob. In Ubuntu 19.04 running on my XPS laptop, this is how it was done:

greys@xps:~/scripts $ sudo crontab -e
[sudo] password for greys:

I then added the following line:

*/5 * * * * /home/greys/scripts/backlight.sh

Depending on where you place a similar script, you’ll need to update the full path from /home/greys/scripts, and also update the WORKDIR variable in the script itself.

Keyboard Backlight Unix Tutorial Project Follow Up

Here are just a few things I plan to improve:

  • see if I can use Ubuntu’s Night Light settings instead of hardcoding hours into the script
  • fix the timezone – it should be IST, not BST, for my Dublin, Ireland location
  • just for fun, try logging output into one of the system journals for journalctl





How To: Show Colour Numbers in Unix Terminal

I’m decorating my tmux setup and needed to confirm colour numbers for some elements of the interface. It turns out it’s simple enough to show all the possible colours with a one-liner in your favourite Unix shell – bash in my case.

Using ESC sequences For Using Colours

I’ll explain how this works in full detail in a separate post, but for now here’s an example:

greys@maverick:~ $ echo -e "\e[38;5;75mhello"

So, in this example, this is how we achieve colorized text output:

  1. the echo command uses the -e option to interpret ESC sequences
  2. \e[38;5;75m is the ESC sequence selecting colour number 75
  3. \e[38;5; is the prefix that tells the terminal we want 256-colour mode (the trailing m completes the sequence)
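
Since echo -e behaves differently across shells, here’s a printf equivalent of the same example that should work anywhere (\033 is the octal code for the ESC character):

```shell
# colour number 75 applied to "hello", then reset with \033[0m
printf '\033[38;5;75m%s\033[0m\n' "hello"
```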

List 256 Terminal Colours with Bash

Here’s how we get all the colours: we loop from 1 to 255 (0 would be black) and use the ESC syntax to switch the colour to each $COLOR value, then print the $COLOR number itself:

for COLOR in {1..255}; do echo -en "\e[38;5;${COLOR}m${COLOR} "; done; echo;

Here’s how running this will look in a properly configured 256-color terminal:

bash-show-256-colors.png

Bash Script to Show 256 Terminal Colours

Here’s the same 1-liner converted into proper script for better portability and readability:

#!/bin/bash

for COLOR in {1..255}; do
    echo -en "\e[38;5;${COLOR}m"
    echo -n "${COLOR} "
done

echo

If you save this as bash-256-colours.sh and make it executable with chmod a+rx bash-256-colours.sh, you can run it whenever you want to refresh your memory or pick different colours for some use.



Check For Available Updates with YUM

If you’re using CentOS, Fedora or Red Hat Linux, you are probably familiar with the yum package manager. One of the really useful options for yum is checking whether there are any available updates to be installed.

Check For Updates with YUM

If you use check-update parameter with yum, it will show you the list of any available updates:

root@centos:~ # yum check-update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: rep-centos-fr.upress.io
* epel: mirror.in2p3.fr
* extras: rep-centos-fr.upress.io
* updates: ftp.pasteur.fr

ansible.noarch 2.7.8-1.el7 epel
datadog-agent.x86_64 1:6.10.1-1 datadog
libgudev1.x86_64 219-62.el7_6.5 updates
nginx.x86_64 1:1.15.9-1.el7_4.ngx nginx
oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 extras
polkit.x86_64 0.112-18.el7_6.1 updates
systemd.x86_64 219-62.el7_6.5 updates
systemd-libs.i686 219-62.el7_6.5 updates
systemd-libs.x86_64 219-62.el7_6.5 updates
systemd-python.x86_64 219-62.el7_6.5 updates
systemd-sysv.x86_64 219-62.el7_6.5 updates

Using yum check-update in Shell Scripts

One thing that I didn’t know and am very happy to discover is that yum check-update is actually meant for shell scripting: it returns a specific exit code you can use to decide what to do next.

As usual, a return value of 0 means everything is fully up to date, so no updates are available (and no action is needed). A value of 100 means you have updates available, and 1 indicates an error.

All we need to do is check the return code in the $? variable with something like this:

#!/bin/bash

yum check-update
rc=$?

if [ $rc -eq 100 ]; then
    echo "You've got updates available!"
elif [ $rc -eq 0 ]; then
    echo "Great stuff! No updates pending..."
else
    echo "yum check-update failed with exit code $rc"
fi

Here is how running this script would look if we saved the script as check-yum-updates.sh script:

root@s2:~ # ./check-yum-updates.sh
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: rep-centos-fr.upress.io
* epel: mirror.in2p3.fr
* extras: rep-centos-fr.upress.io
* updates: ftp.pasteur.fr

ansible.noarch 2.7.8-1.el7 epel
datadog-agent.x86_64 1:6.10.1-1 datadog
libgudev1.x86_64 219-62.el7_6.5 updates
nginx.x86_64 1:1.15.9-1.el7_4.ngx nginx
oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 extras
polkit.x86_64 0.112-18.el7_6.1 updates
systemd.x86_64 219-62.el7_6.5 updates
systemd-libs.i686 219-62.el7_6.5 updates
systemd-libs.x86_64 219-62.el7_6.5 updates
systemd-python.x86_64 219-62.el7_6.5 updates
systemd-sysv.x86_64 219-62.el7_6.5 updates
You've got updates available!

I’ll revisit this post soon to show you a few more things that can be done with yum check-update functionality.





pwd command and PWD variable


The pwd command, as you probably know, reports the current working directory of your Unix shell; the name stands for “print working directory”. In addition to the pwd command, there’s also a special variable – one of the user environment variables – called PWD that you can use.

pwd command

Just to remind you, pwd command is a super simple way of confirming where you are in the filesystem tree.

Usually, your shell session starts in your home directory. For me and my username greys, this means /home/greys in most distributions:

greys@xps:~$ pwd
/home/greys

If I then use cd command to visit some other directory, pwd command will help me confirm it:

greys@xps:~$ cd /tmp
greys@xps:/tmp$ pwd
/tmp

PWD environment variable

Most Unix shells maintain PWD as a variable. It reports the same information as the pwd command, but saves child processes the trouble of running pwd or calling getcwd() just to confirm the working directory of their parent process.

So, you can just do this to confirm $PWD value:

greys@xps:/tmp$ echo $PWD
/tmp

… which really helps in shell scripting, because you can do something like this:

#!/bin/bash
echo "Home directory: $HOME"
echo "Current directory: $PWD"

if [ "$HOME" != "$PWD" ]; then
    echo "You MUST run this from your home directory!"
    exit 1
else
    echo "Thank you for running this script from your home directory."
fi

When we run this, the script compares the standard $HOME variable (your user’s home directory) to the $PWD variable and behaves differently depending on whether they match.

I’ve created and saved pwd.sh in my projects directory for bash scripts: /home/greys/proj/bash:

greys@xps:~/proj/bash$ ./pwd.sh 
Home directory: /home/greys
Current directory: /home/greys/proj/bash
You MUST run this from your home directory!

If I now change back to my home directory:

greys@xps:~/proj/bash$ cd /home/greys/

… the script will thank me for it:

greys@xps:~$ proj/bash/pwd.sh
Home directory: /home/greys
Current directory: /home/greys
Thank you for running this script from your home directory.
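
While we’re at it, one nuance worth knowing: $PWD holds the logical path (symlinks preserved), while pwd -P resolves the physical one. A quick illustration with a throwaway symlink in /tmp:

```shell
# logical vs physical working directory
mkdir -p /tmp/realdir
ln -sfn /tmp/realdir /tmp/linkdir
cd /tmp/linkdir
echo "$PWD"    # the symlinked path you cd'ed into
pwd -P         # the resolved, physical path
```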

Have fun using pwd command and $PWD variable in your work and shell scripting!





awk delimiter

Since awk field separator seems to be a rather popular search term on this blog, I’d like to expand on the topic of using awk delimiters (field separators).

Two ways of separating fields in awk


As you may have seen from the awk field separator post, the easiest and quickest way to use one is by specifying the -F command-line option for awk (in this example we’re extracting the last octet of the IPv4 address):

greys@maverick:/ $ ifconfig en0 | grep "inet " | awk '{print $2}'
192.168.1.220
greys@maverick:/ $ echo 192.168.1.220 | awk -F. '{print $4}'
220

As your awk scripting gets better and more complex, you’ll probably recognise that it’s best to put such options inside the awk script instead of passing them as command-line options. The benefit, of course, is that you don’t risk getting different (wrong!) results just because you forgot a command-line option: everything is contained in your script, so you run it with a minimal set of parameters and always get the same result.

So, the second way of separating fields in awk script is by using the FS (field separator) variable, like this:

greys@maverick:~ $ echo 192.168.1.200 | awk 'BEGIN { FS = "." } ; {print $4}' 
200
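
There’s also a middle ground between the two: passing FS with awk’s -v option, which keeps the separator visible on the command line but out of the script body:

```shell
# -v assignments happen before the first record is read,
# so the separator applies to every line, including the first
echo 192.168.1.200 | awk -v FS=. '{print $4}'   # 200
```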



awk field separator

You’ve probably come across the awk command and its simplest use: splitting string elements separated by blank spaces. In this short post I’d like to expand a little bit on using awk field separators.

To demonstrate, let’s inspect the output of ifconfig command on my Macbook:

greys@maverick:/ $ ifconfig en0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether dc:a9:04:7b:3f:44
inet6 fe80::879:43c0:48e2:124b%en0 prefixlen 64 secured scopeid 0xa
inet 192.168.1.221 netmask 0xffffff00 broadcast 192.168.1.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
I’d like to extract the IP address for this interface, which means we should first use grep to isolate just the line of output we want:

greys@maverick:/ $ ifconfig en0 | grep "inet "
inet 192.168.1.221 netmask 0xffffff00 broadcast 192.168.1.255

Great! Now it should be fairly easy to use awk to get that IP address. Since it’s the second word from the left, we’re telling awk to print parameter 2, {print $2}:

greys@maverick:/ $ ifconfig en0 | grep "inet " | awk '{print $2}'
192.168.1.221

awk uses space as the field separator by default, but you can specify any other character to use as separator instead.

To continue with our example, I further parse the output (which is just the IP address at this stage) using the . character as field separator:

greys@maverick:/ $ ifconfig en0 | grep "inet " | awk '{print $2}' | awk -F. '{print $4}'
221

Obviously, if I want the last two octets of the IP address, I modify the last awk command accordingly:

greys@maverick:/ $ ifconfig en0 | grep "inet " | awk '{print $2}' | awk -F. '{print $3,$4}'
1 221
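
If you’d rather keep the dot between those octets, you can join the fields back together in the print statement (I’m echoing the address directly here so the example doesn’t depend on ifconfig):

```shell
# print the last two octets joined with a dot instead of a space
echo 192.168.1.221 | awk -F. '{print $3 "." $4}'   # 1.221
```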



Review Latest Logs with tail and awk

Part of managing any Unix system is keeping an eye on the vital log files.

Today I was discussing one such scenario with a friend, and we arrived at a pretty cool example involving the awk command and, eventually, bash command substitution.

Let’s say we have a directory with a bunch of log files, all constantly updated at different times and intervals. Here’s how I may get the last 10 lines of the output from the most recent log file:

root@vps1:/var/log# cd /var/log
root@vps1:/var/log# ls -altr *log
-rw-r--r-- 1 root root 32224 Jul 10 22:49 faillog
-rw-r----- 1 syslog adm 0 Jul 25 06:25 kern.log
-rw-r--r-- 1 root root 0 Aug 1 06:25 alternatives.log
-rw-r--r-- 1 root root 2234 Aug 8 06:34 dpkg.log
-rw-rw-r-- 1 root utmp 294044 Aug 15 22:32 lastlog
-rw-r----- 1 syslog adm 12248 Aug 15 22:35 syslog
-rw-r----- 1 syslog adm 5160757 Aug 15 22:40 auth.log

Ok, now we just need to get that filename from the last line (auth.log).

The most obvious way is to use the tail command to extract the last line, and awk to show the 9th field in that line – which is the filename:

root@vps1:/var/log# ls -altr *log | tail -1 | awk '{print $9}'
auth.log

Pretty cool, but it can be optimised using awk’s END clause:

root@vps1:/var/log# ls -altr *log | awk 'END {print $9}'
auth.log

Alright. Now we want to show the last 10 lines of that file, which we can use tail -10 for.

A really basic approach is to assign the result of the line we’re using to a variable in bash, and then access that variable, like this:

root@vps1:/var/log# FILE=`ls -altr *log | tail -1 | awk '{print $9}'`
root@vps1:/var/log# tail -10 ${FILE}
Aug 15 22:40:37 vps1 sshd[26578]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.65.145.196
Aug 15 22:40:39 vps1 sshd[26578]: Failed password for invalid user Fred from 159.65.145.196 port 47934 ssh2
Aug 15 22:40:39 vps1 sshd[26578]: Received disconnect from 159.65.145.196 port 47934:11: Normal Shutdown, Thank you for playing [preauth]
Aug 15 22:40:39 vps1 sshd[26578]: Disconnected from 159.65.145.196 port 47934 [preauth]
Aug 15 22:41:15 vps1 sshd[26580]: Connection closed by 51.15.4.190 port 44958 [preauth]
Aug 15 22:42:02 vps1 sshd[26585]: Connection closed by 13.232.227.143 port 40054 [preauth]
Aug 15 22:43:23 vps1 sshd[26587]: Connection closed by 51.15.4.190 port 52454 [preauth]
Aug 15 22:44:08 vps1 sshd[26589]: Connection closed by 13.232.227.143 port 47542 [preauth]
Aug 15 22:45:01 vps1 CRON[26604]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 15 22:45:01 vps1 CRON[26604]: pam_unix(cron:session): session closed for user root

But an even shorter (better?) way is to use command substitution in bash: the output of a command becomes part of the command itself (a string value in our case).

Check it out:

root@vps1:/var/log# tail -10 $(ls -altr *log | tail -1 | awk '{print $9}')
Aug 15 22:42:02 vps1 sshd[26585]: Connection closed by 13.232.227.143 port 40054 [preauth]
Aug 15 22:43:23 vps1 sshd[26587]: Connection closed by 51.15.4.190 port 52454 [preauth]
Aug 15 22:44:08 vps1 sshd[26589]: Connection closed by 13.232.227.143 port 47542 [preauth]
Aug 15 22:45:01 vps1 CRON[26604]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 15 22:45:01 vps1 CRON[26604]: pam_unix(cron:session): session closed for user root
Aug 15 22:45:26 vps1 sshd[26610]: Connection closed by 51.15.4.190 port 59872 [preauth]
Aug 15 22:46:15 vps1 sshd[26612]: Connection closed by 13.232.227.143 port 55030 [preauth]
Aug 15 22:46:23 vps1 sshd[26608]: Connection closed by 18.217.190.140 port 40804 [preauth]
Aug 15 22:47:28 vps1 sshd[26614]: Connection closed by 51.15.4.190 port 39044 [preauth]
Aug 15 22:48:20 vps1 sshd[26616]: Connection closed by 13.232.227.143 port 34286 [preauth]

So in this example, $(ls -altr *log | tail -1 | awk '{print $9}') is a substitution: bash executes the command in the parentheses and then passes the resulting value on for further processing (it becomes the parameter for the tail -10 command).

In our command above, we’re essentially executing the following command right now:

root@vps1:/var/log# tail -10 auth.log

Only auth.log is always the filename of the most recently updated log file, so it could become syslog or dpkg.log if they’re updated before the next auth.log update.
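
For completeness: ls can sort by modification time itself with -t, so the same result is possible without the awk step at all (assuming log filenames contain no spaces). The sample files below are created just to make the one-liner runnable anywhere:

```shell
# build sample data: two log files, with old.log backdated
mkdir -p /tmp/logs && cd /tmp/logs
printf 'old entry\n' > old.log
printf 'new entry\n' > fresh.log
touch -d '2020-01-01' old.log

# ls -t sorts newest first; head -1 picks the most recent log
tail -10 "$(ls -t *log | head -1)"   # prints: new entry
```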



Bash Scripts – Examples

I find it most useful when I approach any learning with a clear goal (or at least a specific enough task) to accomplish.

Here’s a list of simple Bash scripts that I think could be a useful learning exercise:

  • what’s your name? – asks for your name and then shows a greeting
  • hello, world! – also shows the hostname and lists the server’s IP addresses
  • system info – shows the hostname, disk usage, network interfaces and maybe system load
  • directory info – counts the disk space taken up by a directory and shows the number of files and subdirectories in it
  • users info – shows the number of users on the system, their full names and home directories (all taken from the /etc/passwd file)
  • list virtual hosts on your webserver – I actually have this as a login greeting on my webservers: a small script that highlights which websites (domains) the server has in its web server (Apache or Nginx) configuration

Do you have any more examples of what you’d like to do in a Linux shell? If not, I’ll just start with the examples above. The plan is to replace each example name in the list with a link to the corresponding Unix Tutorial post.
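
To get the ball rolling, here’s how the first exercise might look – a minimal sketch, with a fallback so it also behaves when input isn’t a terminal:

```shell
#!/bin/bash
# "what's your name?" -- ask for a name, then show a greeting

printf "What's your name? "
read -r name || name="stranger"   # fall back if stdin is closed
echo "Hello, $name!"
```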