Project: Password Protect a Website with nginx


I needed to keep a few older websites online for a short while, but didn’t want to leave them wide open in case their older CMS software was vulnerable – so I decided to protect them with a password.

What is nginx?

nginx (pronounced Engine-Ex) is a webserver, reverse proxy and caching solution powering a massive portion of websites on the Internet today. It’s a lightweight webserver with a non-blocking, event-driven implementation, meaning it can serve impressive amounts of traffic with humble resource requirements.

nginx was acquired by F5 in 2019.

I’ll be writing a lot more about nginx in 2020, simply because I’m finally catching up with my dedicated hosts infrastructure and will be getting the time to document my setup and best practices.

Password Protecting in nginx

There are a few steps to protecting a website using nginx (the steps are similar in the Apache web server, but implemented differently):

  1. Decide on the location of the passwords file and create it if needed
  2. Decide on the username and password
  3. Generate the password hash and add an entry to the passwords file
  4. Update the webserver configuration to require password protection

Because websites are configured as directory locations, you have a choice of protecting the whole website like www.unixtutorial.org or just a part (subdirectory) of it, like www.unixtutorial.org/images.

INTERESTING: even though it’s commonly referred to as password protecting a website, what you actually protect with is a username and password pair. So when you try to open a protected website, you get a prompt like this, right there in your browser:

Password protection prompt

Password file and username/password Configuration

Most of the time website access is controlled by files named htpasswd. You either create a default password file at /etc/nginx/htpasswd, or a website-specific version like /etc/nginx/unixtutorial.htpasswd.

You can create the file using the touch command:

# touch /etc/nginx/unixtutorial.htpasswd

Or better yet, use the htpasswd command to do it. Because htpasswd is part of the Apache tools, you may have to install it first:

$ sudo yum install httpd-tools
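
That’s the package name on RHEL/CentOS and friends; on Debian or Ubuntu the same htpasswd tool comes in the apache2-utils package:

$ sudo apt install apache2-utils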

When you run the htpasswd command, you specify two parameters: the password file name and the username you’ll use for access.

If the password file is missing, you’ll be notified like this:

$ sudo htpasswd /etc/nginx/htpasswd unixtutorial 
htpasswd: cannot modify file /etc/nginx/htpasswd; use '-c' to create it.

And yes, adding the -c option will get the file created:

$ sudo htpasswd -c /etc/nginx/htpasswd unixtutorial
New password:
Re-type new password:
Adding password for user unixtutorial

Now, if we cat the file, it will show the unixtutorial user and the password hash for it:

$ cat /etc/nginx/htpasswd
unixtutorial:$apr1$bExTryjo/$uxRop/uv5UwXvWl4EM5gv0

IMPORTANT: although this file doesn’t contain actual passwords, only their hashes, it can still be used to brute-force your passwords on powerful systems – so take the usual measures to restrict access to this file.
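
If you’d rather not install the Apache tools at all, openssl can generate a compatible Apache MD5 (apr1) hash – a sketch that appends an entry for the same unixtutorial user (it prompts for the password interactively):

$ printf "unixtutorial:%s\n" "$(openssl passwd -apr1)" | sudo tee -a /etc/nginx/htpasswd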

Update Website Configuration with Password Protection

I’ve got the following setup for this old website in my example:

server {
     listen      *:80;
     server_name forum.reys.net;
     keepalive_timeout    60;

     access_log /var/log/nginx/forum.reys.net/access.log multi_vhost;
     error_log /var/log/nginx/forum.reys.net/error.log;

location / {
     include "/etc/nginx/includes/gzip.conf";
     proxy_pass  http://172.31.31.47:80;

     include "/etc/nginx/includes/proxy.conf";
     include "/etc/nginx/includes/headers.conf";
     }
}

Protection is done on the location level. In this example, location / means my whole website is protected.

So right after the proxy_pass entry, I’ll add my password protection part:

auth_basic "Restricted";
auth_basic_user_file /etc/nginx/htpasswd;

As you can see, we’re referring to the password file we created earlier. The auth_basic “Restricted” part lets you configure a specific message (instead of the word Restricted) that will be shown with the username/password prompt.

This is how the password-protected part will look:

location / {
     include "/etc/nginx/includes/gzip.conf";
     proxy_pass  http://172.31.31.47:80;

     auth_basic "Restricted";
     auth_basic_user_file /etc/nginx/htpasswd;

     include "/etc/nginx/includes/proxy.conf";
     include "/etc/nginx/includes/headers.conf";
     }

Save the file and restart nginx:

$ sudo service nginx restart
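
If the restart fails, checking the configuration syntax usually points straight at the problem:

$ sudo nginx -t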

Now the website http://forum.reys.net is password protected!
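
And if you only wanted to protect part of the site – say an /images subdirectory, as mentioned earlier – the same two directives go into a more specific location block (a minimal sketch; the path is just an example):

location /images/ {
     auth_basic "Restricted";
     auth_basic_user_file /etc/nginx/htpasswd;
}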


Project: Compile exFAT-FUSE in RHEL 8

Compiled ExFAT-FUSE binaries in RHEL 8

I have a large external SSD attached to my new desktop, and because I multiboot between Windows and different versions of Linux, I decided to keep it factory-formatted with exFAT, an extended FAT-family filesystem that all of these operating systems can access for both read and write operations.

In today’s Unix Tutorial project I compile the exFAT-FUSE software to access the exFAT partition on that external disk from my RHEL 8 PC.



What is exFAT

FAT32 and exFAT originate from Windows. They belong to a family of filesystems that are native to Windows but go back to MS-DOS roots, so the format is generally well-known, well-documented and widely implemented in modern Unix and Linux systems.

What is exFAT-FUSE

Usually, filesystems require kernel modules to provide the desired functionality. Working with filesystems is considered low-level enough that it’s done in kernel space (as opposed to regular programs, which run in user space).

But when this functionality is not readily available, it’s possible to get some filesystem drivers working in user space by using FUSE.

FUSE stands for Filesystem in USErspace. It requires no filesystem-specific kernel module to work, although functionality can be slightly limited.

exFAT-FUSE is an example of a FUSE-based filesystem driver – you compile the binaries and run commands without any updates to the Linux kernel or its modules.
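
The one kernel-side piece FUSE filesystems do rely on is the generic fuse module, which ships with RHEL 8 – a quick sanity check before going any further (it is normally available out of the box):

# load (if needed) and confirm the generic FUSE kernel module
modprobe fuse
lsmod | grep fuse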

Installing exFAT Packages from EPEL

The usual approach is to use the EPEL repository and install the exFAT utilities from it…

Unfortunately, exfat packages are not available in the RHEL 8 version of the EPEL repository yet:

root@redhat:~ # yum --disablerepo="*" --enablerepo="epel" list available | grep exfat
root@redhat:~ #

This leaves us with the option of compiling the exfat-fuse package ourselves.

IMPORTANT: you need to have development tools (automake, autoconf, make, gcc and a few other bits and pieces) installed on your RHEL 8 system before going through the rest of the procedure.

You can install these tools using dnf:

root@redhat:~ # dnf group install "Development Tools"

Compile exFAT-FUSE in RHEL 8

Let’s download the exfat-fuse source code from GitHub:

root@redhat:~ # cd /dist
root@redhat:/dist # git clone https://github.com/relan/exfat.git
Cloning into 'exfat'…
remote: Enumerating objects: 3394, done.
remote: Total 3394 (delta 0), reused 0 (delta 0), pack-reused 3394
Receiving objects: 100% (3394/3394), 657.61 KiB | 1.58 MiB/s, done.
Resolving deltas: 100% (2184/2184), done.

Now prepare the configuration files for compiling:

root@redhat:/dist # cd exfat
root@redhat:/dist/exfat # autoreconf --install
configure.ac:32: installing './ar-lib'
configure.ac:29: installing './compile'
configure.ac:34: installing './config.guess'
configure.ac:34: installing './config.sub'
configure.ac:28: installing './install-sh'
configure.ac:28: installing './missing'
dump/Makefile.am: installing './depcomp'

… and attempt running the configure script:

root@redhat:/dist/exfat # ./configure --prefix=/soft
checking for a BSD-compatible install… /bin/install -c
checking whether build environment is sane… yes
checking for a thread-safe mkdir -p… /bin/mkdir -p
checking for gawk… gawk
checking whether make sets $(MAKE)… yes
checking whether make supports nested variables… yes
checking for gcc… gcc
checking whether the C compiler works… yes
checking for C compiler default output file name… a.out
checking for suffix of executables… 
checking whether we are cross compiling… no
checking for suffix of object files… o
checking whether we are using the GNU C compiler… yes
checking whether gcc accepts -g… yes
checking for gcc option to accept ISO C89… none needed
checking whether gcc understands -c and -o together… yes
checking whether make supports the include directive… yes (GNU style)
checking dependency style of gcc… gcc3
checking for gcc option to accept ISO C99… none needed
checking for ranlib… ranlib
checking for ar… ar
checking the archiver (ar) interface… ar
checking for special C compiler options needed for large files… no
checking for _FILE_OFFSET_BITS value needed for large files… no
checking build system type… x86_64-pc-linux-gnu
checking host system type… x86_64-pc-linux-gnu
checking for pkg-config… /bin/pkg-config
checking pkg-config is at least version 0.9.0… yes
checking for UBLIO… no
checking for FUSE… no
configure: error: Package requirements (fuse) were not met:
Package 'fuse', required by 'virtual:world', not found
Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix.

Alternatively, you may set the environment variables FUSE_CFLAGS and FUSE_LIBS to avoid the need to call pkg-config.

See the pkg-config man page for more details.

OK, that didn’t work. We need the FUSE development library installed:

root@redhat:/dist/exfat # yum install fuse-devel
Updating Subscription Management repositories.
Last metadata expiration check: 0:01:03 ago on Tue 15 Oct 2019 08:48:59 IST.

Dependencies resolved.

Package       Arch      Version         Repository                        Size

Installing:
  fuse-devel    x86_64    2.9.7-12.el8    rhel-8-for-x86_64-baseos-rpms     43 k

Transaction Summary

Install  1 Package

Total download size: 43 k
Installed size: 124 k
Is this ok [y/N]: y
Downloading Packages:
fuse-devel-2.9.7-12.el8.x86_64.rpm               17 kB/s |  43 kB     00:02
Total                                            17 kB/s |  43 kB     00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : fuse-devel-2.9.7-12.el8.x86_64                         1/1
  Running scriptlet: fuse-devel-2.9.7-12.el8.x86_64                         1/1
  Verifying        : fuse-devel-2.9.7-12.el8.x86_64                         1/1
Installed products updated.

Installed:
  fuse-devel-2.9.7-12.el8.x86_64

Complete!

This time configure works:

root@redhat:/dist/exfat # ./configure --prefix=/soft
checking for a BSD-compatible install… /bin/install -c
checking whether build environment is sane… yes
checking for a thread-safe mkdir -p… /bin/mkdir -p
checking for gawk… gawk
checking whether make sets $(MAKE)… yes
checking whether make supports nested variables… yes
checking for gcc… gcc
checking whether the C compiler works… yes
checking for C compiler default output file name… a.out
checking for suffix of executables… 
checking whether we are cross compiling… no
checking for suffix of object files… o
checking whether we are using the GNU C compiler… yes
checking whether gcc accepts -g… yes
checking for gcc option to accept ISO C89… none needed
checking whether gcc understands -c and -o together… yes
checking whether make supports the include directive… yes (GNU style)
checking dependency style of gcc… gcc3
checking for gcc option to accept ISO C99… none needed
checking for ranlib… ranlib
checking for ar… ar
checking the archiver (ar) interface… ar
checking for special C compiler options needed for large files… no
checking for _FILE_OFFSET_BITS value needed for large files… no
checking build system type… x86_64-pc-linux-gnu
checking host system type… x86_64-pc-linux-gnu
checking for pkg-config… /bin/pkg-config
checking pkg-config is at least version 0.9.0… yes
checking for UBLIO… no
checking for FUSE… yes
checking that generated files are newer than configure… done
configure: creating ./config.status
config.status: creating libexfat/Makefile
config.status: creating dump/Makefile
config.status: creating fsck/Makefile
config.status: creating fuse/Makefile
config.status: creating label/Makefile
config.status: creating mkfs/Makefile
config.status: creating Makefile
config.status: creating libexfat/config.h
config.status: executing depfiles commands

Let’s build the software. The make command compiles the source code into binary object files and eventually links them into executables. The end result of building exFAT-FUSE is a number of exFAT-specific commands for creating, checking and mounting exFAT filesystems.

root@redhat:/dist/exfat # make
Making all in libexfat
make[1]: Entering directory '/dist/exfat/libexfat'
(CDPATH="${ZSH_VERSION+.}:" && cd .. && /bin/sh /dist/exfat/missing autoheader)
rm -f stamp-h1
touch config.h.in
cd .. && /bin/sh ./config.status libexfat/config.h
config.status: creating libexfat/config.h
config.status: libexfat/config.h is unchanged
make  all-am
make[2]: Entering directory '/dist/exfat/libexfat'
...
mv -f .deps/mkexfatfs-vbr.Tpo .deps/mkexfatfs-vbr.Po
gcc  -g -O2   -o mkexfatfs mkexfatfs-cbm.o mkexfatfs-fat.o mkexfatfs-main.o mkexfatfs-mkexfat.o mkexfatfs-rootdir.o mkexfatfs-uct.o mkexfatfs-uctc.o mkexfatfs-vbr.o ../libexfat/libexfat.a 
make[1]: Leaving directory '/dist/exfat/mkfs'
make[1]: Entering directory '/dist/exfat'
make[1]: Nothing to be done for 'all-am'.
make[1]: Leaving directory '/dist/exfat'

That’s done. Now we need to install the software; for this, we run make install:

root@redhat:/dist/exfat # make install
Making install in libexfat
make[1]: Entering directory '/dist/exfat/libexfat'
make[2]: Entering directory '/dist/exfat/libexfat'
make[2]: Nothing to be done for 'install-exec-am'.
make[2]: Nothing to be done for 'install-data-am'.
make[2]: Leaving directory '/dist/exfat/libexfat'
make[1]: Leaving directory '/dist/exfat/libexfat'
Making install in dump
make[1]: Entering directory '/dist/exfat/dump'
make[2]: Entering directory '/dist/exfat/dump'
 /bin/mkdir -p '/soft/sbin'
  /bin/install -c dumpexfat '/soft/sbin'
 /bin/mkdir -p '/soft/share/man/man8'
 /bin/install -c -m 644 dumpexfat.8 '/soft/share/man/man8'
make[2]: Leaving directory '/dist/exfat/dump'
make[1]: Leaving directory '/dist/exfat/dump'
Making install in fsck
make[1]: Entering directory '/dist/exfat/fsck'
make[2]: Entering directory '/dist/exfat/fsck'
 /bin/mkdir -p '/soft/sbin'
  /bin/install -c exfatfsck '/soft/sbin'
make  install-exec-hook
...
make[3]: Entering directory '/dist/exfat/mkfs'
ln -sf mkexfatfs /soft/sbin/mkfs.exfat
make[3]: Leaving directory '/dist/exfat/mkfs'
 /bin/mkdir -p '/soft/share/man/man8'
 /bin/install -c -m 644 mkexfatfs.8 '/soft/share/man/man8'
make[2]: Leaving directory '/dist/exfat/mkfs'
make[1]: Leaving directory '/dist/exfat/mkfs'
make[1]: Entering directory '/dist/exfat'
make[2]: Entering directory '/dist/exfat'
make[2]: Nothing to be done for 'install-exec-am'.
make[2]: Nothing to be done for 'install-data-am'.
make[2]: Leaving directory '/dist/exfat'
make[1]: Leaving directory '/dist/exfat'

We can now go into the /soft/sbin directory and find the new binaries we just installed:

root@redhat:/dist/exfat # cd /soft/sbin
root@redhat:/soft/sbin # ls
dumpexfat  exfatlabel  mkexfatfs   mount.exfat
exfatfsck  fsck.exfat  mkfs.exfat  mount.exfat-fuse

Mount exFAT Filesystem

The moment of truth!

While in the same /soft/sbin directory, let’s run the mount.exfat-fuse command and attempt to mount the /dev/sdc1 partition:

root@redhat:/soft/sbin # mkdir /exfat
root@redhat:/soft/sbin # ./mount.exfat-fuse /dev/sdc1 /exfat
FUSE exfat 1.3.0

Success!

root@redhat:/soft/sbin # df -h /exfat
Filesystem      Size  Used Avail Use% Mounted on 
/dev/sdc1       932G  556G  377G  60% /exfat
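
When you need to detach the disk later, FUSE mounts are unmounted with fusermount (or a plain umount when running as root):

root@redhat:/soft/sbin # fusermount -u /exfat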

That’s it! It’s been fun 🙂


Project: Connect LG 5K Display to PC

As I mentioned, I’m building a new Linux-based desktop PC – currently running RHEL 8. Since I’m planning it as the primary desktop system for my home lab, I want to eventually migrate workflows from my MacBook Pro to the new PC – and this means I want to use my existing LG 5K UltraFine 27″ display. This seemed like an interesting Unix Tutorial Project from the very start!

Work In Progress

WARNING: this is a work-in-progress article; I plan to revisit and update it over the next few weeks/months.

LG 5K UltraFine Display

This is a great 5K display that’s still one of only a few options for getting a 5K-resolution screen in macOS without buying an iMac or iMac Pro from Apple.

It’s great for photography and excellent for day-to-day use because all the browser, document and terminal windows are super crisp. You’ll get similar results on the 4K displays available for PCs, but the LG 5K UltraFine has a better colour gamut and higher brightness than most.

Here’s how it looks:

… on the back, as you can see, there are only USB-C format ports: 1 for input and 3 for additional devices:

Why Using LG 5K on a PC is Tricky

The LG 5K is a stunning monitor, created by LG in collaboration with Apple specifically for use with Apple laptops. This poses four complications for anyone planning to use it with non-Apple hardware:

  1. You need Thunderbolt 3 over USB-C connectivity
  2. You need drivers for the camera, microphone and ambient light sensor built into the LG 5K display
  3. You need software support for the LG 5K brightness controls (there are no control buttons on this display – I kid you not!)
  4. You need software/driver support for the 5K resolution itself – so-called display tiling support

LG 5K display in Linux

So far, I have figured out Issue 1, kind of started looking into Issue 2, ignored Issue 3 altogether and spent a considerable amount of time with no result on Issue 4.

WARNING: I haven’t completed this project yet – I get output to the LG 5K display over the USB-C connector, but not at 5K resolution yet.

Display Tiling for 5K+ resolutions

Because many interfaces simply don’t have the bandwidth to push all these 5K+ pixels at acceptable (30Hz or 60Hz) refresh rates, there’s a common workaround in hi-res displays called display tiling. A resolution of 5120×2880 pixels is very bandwidth-hungry when it comes to the video cable connection.

Display tiling means you connect your hi-res display over multiple ports. Each port appears to be a separate display of typically half the resolution of your screen. Each of these virtual displays is called a tile, and your graphics adapter, its driver and possibly OS software should all support this tiling concept for a seamless experience – meaning your display appears to have 5K resolution, with software cleverly stitching tiles together into a single image you see.

In the LG 5K’s case, half the display is driven by one tile of 2560×2880 pixels and the other half by a second tile over a second DisplayPort stream. Combined, they give you the full 5K resolution: 5120×2880 pixels. But with an old driver or old software, you may see just half of the screen image, or a sub-5K resolution.

Connecting LG 5K to PC over Thunderbolt 3

In short, Issue 1 is this: the monitor has only one input – a USB-C cable that actually acts as a Thunderbolt 3 connection. The idea behind it is great: you plug this cable into a MacBook, and magic happens:

  • The MacBook shows output on the beautiful 5K display, with HiDPI support, etc.
  • Display devices like the camera start working for video calls, etc.
  • Any devices plugged into the USB-C ports on the back of the LG 5K display are presented to the MacBook – so the 5K display becomes a USB-C hub
  • Best of all: the MacBook is charged over the same cable

In practice, this is super useful: no messy cables, and all the stationary devices like USB drives and printers are plugged neatly into the back ports of the display and don’t even have to be touched. I plug a single cable from the monitor into the laptop and it all just works together!

But if I want the same functionality from a non-Apple PC, there are immediately quite a few issues:

  1. The USB-C ports your PC has are probably not going to work – they’re most likely USB 3.1 ports meant for storage devices, without DisplayPort Alt Mode functionality – so you have the correct connector, but not the correct functionality, and no picture will be shown on the display
  2. There aren’t many graphics cards with a USB-C output that will work with such displays
  3. There are even fewer graphics cards that can drive 5K over a single port

Thunderbolt 3 Add-In Cards

So what is the solution? You need to get one of the Thunderbolt 3 Add-In Cards (AIC). Here is the one I got, the Gigabyte GC-TITAN RIDGE AIC:

Gigabyte GC-TITAN RIDGE rev1.0 AIC

Specifically, ASUS, ASRock and Gigabyte make the ones I could find:

  • Asus
  • Gigabyte
  • ASrock

What Does Titan Ridge TB3 AIC Card Do?

The AIC provides full Thunderbolt 3 functionality – 40Gbit/sec connectivity channels via USB-C ports. It’s mostly meant as a storage adapter (similar to SCSI or RAID expansion cards), in the sense that you can connect high-speed directly-attached storage (DAS) units to it, like small or large disk arrays.

BUT, as it turns out, most Thunderbolt 3 Add-In Cards also help with graphics output via the USB-C interface. So there are two USB-C outputs and even a DisplayPort on my model:

DisplayPort output on the left, 2 USB-C outputs, 2 miniDisplayPort inputs

The way Titan Ridge TB3 card works for graphics output is that it takes one or two miniDisplayPort inputs, converts the signal (up to 8K resolution is supported, apparently!) and outputs it via USB-C cable to a compatible display.

What this approach means is that you still need a proper graphics card (GPU) installed in your desktop, but instead of plugging the display into it (which you can’t do over USB-C), you do the next best thing:

  1. you plug the TB3 card into the graphics card (most likely over 2 cables)
  2. you plug your fancy USB-C connected monitor into the Titan Ridge TB3 card

Here’s how connectors work (showing just 1 DisplayPort input for now):

On the left is the USB-C Thunderbolt cable going to LG 5K Display

For the LG 5K, be sure to get a Thunderbolt 3 card and not a Thunderbolt 2 one – again, an older card would have the correct ports but incorrect functionality for a 5K display. So it must be a Thunderbolt 3 card. And you definitely need 2 DisplayPort connections going from the graphics card into the Thunderbolt 3 AIC, otherwise you’ll be limited to 4K resolution.

If you’re shopping, look by the technology name, introduced by Intel. Alpine Ridge is the older model (not suitable for 5K), Titan Ridge is the one you need.

I got myself the Gigabyte GC-Titan Ridge TB3 card, because my new desktop PC has a Gigabyte motherboard.

Installing Titan Ridge AIC for LG 5K

Here are the steps I took to configure this TB3 AIC card in my desktop:

Inspect external ports on the card to confirm their order – where 1st miniDisplayPort goes, where 1st USB-C/Thunderbolt output is, etc:

IMPORTANT: for a 5K signal you need 2 DisplayPort connections, so it’s important that the 1st DP output on the GPU goes into the 1st DP input on the TB3 AIC card, and the 2nd DP output into the 2nd DP input.

On the AIC card itself, there are a bunch of internal headers and connectors:

From left to right on this photo:

  • Thunderbolt USB-C header (goes to your motherboard)
  • USB 3 header (goes to another port on your motherboard)
  • 2 PCIe type Power connectors

I suggest you connect all of them if your motherboard and power supply allow, but in my case I ended up disabling Thunderbolt support in BIOS and letting the AIC card figure things out.

Put TB3 AIC Card into the PCIe x4 Slot

I don’t know why, but most motherboards are super picky about where you should put your Thunderbolt 3 Add-In Card: it may work anywhere, but it’s best to check which slot is recommended.

For my Gigabyte Titan Ridge TB3 AIC, there’s this reference suggesting the PCIe slot to use for each Gigabyte motherboard that supports it:

Motherboards with GC-TITAN RIDGE support

Based on the above, I realised my GPU was actually installed in the PCIe x4 slot, so I moved it closer to the CPU, into the fastest x16 slot available. The TB3 AIC card took its place in the leftmost slot, PCIe x4:

GC-Titan Ridge card in the left PCIe x4 slot, Radeon RX580 in the PCIe x16 slot on the right

Double-check BIOS Version

You may need to upgrade the BIOS on your motherboard for TB3 support – check the manual or get in touch with me if you need help.

Fine-Tune BIOS Settings

I ended up disabling Thunderbolt support (it’s called Discrete Thunderbolt) in my BIOS altogether. With it enabled I could get DisplayPort output from the TB3 AIC, but not over USB-C. So after disabling support in BIOS, things started working.
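
If you keep Thunderbolt enabled in BIOS instead, it’s worth checking whether Linux actually sees the controller and the attached display – the bolt package on RHEL 8 provides a CLI for that (a diagnostic sketch, not something my final setup relies on):

$ sudo dnf install bolt
$ boltctl list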

Current Results of LG 5K to Desktop Project

  • Video output works and image shows on the LG 5K display
  • Windows 10 fully supports 5K resolution
  • RHEL 8 still only shows 3K resolution – work in progress
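
To see which modes the Linux side actually offers, xrandr is a quick check (assuming an Xorg or XWayland session; the output names will differ on your setup):

$ xrandr --query | grep -A 5 " connected"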

That’s it for now! Hope you learned something new – I know I did! This whole business of connecting my 2-year-old 5K display to a brand new PC turned out to be way more involved and educational than I expected.


Unix Tutorial Projects: Install Ubuntu 19.04 on Dell XPS 13 9380

I know that the upcoming Linux Mint release will be based on the just-released Ubuntu 19.04, but I just couldn’t wait that long to try Ubuntu 19.04 and all the improvements it brings. So my project for the past few weeks has been to install Ubuntu 19.04 on my Dell XPS 13 9380 laptop.

My Linux reinstall checklist

This is the first time I’m reinstalling Linux on this laptop, so it’s not a terribly comprehensive list – but it covers the basics:

  • ensure my (encrypted) homedir is going to survive the reinstall
  • copy SSH keys from server and key users, if required
  • capture list of installed packages (just in case)
  • capture list of running processes (just in case)
  • screenshot the status bar (to make sure the same things get autostarted after reinstall)

I also decided to use this opportunity to do the following:

  • check if Linux 5.0.0 helps with boot times (Linux Mint 19.2 was taking 25+sec to boot from super-fast SSD – clearly taking time to initialise some devices)
  • configure BTRFS to be the default filesystem for most of the storage – should be great for snapshots!
  • see if I can migrate the user profile for Brave browser when installing Brave from official repos (I compiled Brave from source on Linux Mint as you might remember)
  • check if there’s better support for sleep/hybrid sleep behavior

Migrating Encrypted Home Directory to New Linux Install

I had hoped for the Ubuntu 19.04 Live CD to have support for encrypted filesystems (it does offer encryption for your new install, after all), but couldn’t easily make it work – so I decided to take an easier approach: just copy the encrypted homedir to another (unencrypted) partition for now – I’ll do further testing next time I reinstall.

Capture Currently Installed Packages as a List

Since Ubuntu and Linux Mint are ultimately based on Debian Linux, I still find the dpkg command the easiest way to get the list of installed packages:

greys@xps:/ $ dpkg -l > /storage/dpkg-l.txt

The first few lines will look like this if you check the file:

greys@xps:/ $ head -10 /storage/dpkg-l.txt
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==========================================-====================================-============-===============================================================================
ii accountsservice 0.6.50-0ubuntu1 amd64 query and manipulate user account information
ii acl 2.2.53-4 amd64 access control list - utilities
ii acpi-support 0.143 amd64 scripts for handling many ACPI events
ii acpid 1:2.0.31-1ubuntu2 amd64 Advanced Configuration and Power Interface event daemon
ii adduser 3.118ubuntu1 all add and remove users and groups

Capture Process Listing as a File

This was also easy:

greys@xps:/ $ ps -aef > /storage/ps-aef.txt

Burn Ubuntu ISO onto USB stick

I followed my “bootable USB from ISO in macOS” steps for this after downloading the Ubuntu 19.04 ISO image. Because it’s not a Windows image, there are fewer steps, and you can use Etcher for the bootable USB procedure as well.

Install Ubuntu 19.04, Replacing Existing Linux

Although the procedure is mostly the same as in my Ubuntu 19.04 release post back in April, I wanted to share the exact steps of installing it on the laptop:

COOL FACT: because you install using the USB-based live-DVD approach, you can take screenshots as you normally would in Ubuntu. BUT you need to copy them somewhere safe before you reboot out of the live-DVD.

Update File/Directory Ownership (If Necessary)

It’s not uncommon after an OS reinstall to end up with a bunch of files from the previous installation that don’t have correct ownership details: they show user IDs and group IDs as plain numbers, meaning your new install doesn’t have a matching user or group. Since I’m only concerned with migrating my username greys and its home directory, my task was simple: compare the user IDs/group IDs and make sure they match.
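
A quick way to spot such a mismatch is to compare the numeric IDs directly – a small sketch (/storage and greys are from my setup; substitute your own path and username):

$ ls -ln /storage | head -5
$ id greys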

If they don’t, the fix is simple: once your new install is completed, log in and run the following command with sudo:

greys@xps:/ $ sudo chown -R greys:greys /storage

This will make my new user greys and its group greys the new owners of everything under /storage. You should specify your own username and group name, of course. The point is, this approach doesn’t need to know which numeric userid or groupid you had before – it simply updates the ownership info so that your newly created user is definitely the owner of the files under /storage, or whatever directory you specify.

Laptop Performance with Ubuntu 19.04

I’m happy to report that Ubuntu 19.04 brings a number of very welcome improvements to my Dell XPS 9380 setup:

  • Much faster boot time – there’s less than 15 seconds of cold boot time now which is pretty great. I think it’s due to better device support in 5.x Linux kernel – it recognises and initialises devices much better.
  • Noticeably better suspend/resume behavior – better power management must be at play, because now I have a pretty good chance of my laptop falling asleep upon closing the lid. On the 4.x kernel it pretty much always stayed awake, which resulted in an overheating laptop and a drained battery.
  • Better WiFi support – although WiFi driver/kernel module keeps crashing every few days and this means a reboot if I want to get back online.

Project Follow Up

I have a number of things to research based on this Unix Tutorial project:

  1. Get to the bottom of the WiFi driver/kernel module issue – rebooting the laptop every few days is not acceptable. At the very least I’ll find which module to reload without a reboot (see the sketch after this list), but ideally I need to find out if a better driver is available to avoid all this maintenance altogether.
  2. Snap apps don’t seem to work on my external homedir based on the btrfs filesystem. Could be a problem with btrfs, could be a problem with the non-standard homedir location – either way Snaps don’t work, so I had to migrate my homedir to the standard /home/greys location for now.
  3. Graphics performance seems improved but still not as snappy as I would expect. Must investigate further – I know my XPS laptop doesn’t have a discrete graphics card but still expect it to be powerful enough to handle normal daily use and interface elements in Ubuntu 19.04 like showing tasks or searching for an app.
  4. Improve the Dell XPS keyboard backlight situation – specifically, automate the keyboard backlight for Dell laptops in Linux with a cron script.
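
Here’s the kind of module reload I have in mind for the WiFi issue – a sketch only, assuming the Killer/QCA6174 card these XPS models ship with, which uses the ath10k_pci driver (confirm the module name on your own system first):

# confirm which kernel module drives the WiFi card
$ lspci -k | grep -A 3 -i network
# reload it without a full reboot
$ sudo modprobe -r ath10k_pci && sudo modprobe ath10k_pci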



Use OfflineIMAP For Receiving Email


This week’s Unix Tutorial Project is super geeky and fun: I’m setting up a text-based email archive system using Mutt (NeoMutt, actually), OfflineIMAP and hopefully NotMuch. I’ll publish a project summary at the weekend.

Why use OfflineIMAP

OfflineIMAP is an open-source tool for downloading your email messages and storing them locally in Maildir format (meaning each email message is stored in a separate file, and each folder/Gmail tag is a separate directory).

As the name suggests, this tool’s primary objective is to let you read your emails offline. Contrary to the other part of the name, offlineimap is NOT an IMAP server implementation.

I’d like to explore OfflineIMAP/Neomutt setup as a backup/archive solution for my cloud email accounts. I used to be with Fastmail but switched to gSuite email last year. I think it’s very important to keep local copies of any information you have in any cloud – no matter how big/reliable the service provider is, there are many scenarios where your data could be completely lost, and responsibility for keeping local backups is always with you.

Both Gmail and Fastmail are perfect for web browser use, but any local email software is invariably bulkier and slower compared to the web interface. I’m not giving up on finding an acceptably fast and reliable solution, though.

This is my most recent attempt to download all my emails and have them easily searchable on my local PCs and laptops.

OfflineIMAP Configuration Steps

I’m only learning this tool, so this is probably the most basic usage:

  1. Confirm your mail server details (IMAP)
  2. Confirm your mailbox credentials (for Google, gSuite and even Fastmail you need to generate an app password – it’s separate and different from your primary email password)
  3. Create .offlineimaprc file in your home directory as shown below
  4. If necessary, create credentials file (for now – with cleartext app password for email access) – mine is /home/greys/.creds/techstack.pass
  5. Run offlineimap (first time and every time you want your email refreshed)

My .offlineimaprc file

Here’s what I have in my .offlineimaprc file for this experiment:

[general]
ui = ttyui
accounts = techstack

[Account techstack]
localrepository = techstack-local
remoterepository = techstack-remote

[Repository techstack-local]
type = Maildir
localfolders = ~/Mail/techstack/

[Repository techstack-remote]
type = Gmail
remoteuser = greys@techstack.ie
remotepassfile = ~/.creds/techstack.pass
maxconnections = 5
ssl = yes
sslcacertfile = /etc/ssl/certs/ca-certificates.crt
folderfilter = lambda foldername: foldername not in ['Archive']
expunge = no

You can have multiple accounts in this one config file; they’ll be listed in the accounts setting (accounts = techstack, unixtutorial would mean 2 accounts: the techstack one and one for my Unix Tutorial email).

The localfolders parameter specifies that I want OfflineIMAP to create a Mail directory in my homedir (so /home/greys/Mail) and then a techstack subdirectory there – meaning you can have account subdirectories like /home/greys/Mail/techstack and /home/greys/Mail/personal, etc.

You define two repositories, a local one and a remote one. The task of OfflineIMAP is to sync the two.

IMPORTANT: one parameter to pay attention to is maxconnections. The default is 3, and I’ve changed it to 5 for quicker email sync. Setting it to a higher value resulted in failures – probably because Google servers rate-limit my connections.

CRITICAL: the expunge parameter is set to yes by default, so you must set it to no if you plan to keep emails on the mail server after you sync them. By default they would be removed from the server as soon as they are downloaded, meaning the Gmail app wouldn’t see any messages. Once deleted, it would be rather tricky to restore all the emails – so it’s important to get this setting right from the very start. Since my primary usage is still web- and Gmail app-based, I certainly want all my emails to stay in Google’s cloud even after I download them with OfflineIMAP – that’s why I configured expunge = no.

As you can see, this config references the /home/greys/.creds/techstack.pass file. This file contains a clear-text application password I generated for my email address in the gSuite admin panel. My understanding is that this can be improved, so I’ll do a follow-up post later.

How To Use OfflineIMAP

Simply run the offlineimap command and you should see something like this:

greys@xps:~ $ offlineimap 
OfflineIMAP 7.2.2
Licensed under the GNU GPL v2 or any later version (with an OpenSSL exception)
imaplib2 v2.57 (system), Python v2.7.16, OpenSSL 1.1.1b 26 Feb 2019
Account sync techstack:
*** Processing account techstack
Establishing connection to imap.gmail.com:993 (techstack-remote)
Folder 2016 [acc: techstack]:
Syncing 2016: Gmail -> Maildir
Folder 2016/01-January [acc: techstack]:
Syncing 2016/01-January: Gmail -> Maildir
Folder 2016/02-February [acc: techstack]:
Syncing 2016/02-February: Gmail -> Maildir
Folder 2016/01-January [acc: techstack]:

As you can see, it processes the account techstack, connects to Gmail and starts processing remote folders (Gmail tags) like 2016, 2016/01-January, 2016/02-February etc – these are the tags I have in my gSuite account.

The initial download will take a while. My 150K messages took almost 3 days to download.
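
Once the initial sync is done, a cron entry can keep the local archive fresh – a minimal sketch (the 30-minute interval, the quiet UI and the offlineimap path are just my assumptions; adjust to taste):

# refresh mail every 30 minutes, run once per invocation, minimal output
*/30 * * * * /usr/bin/offlineimap -o -u quiet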

That’s all for today, hope you give OfflineIMAP a try!


Projects: Automatic Keyboard Backlight for Dell XPS in Linux

 


Last night I finished a fun mini project as part of Unix Tutorial Projects. I have written a basic script that can be added as a root cronjob to automatically control the keyboard backlight on my Dell XPS 9380.

Bash Script for Keyboard Backlight Control

As I wrote just a couple of days ago, it’s actually quite easy to turn the keyboard backlight on or off on a Dell XPS in Linux (and this probably works with other Dell laptops too).

Armed with that knowledge, I’ve written the following script:

#!/bin/bash

# Working directory, lock file and log file used by this script
WORKDIR=/home/greys/scripts/backlight
LOCKFILE=backlight.kbd
LOGFILE=${WORKDIR}/backlight.log

# Current keyboard backlight level reported by the Dell laptop driver
KBDBACKLIGHT=`cat /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness`

HOUR=`date +%H`

echo "---------------->" | tee -a $LOGFILE
date | tee -a $LOGFILE

# Late evening/night hours: backlight on (level 3); otherwise: off (level 0)
if [ $HOUR -lt 4 -o $HOUR -gt 21 ]; then
    echo "HOUR $HOUR is rather late! Must turn on backlight" | tee -a $LOGFILE
    BACKLIGHT=3
else
    echo "HOUR $HOUR is not too late, must turn off the backlight" | tee -a $LOGFILE
    BACKLIGHT=0
fi

if [ $KBDBACKLIGHT -ne $BACKLIGHT ]; then
    echo "Current backlight $KBDBACKLIGHT is different from desired backlight $BACKLIGHT" | tee -a $LOGFILE

    # Look for a lock file updated within the last 24 hours (1440 minutes)
    FILE=`find ${WORKDIR} -mmin -1440 -name ${LOCKFILE}`

    echo "FILE: -$FILE-"

    if [ -z "$FILE" ]; then
        echo "No lock file! Updating keyboard backlight" | tee -a $LOGFILE

        echo $BACKLIGHT > /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness
        touch ${WORKDIR}/${LOCKFILE}
    else
        echo "Lockfile $FILE found, skipping action..." | tee -a $LOGFILE
    fi
else
    echo "Current backlight $KBDBACKLIGHT is the same as desired... No action needed" | tee -a $LOGFILE
fi

How My Dell Keyboard Backlight Script Works

This is what my script does when you run it as root (it won’t work if you run it as a regular user):

  • it determines the WORKDIR (I defined it as /home/greys/scripts/backlight)
  • it starts writing log file backlight.log in that $WORKDIR
  • it checks for lock file backlight.kbd in the same $WORKDIR
  • it confirms current hour and checks if it’s a rather late hour (when it must be dark). For now I’ve set it between 21 (9pm) and 4 (4am, that is)
  • it checks the current keyboard backlight status (the $KBDBACKLIGHT variable)
  • it compares this status to the desired state (based on which hour that is)
  • if we need to update keyboard backlight setting, we check for lockfile.
    • If a recent enough file exists, we skip updates
    • Otherwise, we set the backlight to new value
  • all actions are added to the $WORKDIR/backlight.log file

Log file looks like this:

greys@xps:~/scripts $ tail backlight/backlight.log 
---------------->
Tue May 28 00:10:00 BST 2019
HOUR 00 is rather late! Must turn on backlight
Current backlight 2 is different from desired backlight 3
Lockfile /home/greys/scripts/backlight/backlight.kbd found, skipping action...
---------------->
Tue May 28 00:15:00 BST 2019
HOUR 00 is rather late! Must turn on backlight
Current backlight 2 is different from desired backlight 3
Lockfile /home/greys/scripts/backlight/backlight.kbd found, skipping action...

How To Activate Keyboard Backlight cronjob

I have added this script to the root user’s crontab. In Ubuntu 19.04 running on my XPS laptop, this is how it was done:

greys@xps:~/scripts $ sudo crontab -e
[sudo] password for greys:

I then added the following line:

*/5 * * * * /home/greys/scripts/backlight.sh

Depending on where you place a similar script, you’ll need to update the full path to it (mine is under /home/greys/scripts) and then update the WORKDIR variable in the script itself.

Keyboard Backlight Unix Tutorial Project Follow Up

Here are just a few things I plan to improve:

  • see if I can access Ubuntu’s Night Light settings instead of hardcoding hours into the script
  •  fix the timezone – should be IST and not BST for my Dublin, Ireland location
  • Just for fun, try logging output into one of system journals for journalctl
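
For the journalctl idea, the simplest route is probably logger, which writes tagged messages to syslog/journald – a sketch of what could replace (or complement) the tee calls in the script:

# inside the script: log with a tag instead of appending to backlight.log
logger -t backlight "Setting keyboard backlight to $BACKLIGHT"

# later, read the entries back:
journalctl -t backlight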




Projects: Start tmux Instead of Login Prompt on tty1


I’m writing about so many experiments and Unix Tutorial projects lately that I’m going to create a hardware lab section on this blog soon!

As you may know, I have a Raspberry Pi system named becky, and it has a 7″ touchscreen attached to it. I recently started the Centralised RSyslog Project based around that Raspberry Pi system, and finally got the time to implement another super important improvement: configure tmux auto-start on the main Raspberry Pi screen.

Project Objectives

Right now, the Raspberry Pi screen shows a text login prompt. So before I can see any RSyslog logs or work with files, I have to attach a keyboard, log in with a username and password and then use tail to review the logs.

Since becky isn’t my primary workstation, I don’t have a dedicated keyboard for it – which means after each reboot I have to find and reconnect a keyboard to start monitoring logs.

I decided to optimise this by using tmux. But if I SSH into the Raspberry Pi and start tmux from the command line, my tmux session won’t show on the primary display (the text console). So I want to find a way of auto-starting tmux at system boot AND on the primary text console.

Once this is done, I can always ssh in remotely and reconnect to the tmux session – meaning anything I type or do via SSH will also show on the primary Raspberry Pi screen. Once I start the command I need, it should be possible to detach from tmux and close my SSH session – the primary screen on the Raspberry Pi will keep my commands running and visible.

Here’s what I want to accomplish:

  • automatically start tmux session with system boot
    • ensure tmux starts on the primary screen of Raspberry Pi (so I can see it)
    • eliminate the need to connect keyboard to Raspberry Pi in order to login
  • create basic tmux startup script to allow for later improvements

Linux console: text login prompt

If you’ve ever connected a monitor to your Raspberry Pi or to any server running Linux like CentOS/Red Hat or Debian/Ubuntu, chances are you’ve seen the black screen with a login prompt.

This login prompt is what’s called a console login prompt – a level of protection created specifically around direct access to a server or desktop. Much like a Windows or macOS system will show you a login screen when you power it up or connect a monitor, Linux systems do the same. But servers rarely have a graphical login screen, so instead you get a text version of it.

You must enter a valid username and password to log into the text console.

getty and tty1

Because Unix is built for scale, the text console isn’t the only way of accessing your Linux system: in addition to attaching a monitor, you can also connect using a serial or USB connection, or via SSH.

All such methods of access involve working with virtual consoles: special interactive sessions with your Unix system that are created for every user accessing the server.

getty is one of the most common software implementations for managing virtual consoles – it’s a program that lets you create multiple virtual consoles for various kinds of access. So for the login prompt on the screen attached to the Raspberry Pi, getty creates and manages the tty1 console.

Creating tmux start script

We’ll need to write a simple script to start tmux. Having this tmux.sh script will let you expand it later, to create multiple panes and start different commands in them – all automatically.

I have a pretty basic script right now – it just starts tmux and gives me a command line prompt. I called this script /home/greys/tmux.sh:

#!/bin/bash

# kill any previous "start" session (ignoring errors), then create a fresh one as user greys
su greys -c "/usr/bin/tmux kill-session -t start 2> /dev/null"
su greys -c "/usr/bin/tmux new-session -s start"

In this example, start is the name of my initial tmux session. And yes, this is a very simple way of achieving my result – I’ll update this tutorial later to show a slightly more complex but also more correct way of doing the same.

IMPORTANT: don’t forget to make this script executable:

chmod a+rx /home/greys/tmux.sh

I suggest you try running this script now to see if it works as expected (you should end up inside a tmux session named start).
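
As a taste of those later improvements, the same script could pre-create panes and start commands in them automatically – a sketch only (the tail command and log path are just examples):

#!/bin/bash

# start a detached session named "start" as user greys
su greys -c "/usr/bin/tmux kill-session -t start 2> /dev/null"
su greys -c "/usr/bin/tmux new-session -d -s start"
# add a second pane and start tailing the syslog in it (example command)
su greys -c "/usr/bin/tmux split-window -v -t start"
su greys -c "/usr/bin/tmux send-keys -t start 'tail -f /var/log/syslog' C-m"
# finally attach, so the session shows up on the console
su greys -c "/usr/bin/tmux attach -t start"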

Auto-starting tmux in tty1

By default, each console like tty1 runs a login prompt – the idea being that there’s no way to access (and harm) your Linux system without knowing a username and password.

For my Raspberry Pi-based RSyslog server, I’m not particularly worried about unauthorised access. Moreover, I’m interested in having an easy way of changing what’s shown in the text console on the Raspberry Pi screen. That’s why I decided to replace the default login with a tmux auto-start.

IMPORTANT: please don’t follow the same steps unless and until you have other means (SSH) of accessing your server remotely. There are ways to configure multiple virtual consoles, but in this tutorial I’m only showing how to override the default tty1, which means that after you apply the changes you may lose the login prompt to your system.

Update systemd to make getty start your script

Now that we have the tmux.sh script from the previous step, let’s configure a systemd override to make getty start tmux.sh on tty1.

Here’s how you can configure auto-start of any software on your tty1 console (it should be the default one in Ubuntu or Raspbian): you need to create an override.conf file in the /etc/systemd/system/getty@tty1.service.d directory:

root@becky:/home/greys# cd /etc/systemd/system/getty@tty1.service.d
root@becky:/etc/systemd/system/getty@tty1.service.d # vi override.conf
[Service]
ExecStart=
ExecStart=-/home/greys/tmux.sh
StandardInput=tty
StandardOutput=tty

This will call the /home/greys/tmux.sh script instead of the default login.

Refresh systemd and restart getty service

The moment of truth: we need to refresh systemd and restart getty to apply our changes.

IMPORTANT: login using SSH, do NOT run these commands from the primary screen/login console on your server.

# systemctl daemon-reload
# systemctl restart getty@tty1.service

If everything works fine, your screen on Raspberry Pi should show a tmux session like this one:

If something’s not right, back out the changes before rebooting your Raspberry Pi:

# mv /etc/systemd/system/getty@tty1.service.d/override.conf /root
# systemctl daemon-reload
# systemctl restart getty@tty1.service

Yes, you recognised it correctly: we simply move the override.conf file out of the way and restart getty the same way we did in the previous step.

That’s it for today!


Projects: NAS storage with Helios 4

This past week (actually past two weeks) I worked on building a compact NAS storage system using the Helios 4 kit I had received a few weeks ago. The expected end result is a 4-drive RAID5 storage available to Windows, Linux and Mac backup clients via native file transfer protocols.

Helios 4 NAS storage

Helios 4 looks pretty great: it’s a system-on-a-chip with 4 SATA ports and a gigabit network interface, supplied with additional hardware functionality for speeding up RAID operations.

  • Hardware base: Marvell ARMADA® 388 MicroSOM
  • CPU: dual-core ARM Cortex-A9 clocked at 1.6 GHz
  • RAM: 2GB of ECC (very cool that it’s ECC!)
  • Storage: 4 SATA ports + microSD card for the OS image

I got my Helios 4 as part of the 3rd wave of the campaign; it seems to be available for pre-order now.

Building Helios 4

The kit arrives as a batch of parts that are practically ready to be assembled:

There’s a great Wiki with setup instructions that I followed.

Let’s peel the packaging paper off:

… now attach the disks:

… install motherboard:

… and add the fans:

And that’s it! We have a brand new shiny network storage, ready for software install and configuration:

Armbian installation for Helios 4

You can download the latest Armbian version here: Kobol.io – Helios 4 images.

Let’s burn the IMG of Armbian onto a microSD card:

root@xps:~# dd if=/home/greys/Downloads/Armbian_5.77_Helios4_Debian_stretch_next_4.14.106.img | pv | dd of=/dev/mmcblk0p1 bs=1M
2121728+0 records in
2121728+0 records out
1086324736 bytes (1.1 GB, 1.0 GiB) copied, 39.7346 s, 27.3 MB/s
1.01GiB 0:00:39 [26.1MiB/s] [ <=> ]
0+13928 records in
0+13928 records out
1086324736 bytes (1.1 GB, 1.0 GiB) copied, 108.371 s, 10.0 MB/s
root@xps:~# ls

Cool! Now we just put this into the Helios 4 system and power it on. Now would also be the time to connect the USB-to-microUSB cable to my XPS laptop and use picocom to connect over it to the Helios 4 console (I had to apt install picocom). Since I use the Dell XPS 13 2019 model, there are no USB-A ports – only USB-C. So in my case the console connection is: USB-C adapter -> USB cable -> micro-USB connector plugged into the Helios 4.

greys@xps:~$ sudo picocom -b 115200 /dev/ttyUSB0

This will eventually show the fully booted OS and the progress of the Helios 4 setup:


I configured my account greys and then progressed with the post-configuration.

Helios 4 Post Configuration

When you boot, you get a fully working Debian Stretch-based system, ready for post-configuration.

Step 1: configure static IP address

Once you log in, it’s probably best to set a static IP address and then continue the setup over an SSH connection instead of USB serial.

As root, edit the /etc/network/interfaces file and add eth0 settings similar to these:

auto eth0
iface eth0 inet static
address 192.168.1.XXX
netmask 255.255.255.0
gateway 192.168.1.1

IMPORTANT: don’t forget to replace 192.168.1.XXX with a correct IP address in your network, like 192.168.1.123. Same goes for the gateway IP address.

Once changes are made, reboot using shutdown -r now and reconnect using ssh.

Step 2: install the OpenMediaVault

Run the armbian-config command as root and select Software

… then select Softy:

… and then select OMV:

That’s it: we can now open the IP address in a browser and log in using the default admin user (username: admin, password: openmediavault):

Step 3: change default OpenMediaVault login

This is admin/openmediavault initially, so change the password as soon as you can using the General Settings > Web Administrator Password section:

Always Apply Your OpenMediaVault Changes

VERY IMPORTANT: one rather annoying feature of OpenMediaVault is that you have to explicitly apply all of your changes. You can create new users, configure shares or even build RAID arrays, but that all happens only in the OMV interface. You have to click Apply in the yellow status prompt at the top of the OMV browser window to actually commit the changes and restart the relevant services. The reason this is annoying is that the yellow prompt doesn’t show up immediately, so when you move to a different section of OpenMediaVault administration you may forget about it and not apply your changes until much later.

Here’s an example of this thing:

RAID5 setup on Helios 4

From the main menu of the OMV, I created a RAID5 array using the 4 disks I have available.

I contemplated getting a 2-disk parity setup, but decided against it because this NAS server is a secondary storage device in my home office – I have a Synology system running 2-disk parity with larger disks.

The plan for the Helios 4 system is therefore to provide about 8.5TB of NAS storage for immediate backups and temporary projects. Critical data will be backed up to the Synology NAS system in parallel (definitely NOT from Helios 4 to Synology, but directly from the backup clients).

Check which disks you have:

Now erase (wipe) each one of them:

Here’s how RAID5 is created:

And here you can see how it looks from the command line:

root@helios4:~# mdadm --detail /dev/md0
mdadm: Unknown keyword INACTIVE-ARRAY
/dev/md0:
Version : 1.2
Creation Time : Sun Apr 7 23:22:37 2019
Raid Level : raid5
Array Size : 8790405120 (8383.18 GiB 9001.37 GB)
Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Apr 7 23:23:14 2019
State : clean, resyncing
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Resync Status : 0% complete

Name : helios4:0 (local to host helios4)
UUID : 97115daa:993d28e6:50a61a7c:c980f755
Events : 9

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8        0        1      active sync   /dev/sda
       2       8       32        2      active sync   /dev/sdc
       3       8       48        3      active sync   /dev/sdd
root@helios4:~#

The performance is pretty impressive: the initial RAID5 re-sync is happening at an average speed of 80MB/sec (the default non-prioritised sync on my 2-disk parity Synology system was around 20MB/sec last time I checked):

root@helios4:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdd[3] sdc[2] sda[1] sdb[0]
      8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [=>...................]  resync =  8.5% (250402304/2930135040) finish=555.8min speed=80342K/sec
      bitmap: 21/22 pages [84KB], 65536KB chunk
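
To keep an eye on the re-sync as it progresses, watch works nicely:

root@helios4:~# watch -n 10 cat /proc/mdstat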

Even before this re-syncing completes, you can create a filesystem on the newly created RAID5 array:


Once the filesystem is created, it’s possible to move on to setting up shared folders and network shares.

Network Shares Setup

After wasting quite a few days trying to access basic network shares using my primary user on the new NAS system, I realised it was a mistake: that user is for administrative purposes and is not a member of the users group. It turns out I either need to add users group membership for greys or create another user specifically for network share access.

After brief consideration I decided to go with a separate user called nas: passwords have to be in clear-text form in some of my automation scripts, so it makes sense to use a dedicated user instead of risking the password of an admin user being leaked.

Add shared folder:

Now you need to enable the sharing services: select the ones you need from the menu and tick the Enable option like this:

You can always select the Dashboard in the menu to see which services are enabled:

For every shared folder you create, be sure to set the privileges:

Mounting NAS storage from MacOS

So far I have only accessed the storage from Windows and MacOS.

Here are the commands I used for mounting the AFP (Apple Filing Protocol) share and the CIFS (Windows/Samba) share – they both point to the same folder on the Helios 4 storage, actually.

First, create mountpoints:

maverick:~ root# mkdir /windows_try /afp_try

Now, let’s try Windows share access:

maverick:~ root# mount_smbfs smb://nas:SECRETPASSWORD@192.168.1.XXX/Stuff /windows_try

And now, MacOS style via AFP:

maverick:~ root# mount_afp afp://nas:SECRETPASSWORD@192.168.1.XXX/Stuff /afp_try

Note: I replaced the last octet in the helios4 IP with XXX and also swapped my real nas user password with the SECRETPASSWORD word – these need to be set to the real values of your NAS storage and user if you want to try the same setup.
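
And on a Linux client, the same share can be mounted over CIFS – a sketch, assuming the cifs-utils package is installed and using the same placeholder IP/credentials as above:

$ sudo mkdir -p /mnt/nas
$ sudo mount -t cifs //192.168.1.XXX/Stuff /mnt/nas -o username=nas,password=SECRETPASSWORD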

Things achieved with this project or scheduled for the near future:

  • Setup open-source NAS storage with RAID5 setup using 4 disks – DONE
  • Setup network share access for Windows, Linux and MacOS – DONE
  • Setup nas user with SSH keys for passwordless access in MacOS and Linux – PENDING
  • Setup automated rsync of project areas on my laptops to NAS storage – PENDING
  • Setup automated downloads of my hosting backups to new NAS storage – PENDING
  • Setup secondary RSyslog server on the new NAS storage – PENDING

That’s it for now! I’m glad I finally completed this project – Helios 4 seems like a fun system to use and a great option for exploring ways of configuring and presenting NAS storage using the latest software solutions available.

See Also




Unix Tutorial Projects: Compiling Brave browser on Linux Mint

brave-logotype-full-color

Some of you may have noticed: I added the link to Brave browser to the sidebar here on Unix Tutorial. That’s because I’m giving this new browser a try and support its vision to reward content producers via Brave’s Basic Attention Token cryptocurrency. If you aren’t using Brave already, download and try it using my link.

In this Unix Tutorial Project, just because it seems fun and educational enough, I’ll attempt to compile Brave browser on my Dell XPS 13 laptop running Linux Mint 19. There’s a much easier way to install Brave browser from the official repositories: official instructions here.

Make sure you have enough disk space

This project surprised me a bit. I had 20GB of space and thought it would be enough! Then I saw the git download alone would be almost 15GB, but still hoped I’d get away with it.

I was wrong! I ended up resizing the Windows 10 partition on my laptop to free up space for another 100GB Linux filesystem.

The final space consumption is 67GB. That’s a lot of source code, plus an impressive number of object files (32 thousand of them!) – the intermediary binaries you get when compiling a large project, which are then linked into the final executable:

root@xps:/storage/proj# du -sh brave-browser
67G brave-browser
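If you’re curious about the object file count, a quick way to get it is something like this – just a sketch, the exact number will differ between builds:

root@xps:/storage/proj# find brave-browser -name '*.o' | wc -l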

Prepare Linux Mint 19 for Compiling Brave Browser

Following instructions from https://github.com/brave/brave-browser/wiki/Linux-Development-Environment, I first installed the packages:

greys@xps:~$ sudo apt-get install build-essential libgnome-keyring-dev python-setuptools npm
[sudo] password for greys: 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
build-essential is already the newest version (12.4ubuntu1).
The following package was automatically installed and is no longer required:
libssh-4
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
gir1.2-gnomekeyring-1.0 gyp libc-ares2 libgnome-keyring-common libgnome-keyring0 libhttp-parser2.7.1 libjs-async libjs-inherits libjs-node-uuid libjs-underscore
libssl1.0-dev libssl1.0.0 libuv1-dev node-abbrev node-ansi node-ansi-color-table node-archy node-async node-balanced-match node-block-stream node-brace-expansion
node-builtin-modules node-combined-stream node-concat-map node-cookie-jar node-delayed-stream node-forever-agent node-form-data node-fs.realpath node-fstream
node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-hosted-git-info node-inflight node-inherits node-ini node-is-builtin-module node-isexe
node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream node-node-uuid node-nopt node-normalize-package-data node-npmlog
node-once node-osenv node-path-is-absolute node-pseudomap node-qs node-read node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-slide
node-spdx-correct node-spdx-expression-parse node-spdx-license-ids node-tar node-tunnel-agent node-underscore node-validate-npm-package-license node-which node-wrappy
node-yallist nodejs nodejs-dev python-pkg-resources
Suggested packages:
node-hawk node-aws-sign node-oauth-sign node-http-signature debhelper python-setuptools-doc
Recommended packages:
javascript-common libjs-jquery nodejs-doc
The following packages will be REMOVED:
libssh-dev libssl-dev
The following NEW packages will be installed:
gir1.2-gnomekeyring-1.0 gyp libc-ares2 libgnome-keyring-common libgnome-keyring-dev libgnome-keyring0 libhttp-parser2.7.1 libjs-async libjs-inherits libjs-node-uuid
libjs-underscore libssl1.0-dev libuv1-dev node-abbrev node-ansi node-ansi-color-table node-archy node-async node-balanced-match node-block-stream node-brace-expansion
node-builtin-modules node-combined-stream node-concat-map node-cookie-jar node-delayed-stream node-forever-agent node-form-data node-fs.realpath node-fstream
node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-hosted-git-info node-inflight node-inherits node-ini node-is-builtin-module node-isexe
node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream node-node-uuid node-nopt node-normalize-package-data node-npmlog
node-once node-osenv node-path-is-absolute node-pseudomap node-qs node-read node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-slide
node-spdx-correct node-spdx-expression-parse node-spdx-license-ids node-tar node-tunnel-agent node-underscore node-validate-npm-package-license node-which node-wrappy
node-yallist nodejs nodejs-dev npm python-pkg-resources python-setuptools
The following packages will be upgraded:
libssl1.0.0
1 upgraded, 80 newly installed, 2 to remove and 286 not upgraded.
Need to get 10.7 MB of archives.
After this operation, 37.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libgnome-keyring-common all 3.12.0-1build1 [5,792 B]
Get:2 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libgnome-keyring0 amd64 3.12.0-1build1 [56.1 kB]
...
Get:81 http://archive.ubuntu.com/ubuntu bionic/universe amd64 npm all 3.5.2-0ubuntu4 [1,586 kB]
Fetched 10.7 MB in 2s (6,278 kB/s)
Extracting templates from packages: 100%
Preconfiguring packages ...
(Reading database ... 267928 files and directories currently installed.)
...

You should end up with a whole bunch of npm (node-*) packages installed.

You need to install the gperf package as well – npm run build (the last step below) failed for me because gperf wasn’t found.

greys@xps:~$ sudo apt-get install gperf

Clone Brave Browser git Repo

We’re now ready to clone the repo:

greys@xps:~/proj$ git clone git@github.com:brave/brave-browser.git
Cloning into 'brave-browser'...
Enter passphrase for key '/home/greys/.ssh/id_rsa': 
remote: Enumerating objects: 43, done.
remote: Counting objects: 100% (43/43), done.
remote: Compressing objects: 100% (36/36), done.
remote: Total 6466 (delta 27), reused 17 (delta 7), pack-reused 6423
Receiving objects: 100% (6466/6466), 1.28 MiB | 833.00 KiB/s, done.
Resolving deltas: 100% (4425/4425), done.

and then do npm install. This is how it should look:

git-clone-brave-browser.png
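In plain commands, that step is simply the following (assuming the clone location above):

greys@xps:~/proj$ cd brave-browser
greys@xps:~/proj/brave-browser$ npm install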

Download Chromium source code using npm

The npm run init command will download the source code of the Chromium browser (the open-source project that Chrome is built on), which Brave browser is based on. This should take a while – on my 100Mbit connection it took 25min to download 13.5GB (that’s compressed, mind you!) of Chromium’s source code and then another 25min to download the rest of the dependencies:

greys@xps:~/proj/brave-browser$ npm run init

> brave@0.64.2 init /home/greys/proj/brave-browser
> node ./scripts/sync.js --init

git submodule sync
git submodule update --init --recursive
Submodule 'vendor/depot_tools' (https://chromium.googlesource.com/chromium/tools/depot_tools.git) registered for path 'vendor/depot_tools'
Submodule 'vendor/jinja' (git://github.com/pallets/jinja.git) registered for path 'vendor/jinja'
Cloning into '/home/greys/proj/brave-browser/vendor/depot_tools'...
Cloning into '/home/greys/proj/brave-browser/vendor/jinja'...
Submodule path 'vendor/depot_tools': checked out 'eb2767b2eb245bb54b1738ebb7bf4655ba390b44'
Submodule path 'vendor/jinja': checked out '209fd39b2750400d51bf571740fe5ba23008c20e'
git -C /home/greys/proj/brave-browser/vendor/depot_tools clean -fxd
git -C /home/greys/proj/brave-browser/vendor/depot_tools reset --hard HEAD
HEAD is now at eb2767b2 Roll recipe dependencies (trivial).
gclient sync --force --nohooks --with_branch_heads --with_tags --upstream
WARNING: Your metrics.cfg file was invalid or nonexistent. A new one will be created.

________ running 'git -c core.deltaBaseCacheLimit=2g clone --no-checkout --progress https://chromium.googlesource.com/chromium/src.git /home/greys/proj/brave-browser/_gclient_src_JunGAS' in '/home/greys/proj/brave-browser'
Cloning into '/home/greys/proj/brave-browser/_gclient_src_JunGAS'...
remote: Sending approximately 14.36 GiB ... 
remote: Counting objects: 161914, done 
remote: Finding sources: 100% (949/949) 
Receiving objects: 3% (362855/12095159), 163.33 MiB | 10.38 MiB/s 
[0:01:00] Still working on:
[0:01:00] src
Receiving objects: 5% (632347/12095159), 267.23 MiB | 9.94 MiB/s 
[0:01:10] Still working on:
[0:01:10] src
...
├─┬ tape@4.10.1 
│ ├── deep-equal@1.0.1 
│ ├─┬ for-each@0.3.3 
│ │ └── is-callable@1.1.4 
│ ├── function-bind@1.1.1 
│ ├── object-inspect@1.6.0 
│ ├── resumer@0.0.0 
│ ├─┬ string.prototype.trim@1.1.2 
│ │ ├─┬ define-properties@1.1.3 
│ │ │ └── object-keys@1.1.0 
│ │ └─┬ es-abstract@1.13.0 
│ │   ├─┬ es-to-primitive@1.2.0 
│ │   │ ├── is-date-object@1.0.1 
│ │   │ └─┬ is-symbol@1.0.2 
│ │   │   └── has-symbols@1.0.0 
│ │   └── is-regex@1.0.4 
│ └── through@2.3.8 
└── tweetnacl@1.0.1 

npm WARN ajv-keywords@2.1.1 requires a peer of ajv@^5.0.0 but none was installed.
npm run build

> brave-crypto@0.2.1 build /home/greys/proj/brave-browser/src/brave/components/brave_sync/extension/brave-crypto
> browserify ./index.js -o browser/crypto.js

Hook '/usr/bin/python src/brave/script/build-simple-js-bundle.py --repo_dir_path src/brave/components/brave_sync/extension/brave-crypto' took 27.09 secs
Running hooks: 100% (83/83), done.

Build Brave Browser from Source Code

Here we go! Let’s build this thing. Should take an hour or two on a fast PC:

greys@xps:~/proj/brave-browser$ npm run build Release

This is a release build, meaning it’s a fully optimised, release-grade build of the source code. If you’re going to contribute to the Brave browser open source project, you should know that npm run build (without the Release parameter) will produce a debug build instead.
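In other words, the debug variant of the same step would simply be:

greys@xps:~/proj/brave-browser$ npm run build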

This is how the end of the process looks (it took a few hours to compile on the 8-core CPU of my XPS laptop):

brave-browser-fully-compiled.png

Start the Newly built Brave Browser

This is it! Let’s try starting the browser – this should complete our Unix Tutorial project for today.
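The command for this is, if I’m not mistaken, the npm wrapper from the same repo (the compiled binary itself should end up somewhere under src/out):

greys@xps:~/proj/brave-browser$ npm start Release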

brave-browser-you-are-not-a-product.png

And an about page, just for the history:

brave-browser-about.png

That’s it for today!

See Also




Unix Tutorial Projects: GitHub Pages with Jekyll

Screen Shot 2019-03-11 at 14.17.35.png

This past weekend I decided to finally learn how to use GitHub Pages and to publish my static website using Jekyll. Please let me know if you find anything wrong with my approach – I’m not a software developer and have only used GitHub a little so far.

GitHub Pages

It’s possible to host a basic website directly from a GitHub repository. By default, this must be a public repository, but you can make it private if you upgrade to a GitHub Pro account.

Benefits of using GitHub Pages

  • use GitHub and git repository for making, tracking and pushing your website changes
  • no hosting fees – GitHub Pages are free
  • no need to install CMS or blogging software, unless you actually need a blog
  • save a copy of your website (no need for your hosting backups)
  • pick up and improve your git and GitHub skills as you go!

Project Plan for GitHub Page with Jekyll

  • setup a new GitHub repository named greys.github.io (it must match your GitHub username, so if you’re UnixGuy on GitHub, your URL will be unixguy.github.io)
  • Learn Jekyll basics
  • Pick a Jekyll theme, clone it into my local working directory of website repo
  • Update the necessary files
  • Push website copy onto GitHub
  • Once greys.github.io works, update domain name

Project Implementation

New GitHub repo

  • The repository for GitHub Pages must follow a strict naming convention. For a user page (not a project), it must be username.github.io.
  • This should be a public repository, unless you have a GitHub Pro account. That kind of makes sense for most websites, because they’re meant for public access on the Internet. Still, double-check that you don’t publish any sensitive information on your Jekyll website!

    Username-wise, if you’re UnixGuy on GitHub, your URL will be unixguy.github.io and your repo will be github.com/unixguy/unixguy.github.io.

For instructions, visit the https://help.github.com/en/articles/create-a-repo page.

That’s it! My new repo is public and available at the expected URL: https://github.com/greys/greys.github.io

Learning Jekyll basics

Jekyll is a static website generator written in Ruby. It depends on Ruby packages (gems) and uses the Bundler package manager.

First, install Jekyll. On my MacBook, I did the following:

$ sudo gem install jekyll bundler

Jekyll has a great website, including a Quick Start guide. I’ve also done the step-by-step tutorial – give it a try, it’s really straightforward.

Jekyll Theme: Sustain

After browsing through a bunch of Jekyll themes, I decided on the Sustain theme.

Firstly, I cloned it into a local directory:

greys@maverick:~ $ cd /Users/greys/proj
greys@maverick:~/proj $ git clone https://github.com/jekyller/sustain.git

Now I rename it to proj/gleb.reys.net (just so that I know what project this is):

greys@maverick:~/proj $ mv sustain gleb.reys.net
greys@maverick:~/proj $ cd gleb.reys.net

Jekyll-related updates (bundle update will take a while to install the required packages and plugins):

greys@maverick:~/proj/gleb.reys.net $ mkdir .bundle
greys@maverick:~/proj/gleb.reys.net $ bundle update

And that’s it! We can start Jekyll’s local webserver to view the resulting website:

greys@maverick:~/proj/gleb.reys.net $ bundle exec jekyll serve
Configuration file: /Users/greys/Documents/proj/gleb.reys.net/_config.yml
Source: /Users/greys/Documents/proj/gleb.reys.net
Destination: /Users/greys/Documents/proj/gleb.reys.net/_site
Incremental build: disabled. Enable with --incremental
Generating...
done in 0.339 seconds.
Auto-regeneration: enabled for '/Users/greys/Documents/proj/gleb.reys.net'
Server address: http://127.0.0.1:3000/sustain//
Server running... press ctrl-c to stop.

After this I can access my page in the local browser – http://127.0.0.1:3000/sustain:

Screen Shot 2019-03-11 at 13.46.41.png
 

Now it was time to make the updates. For now I just commented out the original values in the code:

  • fixed colours in static/css/main.css
  • updated font to Verdana
  • updated default font size to 18px
  • updated _layouts/layouts.html to remove the Fork Me on GitHub ribbon (there’s still a link to the project at the bottom of the resulting page)
  • changed projects.html and created a few more pages for my online interests
  • updated the _config.yml with my profiles and full name

Pushing changes to GitHub

This is the most fun part. For this tutorial I only initialise the git repo at this point, but in reality I created it at the very start and had plenty of fun editing and committing changes – I discarded them all because they’re not relevant for this task.

Tidy up git repos

Since we cloned the git repository of the Sustain theme, there’s going to be a .git directory in our website’s project directory. So it’s best to remove it:

greys@maverick:~/proj/gleb.reys.net $ pwd
/Users/greys/proj/gleb.reys.net
greys@maverick:~/proj/gleb.reys.net $ rm -rf .git

Now we can proceed with initialising git repo of our own.

First, let’s initialise the repository and add files:

$ git init .
Initialized empty Git repository in /Users/greys/Documents/proj/gleb.reys.net/.git/

… let’s add all the files:

$ git add -A

… and commit them to git repository:

$ git commit -m "First commit" 

Now, let’s add the remote repository, the online one from GitHub:

$ git remote add origin https://github.com/greys/greys.github.io

We are ready to push the code:

$ git push --set-upstream origin master
Counting objects: 54, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (49/49), done.
Writing objects: 100% (54/54), 477.84 KiB | 6.29 MiB/s, done.
Total 54 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), done.
To github.com:greys/greys.github.io.git
* [new branch] master -> master
Branch master set up to track remote branch master from origin.

After a minute or two, your GitHub Pages URL should start serving your website. In my case, http://greys.github.io showed my pages.
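A quick command-line check works too – any HTTP client will do, for example:

$ curl -I http://greys.github.io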

Setup custom domain name

Since I’m using gleb.reys.net as the website URL, I need to update it in the GitHub settings for the repository:
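Behind the scenes this boils down to two things (describing the typical setup here, not my exact records): GitHub commits a CNAME file with the custom domain into the repository, and your DNS provider needs a CNAME record pointing gleb.reys.net at greys.github.io. The file itself is as simple as it gets:

$ cat CNAME
gleb.reys.net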

Secure website with HTTPS

This may take a bit – the Enforce HTTPS option is not immediately available:
Screen Shot 2019-03-11 at 13.32.48.png

While you’re waiting, here’s the action plan:

Final website check

This is it! The website should be online and ready – in my case at the https://gleb.reys.net URL. As you can see, it’s a secure website served over HTTPS now:

Screen Shot 2019-03-11 at 13.35.25.png

That’s it for today. I’m really happy with this project!

Give it a try and let me know if you need any help getting this set up.

See Also