A Linux Mint 'Grab Bag'
What you need for a re-build if disaster strikes

Summary, Rationale and Progress to Date

This is a completely new page looking at what one needs for a re-build, in terms of hardware, software, configuration information and data. Why should one need to consider this? We spend significant periods away from home, both on our boat and abroad on cruises and in New Zealand. Laptops in particular are fragile and if dropped are, at the minimum, likely to need major repairs and more likely to be replaced. Disk Drives fail. It is not difficult to exist with a mobile phone and/or a pad for a few weeks but Pauline's Open University work, writing up holidays for the web and handling pictures and video would be challenging for long periods without a computer.

This document covers in detail the backup philosophy and procedures required to enable disaster recovery at home, together with the extensions that enable an easy rebuild, with minimum hardware overhead, when away from home.

It has been written in parallel with setting up Gemini, a new Mini PC from Chillblast, and there is considerable overlap between the two documents which may be rationalised in the future.


Contents

What is a Grab Bag?

I used the title Grab-bag as that is a concept which many have in areas where natural disasters are common, as it is when participating in dangerous activities. In this case it is not the items for survival but those needed to rebuild a system away from home and retain valuable memories. This is about the hardware and backups one would keep in a waterproof box whilst away from home so as to be able to rebuild the crucial hardware and software onto a fresh machine. We already have some of our backup data stored away from the house and some backup media travel with us, but this is intended to bring it all together, inform our current backup/synchronisation procedures and look at the role of the cloud in what we do. It also needs to look at the role of phones/tablets in disaster mitigation.

We are currently adding another fixed machine to our existing laptops and this gives a perfect opportunity to stress test our existing procedures and put together this set of instructions on how to set up new hardware in the minimum time yet enable it to be progressively enhanced to reach our existing long term configuration.

What is the minimum 'hardware' contents it should have?

I have been trying to minimise what I have been using whilst setting up our new machine and the following seems to be close to the minimum hardware one needs to have in the Grab-Bag.

Note: USB 3 and higher Type A connectors have a Blue Tongue

The LiveUSB loaded with Mint 20 is all that is needed to get an operational Linux system capable of accessing the Backup Drive and decrypting it. The procedure is covered in a number of places but the write up of the Gemini is probably the best. It should take less than an hour to get a Linux or dual booted system operational on a machine of suitable specification. If you want to keep Windows you will need to shrink the Windows system from within Windows to make space - that is much safer, and the install will then use any available space.

Minimum Computer Requirements for our Linux Mint System.

The following assume that you want to have a practical system which you will want to keep but not necessarily to the specification of the machine you are temporarily replacing. It assumes one is intending to have a similar user and data layout to our other machines ie 2 real users with at least some shared data and encrypted data. I have just set up a comparatively low specification machine (Gemini) and I would not go much lower in most aspects.

Considerations when using Solid State Drives (SSDs) under Linux

The first time I used an SSD was in the Helios and at that time I did considerable background reading which revealed that there were many more nuances in getting a good setup than I had realised. I initially collated the various pieces of information I had on optimising performance into a checklist. Many areas no longer need action on modern machines as they are already set up or are now defaults in Mint and Ubuntu. The in depth coverage has therefore moved to a dedicated page on The Use of Solid State Drives in Linux and only the links in Bold below will be covered in the appropriate sections of this page; the links here all point to in depth coverage in the dedicated page and you should not need them.

Checklist for the use of Solid State Drives

  1. The SATA controller mode needs to be set to AHCI in the BIOS. AHCI provides a standard method for detecting, configuring, and programming SATA/AHCI adapters. AHCI is separate from the SATA standards, but it exposes SATA's more advanced capabilities which are required to fully support an SSD. A change is normally not needed (or even available) in current BIOSes as AHCI is already the default.
  2. Partition Alignment is critical for optimal performance and longevity but should be correct on a recent system. SSDs are based on flash memory and thus differ significantly from hard drives: while reading remains possible in a random access fashion on pages of typically 4 KB, erasure is only possible for blocks which are much larger, typically 512 KB, so it is necessary to align the absolute start of every partition to a multiple of the erase block size, ie 1 MB.
  3. Use a file system supporting TRIM: in practice this means EXT4 in Linux, normally not supported by Windows.
  4. Automate TRIM. An SSD system needs some form of automatic TRIM enabled to assist garbage collection, otherwise the speed decreases and the number of writes increases at the expense of SSD life.
  5. Check Queued TRIM is Blacklisted. A number of drives do not correctly support TRIM reliably and, in particular, the queued TRIM command which may need to be inhibited. The latest kernels take care of this but it is a real issue if updating an existing machine or using an LTS (Long Term Support) Distribution with an elderly kernel.
  6. Overprovision. This is the reserving of some areas of the disk by leaving them unformatted. It is desirable for similar reasons to TRIM, namely maintaining speed and decreasing disk writes - a certain amount is already reserved by manufacturers (~7%) but it is best increased by another 10 to 20%.
  7. Control use of Swapping to disk. SSDs have a large but finite number of write cycles and frequent swapping uses that up. The use of swap files is not optimised for desktop machines in Linux for SSDs (or even Hard Drives) and needs to be changed.
  8. Inhibit Hibernation. (suspend to disk): This should be inhibited as it causes a large number of write actions, which is very bad for an SSD. If you are dual booting make sure Windows also has hibernation inhibited - in any case it is catastrophic if both hibernate to the same disk.
  9. Avoid Defragmentation. It is not required in Linux and is never done automatically. It must be avoided because the many write actions it causes will wear an SSD rapidly - make sure a dual booted system does not kill your SSD by defragmentation and avoid the need by maintaining at least 20% spare capacity on each partition, even in Linux this has benefits.
  10. Consider changes to the file access. Changes can be made to reduce the number of 'writes' by options in the configuration files such as noatime. (relatime is the default in Ubuntu and Mint and is the best compromise)
  11. Optimise the disk access scheduler. The scheduler may be optimised for hard drives rather than SSDs. The default scheduler for Ubuntu/Mint is 'deadline' which is acceptable for both but noop may be better if only SSDs are in use. No Action Planned

Only four of the above are likely to need addressing in a dedicated Linux system using Ubuntu or Mint - they are in bold. The first two are factors in the initial partitioning of the SSD and the other two are carried out during the setting up procedure.
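As an example of the TRIM checks, the following commands are a minimal sketch, assuming a recent Mint or Ubuntu with systemd; the first confirms the periodic TRIM timer is active and the second shows which mounted filesystems accept TRIM (neither makes any permanent change you need to worry about):

systemctl status fstrim.timer
sudo fstrim -av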

Setting Up a New machine - Partitioning and Installing

There are two basic ways to proceed.

If you have used Linux before and understand something about the system and partitioning, the second is actually much quicker and easier over the longer term and can be done entirely in a GUI. For a newcomer it would be daunting. With the first approach, however, you can progress in stages over the following weeks, although each step will still need some understanding and involve using the terminal. I did it as a trial on the Gemini but normally take the second approach.

Here I plan to use a default Mint install and show how to build on it over the next couple of weeks.

Summary of actions before installing.

So before we do anything about installing we need to:

The key to enter the BIOS varies between machines but often there is a little note at the bottom of the screen during the self test period while the logo is displaying, which you should note and then try again - there is rarely time to catch it first time. Pressing F2 repeatedly during the self test is a common choice. You may also see and be able to note the code for access to the boot menu which we will need shortly.

We will now look at those activities in more detail:

Burning a LiveUSB to run Linux Mint - (not needed if there is a current one in the Grab Bag)

This is based on the instructions on the Mint web site and is in many ways the most important step towards running Linux on a new or old machine, as everything can then be done in Linux even on a new machine without any existing operating system. This can save £45 to £90 in the future by not having Windows on your purchase of a specialist machine.

In Linux Mint

In Windows, Mac OS, or other Linux distributions

Download balenaEtcher , install it and run it.

In Windows the portable version does not even install anything on your machine and can live on a USB drive. I do not have a Windows system to test it but it ran under Wine, the Windows 'emulator', under Mint.

Using Etcher

Access the BIOS and Set it up for Linux.

To enter BIOS Setup, turn on the computer and press Esc or whatever code the computer uses. The key to enter the BIOS varies between machines but often there is a little note at the bottom of the initial screen presented during the self test period (often with a logo displaying), which you should note and then try again - there is rarely time to catch it first time. You may also see and be able to note the code for access to the boot menu which we will need shortly.

The BIOS is often a fairly standard one by American Megatrends Inc (AMI) but may have a very reduced set of options available. There are several tabs and you have to navigate using the keyboard. Help Instructions are always shown on the right. There are only three settings you need to check and mine were all correctly set in the BIOS as supplied. They do need to be checked and correct before you start. The following is an example from Gemini

Most modern BIOSes only support UEFI and the drive will be formatted with a GPT partition layout; that is assumed here.

In summary, the only setting you are likely to need to change is Secure Boot [Disabled], but it is also important to ensure Fast Boot [Disabled] as Fast Boot is often used in Windows systems and may end up being set.

Booting and checking the LiveUSB system

The boot menu is accessed by pressing a key during the time the BIOS is doing the POST checks, ie when the initial Logo is being displayed. This may be on the screen or it may need an internet search as for accessing the Bios. Possible codes are F1, F2, Del, Ctrl+Alt+Esc, Ctrl+Alt+Ins, Ctrl+Alt+S.

You will then see a menu with the Internal Drive at the top followed by two entry points on the USB Sticks which correspond to a conventional and UEFI configuration. One needs to select the UEFI version in the USB boot options if the machine supports UEFI. If you use the wrong entry it will probably work as a LiveUSB but when you come to do an install you will end up installing the wrong version which will almost certainly not work.

It will take a couple of minutes to boot up from a LiveUSB, the faster the USB drive the quicker it is. Mine was a Kingston G100 USB3.1 Drive. You should now have a Mint Desktop and a fully working system although much slower than the installed version will be.

Checks on what is working: At this point I always do a quick check to see what is working, the obvious things to check are:

Check the Existing Partitioning:

We can now check what the partitioning is currently on the existing drive and decide what to do on the new SSD. The partition editor is called gparted and can be run by Menu -> gparted. When first opened it will look like this:

Screenshot

At the top right the drop down allows you to choose the device, and the machine as delivered has the one Solid State Drive divided up into 4 partitions. The first is part of the UEFI booting system. The second is tiny and has been reserved for something, I have no idea what. The third contains the Windows system and the fourth is a recovery partition which can be used to rebuild a Windows system supplied without any DVDs or the like. We want to leave all these alone. The machine above has no space for Linux and an extra SSD drive was installed.

You can get more information on the Device if you go to the menu at the top and View -> Device information and another panel will open up. It tells us that the partition table is the new GPT type which is not a surprise. If we select a partition and then Partition -> Information we can get a matching level of information about the partition including one piece of information which we will find useful in the future - the UUID which is a Universal Unique IDentifier for that partition - we can take that device to a different machine and the UUID will remain the same. The device names eg /dev/sda1 can change depending on the order they are plugged in. If and when we need the UUID in the future this is a way to find it in a GUI.
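If you prefer the terminal, the same information is available from blkid; a minimal sketch, where /dev/sda1 is only an example device name to be adjusted to suit:

sudo blkid /dev/sda1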

If we plug in an extra USB stick and look for it in the drop down menu we will find it has the old MBR partitioning which shows as msdos and it has a single partition with an old fashioned fat32 file system.

Exploring the LiveUSB

If you are new to Linux or Linux Mint this is also a good time to experiment and explore the system. Just remember that when you leave the LiveUSB everything will be lost.

Background on General Linux File Systems

I have recently realised how much I take for granted in the implementation of our system and it is quite difficult to explain to someone else how to install it and the logic behind the 'partitioning' and encryption of various sections. So I have decided to try to go back to basics and explain a little about the underlying structure of Linux.

Firstly Linux is quite different to Windows, which more people understand. In Windows individual pieces of hardware are given different drive designations: A: was in the old days a floppy disk, C: was the main hard drive, D: might be a hard drive for data and E: a DVD or CD drive. Each would have a file system and directory and you would have to specify which drive a file lived on. Linux is different - there is only one file system with a hierarchical directory structure. All files and directories appear under the root directory /, even if they are stored on different physical or virtual devices. Most of these directories are standard between all Unix-like operating systems and are generally used in much the same way; however, there is a File System Hierarchy Standard which explicitly defines the directory structure and directory contents in Linux distributions. Some of these directories only exist on a particular system if certain subsystems, such as the X Window System, are installed, and some distributions follow slightly different implementations, but the basics are identical whether it be a web server, supercomputer or Android phone. Wikipedia has a good and comprehensive Description of the Linux Standard Directory Structure and contents.

That however tells us little about how the basic hardware (hard drives, solid state drives, external USB drives, USB sticks etc) is linked (mounted in the jargon) into the filesystem, or how and why parts of our system are encrypted. Nor does it tell us enough about users or about the various filesystems used. Before going into that in detail we also need to look at another major feature of Linux, namely ownership and permissions for files. Without going into details, every file and folder in Linux has an owner and belongs to a group. Each user can belong to many groups. Each folder and file has associated permissions for the owner, the group and the rest of the world, covering read, write and execute. This is considerably tighter than a typical out-of-the-box Windows home setup, which places few similar restrictions on access to information.
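As an illustration of the owner/group/permission model, here is a minimal sketch of what the file manager normally hides; the file names, sizes and dates are invented for illustration, but the pcurtis owner and adm group match the scheme used later on this page:

ls -l /media/DATA
-rw-rw-r-- 1 pcurtis adm  14231 Jan  3 10:12 notes.txt
drwxrwxr-x 2 pcurtis adm   4096 Jan  3 10:15 Pictures

The first column shows read/write/execute permissions for the owner, the group and everyone else in turn; the following columns show the owner and the group.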

So coming back to devices, how does a storage device get 'mapped' to the Linux file system? There are a number of stages. Large devices such as hard disks and Solid State Drives are usually partitioned into a number of sections before use. Each partition is treated by the operating system as a separate 'logical volume', which makes it function in a similar way to a separate physical device. A major advantage is that each partition has its own file system which helps protect data from corruption. Another advantage is that each partition can use a different file system - a swap partition will have a completely different format to a system or data partition. Different partitions can be selectively encrypted when block encryption is used. Older operating systems only allowed you to partition a disk during a formatting or reformatting process, which meant you would have to reformat a hard drive (and erase all of your data) to change the partition scheme. Linux disk utilities now allow you to resize partitions and create new partitions without losing all your data - that said it is not a process without risks and important data should be backed up. Once the device is partitioned each partition needs to be formatted with a file system, which provides a directory structure and defines the way files and folders are organised on the disk; it then becomes part of the overall Linux directory structure once mounted.
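To make the partition -> filesystem -> mount sequence concrete, here is a minimal sketch in a terminal. It assumes a spare, empty partition /dev/sdb1 - the device name and mount point are only examples, and formatting destroys anything already on the partition:

sudo mkfs.ext4 -L DATA /dev/sdb1       # put an ext4 filesystem labelled DATA on the partition
sudo mkdir -p /media/DATA              # create a mount point in the directory tree
sudo mount /dev/sdb1 /media/DATA       # attach the new filesystem at that point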

If we have several different disk drives we can choose which partitions to use to maximise reliability and performance. Normally the operating system (kernel/root) will be on a Solid State Drive (SSD) for speed, large amounts of data such as pictures and music can be on a slower hard disk drive, whilst the areas used for actually processing video and perhaps pictures are best on a high speed drive. Sensitive data may be best encrypted, so one's home folder and part of the data areas will be encrypted. Encryption may however slow down performance on a slow processor.

Making Space for Linux

This is the point where we need to provide the space for the Linux system and what we do will depend on the machine we are installing on to. The major options are:

Making Space in an existing Windows system

You must use the Windows software to modify the drives used by Windows to make the initial space needed - if you do not use their software there is a high chance the machine will no longer boot into Windows. It is very likely that a 64 bit Windows system initially supplied with Windows 8 or 10 (ie under 8 years old) will be a UEFI system with a GPT partition table, although there are exceptions. Older systems and 32 bit systems will probably be MBR.

Windows 8.0 is no longer supported and Windows 8.1 reached the end of Mainstream Support on January 9, 2018, and will reach end of Extended Support on January 10, 2023. It is probably not sensible to dual boot a system without Mainstream Support, especially if it is using an MBR partition table or does not have UEFI support, so I would suggest that if you want to proceed with such a system you bite the bullet and start afresh with an all Linux system. There are plenty of older machines with Core i5 or i7 processors which can form the basis of excellent Linux Mint only systems if they also have adequate RAM (>4 GB) and disk space, especially if the existing drive can be replaced or augmented in the longer term by an SSD. There are examples of setting up older machines on my web site, but here I am going to concentrate on dual booting Windows 10 or, at a minimum, GPT/UEFI systems.

So what we need to do is to use the built in Disk Management tool in Windows 8.1/10. This is covered by Microsoft at https://docs.microsoft.com/en-us/windows-server/storage/disk-management/shrink-a-basic-volume. In summary, from the Main Menu, search for Disk Management and open it, right-click the basic volume you want to shrink, click Shrink Volume and follow the on-screen instructions. How much you can shrink the volume may be limited by the position of fixed files, such as those used for paging, if you are shrinking the volume holding the operating system. There is also a good explanation at https://itsfoss.com/guide-install-linux-mint-dual-boot-windows/. You may not be able to obtain sufficient unallocated space to be able to proceed - the longer the system has been in use the less chance of being able to make use of all the unused space in an operating system partition. If you cannot make space then you have the hard choice of giving up on that machine or creating a Linux only system.

Initialising and Partitioning an existing, replacement or extra drive.

Before starting we need to choose the partition table scheme for the new drive. There are two choices: the old MBR scheme, familiar to many but with various restrictions, or the more modern GPT scheme which I strongly recommend.

A little background may be in order on all this AS (Alphabet Soup). Wikipedia tells us "A GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard disk, using globally unique identifiers (GUID). Although it forms a part of the Unified Extensible Firmware Interface (UEFI) standard (Unified EFI Forum proposed replacement for the PC BIOS), it is also used on some BIOS systems because of the limitations of Master Boot Record (MBR) partition tables, which use 32 bits for storing logical block addresses (LBA) and size information. Most current operating systems support GPT. Some, including OS X and Microsoft Windows on x86, only support booting from GPT partitions on systems with EFI firmware, but most Linux distributions can boot from GPT partitions on systems with either legacy BIOS firmware interface or EFI."

So in this case we will configure the drive to have a GUID Partition Table (GPT) with an unformatted partition occupying the whole space using gparted. This will completely destroy anything on it. It is best to do this even if you are reusing a drive from an old Windows system, as it ensures you end up with GPT partition management.

So start up the LiveUSB again and open gparted. You should now see an extra device on the drop down menu at top right. Select it and go to Device -> Create Partition Table . You will have a screen with dire warnings so make sure it is the correct drive then select GPT from the tiny box in the middle and Apply and that is it. The install should use the empty space and add the filesystems if you use the "Install Linux Mint alongside Windows 10" option.
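If you prefer the command line, a minimal sketch of the same step using parted from the LiveUSB follows; /dev/sdb is only an example - check very carefully with lsblk first, as this wipes the drive:

sudo parted /dev/sdb mklabel gpt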

The actual Install

Having completed the partitioning I start the install from the LiveUSB using the "Install Linux Mint alongside Windows 10" option immediately if I am creating a dual boot system and follow the instructions. There is only one point to take into account and that is choosing the user, which I cover fully in the next section. The first user will have additional administrative rights which are needed when it comes to synchronising between machines. It may all seem very much in the future but it is sensible for the first user to be a family or firm name rather than a personal user, so that it can be used for tasks when a real user must not be active - backing up a user whilst all the files are changing leads to disasters. If you already have a machine with multiple users, install them in the same order, as there is an id number associated with every user and group - for the moment just trust me on this.

If you want to see what will happen during the install there are lots of examples on the internet which have screen shots of every step although most concentrate on the hard way where you specify the formatting yourself. Search for "Install Linux Mint alongside Windows 10" but look at a couple just in case.

If all has gone well you will have a dual boot system where you can choose which system to use as you boot up - initially you have 10 seconds to make the choice.

What you should do in the short term to get the best from the system

So I am now going to cover some of the things I have mentioned which should be done on a timescale of weeks rather than years to keep the machine running reliably. Most of this has been lifted from earlier documents and the essential parts brought together and simplified. First however I need to introduce new users to use of the Terminal. Experienced users should skip this or send suggestions on how to improve the section!

What is a Terminal? Why use it?

Up to now I have managed to avoid use of the terminal. Most people these days do all their interaction via a mouse, and the keyboard is only used to enter text, addresses, searches etc., and never to actually do things. Historically it was very different and even now most serious computer administration uses what is known as a terminal. It is horses for courses. There is nothing difficult or magic about using a 'terminal' and there are often ways to avoid its use, but they are slow, clumsy and often far from intuitive, and some things which need to be done are near impossible to do any other way. I will not expect much understanding of the few things which have to be done in a terminal - think of the things you cut and paste in as magic incantations. I am not sure I could explain the details of some of the things I do.

The terminal can be opened by Menu -> Terminal, it is also so important that it is already in favourites and in the panel along with the file manager. When it has opened you just get window with a prompt. You can type commands and activate them by return. In almost all cases in this document you will just need to cut and paste into the line before hitting return. Sometimes you need to edit the line to match your system ie a different disk UUID or user name.

You need to know that there are differences in cut and paste, and even in entering text, when using the terminal. In particular, in most Linux terminals copy and paste use Ctrl+Shift+C and Ctrl+Shift+V (or the right click menu) rather than the usual Ctrl+C and Ctrl+V.

On the positive side

What is sudo

We have spoken briefly about permissions and avoiding changes to system files etc. When you use a normal command you are limited to changing things you own or are in a group with rights over them. All system files belong to root and you cannot change them or run any utilities which affect the system.

If you put sudo in front of a command you take on the mantle of root and can do anything, hence the name, which stands for SuperUser DO. Before the command is carried out you are asked for your password to make sure you do have those rights. Many machines are set up to retain the superuser status for 15 minutes so you are not asked repeatedly while you carry out successive commands. Take care when using sudo - you have divine powers and can wreak destruction with a couple of keystrokes.
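A trivial and harmless example of the difference, as a sketch you can safely try:

apt update          # fails with permission errors when run as a normal user
sudo apt update     # asks for your password and then refreshes the package lists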

Tutorials

There are many good tutorials including a five minute one from Mint here and a longer one from Ubuntu here

Can I avoid the terminal?

There are ways to reduce the use of the terminal. One I use a lot when I need to edit or execute as root is to start in the file manager then right click on empty space and click Open as Root. I now get another file manager window with a Red Banner and I can now see and do anything as I am a Superuser, I can also destroy everything so beware. I can now double click on a file to open it for editing as root and when editing in Xed there will again be a Red Banner. You can also Right Click -> Open in a Terminal which will open a terminal at the appropriate position in the folder tree.

Warning about Deleting as a superuser: Do not 'Move into the Recycle Bin' as root, it will go into the root trash which is near impossible to empty, if you need to Delete use right click -> Delete which deletes instantly.

There are however some activities which will still demand use of sudo in a terminal so you will have to use it on occasion.

Changes to the System which ought to be done at an early stage

Firstly we will look at the two changes required because we are using an SSD; these are best done at an early stage, say during the first few days. We will then look at some changes to speed up the boot process, and finally at how to inhibit Hibernation, which is bad for SSDs and disastrous on a dual boot system.

Reduce Swapping to Disk.

The changes described here are desirable for all disk drives and I have already implemented them on all my systems; they are even more important when it comes to SSDs. A primary way to reduce disk access is to reduce the use of swap space, which is the area on a hard disk forming part of the virtual memory of your machine - the combination of accessible physical memory (RAM) and the swap space. Swap space temporarily holds memory pages that are inactive, and is used when your system decides that it needs physical memory for active processes and there is insufficient unused physical memory available. If the system happens to need more memory resources or space, inactive pages in physical memory are moved to the swap space, freeing up that physical memory for other uses. This is rarely required these days as most machines have plenty of real memory available. If swapping is required the system tries to optimise this by making moves in advance of their becoming essential. Note that the access time for swap is much slower, even with an SSD, so it is not a complete replacement for the physical memory. Swap space can be a dedicated swap partition (normally recommended), a swap file, or a combination of swap partitions and swap files. The swap space is also used for hibernating the machine if that feature is implemented.

It is normally suggested that the swap partition size is the same as the physical memory; it needs to be if you ever intend to Hibernate (suspend to disk by copying the entire memory to a file before shutting down completely). It is easy to see how much swap space is being used with the System Monitor program or one of the system monitoring applets. With machines with plenty of memory like my Defiant, Helios and Lafite, which all have 8 GB, you will rarely see even a few percent of use if the system is set up correctly.
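You can also check from a terminal rather than the System Monitor; these standard commands show memory and swap in use and make no changes:

free -h
swapon --show

Which brings us to swappiness.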

There is a parameter called swappiness which controls the tendency of the kernel to move processes out of physical memory onto the swap disk (see 'Performance tuning with swappiness'). As even SSDs are much slower than RAM, this can lead to slower response times for the system and applications if processes are too aggressively moved out of memory, and it also causes wear on solid state disks.

Reducing the default value of swappiness will improve overall performance for a typical installation. There is a consensus that a value of swappiness=10 is recommended for a desktop/laptop and 60 for a server with a hard disk. I have been using a swappiness of 10 on my two MSI U100 Wind computers for many years - they used to have 2 GB of RAM and swap was frequently used. In the case of the Defiant I had 8 GB of memory and swap was much less likely to be used. The consensus view is that the optimum value for swappiness is 1 or even 0 in these circumstances. I have set 1 at present on both the Helios and the Lafite with an SSD to speed them up and minimise disk wear, and 0 on the Gemini as the swap is small and so is the memory.

To check the swappiness value

cat /proc/sys/vm/swappiness

or open the file from the file manager.

For a temporary change (lost on reboot) to a swappiness value of 1:

sudo sysctl vm.swappiness=1

To make a change permanent you must edit a configuration file as root:

xed admin:///etc/sysctl.conf

Or use Files and right click -> Open as Root, Navigate and double click on file

Search for vm.swappiness and change its value as desired. If vm.swappiness does not exist, add it to the end of the file like so:

vm.swappiness=1

Save the file and reboot for it to take effect.

Modifications to GRUB.

No changes are essential to boot successfully, but these save time during booting and give persistence if you have a multiboot system of any sort.

To make these changes we need to edit /etc/default/grub as root:

xed admin:///etc/default/grub

shows /etc/default/grub, the start of which typically contains these lines (the changed and added lines are explained below):

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'

GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
#GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=2
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""

#...................

GRUB_DEFAULT=0 will boot the first menu item and so on. GRUB_DEFAULT="saved" will boot the same entry as last time which I prefer.
GRUB_SAVEDEFAULT=true - needed to make sure the last used kernel used is saved
GRUB_TIMEOUT=2 will display the grub menu for 2 seconds rather than the default of 10 - life is too short to waste 8 seconds every boot!

After making any changes involving grub you must run sudo update-grub in a Terminal

sudo update-grub

to save the changes into the file actually used during booting - I keep forgetting.

There is no way to avoid a terminal here that I can think of.

Change auto-mount point for USB drives back to /media

Ubuntu (and therefore Mint) changed the mount point for USB drives from /media/USB_DRIVE_NAME to /media/USERNAME/USB_DRIVE_NAME some while ago. There is a sort of logic as it makes it clear who mounted the drive and protects the drive from other users. It is however an irritation or worse if you have drives permanently plugged in, as they move depending on which user logs in first and when you change user or come to synchronise, back up etc. I therefore continue to mount mine at /media/USB_DRIVE_NAME. One can change the behaviour by using a udev feature in udisks version 2.0.91 or higher, which covers all the distributions you will ever use.

Create and edit a new file /etc/udev/rules.d/99-udisks2.rules - the easiest way to create a new file as root is by using the right click menu in the file browser and Open as Root, then right click -> New File and double click to open it for editing. You will have red toolbars to remind you.

and cut and paste into the file

ENV{ID_FS_USAGE}=="filesystem", ENV{UDISKS_FILESYSTEM_SHARED}="1"

then activate the new udev rule by restarting or by

sudo udevadm control --reload

When the drives are now unplugged and plugged back in they will mount at /media/USB_DRIVE_NAME

Do not ask me exactly what the line does - I think it sets some system 'flag' which changes the mode.
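To confirm where the drives end up after replugging, this standard command lists the labels and mount points and changes nothing:

lsblk -o NAME,LABEL,MOUNTPOINT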

Inhibit Hibernation [Now hidden from most Mint menus so less of a problem, and possibly already inhibited in the latest versions]

Hibernation (suspend to disk) should be inhibited as it causes a huge amount of write actions, which is very bad for an SSD. If you are dual booting you should also make sure Windows also has hibernation inhibited - in any case it is catastrophic if both hibernate to the same disk when you start up. Ubuntu has inhibited Hibernation for a long time but Mint did [/does] not and I prefer to change it. An easy way is to, in a terminal, do:

sudo mv -v /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla /

Note this is a single line and is best copied and pasted it into the terminal.

It moves the settings file that enables hibernation, to the main directory / (root) rendering it ineffective. The new location is a safe storage, from which you can retrieve it again, should you ever wish to restore hibernation. Thanks to https://sites.google.com/site/easylinuxtipsproject/mint-cinnamon-first for this idea. (I have not checked and have no views on any of the other information on that page.)
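Before running the move it is worth checking whether the file is present on your version at all; this is a check only and changes nothing:

ls /etc/polkit-1/localauthority/50-local.d/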

Note: The file is missing on the latest version 20.1 of Mint so I assume Hibernation is already inhibited

One needs to reboot before this is active. After the reboot Hibernation should now no longer be one of the options when you close the computer. Applets or Keyboard Shortcuts which try to hibernate may still display but will demand root access.

Additional Software Requirements during setup and use

It is convenient to start loading programs at this point as some of the utilities will be needed in the next stages. There are two sorts of programs required in addition to those loaded during the install, in summary they are:

The second group of programs needs to be available in the backup media which travels with us and be easily identified on it. There is currently a top level folder in DATA called My Programs and the latest copies of these programs (or folders containing the installations) are being put into a subfolder called Installed in Mint 20, so the path will be /media/DATA/My Programs/Installed in Mint 20, and it will need to be periodically updated.

Several of the programs are needed for the next stage or will make it much more convenient so I recommend doing an install of most of our standard programs from repositories before continuing. It also removes a few anomalies where programs which are available on the LiveUSB are not after a full install! gparted seems to be one of them.

Installing our standard set of programs after a fresh install of the operating system.

Before we can re-install our backed-up user folders into a freshly installed system we need to also install any programs which are not installed by default. Our system has quite a number of extra programs and utilities installed.

Normal Programs available from Mint Repositories

Most of the programs we use can be installed from the Software Manager or the Synaptic Package Manager, but both take time to find and install programs, and usually it is done one by one. The Synaptic Package Manager also needs one utility (apt-xapian-index) to be installed before the indexing is enabled to allow the quick search to be used.

There are therefore good reasons to do the installs via the command line where the list of our standard programs can be copied in and installed in a single batch. Other users may want to check the list and leave out programs they do not need. The end result is the programs are downloaded and installed in as fast a manner as possible. Over time the list will change but at any time it can be rerun to check and those not installed or needing updating will be updated or installed. Missing programs will however cause the install to terminate so the output should be watched carefully. This is our current list ready to paste as a single line into a terminal.

sudo apt-get install gparted vnstat vnstati zenity sox libsox-fmt-mp3 ssh unison-gtk gdebi hardinfo apt-xapian-index libpam-mount ripperx sound-juicer lame filezilla keepass2 udftools libudf0 libudf-dev i7z meld mousepad sublime-text git gitk cheese skypeforlinux v4l-utils guvcview gimp jhead wine wine64 wine-binfmt

Programs only able to be installed from terminal

If you are using Wine (the program that enables you to run Windows programs) then you have to install extra fonts using a Microsoft installer. This is the only program we use (and maybe the only one altogether) that has to be installed in the terminal. Unfortunately it may appear as a dependency of ubuntu-restricted-extras (initially included above because it includes useful codecs) when the install is done on the latest versions of Mint. It is not installed on all my machines, but if you find you need ubuntu-restricted-extras it should be installed after ttf-mscorefonts-installer has been installed in a terminal.

sudo apt install ttf-mscorefonts-installer

You need to agree the T&Cs during the install of ttf-mscorefonts-installer before the fonts will download, and an undesirable feature is that you must install in a terminal to be able to read and agree them. Use the up/down cursor keys to get to the bottom, then left/right to highlight OK and press Enter. If you get it wrong, search my web site for instructions on what to do.

Programs installed from PPAs (Personal Package Archives)

None in use at present - Chrome/Chromium may have to be installed that way from Mint 20 but is available directly in Mint 20.1

Programs which have to be installed from .deb (installer files) or appimages

Most programs installed this way do not automatically update, although there are a few exceptions, so there has to be a very good reason why any were installed that way - usually old programs, or where a PPA had not been updated for the latest version of Cinnamon. You can see what has been installed manually from .deb files in the Synaptic Package Manager by selecting Status and then Installed (local or obsolete) on the left.

This section also includes installs from AppImages, the most essential of which is pCloud and will be covered separately.

pCloud - download it, find it, right click -> Properties -> Permissions and tick the box to allow execution as a program, close Properties then double click and fill in your credentials. It is an AppImage and runs in the home folder so a copy is needed for each user. Once installed it will occasionally put up a notification asking you to update. Eventually you will need to log in and also set up a folder to sync for keepass2 and jdotxt. Saved copy in My Programs.
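If you prefer the terminal, the same 'allow execute' step looks like this as a sketch - the filename is only an example, use the name of the file you actually downloaded:

cd ~/Downloads
chmod +x pcloud      # filename is an example - substitute the downloaded file
./pcloud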

jdotxt - I downloaded the .deb file as the repository has not been updated to support the latest releases of Ubuntu/Mint. Also saved copy in My Programs.

opera - https://www.opera.com/download - choose open with gdebi - wait - Install - accept updating automatically with the system - job done except: run from the menu, right click on the icon in the toolbar and 'Pin to panel' - finished. Integrated with the system so manual updates are not needed. Also saved copy in My Programs.

Veracrypt - downloaded from https://www.veracrypt.fr/en/Downloads.html. There is a saved copy in My Programs.

Zoom from https://zoom.us/download and zoom_amd64.deb with a Copy in My Programs

Virtualgl - not essential, only used to run a video test. Copy in My Programs.

Old Windows programs still in use under Wine

Wine programs are specific to each user and each wine ecosystem is in .wine so if you restore an archive of your home folder it will include all the wine programs. Those we use are:

Thunderbird

This section only applies to our own system. Others should ignore this

Thunderbird is a real problem for us. Put very simply, Thunderbird followed Firefox and installed a completely new system for handling extensions. This upset many of the writers of the extensions which we have come to rely on for much of our functionality, and we would risk losing some of our audit trail. The author of several of our extensions has refused to rewrite any of his extensions although he continues to offer some support. We took the dangerous path of freezing our version of Thunderbird until someone else provides compatible support for our Google calendars and address books and a few other functions we have come to depend on; otherwise we may switch to one of the Thunderbird forks which maintain support for the old extensions.

In the meantime, on our systems the existing Thunderbird installed from the LiveUSB has to be uninstalled completely and the old version reinstalled from .deb files I have preserved. The new version must not be run with our old profiles, which would be effectively destroyed. The procedure to revert to the old version is, in detail:

Thunderbird - How to revert to legacy version 52.9 (only for our system)

Uninstall Thunderbird using the Synaptic Package Manager: search for each of the following six packages and mark it for complete removal

thunderbird
     thunderbird-locale-en
     thunderbird-locale-en-uk
     thunderbird-locale-en-us
     thunderbird-gnome-support
xul-ext-lightning

Note: marking thunderbird should automatically remove the 4 others indented below

Reinstall version 52.9 from the backup drive in the /My Programs/Installed in Mint 20 folder in the same order by double clicking on the files, which opens the loader. ...-en-uk may be ...en-gb. Use the ones ending 18.04.1_amd64.deb. IGNORE the message about a newer version! We are doing this to get back to the older version.

The new versions now need to be locked to make sure they are never upgraded
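Locking can be done in the Synaptic Package Manager (select the package then Package -> Lock Version), or, as a sketch in a terminal, with apt-mark:

sudo apt-mark hold thunderbird xul-ext-lightning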

The next phase involves separating the functions of the system into partitions and restoring the home folders from the backups on the USB disk in the grab bag - in other words getting back to our 'Standard Installation'. It is best to do it before you end up with things to undo. However if you are away and just do not want to get involved then you can probably just restore a single user folder without any partitioning etc, but all the DATA and VAULT would be missing. Before doing anything you may want to look back at the section Background on General Linux File Systems and then read the next section carefully.

Adding the Users, Allocating to Groups and a bit about permissions.

The First thing I do once the Install is complete is to set up the Users on the system in the correct order.

Once the three users are set up they need to be put into certain groups. Groups fall into two quite different sorts. The first sort are in effect users: every user is in a group of the same name. The second sort are to do with rights, eg users in the sudo group have superuser powers and can use sudo to assume all the rights of root, and those in lpadmin have the right to set up printers.

The first user, which was set up during the install, has, you will notice, membership of more groups than subsequent users, which have to be manually added to those groups. This is easy in Users and Groups by clicking on the list of groups, which brings up the full list with tick boxes. You either need a very good memory or it becomes iterative as you go back to have a look.

This is what the process looks like after clicking in the middle of the list of groups

Screenshot

You can also cheat if you are happy with the terminal as id username brings up a full list including the numeric values. Here is a current list from gemini

peter@gemini:~$ id
uid=1001(peter) gid=1001(peter) groups=1001(peter),4(adm),24(cdrom),
27(sudo), 30(dip),46(plugdev),114(lpadmin),134(sambashare),
1000(pcurtis),1002(pauline)
peter@gemini:~$

Now another very important step. We need to allow all of the users access to each other's files (assuming we have nothing to hide) and the shared areas. This means all our users must be in each other's groups. You will find the list of groups includes all the current users, so add pauline and peter to pcurtis's groups, peter and pcurtis to pauline's groups, etc.
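For those happy in the terminal, the same can be done with usermod; a minimal sketch using the user names on our system - substitute your own (the -aG option appends the groups to the existing list rather than replacing it):

sudo usermod -aG pcurtis,pauline peter
sudo usermod -aG pcurtis,peter pauline
sudo usermod -aG peter,pauline pcurtis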

Recall: every file and folder has permissions associated with it. The permissions are read, write and execute (ie run as a program). The permissions are separately set for the owner, the group and others (the rest of the world). The owner always has read and write (and execute as required). The groups in our case will have the same as we trust each other. The rest of the world will certainly not have permission to write and probably not to execute. For various reasons to do with synchronisation, the shared files in our data and secure encrypted areas (I will often refer to the areas by their mount points, DATA and VAULT), which are usually changed in bulk, have owner pcurtis and group adm, with owner and group having read, write and execute permissions. The end result of all this is that we all have access to everything as if we had created and owned it.

There is only one thing that a group member can not do that the owner can do and that is change a date stamp on the file. That is why we set all the files in DATA and VAULT to be owned by pcurtis and use Unison to synchronise from User pcurtis as the process requires the ability to restore original timestamps.

The owner, group and permissions are best set in a terminal; the File Manager looks as if it could be used via the "set permissions for all enclosed files" option, but the recursion down the file tree does not seem to work in practice. The following will set all the owners, groups and permissions in DATA. The primary user can be referred to by id 1000 to make this more general. This uses the octal way of designation and sets the owner and group to read and write and others to read for files, and read, write and execute for folders, which always require execute permission. I am not going to try to explain the following magic incantations at this time, or maybe ever, but they work, and very fast, with no unrequired writes to wear out your SSD or waste time. These need to be run from user pcurtis to change the permissions.

sudo chown -R pcurtis:adm /media/DATA && find /media/DATA -type f -print0 | xargs -0 chmod 664 && find /media/DATA -type d -print0 | xargs -0 chmod 775

sudo chown -R pcurtis:adm /media/VAULT && find /media/VAULT -type f -print0 | xargs -0 chmod 664 && find /media/VAULT -type d -print0 | xargs -0 chmod 775

Background on the 'Standard Installations' we have implemented on our machines

We currently have four machines in everyday use; three are laptops, of which two are lightweight ultrabooks. The oldest machine still has the edge in performance, being a Core i7 nVidia Optimus machine with a discrete video card and an 8 thread processor, but is heavy and has a limited battery. The other two are true ultrabooks - light, small, powerful and with many hours of use on battery power. We also have a desktop which has scanners, printers and old external drives connected and a big screen, which is in the process of being replaced by Gemini, a lightweight microsystem dual booted with Windows 10, although we have rarely used the Windows side.

All the machines are multi-user so either of us can, in an emergency, use any machine for redundancy or whilst travelling. Most of the storage is shared between users and periodically synchronized between machines. The users home folders are also periodically backed up and the machines are set up so backups can be restored to any of the three machines. In fact the machines actually have three users. The first user installed with id 1000 is used primarily as an administrative convenience for organising backups and synchronisation for users from a 'static' situation when they are logged out.

There is no necessity for the id 1000 user to have an encrypted home folder as no sensitive information is accessed by that user and there is no email with saved logins or saved information in the browsers. In contrast the other users have email and browsers where the saved logins are the weak point in most systems, as most passwords can be recovered by email - an incredibly weak point for many web sites, where the minimum for password recovery should be two factor using a text message or other extra information. The other users need encrypted home folders where email and browser information is kept safe. I favour that over a separate encrypted folder holding email and browser 'profiles', as I have had problems with encrypted folders initially mounted at login getting de-mounted while email and browsers have been active, with undesirable results and real potential for considerable data loss. An encrypted folder to save sensitive documents and backups is however desirable if not essential, and it is preferable for it to be mounted at login.

So the end result is that each machine has:

The rationale for this comparatively complex arrangement is that it gives good separation between the various components which can be backed up and restored independently.

This has been proven over the years. The transfer of users between machines has been carried out many times and the synchronisation of DATA and VAULT is a regular monthly process. New LTS versions of Linux are usually done as fresh installs preserving Users and DATA/VAULT. Most recently the same procedures have been used to set up the latest machine, on which this is being written.

Next Step: It is however intended to go slightly further and have a set timetable for comprehensive backups and synchronising, to include off-site storage and the hardware in the grab-bag, and to check that LiveUSBs and the backups of programs are up-to-date.

Advanced Partitioning to Separate System, Users and Data

We spoke briefly about the Linux file system above. To reiterate, separating the various parts onto different devices obviously makes the system more robust to hardware failures. The best would be to have each component on a physically different device; almost as good is to use different areas of a device, each with a separate file system, so that if one file system is destroyed the others remain intact and functional. This is where partitioning comes in. Each physical device can be divided into a number of virtual devices (partitions), each with its own file system, mounted at different parts of the overall file system.

The following shows the actual partitions I have set up on Gemini. I should perhaps mention that I have been using the setting up of Gemini to check that this document reflects what I have done. I have named the partitions specially to make it clear - normally one would not bother. Also note this is a view of a single hardware device; there is another which holds the Windows system and could be displayed using the dropdown menu at top right.

Screenshot

Implementing our Standard Installation

We will start by first working out exactly what we need in some detail, as we do not want to repeat any actual partitioning, then start with the easiest change, which is also arguably the most important as it separates our data from the system and users. So the overall plan is:

  1. Look at the requirements and decide what the final arrangement should be even if we do not implement it all at once. The main decision is how much space to allocate to each partition.
  2. Repartition the device to have a final partitions layout similar to that above from the LiveUSB.
  3. Arrange for the DATA partition to be mounted automatically when the system boots. That only involves collecting some information and adding one line to a file (see the sketch of such a line after this list).
  4. Change to a separate partition for the home folders. This is the tricky one.
  5. Maybe create an encrypted partition to hold our VAULT, or just mount the VAULT leaving encryption for later.
  6. Maybe encrypt one or more of the home folders
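For reference, the one line mentioned in step 3 goes in /etc/fstab and, as a minimal sketch, looks something like the following - the UUID placeholder must be replaced with the real UUID of your DATA partition (from gparted or lsblk) and the /media/DATA mount point must already exist:

UUID=your-data-partition-uuid  /media/DATA  ext4  defaults  0  2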

The question now is how to get simply from a very basic system with a single partition for the Linux system to our optimised partitioning and encryption. As I said earlier my normal way is to do it all during the install, but that would be a considerable challenge to a new user. So we divide it into a series of small challenges, each of which is manageable. Some are relatively straightforward, some are more tricky. Some can be done in a GUI and editor but some are only practical in a terminal. Using a terminal will mostly involve copying and pasting a command into the terminal, sometimes with a minor edit to a file or drive name. Using a terminal is not something to be frightened of - it is often easier to understand what you are doing.

Look at the requirements and decide what the final arrangement should be

There are two requirements which are special to Mint and one for our particular set up which influence our choice of the amount of space to allocate.

Disk Space Requirements when using TimeShift

TimeShift has to be considered as it has a major impact on the disk space requirements when partitioning the final system. It was one of the major additions to Mint which differentiates it from other lesser distributions. It is fundamental to the update manager philosophy, as Timeshift allows one to go back in time and restore your computer to a previous functional system snapshot. This greatly simplifies the maintenance of your computer, since you no longer need to worry about potential regressions. In the eventuality of a critical regression, you can restore a snapshot and still have the ability to apply updates selectively (as you did in previous releases). This comes at a considerable cost as the snapshots take up space. Although TimeShift is extremely efficient, my experience so far is that one needs to allocate at least two fold and preferably three fold the storage one expects the root file system to grow to, especially if you intend to take many extra manual snapshots during early development and before major updates. I have already seen a TimeShift folder reach 21 GB for an 8.9 GB system before pruning manual snapshots.
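To keep an eye on how much space the snapshots are taking, a sketch of two checks, assuming the default rsync snapshots stored in /timeshift on the chosen drive:

sudo timeshift --list
sudo du -sh /timeshift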

Special Requirements of pCloud

We make use of pCloud for Cloud storage and that has consequences on the amount of space required in the home folders. pCloud uses a large cache to provide fast response and the default and minimum is 5 GB in each user's home folder. In addition it needs a further 2 GB for headroom (presumably for buffering) on each drive. With three users that gives an additional requirement of ~17 GB over a system not using cloud storage - other cloud storage implementations will also have consequences on storage requirements and these again will probably be per user.

Normal Constraints on the filesystem

One must take into account the various constraints from use of an SSD (EXT4 Filesystem, Partition Alignment and Overprovision) and the space required for the various other components. My normal provisioning for a single boot system follows the layout shown for Gemini above, and most of it is the same for the Linux components of a Dual Boot System.

The first thing to understand is fairly obvious - you cannot and must not make changes to or back up a [file] system or partition which is in use as files will be continuously changing whilst you do so and some files will be locked and inaccessible.

That is where the LiveUSB is once again very useful if not essential. We cannot change a partition which is in use (mounted, in the jargon) so we must make the initial changes to the partitions on the SSD drive from the LiveUSB. After that we can make many of the changes from within the system provided the partition is unmounted, which rules out the root or home partitions which will always be in use. It is best to create an extra user who can make changes, or back up a user who is logged out and therefore static; we have already set up an administrative user with id 1000 for such purposes. So even if we only have one user set up we will probably need to create a temporary user at times, but that only takes seconds to set up.

Changing partitions which contain information is not something one wants to do frequently so it is best to decide what you will need, sleep on it overnight, and then do it all in one step from the LiveUSB. One should bear in mind that shrinking a partition while leaving its starting position fixed is normally not a problem; moving partitions or otherwise changing the start position takes much longer and involves more risk, including from power failures or glitches. I have never had problems but always take great care to back up everything before starting.

As soon as I was satisfied that the machine was working and I understood what the hardware was, I did the main partitioning of the SSD from the LiveUSB using gparted. This particular text belongs to a dual booted system with a new extra 1 TB SSD drive added before carrying out a default Mint Install.

I then shut down the LiveUSB and returned to a normal user.

Mounting the DATA Drive

Earlier on we looked at the output of gparted and saw where we could get the UUIDs of the drives. I am going to give an alternative view of the same information, again showing the important part you will need, namely the UUIDs. The output follows for Gemini after partitioning and was obtained by running the command lsblk -o NAME,FSTYPE,UUID,SIZE in a terminal:

peter@gemini:~$ lsblk -o NAME,FSTYPE,UUID,SIZE
NAME          FSTYPE      UUID                                   SIZE
sda                                                            931.5G
├─sda1        vfat        3003-2779                              512M
├─sda2        ext4        4c6d1f4d-49c6-40df-a9e9-b335fbc4cafe  97.7G
├─sda3        ext4        6c5da67e-94b8-4a50-aece-e1c02cc0c6fe 696.4G
├─sda4        ext4        95df9c27-f804-4cbb-8af1-567b87060c82  78.1G
└─sda5        crypto_LUKS e1898a1c-491d-455c-8aab-09eaa09cc74b  58.9G
  └─_dev_sda5 ext4        2a7ee96e-e97f-4cc5-becd-34f9753bc946  58.9G
mmcblk0                                                         29.1G
├─mmcblk0p1   vfat        6A15-BF9D                              100M
├─mmcblk0p2                                                       16M
├─mmcblk0p3   ntfs        AED216D6D216A29F                        28G
└─mmcblk0p4   ntfs        0086171086170636                       999M
mmcblk0boot0                                                       4M
mmcblk0boot1                                                       4M
mmcblk2                                                         29.7G
└─mmcblk2p1   vfat        3232-3630                             29.7G
peter@gemini:~$ 

You can get it from a GUI but this is easier with everything in a single place!

Now let's look at the simplest of the above to implement. Partitions are mounted at boot time according to the 'instructions' in the file /etc/fstab and each mounting is described in a single line. So all one has to do is to add a single line. I will not go into detail about what each entry means but this is based on one of my working systems - if you have to know what they all mean try man fstab in a terminal!

UUID=6c5da67e-94b8-4a50-aece-e1c02cc0c6fe /media/DATA ext4 defaults 0 2

You do need to edit the file as a root user as the file is a system file owned by root and is protected against accidental changes. There is a simple way to get to edit it with root permissions in the usual editor (xed). Open the file browser and navigate to the /etc folder. Now right click and on the menu click 'Open as Root'. Another file manager window will open with a big red banner to remind you that you are a superuser with god like powers in the hierarchy of the system. You can now double click the file to open it and again the editor will show a red border and you will be able to edit and save the file. I like to make a backup copy first and a simple trick is to drag and drop the file with the Ctrl key depressed and it will make a copy with (copy) in the filename.
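If you prefer the terminal, the backup copy can be made just as easily with a single command before you start editing (a sketch - use whatever backup name you like):

sudo cp /etc/fstab /etc/fstab.bak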

After you have copied in the string above you MUST change the UUID (Universally Unique IDentifier) to that of your own partition. The header in the file /etc/fstab says to find the UUID using blkid, which certainly works, or you can use the other incantation above. That is what I used to do but I have found a clearer way using a built in program accessed by Menu -> Disks. This displays very clearly all the partitions on all the Disks in the system, including the UUID, which is a random number stored in the metadata of every file system in a partition and is independent of mounting order and anything else which could cause confusion. So you just have to select the partition and copy the UUID which is displayed below it. To ensure you copy the whole UUID I recommend right clicking the UUID -> 'select all' then 'copy'. You can then paste it into the line above replacing the example string. Save /etc/fstab and the drive should be mounted next time you boot up. If you make a mistake the machine may not boot but you can use the LiveUSB to either correct or replace the file from your copy.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda2 during installation
UUID=4c6d1f4d-49c6-40df-a9e9-b335fbc4cafe / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/mmcblk0p1 during installation
UUID=6A15-BF9D /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
#/dev/sda3
UUID=6c5da67e-94b8-4a50-aece-e1c02cc0c6fe /media/DATA ext4 defaults 0 2
#/dev/sda4
UUID=95df9c27-f804-4cbb-8af1-567b87060c82 /home ext4 defaults 0 2
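If you want to check the new line without waiting for a reboot you can do so from a terminal. This is a minimal check assuming the mount point /media/DATA used above; note that the mount command will not create a missing mount point so create it first:

sudo mkdir -p /media/DATA
sudo mount -a
df -h /media/DATA

mount -a simply mounts everything listed in /etc/fstab and will report any error in the line you have just added.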

Transferring to a separate partition for home

This is much more tricky especially as it requires an understanding of the Linux file system. When you mount a partition you specify where it appears in the single folder structure starting at root. We have already mounted a Data drive which we now access at /media/DATA. When we change to a separate home folder we will be moving from a situation where /home is on the same partition as the root folder structure to one where /home is on a separate partition. This means that the new /home will be overlaid on top of the old structure, which will no longer be visible or accessible and will effectively be wasted space on the partition with root. You need to think about and understand this before proceeding.

So we need to copy the old user folders to the new partition in some way and then mount it. Eventually we will need to recover the wasted space. Just to compound problems, a simple drag and drop, or even a move in a terminal, may not correctly transfer the folders as they are likely to contain what are called symbolic links, which point to a different location rather than actually being the file or folder, and a simple copy may copy the item pointed to rather than the link, so special procedures are required. Fortunately there is a copy command using the terminal which will work for all normal circumstances, and a nuclear option of creating a compressed archive, which is what I use for backups and transferring users between machines and which has never failed me.

As a diversion for those of an enquiring mind I found an article on how to see if symlinks are present at https://stackoverflow.com/questions/8513133/how-do-i-find-all-of-the-symlinks-in-a-directory-tree and used the following terminal command on my home folder:

find . -type l -ls

Explanation: find, from the current directory . onwards, all references of -type link and list -ls those in detail. The output confirmed that my home folder had gained dozens of symlinks with the Wine and Opera programs being the worst offenders. It also confirmed that the copy command and option I am going to use does transfer standard symlinks in the expected way. It found 67 in my home folder in .wine, .pcloud, firefox, menus and skypeforlinux.

The method that follows minimises use of the terminal and is quite quick although it may seem to have many stages. First we need to mount the new partition for home somewhere so we can copy the user folders. Since we will be editing /etc/fstab anyway, an easy way is to edit fstab to mount the new partition at say /media/home just like above; reboot and we are ready to copy. The line we add at this stage will be

UUID=95df9c27-f804-4cbb-8af1-567b87060c82 /media/home ext4 defaults 0 2

We must do the copies with the users logged out; if there is only one user we need to create a temporary user just to do the transfer and we can then delete it. So we are forced once more to use the terminal so we can do the copies using the cp command with the special option -a, which is short for --archive, which never follows symbolic links whilst still preserving them and copies folders recursively

sudo cp -a /home/user1/ /media/home/
and repeat for the other users

Remember that you must log in and out of users in a way that ensures you never copy an active (logged in) user.
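A quick way to confirm that nobody else is still logged in before you start copying is the who command, run from the temporary or administrative user:

who

Only the user you are currently working from should be listed.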

Now we edit /etc/fstab again so the new partition mounts at just /home instead of /media/home, reboot, and the change should be complete. I will show another way to open fstab for editing, or use the previous way. Again I am not going to explain fully how it works but just state that it is actually the recommended way to open the text editor as root

xed admin:///etc/fstab

And change the mount point to /home in fstab

UUID=95df9c27-f804-4cbb-8af1-567b87060c82 /home ext4 defaults 0 2

I suggest you leave it at that for a few days to make sure everything is OK, after which you can get out your trusty LiveUSB and find and delete the previous users' folders from the home folder in the root partition. Always use 'Delete' rather than 'Move to the Recycle Bin' when using root privileges or it will go to root's recycle bin which is very difficult to empty

Encrypting a partition for our Vault (or an external USB Drive for Back-up)

You should now be in a good position but for those who really want to push their limits I will explain how I have encrypted partitions and USB drives. This can only be done from the terminal and I am only going to give limited explanation (because my understanding is also limited!). Consider it to be like baking a cake - the recipe can be comprehensive but you do not have to understand the details of the chemical reactions that turn the most unlikely ingredients into a culinary masterpiece.

Firstly you do need to create a partition to fill the space but it does not need to be formatted with a specific filesystem. All you need to know is the device name, in my case /dev/sda5; if yours is different when you examine it in gparted or using lsblk -o NAME,FSTYPE,UUID,SIZE then you must edit the commands which follow accordingly. Anything in the partition will be completely overwritten. There are several stages: first we add encryption directly to the drive at block level, then we add a filesystem, and finally we arrange for the encrypted drive to be mounted when a user logs in.

Add LUKS encryption to the partition

sudo cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 2000 --use-random luksFormat /dev/sda5

The options do make some sense except for --iter-time, which is the time in milliseconds allocated to generating the key and likewise to unlocking it. Each user may have a different password, and the search through the keyslots means the total time spent during login can get quite long, hence choosing 2 seconds rather than the 5 seconds I used to use. During the dialog you will first be asked for your normal login password as it is a root activity and then for the secure passphrase for the drive.

Now we mount and format the drive with an ext4 filesystem

Next you have to open your new drive and mount it - you will be asked for the passphrase you gave above

sudo cryptsetup open --type luks /dev/sda5 sda5e

Now we format the device with an ext4 filesystem and it is important to name it at this time as VAULT, or your chosen name in the case of an external USB drive; mine are called LUKS_A etc.

sudo mkfs.ext4 /dev/mapper/sda5e -L VAULT

There is one further important step and that is to back up the LUKS header, which can get damaged, by

sudo cryptsetup -v luksHeaderBackup /dev/sda5 --header-backup-file LuksHeaderBackup_gemini.bin

and save the file somewhere safe.
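Should the header ever be damaged it can be written back from that file with the matching restore command (a sketch using the same example device and file name - make very sure you have the right partition before running it):

sudo cryptsetup luksHeaderRestore /dev/sda5 --header-backup-file LuksHeaderBackup_gemini.bin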

If you were formatting an external USB Drive the job is done, however there is a 'bug' you need to understand when plugging in the drives under certain circumstances which I cover below.

If it is an internal drive such as we use for VAULT we need to mount it automatically every time the machine is booted or a user logs in.

Set up to mount an internal drive (VAULT) when a user logs in.

We now need to mount it when a user logs in. This is not done in /etc/fstab for an encrypted partition but by a utility called pam_mount which first has to be installed if you did not do it earlier:

It is in the list I provided earlier; if it is already installed the following will just tell you there is nothing to do, or you can use the Synaptic Package Manager instead

sudo apt-get install libpam-mount

We now need to find its UUID by use of lsblk -o NAME,FSTYPE,UUID,SIZE which will reveal that there are two associated UUIDs, one for the partition which is the one we need and another for the filesystem within it. There is already a screenshot of this higher up in this document for the example system.

Now we edit its configuration file using the information from above. The following magic incantation will, after a couple of passwords, open the configuration file for editing.

xed admin:///etc/security/pam_mount.conf.xml

So the middle bit gains the single line as below but make sure you change the UUID to that of your own crypto_LUKS partition. Note this is for the partition, not for the ext4 filesystem within it. See the lsblk output above in the document and check the UUIDs

...

<!-- Volume definitions -->

<volume fstype="crypt" path="/dev/disk/by-uuid/e1898a1c-491d-455c-8aab-09eaa09cc74b" mountpoint="/media/VAULT" user="*" />

<!-- pam_mount parameters: General tunables -->

..

Save the file and reboot (a logout and in should also work) and we are almost finished.

When you have rebooted you will see the folder but will not be able to add anything to it as it is owned by root!

The owner, group and permissions are best set in a terminal; the File Manager looks as if it could be used via the "set permissions for all enclosed files" option but the recursion down the file tree does not seem to work in practice. The following will set all the owners, groups and permissions in VAULT. The owner is set to the primary user (pcurtis on my machines, which is user id 1000) and the group to adm - substitute your own first user. It uses the octal way of designating permissions and sets the owner and group permissions to read and write and others to read for files, and to read, write and execute for folders, which always require execute permission. I am not going to try to explain the following magic incantation at this time, or maybe ever, but it works and is very fast with no unrequired writes to wear out your SSD or waste time.

sudo chown -R pcurtis:adm /media/VAULT && find /media/VAULT -type f -print0 | xargs -0 chmod 664 && find /media/VAULT -type d -print0 | xargs -0 chmod 775

This may have looked daunting but about 8 terminal commands and adding 1 line to a file was all it needed.

Changing and adding Passwords

This final step only applies if your users have different passwords or a password has been changed. The LUKS filesystem has 8 slots for different passwords so you can have seven users with different passwords as well as the original passphrase you set up. So if you change a login password you also have to change the matching password slot in LUKS by first unmounting the encrypted partition then using a combination of

sudo cryptsetup luksAddKey /dev/sda5
sudo cryptsetup luksRemoveKey /dev/sda5
# or
sudo cryptsetup luksChangeKey /dev/sda5

You will first be asked for your login password as you are using sudo, then you will be asked for a password/passphrase which can be a valid one for any of the keyslots when adding or removing keys, i.e. any current user password will allow you to make the change. Do not get confused with all the passwords and passphrases! It looks like this:

peter@gemini:~$ sudo cryptsetup luksAddKey /dev/sda5
[sudo] password for peter:
Enter any existing passphrase:
Enter new passphrase for key slot:
Verify passphrase:
peter@gemini:~$

Warning: NEVER remove every password or you will never be able to access the LUKS volume and I always check the new password before removing the old.
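It is also worth checking which key slots are actually occupied before adding or removing anything; luksDump lists them and only needs sudo, not a passphrase:

sudo cryptsetup luksDump /dev/sda5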

Bug when mounting USB Backup Drives Encrypted using LUKS in a multiuser environment.

This problem is easy to miss whilst testing as it only appears when one is frequently switching users. Unfortunately that is the situation when doing back-ups to an encrypted drive. The following procedure gets round the problem, which seems to be caused by the gnome-keyring getting confused in a multiuser environment, by avoiding using it! You should never need to permanently save the passphrase, which is bad for security, and saving for the session is only useful if you need to keep removing and replacing the drive, which is unlikely.

Auto-Mounting USB Backup Drives Encrypted using LUKS

All the USB Backup drives we currently have in use have been encrypted with LUKS and need a passphrase when they are mounted (plugged in).

There is a bug or feature when mounting under the latest versions of Mint. It occurs if you have been switching users before plugging in the LUKS encrypted drive. It is easily avoided by:

  1. Rebooting before plugging in the drive then keeping it mounted through any subsequent changes of user.
  2. or Mounting at any time using the Forget the Password Immediately option rather than the default of Remember the Password until you logout and again keeping it mounted until you have completely finished.

If you make a mistake you will get an error message: The drive will be unlocked but the only way to use it will be to open the file manager and click on it under devices which will mount it and it will then be accessible for use.

Remember - you should always eject/un-mount the drive using the Removable Drives applet in the tray when you have completely finished. This may need your normal user password if you are logged into a different user - you never need the passphrase for the drive to lock it.

Mounting of LUKS volumes from SSH, a Workaround for a bug/feature of pam_mount

I am not sure if this should be in this document but it might affect you if you are a power user used to working via a console from a remote machine whilst others are using the machine locally. I do it on occasion when I need to do a simple task whilst my wife is using her machine, or during synchronisation activities when unison can also trip the problem.

I have suffered a problem for a long time from what is either a feature or a bug depending on your viewpoint. When any user is logged out the LUKS VAULT is closed. That seems to apply even when multiple users are logged in, as a 'security' feature. The same applies when using unison to synchronise between machines, so my LUKS encrypted common partition which is mounted as VAULT is unmounted (closed) when unison is closed. The same goes for any remote login, except when the user accessed remotely and the user logged in on the machine are the same. This is bad news if another user was using VAULT or tries to access it - it certainly winds my wife up when a lot of work is potentially lost as her storage disappears.

I finally managed to get sufficient information from the web to understand a little more about how the mechanism (pam_mount) works to mount an encrypted volume when a user logs in remotely or locally. It keeps a running total of the logins in separate files for each user in /var/run/pam_mount and decrements them when a user logs out. When a count falls to zero the volume is unmounted REGARDLESS of other mounts which are in use with counts of one or greater in their files. One can watch the count incrementing and decrementing as local and remote users log in and out. So one solution is to always keep a user logged in, either locally or remotely, to prevent the count decrementing to zero and the automatic unmounting taking place. This is possible but a remote user could easily be logged out if the remote machine is shut down or other mistakes take place. A local login needs access to the machine and is still open to mistakes. One early thought was to log into the user locally in a terminal then move it out of sight and mind to a different workspace!

The solution I plan to adopt uses a low level command which forms part of the pam_mount suite. It is called pmvarrun and can be used to increment or decrement the count. If used from the same user it does not even need root privileges. So before I use unison or a remote login to, say, helios as user pcurtis, I do a remote login from wherever using ssh, then call pmvarrun to increment the count by 1 for user pcurtis and exit. The following is the entire terminal activity required.

peter@helios:~$ ssh pcurtis@defiant
pcurtis@defiant's password:

27 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Last login: Sat Oct 24 03:54:43 2020
pcurtis@defiant:~$ pmvarrun -u pcurtis -o 1
2
pcurtis@defiant:~$ exit
logout
Connection to defiant closed.
peter@helios:~$

The first time you do an ssh remote login you may be asked to confirm that the 'keys' can be stored. Note how the user and machine change in the prompt.

I can now use unison and remote logins as user pcurtis to machine helios until helios is halted or rebooted which should unmount VAULT cleanly. Problem solved!

I tried adding the same line to the end of the .bashrc file in the pcurtis home folder. The .bashrc file is run when a user opens a terminal and is used for general and personal configuration such as aliases. That works but gets called every time a terminal is opened, and I found a better place is .profile, which only gets called at login; the count still keeps increasing but at a slower rate. You can check the count at any time by doing an increment of zero:

pmvarrun -u pcurtis -o 0
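For reference, the line added to the end of .profile is essentially the same command with an increment of 1. A guarded version (a sketch, assuming user pcurtis) avoids error messages on a machine where pam_mount is not installed:

# added at the end of ~/.profile to keep the pam_mount login count topped up
command -v pmvarrun >/dev/null 2>&1 && pmvarrun -u pcurtis -o 1 >/dev/null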

Appendix 1

Backing-up and Restoring to an Old or New system.

Introduction

This is a chicken and egg situation. It will be easy to grow the chicken but first we must create the egg. When I started to put my system together on Gemini my intention was to take backups from my other machines and restore them on Gemini, thus avoiding a large amount of configuration, and I quickly realised that it would provide an excellent cross check of my existing back-up procedures and a chance to refine them. In addition Gemini would be the fixed machine and, if considered suitable, would be the centre of the backing up, i.e. machines would all be synchronised via Gemini and then back-ups would be made to hard disk.

In thinking it all through I started to think about when restoration from back-up was really needed and I came up with the Grab Bag concept which I am writing about separately. In this case the Grab Bag has to contain the minimum that enables one to get back in operation when away from base if one's only machine is dropped in the ocean, abandoned due to earthquakes (yes, we spend a lot of time sailing and in New Zealand and have real grab bags), or just has a hard disk failure. So this section has turned out to be about much more than a token statement that one should always back up. It does tend to concentrate on our own, somewhat complex, system but that was also designed to provide a robust system.

From the point of view of quickly getting back to business, restoring a user's folder is arguably the most important, so we need to have procedures in place which back up the users' home folders on a regular basis and make sure they are duplicated onto a drive which we have with us in the grab-bag. Home folders are complex; they typically contain hundreds of thousands of files, most from email and browsers, and are large, ~15 Gbytes in our case - the actual machine I am writing on has my home folder at 30 Gbytes with 138,000 files - so backups to compressed archives and restores are not quick. It depends on the speed of the processor for compressing and the drives for fetching and storing, but on our machines with SSD storage and Seagate USB 3.1 Hard drives encrypted with LUKS it takes about 1 minute per Gbyte to backup or restore using a tar archive, say 20 minutes to backup or restore each user from our hard drive.

The restoring of DATA and VAULT is also slow but less urgent as the important parts can always be accessed whilst still on the backup drive. Few of our machines are big enough to hold everything simultaneously, usually old video folders only exist on the backup USB drives until we want to edit them. But as an indication on the machine I am currently using DATA is about 550 Gbytes which includes our web site, our Music (70 Gbytes), Our digital Pictures from nearly 20 years (320 Gbytes) and 10 years of unedited video (150 Gbytes). In contrast VAULT only contains 4 Gbytes of information and the rest is backups of home folders etc. which, of course, also need to be kept secure. Looking at DATA a direct copy would take at least 3 hours.

What should be clear is that having a recent back-up with one is crucial and it should also be clear that there is a huge security risk if some of the backup is un-encrypted. So we will first look at the back-up requirements and procedures.

Overall Backup Philosophy for Mint

My thoughts on Backing Up have evolved considerably over time and now take much more into account the use of several machines and sharing between them and within them, giving redundancy as well as security of the data. They have been set out in Sharing, Networking, Backup, Synchronisation and Encryption under Linux Mint for many years. That page now looks much more at the ways the backups are used; they are not just a way of restoring a situation after a disaster or loss but also about cloning and sharing between users, machines and multiple operating systems. The thinking continues to evolve to take into account the use of data encryption. We have already looked at some of this from a slightly different perspective in the overall system design of our machines.

So firstly lets look more specifically at the areas that need to be backed up on our machines:

  1. The Linux operating system(s), mounted at root ( / ). This area contains all the shared built-in and installed applications but none of the configuration information for the applications or the desktop manager which is specific to users. Mint has a built in utility called TimeShift which is fundamental to how potential regressions are handled - this does everything required for this area's backups and can be used for cloning. Cloning is not easy for rebuilding away from home and a reinstall and reload of programs will be much safer on a different machine. TimeShift will be covered in detail in a separate section.
  2. The Users Home Folders which are folders mounted within /home and contain the configuration information for each of the applications as well as the desktop manager which is specific to users such as themes, panels, applets and menus. It also contains all the Data belonging to a specific user including the Desktop, the standard system folders such as Documents, Video, Music and Photos etc. It will probably also contain the email 'profiles' . This is the most challenging area with the widest range of requirements so is the one covered in the greatest depth here.
  3. Shared DATA including encrypted data in VAULT. The above covers the minimum areas required but I have an additional DATA area which is available to all operating systems and users and is periodically synchronised between machines as well as being backed up. This is kept independent and has a separate mount point. In the case of machines dual booted with Windows it could use a file system format compatible with Windows and Linux such as NTFS. The requirement for easy and frequent synchronisation means Unison is the logical tool, both between machines and for the associated synchronisation to large USB hard drives for backup. Most of our machines also have a similar encrypted partition called VAULT for sensitive data.
  4. Email and Browsers (profiles). I am going to also mention Email specifically as that has specific issues: it needs to be collected on every machine as well as pads and phones and some track kept of replies regardless of source. All incoming email is retained on the POP servers we use for months if not years and all outgoing email is copied to either a separate account accessible from all machines or, where that is not possible automatically such as on Android, a copy is sent back to the sender's inbox.

    Email is also a major security risk as saved passwords for the email servers have limited or no protection. Thunderbird has a self contained 'profile' where all the local configuration and the filing system for emails is retained and that profile along with the matching one for the firefox browser need to be backed up and that depends where they are held.

Physical Implications of Backup Philosophy - Partitioning

I am not going to go into this in great depth as it has already been covered in other places but to allow this section to be self contained I will reiterate the philosophy which is:

  1. The Linux system should be in a separate partition and there are advantages in having two partitions available for Linux systems so new versions can be run for a while before committing to them.
  2. The folder containing all the users home folders should be a separate partition mounted as /home. This separates the various functions and makes backup, sharing and cloning easier.
  3. When one has an SSD the best speed will result from having the Linux systems and the home folder using the SSD especially if the home folders are going to be encrypted.
  4. Shared DATA should be in a separate partition mounted at /media/DATA. If one is sharing with a Windows system it should be formatted as ntfs which also reduces problems with permissions and ownership with multiple users. DATA can be on a separate slower but larger hard drive.
  5. If you have an SSD swapping should be minimised and the swap partition should be on a hard drive if it is available to maximise SSD life. Swap obviously does not require any back-up as it is transient.
  6. Encryption should be considered on laptops which leave the home. Home folder encryption and encrypted drives are both possible and, if drive capacity allows, one should allocate space for an encrypted partition even if not implemented initially - in our systems it is mounted at /media/VAULT. It is especially important that email is in an encrypted area.

The Three Parts to Backing Up

1. System Backup - TimeShift - Scheduled Backups and more.

TimeShift is now fundamental to the update manager philosophy of Mint and makes backing up the Linux system very easy. To quote: "The star of the show in Linux Mint 19, is Timeshift. Thanks to Timeshift you can go back in time and restore your computer to the last functional system snapshot. If anything breaks, you can go back to the previous snapshot and it's as if the problem never happened. This greatly simplifies the maintenance of your computer, since you no longer need to worry about potential regressions. In the eventuality of a critical regression, you can restore a snapshot (thus canceling the effects of the regression) and you still have the ability to apply updates selectively (as you did in previous releases)." The best information I have found about TimeShift and how to use it is by the author.

TimeShift is similar to applications like rsnapshot, BackInTime and TimeVault but with different goals. It is designed to protect only system files and some settings common to all users. User files such as documents, pictures and music are excluded. This ensures that your files remain unchanged when you restore your system to an earlier date. Snapshots are taken using rsync and hard-links. Common files are shared between snapshots which saves disk space. Each snapshot is a full system backup that can be browsed with a file manager. TimeShift is efficient in its use of storage but it still has to store the original and all the additions/updates over time. The first snapshot seems to occupy slightly more disk space than the root filesystem and six months of additions added approximately another 35% in my case. I run with a root partition / and separate partitions for /home and DATA. Using Timeshift means that one needs to allocate an extra 2 fold storage over what one would have expected the root file system to grow to.

In the case of the Defiant the root partition has grown to about 11 Gbytes and 5 months of Timeshift added another 4 Gbytes so the partition with the /timeshift folder needs to have at least 22 Gbytes spare if one intends to keep a reasonable span of scheduled snapshots over a long time period. After three weeks of testing Mint 19 my TimeShift folder had reached 21 Gbytes for an 8.9 Gbyte system!

These space requirements for TimeShift obviously have a big impact on the partition sizes when one sets up a system. My Defiant was set up to allow several systems to be employed with multiple booting. I initially had the timeshift folder on the /home partition, which had plenty of space, but that does not work with a multi-boot system sharing the /home folder. Fortunately two of my partitions for Linux systems are plenty big enough for use of TimeShift and the third, which is 30 Gbytes, is acceptable if one is prepared to prune the snapshots occasionally. With Mint 20 and a lot of installed programs I suggest the minimum root partition is 40 Gbytes and preferably 50 Gbytes.
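You can keep an eye on how much space the snapshots are actually taking at any time; a quick check, assuming the snapshots are in the usual /timeshift folder on the root partition, is:

sudo du -sh /timeshift
df -h /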

It should be noted that it is also possible to use timeshift to clone systems between machines - that is covered in All Together - Sharing, Networking, Backup, Synchronisation and Encryption but we do not need it here.

2. Backing up Users - Home Folder Archiving using Tar.

The following covers my preferred and well tested mechanism for backing up the home folders. I have been doing this on a monthly basis on 3 machines for many years. An untested procedure is of little use so you will be pleased to know I have also done a good number of restorations usually when moving a complete user from one machine to another (cloning) and that will also be covered.

The only catch is that the only mechanism I trust does require use of a terminal although it is only a single command in a terminal to create the archive and likewise two commands to restore an archive. Once the commands are tailored to include your own username and the drive name you use they can just be repeated - the final version I provide even adds in a date stamp in the filename!

Tar is a very powerful command line archiving tool around which many of the GUI tools are built and it works on most Linux Distributions. In many circumstances it is best to use it directly to back up your system. The resulting files can also be accessed (or created) by the archive manager accessed by right clicking on a .tgz, .tar.gz or .tar.bz2 file. Tar is an ideal way to backup many parts of our system, in particular one's home folder. A big advantage of tar is that (with the correct options) it is capable of making copies which preserve all the linkages within the folders - simple copies do not preserve symlinks correctly and even archive copies (cp -aR) are not as good as a tar archive

The backup process is slow (15 mins plus) and the archive file will be several Gbytes even for the simplest system. Mine are typically 15 Gbytes and our machines achieve between 0.7 and 1.5 Gbytes/minute. After it is complete the file should be moved to a safe location, preferably an external device which is encrypted or kept in a very secure place. You can access parts of the archive using the GUI Archive Manager by right clicking on the .tgz file - again slow on such a large archive - or restore (extract in the jargon) everything, usually to the same location.

Tar, in the simple way we will be using it, takes a folder and compresses all its contents into a single 'archive' file. With the correct options this can be what I call an 'exact' copy where all the subsidiary information such as timestamp, owner, group and permissions are stored without change. Soft (symbolic) links and hard links can also be retained. Normally one does not want to follow a link out of the folder and put all of the target into the archive so one needs to take care. Tar also handles 'sparse' files but I do not know of any in a normal user's home folder.

As mentioned above, the objective is to back up each user's home folder so it can be easily replaced on the existing machine or on a replacement machine. The ultimate test is: can one back up the user's home folder to the tar archive, delete it (or, safer, rename it) and restore it exactly, so the user cannot tell in any way? The home folder is, of course, continually changing when the user is logged in so backing up and restoring must be done when the user is not logged in, i.e. from a different user, a LiveUSB or from a console. Our systems reserve the first installed user for such administrative activities. For completeness I note:

You can create a user very easily and quickly using Users and Groups by Menu -> Users and Groups -> Add Account, Set Type: Administrator -> Add -> Click Password to set a password (otherwise you can not use sudo)

Firstly we must consider what is, arguably, the most fundamental decision about backing up: the way we specify the location being saved when we create the tar archive and when we extract it - in other words the paths must be complementary and restore the folder to the same place. If we store absolute locations we must extract with absolute locations; if they are relative we must extract relative to the same place. So we will always have to consider pairs of commands depending on what we choose. In All Together - Sharing, Networking, Backup, Synchronisation and Encryption I looked at several options but in practice I have only ever used one, which is occasionally referred to as Method 1 in this document.

My preferred method (Method 1 in the full document) has absolute paths and shows home when we open the archive, with just a single user folder below it. This is what I have always used for my backups and the folder is always restored to /home on extraction. In its simplest form (there will be a better one later) it looks like this.

sudo tar --exclude=/home/*/.gvfs -cvpPzf "/media/USB_DATA_DRIVE/mybackup1.tgz" /home/user1/

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA_DRIVE/mybackup1.tgz" -C /

Notes:

Archive creation options: The options used when creating the archive are: create archive, verbose mode (you can leave this out after the first time), retain permissions, -P do not strip the leading slash, gzip the archive and output to a file. Then follows the name of the file to be created, mybackup1.tgz, which in this example is on an external USB drive called 'USB_DATA_DRIVE' - the backup name should include the date for easy reference. Next is the directory to back up. There are objects which need to be excluded - the most important of these is your backup file if it is in your /home area (not needed in this case) or it would be recursive! It also excludes the folder (.gvfs) which is used dynamically by a file mounting system and is locked, which stops tar from completing. The problems with files which are in use can be removed by creating another user and doing the backup from that user - overall that is a cleaner way to work.

Other exclusions: There are other files and folders which should be excluded. In our specific case that includes the cache area for the pCloud cloud service as it will be best to let that recreate and avoid potential conflicts. ( --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" )
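Putting those exclusions together, the creation command above becomes, for example:

sudo tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" -cvpPzf "/media/USB_DATA_DRIVE/mybackup1.tgz" /home/user1/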

Archive Restoration uses the options - extract, verbose, retain permissions, from file and gzip. This will take a while. The -C / ensures that the directory is Changed to a specified location, in this case root, so the files are restored to their original locations.

tar options style used: The options are in the old options style written together as a single clumped set, without spaces separating them, the one exception is that recent versions of tar >1.28 require exclusions to immediately follow the tar command in the format shown. Mint 20 has version 1.30 in November 2020 so that ordering applies.

Higher compression: If you want to use a higher compression method the option -j can be used in place of the -z option and .tar.bz2 should be used in place of .tgz for the backup file extension. This will use bzip2 to do the compressing, which takes longer but gives a smaller file - I have never bothered so far.

Deleting Files: If the old system is still present note that tar only overwrites files, it does not delete files from the old version which are no longer needed. I normally restore from a different user and rename the user's home folder before running tar as above; when I have finished I delete the renamed folder. This needs root/sudo and the easy way is to right click on a folder in Nemo and 'open as root' - make sure you use a right click delete to avoid going into a root deleted items folder.

Rename: The rename command somewhat confusingly uses the mv (move) command with option -T

Deleting Archive files: If you want to delete the archive file then you will usually find it is owned by root so make sure you delete it in a terminal - if you use a root browser then it will go into a root Deleted Items which you can not easily empty so it takes up disk space for ever more. If this happens then read http://www.ubuntugeek.com/empty-ubuntu-gnome-trash-from-the-command-line.html and/or load the trash-cli command line trash package using the Synaptic Package Manager and type

sudo trash-empty

Summary - Archiving a home folder and restoring

Everything has been covered above so this is just a slight expansion with the addition of some suggested naming conventions.

This uses 'Method 1' where all the paths are absolute so the folder you are running from is not an issue. This is the method I have always used for my backups so it is well proven. The folder is always restored to /home on extraction so you need to remove or preferably rename the users folder before restoring it. If a backup already exists delete it or use a different name. Both creation and retrieval must be done from a different or temporary user to avoid any changes taking place during the archive operations.

sudo tar --exclude=/home/*/.gvfs -cvpPzf "/media/USB_DATA_DRIVE/backup_machine_user_a_$(date +%Y%m%d).tgz" /home/user_a/

sudo mv -T /home/user_a /home/user_a-bak
sudo tar xvpfz "/media/USB_DATA_DRIVE/backup_machine_user_YYYYmmdd.tgz" -C /

Note: the automatic inclusion of the date in the backup file name and the suggestion that the machine and user are also included. In this example the intention is to send the archive straight to a plug-in USB hard drive, encrypted is best.

There is one caution: there is currently a problem with using LUKS encrypted backup drives which is easily overcome - it was covered above in the section describing how to encrypt drives - you should remind yourself of the procedure.

Cloning between systems using a backup archive.

This is what we are, in effect, doing when we are rebuilding our system on the new machine. This is easy provided the users you are cloning were installed in the same order and you make the new usernames the same as the old. I have done that many times. There is however a catch which you need to watch for, and that is that the way Linux stores user names is a two stage process. If I set up a system with the user pcurtis when I install, that is actually just an 'alias' for the numeric user id 1000 in the Linux operating system - user ids in Mint start at 1000. If I then set up a second user peter, that will correspond to user id 1001. If I have a disaster and reinstall and this time start with pauline, she will have user id 1000 and peter is 1001. I then get my carefully backed up folders and restore them, and the folders now have all the owners etc. incorrect as they use the underlying numeric value, except, of course, where the name is used in hard coded scripts etc. This is why I keep reiterating in different places that the users must not only have the same names but be installed in the same order.

You can check all the relevant information for the machine you are cloning from by use of id in a terminal:

pcurtis@gemini:~$ id
uid=1000(pcurtis) gid=1000(pcurtis) groups=1000(pcurtis),4(adm),6(disk),
20(dialout),21(fax),24(cdrom),25(floppy),26(tape),29(audio),30(dip),
44(video),46(plugdev),104(fuse),109(avahi),110(netdev),112(lpadmin),
120(admin),121(saned),122(sambashare)

So when you install on a new machine you should always use the same usernames and passwords as on the original machine and then create an extra user with admin (sudo) rights for convenience for the next stage, or do what we do and always save the first user for this purpose.

Change to your temporary user, rename the first user's folder (you need to be root) and replace it with the archived folder from the original machine. Now login to the user again and that should be it. At this point you can delete the temporary user. If you have multiple users to clone the user names must obviously be the same and, more importantly, the numeric id must be the same as that is what is actually used by the kernel; the username is really only a convenient alias. This means that the users you may clone must always be installed in the same order on both machines or operating systems so they have the same numeric UID. I have repeated myself about this several times because it is so important!

So we first make a backup archive in the usual way and take it to the other machine or switch to the other operating system and restore as usual. It is prudent to backup the system you are going to overwrite just in case.

So first check the id on both machines for the user(s) by use of

id user

If and only if the ids are the same can we proceed

On the first machine and from a temporary user:

sudo tar --exclude=/home/*/.gvfs -cvpPzf "/media/USB_DATA/backup_machine_user1_$(date +%Y%m%d).tgz" /home/user1/

On the second machine or after switching operating system and from a temporary user:

sudo mv -T /home/user1 /home/user1-bak # Rename the user's home folder.

sudo tar xvpfz "/media/USB_DATA/mybackup_with_date.tgz" -C /

When everything is well tested you can delete the renamed folder
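For example, from the temporary user (take care - this removes the folder permanently):

sudo rm -rf /home/user1-bak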

3. DATA Synchronisation and Backup - Unison

This is a long section because it contains a lot of background, but in practice I just have one central machine which has Unison set up which offers a set of profiles, each of which will check and then do all the synchronisation to a drive or machine very quickly. I do it once a month as backup and when required if, for example, I need to edit on different machines. When one is away one will want to do a selective synchronisation of the areas of data which are being changed or added to - this would include pictures, video and possibly any music you had ripped.

This is what Unison looks like when editing or setting up a synchronisation profile.

Screenshot

So how does it work? Linux has this very powerful tool available called Unison to synchronise folders, and all their subfolders, either between drives on the same machine or across a local network using a secure transport called SSH (Secure SHell). At its simplest you can use a Graphical User Interface (GUI) to synchronise two folders which can be on any of your local drives, a USB external hard drive or on a networked machine which also has Unison and SSH installed. Versions are even available for Windows machines but one must make sure that the Unison version numbers are compatible, even between Linux versions. That has caused me a lot of grief in the past and has been largely instrumental in causing me to upgrade some of my machines to Mint 20 from 19.3 earlier than I would have done.

If you are using the graphical interface, you just enter or browse for two local folders and it will give you a list of differences and recommended actions which you can review and it is a single keystroke to change any you do not agree with. Unison uses a very efficient mechanism to transfer/update files which minimises the data flows based on a utility called rsync. The initial Synchronisation can be slow but after it has made its lists it is quite quick even over a slow network between machines because it is running on both machines and transferring minimum data - it is actually slower synchronising to another hard drive on the same machine.

The Graphical Interface (GUI) has become much more comprehensive than it was when I started using it and can now handle most of the options you may need, however it does not allow you to save the configurations to a different name. You can however find them very easily as they are all stored in a folder in your home folder called .unison so you can copy and rename them to allow you to edit each separately, for example, you may want similar configurations to synchronise with several hard drives or other machines. The format is so simple and obvious.

For more complex synchronisation with multiple folders and perhaps exclusions you set up a more complex configuration file for each Synchronisation Profile and then select and run it from the GUI as often as you like. It is easier to do than describe - a file to synchronise my four important folders My Documents, My Web Site, Web Sites, and My Pictures is only 10 lines long and contains under 200 characters yet synchronises 25,000 files!

After Unison has done the initial comparison it lists all the potential changes for you to review. The review list is intelligent and if you have made a new folder full of sub folders of pictures it only shows the top folder level which has to be transferred. You have the option to put off, agree or reverse the direction of any files with differences. Often the differences are in the metadata such as date or permissions rather than the file, and enough information is supplied to resolve those differences - other times, when changes have been made independently in different places, the decisions may be more difficult but there is an ability to show differences between simple (text) files.

Both Unison and SSH are available in the Mint repositories but need to be installed using System -> Administration -> Synaptic Package Manager and search for and load the unison-gtk and ssh packages or they can be directly installed from a terminal by:

sudo apt-get install unison-gtk ssh

The first is the GUI version of Unison which also installs the terminal version if you need it. The second is a meta package which installs the SSH server and client, blacklists and various library routines.
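Since the version numbers must be compatible between machines (see above) it is worth checking each machine once Unison is installed:

unison -version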

The procedures initially written in grab_bag.htm and gemini.htm differ slightly from what I have done in the past in an attempt to get round a shortcoming in the Unison Graphical User Interface (GUI). In looking deeper into some of the aspects whilst writing up the monthly backup procedures I realised there were some unintended consequences, which has caused me to look again at this section and update it.

Unison is then accessible from the Applications Menu and synchronising can be tried out immediately using the GUI. There are some major cautions if the defaults are used - the creation/modification dates are not synchronised by default so you lose valuable information related to files although the contents remain unchanged. The behaviour can easily be changed in the configuration files or by setting the option times to true in the GUI. The other defaults also pose a risk of loss of associated data. For example the user and group are not synchronised by default - they are less essential than the file dates and I normally leave them with their default values of false. Permissions however are set by perm, which is a mask, and read, write and execute are preserved by default, which in our case is not required. What one is primarily interested in is the contents of the file and the date stamps; the rest is not required if one is interested in a common data area although they are normally essential to the security of a Linux system.

WARNINGs about the use of the GUI version of Unison:

I find it much easier and more predictable to do any complex setting up by editing the configuration/preference files (.prf files) stored in .unison. My recommendation is that you do a number of small scale trials until you understand and are happy, then edit the configuration files to set up the basic parameters and only edit the details of the folders etc. you are synchronising in the GUI. I have put an example configuration file with comments further down this write up; the following is a basic file, which you can use as a template, that synchronises changes in the local copy of My Web Site and this year's pictures with backup drive LUKS_H

# Unison preferences - example file recent.prf in .unison
label = Recent changes only
times = true
perms = 0o0000
fastcheck = true
ignore = Name temp.*
ignore = Name *~
ignore = Name .*~
ignore = Name *.tmp
path = My Pictures/2020
path = My Web Site
root = /media/DATA
root = /media/LUKS_H

Note the use of root and path gives considerable flexibility and they are both safe to edit in the GUI unlike label and perms. Also note the letter o designating octal in perms - do not mistake it for an extra zero!
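If you also have the terminal version of Unison installed, a named profile such as the example above (saved as recent.prf in .unison) can be run without the GUI; adding -batch accepts all the non-conflicting changes automatically (a sketch):

unison recent -batch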

Choice of User when running Unison

There is, however, a very important factor which must be taken into account when synchronising using the GUI and wishing to preserve the file dates and times. When using the GUI you are running unison as a user, not as root, and there are certain actions that can only be carried out by the owner or by root. It is obvious that, for security, only the owner can change the group and owner of a file or folder and the permissions. What is less obvious is that only the owner or root can change the time stamp. Synchronisation therefore fails if the time stamp needs to be updated for files you do not own, which is common during synchronisation. I therefore carry out all synchronisations from the first user installed (id 1000) and set the owners of all files to that user (id 1000) and the group to adm (because all users who can use sudo are automatically in group adm). This can be done very quickly with recursive commands in the terminal for the folders DATA and VAULT on the machine and the USB drives which are plugged in (assuming they are formatted with a Linux file system). If Windows formatted USB drives or sticks are plugged in they will pick up the same owner as the machine at the time regardless of history.

sudo chown -R 1000:adm /media/DATA

sudo chown -R 1000:adm /media/VAULT

sudo chown -R 1000:adm /media/USB_DRIVE

Ownership requirements for Trash (Recycle Bins) to work.

There is, however, an essential extra step required. In changing the ownership of the entire partition we have also changed the ownership of the .Trash-nnnn folders and the result is that one can no longer use the recycle bin (trash) facilities. I have only recently worked that one out! There is an explanation of how trash is implemented on drives on separate partitions at http://www.ramendik.ru/docs/trashspec.html and one can see that each drive has a trash folder for each user, named .Trash-1000, .Trash-1001 etc, and these need to have their owner and group set back to the matching user. So with two extra users we need to do:

sudo chown -R 1001:1001 /media/DATA/.Trash-1001 && sudo chown -R 1002:1002 /media/DATA/.Trash-1002

sudo chown -R 1001:1001 /media/VAULT/.Trash-1001 && sudo chown -R 1002:1002 /media/VAULT/.Trash-1002

Note: All the above applies to Linux file systems such as ext4 - old fashioned Windows file systems, which do not implement owners, groups or permissions, have to be mounted with the user set to 1000 and the group to adm in a different way (using fstab), the details of which are beyond this document.
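As an illustrative sketch only - the UUID, mount point and numeric group id below are assumptions, not taken from my machines - an fstab entry for an NTFS data partition might look like this (adm is usually gid 4, which you can confirm with getent group adm):

# Illustrative /etc/fstab line - UUID and mount point are placeholders
# dmask=002 gives folders 775, fmask=113 gives files 664, matching the scheme above
UUID=0123456789ABCDEF  /media/WINDATA  ntfs-3g  uid=1000,gid=4,dmask=002,fmask=113  0  0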

Considerations about permissions and synchronisation.

I initially tried to set all the permissions for the drives being synchronised in bulk when I found the GUI did not allow permissions to be ignored. I now find it is better to just edit the preference file for each synchronisation to ignore permissions (perms = 0o0000 as above).

If you do want to set the permissions recursively for a whole partition or folder you have to recall that folders need the execute bit set otherwise they can not be accessed. The most usual permissions are read and write for both the owner and group and read only for others. As we are considering shared data one can usually ignore preserving or synchronising the execute bits, although sometimes one has backups of executable programs and scripts which might need them to be reapplied if one retrieves them for use.

The permissions are best set in a terminal; the File Manager looks as if it could be used via its "set permissions for all enclosed files" option but the recursion down the file tree does not seem to work in practice.

The following will set all the owners, groups and permissions in DATA, VAULT and a USB drive if you need to - it is not essential. The primary user is referred to by id 1000 to make this more general. This uses octal notation and sets files to read and write for the owner and group and read for others, and folders to read, write and execute for the owner and group and read and execute for others (folders always require execute permission). I am not going to try to explain these magic incantations at this time, or maybe ever, but they work and are very fast, with no unrequired writes to wear out your SSD or waste time. They need to be run from user id 1000 to change the permissions (adding an extra sudo in front of every chmod will also work from other users).

find /media/DATA -type f -print0 | xargs -0 chmod 664 && find /media/DATA -type d -print0 | xargs -0 chmod 775

find /media/VAULT -type f -print0 | xargs -0 chmod 664 && find /media/VAULT -type d -print0 | xargs -0 chmod 775

find /media/USB_DRIVE -type f -print0 | xargs -0 chmod 664 && find /media/USB_DRIVE -type d -print0 | xargs -0 chmod 775

This only takes seconds on an SSD and appears instantaneous after the first time. USB backup drives take longer as they are hard drives.
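A simpler alternative, if you do not need the exact 664/775 pattern, is a single recursive chmod per partition using the capital X, which adds the execute bit only to folders (and to files which already have it for somebody). This is a sketch of the standard chmod technique rather than part of my procedure above:

sudo chmod -R u=rwX,g=rwX,o=rX /media/DATA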

SSH (Secure Shell)

In the introduction I spoke of using the ssh protocol to synchronise between machines.

When you set up the two 'root' directories for synchronisation you get four options when you come to the second one - we have used the Local option so far but if you want to synchronise between machines you select the SSH option. You then fill in the hostname, which can be a numeric IP address or a hostname, and the username on the other machine. You will also need to know the folder on the other machine as you can not browse for it. When you come to synchronise you will have to give the password corresponding to that username; often it asks for it twice for some reason.

Setting up ssh and testing logging into the machine we plan to synchronise with.

I always check that ssh has been correctly set up on both machines and initialise the connection before trying to use Unison. In its simplest form ssh allows one to log on to a remote machine using a terminal. Both machines must have ssh installed (and the ssh daemon running, which is the default after you have installed it).
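On Mint the ssh client is normally present already; if the server side is missing on either machine it can be added and then checked with something like the following (openssh-server and the ssh service are the names used by Ubuntu/Mint):

sudo apt install openssh-server

systemctl status ssh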

The first time you use SSH to a user on another machine you will get a warning that it can not authenticate the connection, which is not surprising as you have not used it before, and it will ask for confirmation - you have to type yes rather than just y. It will then tell you it has saved the authentication information for the future and you will get a request for a password, which is the start of your log in on the other machine. After providing the password you will get a few more lines of information and be back to a normal terminal prompt, but note that it is now showing the name of the other machine. You can enter some simple commands such as a directory list (ls) if you want to try it out.

pcurtis@gemini:~$ ssh pcurtis@lafite
The authenticity of host 'lafite (192.168.1.65)' can't be established.
ECDSA key fingerprint is SHA256:C4s0qXX9GttkYQDISV1fXpNLlXlXQL+CXjpNN+llu0Y.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'lafite,192.168.1.65' (ECDSA) to the list of known hosts.
pcurtis@lafite's password: **********

109 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Last login: Sun Oct 18 03:29:24 2020 from 192.168.1.64
3
pcurtis@lafite:~$ exit
logout
Connection to lafite closed.
pcurtis@gemini:~$

This is how computing started with dozens or even hundreds of users logging into machines less powerful than ours via a terminal to carry out all their work. How things change!

The hostname resolution seems to work reliably with Mint 20 on my network; however if username@hostname does not work try username@hostname.local. If neither works you will have to use the numeric IP address, which can be found by clicking the network manager icon in the tool tray -> Network Settings -> the settings icon on the live connection. The IP addresses can vary if the router is restarted but can often be fixed in the router's internal setup, but that is another story.
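The numeric address can also be found from a terminal on the machine you want to reach; either of these standard commands should work on Mint:

hostname -I

ip addr show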

Example of a more complex profile for Unison with notes on the options:

The profiles live in /home/username/.unison and can be created or edited with xed.

# Example Profile to synchronise to helios with username pcurtis on helios
# Note: if the hostname is a problem then you can also use an absolute address
# such as 192.168.1.4 on helios
#
# Optional label used in a profile to provide a descriptive string documenting its settings.
label = Recent changes only
#
# Roots for the synchronisation
root = /media/DATA
root = ssh://pcurtis@helios//media/DATA
#
# Paths to synchronise
path = My Backups
path = My Web Site
path = My Pictures/2020
path = My Video/2020
#
# Some typical regexps specifying names and paths to ignore
ignore = Name temp.*
ignore = Name *~
ignore = Name .*~
ignore = Name *.tmp
#
# Some typical Options - only times is essential
#
# When fastcheck is set to true, Unison will use the modification time and length of a
# file as a ‘pseudo inode number’ when scanning replicas for updates, instead of reading
# the full contents of every file. Faster for Windows file systems.
fastcheck = true
#
# When times is set to true, file modification times (but not directory modtimes) are propagated.
times = true
#
# When owner is set to true, the owner attributes of the files are synchronized.
# owner = true
#
# When group is set to true, the group attributes of the files are synchronized.
# group = true
#
# The integer value of this preference is a mask indicating which permission bits should be synchronized.
# In general we do not want (or need) to synchronise the permission bits (or owner and group)
# Note the letter o designating octal - do not mistake for an extra zero
perms = 0o0000

The above is a fairly comprehensive profile file to act as a framework and the various sections are explained in the comments.
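Once a profile exists it can also be run from a terminal rather than the GUI; for example, using the recent.prf profile from earlier:

unison recent

unison -ui text -batch recent

The -ui text option forces the text interface and -batch accepts all non-conflicting changes without asking; leave out -batch the first few times so you can review what it proposes.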

Appendix 2 - Encrypting your home folder

The ability to encrypt your home folder has been built into Mint for a long time and it is an option during installation for the initial user. It is well worth investigating if you have a laptop, but there are a number of considerations, and it becomes far more important to back up your home folder in the mounted (un-encrypted) form to a securely stored hard drive, as it is possible to get locked out in a number of less obvious ways such as changing your login password incorrectly.

There is a passphrase generated for the encryption which can in theory be used to mount the folder, but the forums are full of issues with few solutions! You should generate and record it for each user with:

ecryptfs-unwrap-passphrase

Now we find there is considerable confusion over what is being produced and what is being asked for in many of the ecryptfs options and utilities, as it will request your passphrase in order to give you your passphrase! I will try to explain. When you log in as a user you have a login password or passphrase. The folder is actually encrypted with a much longer, randomly generated passphrase which is unwrapped when you log in with your login password - that is what you are being given above and what is needed if something goes dreadfully wrong. These are [should be] kept in step if you change your login password using the GUI Users and Groups utility, but not if you do it in a terminal. It is often unclear which password is required as both are often just referred to as the passphrase in the documentation.

Encrypting an existing user's home folder.

It is possible to encrypt an existing user's home folder provided there is at least 2.5 times the folder's size available in /home - a lot of workspace is required and a backup copy is made.

You also need to do it from another user's account. If you do not already have one, an extra basic user with administrator (sudo) privileges is required, and that user must be given a password otherwise sudo can not be used.

You can create this basic user very easily and quickly using Users and Groups: Menu -> Users and Groups -> Add Account, set Type to Administrator, provide a username and full name -> Create -> highlight the user and click Password to set a password, otherwise you can not use sudo.
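The same can be done from a terminal if you prefer; this is a sketch in which the temporary username tempadmin is my own choice for illustration, not anything required by the procedure:

sudo adduser tempadmin

sudo usermod -aG sudo tempadmin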

Restart and log in to your new basic user. Restart rather than just logging out - you may get errors if you just log out as the gvfs file system may still have files in use.

Now you can run this command to encrypt a user:

sudo ecryptfs-migrate-home -u user

You'll have to provide that user account's login password. After you do, the home folder will be encrypted and you should be presented with some important notes. In summary, the notes say:

  1. You must log in as the other user account immediately – before a reboot!
  2. A copy of your original home directory was made. You can restore the backup directory if you lose access to your files. It will be of the form user.8random8
  3. You should generate and record the recovery passphrase (aka Mount Passphrase).
  4. You should encrypt your swap partition, too.

The highlighting is mine and I reiterate: you must log out and log in to the user whose account you have just encrypted before doing anything else.

Once you are logged in you should also create and save somewhere very safe the recovery passphrase (also described as a randomly generated mount passphrase). You can repeat this at any time whilst you are logged into the user with the encrypted account like this:

user@lafite ~ $ ecryptfs-unwrap-passphrase
Passphrase:
randomrandomrandomrandomrandomra
user@lafite ~ $

Note the confusing request for a Passphrase - what is required here is your Login password/passphrase. This will not be the only case where you are asked for a passphrase which could be either your Login passphrase or your Mount passphrase! The Mount Passphrase is important - it is what actually unlocks the encryption. There is an intermediate stage when you log into your account where your login password is used to temporarily unwrap the actual mount passphrase. This linkage needs to be updated if you change your login password, and for security reasons this is not done if you change your login password in a terminal using passwd user, which could be done remotely. If you get the two out of step the mount passphrase may be the only way to retrieve your data, hence its great importance. It is also required if the system is lost and you are accessing backups on disk.

The documentation in various places states that the GUI Users and Groups utility updates the linkage between the Login and Mount passphrases, but I have found that the password change facility is greyed out in Users and Groups for users with encrypted home folders. In a single test I used just passwd from the actual user and that did seem to update both; everything kept working and allowed me to log in after a restart.
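If the two do get out of step, my understanding is that the wrapping can be updated by hand with ecryptfs-rewrap-passphrase, which asks for the old and new wrapping (login) passphrases - treat this as a pointer to investigate rather than a procedure I have tested:

ecryptfs-rewrap-passphrase ~/.ecryptfs/wrapped-passphrase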

Mounting an encrypted home folder independently of login.

A command line utility ecryptfs-recover-private is provided to mount the encrypted data but it currently has several bugs when used with the latest Ubuntu or Mint.

  1. You have to specify the path rather than let the utility search.
  2. You have to manually link keychains with a magic incantation which I do not completely understand namely sudo keyctl link @u @s after every reboot. A man keyctl indicates that it links the User Specific Keyring (@u) to the Session Keyring (@s). See https://bugs.launchpad.net/ubuntu/+source/ecryptfs-utils/+bug/1718658 for the bug report

The following is an example of using ecryptfs-recover-private and the mount passphrase to mount a home folder as read/write (--rw option), doing an ls to confirm, then unmounting and checking with another ls.

pcurtis@lafite:~$ sudo keyctl link @u @s
pcurtis@lafite:~$ sudo ecryptfs-recover-private --rw /home/.ecryptfs/pauline/.Private
INFO: Found [/home/.ecryptfs/pauline/.Private].
Try to recover this directory? [Y/n]: y
INFO: Found your wrapped-passphrase
Do you know your LOGIN passphrase? [Y/n] n
INFO: To recover this directory, you MUST have your original MOUNT passphrase.
INFO: When you first setup your encrypted private directory, you were told to record
INFO: your MOUNT passphrase.
INFO: It should be 32 characters long, consisting of [0-9] and [a-f].

Enter your MOUNT passphrase:
INFO: Success! Private data mounted at [/tmp/ecryptfs.8S9rTYKP].
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
Desktop Dropbox Pictures Templates
Documents Videos Downloads Music Public
pcurtis@lafite:~$ sudo umount /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$

The above deliberately took the long way rather than use the matching LOGIN passphrase as a demonstration.

I have not bothered yet with encrypting the swap partition as it is rarely used if you have plenty of memory and swappiness set low as discussed earlier.
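For completeness, the utility the migration notes point to for encrypting swap is ecryptfs-setup-swap; I have not run it on these machines so treat it as untested here:

sudo ecryptfs-setup-swap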

Once you are happy you can delete the backup folder (the user.8random8 folder mentioned above) to save space. Make sure you Delete it (right click -> Delete) if you use nemo as root - do not risk it ending up in a root trash, which is a pain to empty!
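Deleting it from a terminal avoids the trash problem entirely; the folder name below is only indicative - use the exact name that ecryptfs-migrate-home reported:

sudo rm -rf /home/user.8random8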

Feature or Bug - home folders remain mounted (decrypted) after logout?

In the more recent versions of Ubuntu and Mint the home folders remain mounted after logout. This also occurs if you log in at a console or remotely over SSH. This is useful in many ways and you are still fully protected if the machine is off when it is stolen; you have little protection in any case if it is turned on and just suspended. Some people, however, log out and suspend expecting full protection, which is not the case. In exchange it makes backing up and restoring a home folder easier.

Backing up an encrypted folder.

A tar archive can be generated from a mounted home folder in exactly the same way as before, changing to another user to ensure the folder is static - the folder now stays mounted (unencrypted) when you do so. If that were not the case you could log in at a console (Ctrl Alt F2) and then switch back to the GUI with Ctrl Alt F7, or log in via SSH, to make sure it was mounted to allow a backup. Either way it is best to log out at the end.
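As a sketch only - the username, destination drive and archive name are illustrative assumptions, not my actual ones - the archive can be made along these lines while logged in as the other user:

cd /home && sudo tar cvpzf /media/LUKS_H/username-backup-$(date +%Y%m%d).tgz username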

Summary and Conclusions

The overall page has been written in parallel with the setting up of my Chillblast WAP Pro v2 (aka Gemini), which has served as a test of much of the content. The procedures stood up well when I had a real problem: several features went missing on gemini after some work on the system and I did not have time to chase down exactly what I had deleted. I did a Timeshift restore back two days, which went perfectly and quickly, followed by a restore of the last home folder backup from a compressed tar archive which was only a few days old. That got me back up and working in quick time with only a couple of files to copy back from the re-named folder - you might as well not bother if you do not test and trust your backup procedures.

 

Before You Leave

I would be very pleased if visitors could spare a little time to give me some feedback - it is the only way I know who has visited, whether it is useful and how I should develop its content and the techniques used. I would be delighted if you could send comments or just let us know you have visited by Sending a quick Message to me.

Copyright © Peter & Pauline Curtis
Fonts revised: 28th April, 2021