Diary of System and Website Development
Part 33 (October 2020 -> December 2020 )
Much of the initial section of this Diary page consists of drafts of sections subsequently transferred to the new "Gemini" and "Linux Grab Bag" pages, which should be checked for the latest information

25th October 2020

Burning a LiveUSB to run Linux Mint

This is based on the instructions on the Mint web site and is in many ways the most important step towards running Linux on a new or old machine, as everything can then be done in Linux, even on a new machine without any existing operating system. This can save £45 to £90 on a future purchase of a specialist machine by not paying for Windows.

The easiest way to test and install Linux Mint is with a USB stick. If you cannot boot from USB, you can use a DVD but optical discs are slow and burning to disc is prone to errors.

First download the latest iso from the Linux Mint Download page

In Linux Mint

Screenshot

In Windows, Mac OS, or other Linux distributions

Download balenaEtcher, install it and run it.

In Windows the portable version does not even install anything on your machine and can live on a USB drive. I do not have a Windows system to test it, but it ran under Wine, the Windows 'emulator', under Mint.

Screenshot

Using Etcher

How to make a bootable DVD

This is for completeness - a LiveUSB is better as it is more reliable and the Live system runs faster. However it is an option if you do not have a spare USB stick but do have some old DVDs and a machine with a built-in DVD drive.

In Linux

Install and use xfburn.
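For reference, xfburn can be installed from the terminal if it is not already present - a minimal sketch, with the menu route being the simplest way to use it afterwards:

sudo apt install xfburn
# then open Xfburn from the menu, choose 'Burn Image' and select the downloaded ISO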

In Windows

Right-click the ISO file and select Burn disk image.

To make sure the ISO was burned without any errors, select Verify disc after burning.

In Mac OS

Right-click the ISO file and select Burn Disk Image to Disc.

Our 'Standard Installation' with an introduction to General Linux File Systems

I have recently realised that I take a lot for granted in the implementation of our system and it is quite difficult to explain to someone else how to install it and the logic behind the 'partitioning' and encryption of various sections. So I have decided to try to go back to basics and explain a little about the underlying structure of Linux.

Firstly Linux is quite different to Windows, which more people understand. In Windows individual pieces of hardware are given different drive designations: A: was in the old days usually a floppy disk, C: was the main hard drive, D: might be a hard drive for data and E: a DVD or CD drive. Each would have a file system and directory structure and you would have to specify which drive a file lived on. Linux is different - there is only one file system with a hierarchical directory structure. All files and directories appear under the root directory /, even if they are stored on different physical or virtual devices. Most of these directories exist in all Unix-like operating systems and are generally used in much the same way; however, there is a Filesystem Hierarchy Standard which defines the directory structure and directory contents in Linux distributions. Some of these directories only exist on a particular system if certain subsystems, such as the X Window System, are installed, and some distributions follow slightly different implementations, but the basics are identical whether it be a web server, supercomputer or Android phone. Wikipedia has a good and comprehensive Description of the Standard Directory structure and contents

That however tells us little about how the basic hardware (hard drives, solid state drives, external USB drives, USB sticks etc) is linked (mounted in the jargon) into the filesystem, or how and why parts of our system are encrypted. Nor does it tell us enough about users or about the various filesystems used. Before going into that in detail we also need to look at another major feature of Linux which is not present in Windows, namely ownership and permissions for files.

So how does a storage device get 'mapped' to the Linux file system? There are a number of stages. Large devices such as hard disks and solid state drives are usually partitioned into a number of sections before use. Each partition is treated by the operating system as a separate 'logical volume', which makes it function in a similar way to a separate physical device. A major advantage is that each partition has its own file system, which helps protect data from corruption. Another advantage is that each partition can use a different type of file system - a swap partition will have a completely different format to a system or data partition. Different partitions can also be selectively encrypted when block encryption is used. Older operating systems only allowed you to partition a disk during a formatting or reformatting process, which meant you had to reformat a hard drive (and erase all of your data) to change the partition scheme. Linux disk utilities now allow you to resize partitions and create new partitions without losing all your data - that said, it is still a process with risks and important data should be backed up. Once the device is partitioned, each partition needs to be formatted with a file system, which provides a directory system and defines the way files and folders are organised on the disk, and it hence becomes part of the overall Linux directory structure once mounted.

If we have several different disk drives we can choose which partitions to use to maximise reliability and performance. Normally the operating system (kernel/root) will be on a Solid State Drive (SSD) for speed, and large amounts of data such as pictures and music can be on a slower hard disk drive, whilst the areas used for actually processing video and perhaps pictures are best on a high speed drive. Sensitive data may be best encrypted, so one's home folder and part of the Data areas will be encrypted. Encryption may however slow down performance with a slow processor.

The 'Standard Installation' we have implemented on our machines

We now have four machines in everyday use; three are laptops but only two are lightweight ultrabooks. The other and oldest machine still has the edge in performance, being an nVidia Optimus machine with a discrete video card and an 8 thread processor, but it is heavy and has a limited battery. The other two are true ultrabooks - light, small, powerful and with many hours of use on battery power. We also have a desktop which has scanners, printers and old external drives connected and a big screen.

All the machines are multi-user so both of us can use any machine in an emergency for redundancy and whilst travelling, and most of the storage is shared and periodically synchronised. The users' home folders are also periodically backed up and the machines are set up so backups can be restored to any of the three machines. In fact the machines all have three users. The first user installed, with id 1000, is used primarily as an administrative convenience for organising backups and synchronisation of the other users from a 'static' situation when they are logged out. There is no necessity for the id 1000 user to have an encrypted home folder as no sensitive information is accessed by that user and there is no email with saved logins or saved information in the browsers. In contrast the other users have email and browsers where the saved logins are the weak point in most systems, as most passwords can be recovered by email - an incredibly weak point for many web sites, where the minimum for password recovery should be two factor using a text message or other extra information. The other users therefore need encrypted home folders where email and browser information is kept safe. I favour that over a separate encrypted folder holding email and browser 'profiles', as I have had problems with encrypted folders initially mounted at login getting unmounted whilst email and browsers were active, with undesirable results and real potential for considerable data loss. An encrypted folder to save sensitive documents and backups is however desirable if not essential, and it is preferable for it to be mounted at login.

So the end result is that each machine has:

The following shows the actual partitions I have set up on Gemini. I have named the partitions especially to make it clear - normally one would not bother.

Screenshot

Implementing our Standard Installation

We will start by working out what we need in detail, as we do not want to repeat any actual partitioning, then start with the easiest step, which is also arguably the most important as it separates our data from the system and users. So the overall plan is:

  1. Look at the requirements and decide what the final arrangement should be even if we do not implement it all at once.
  2. Repartition the device to have the final partitions layout similar to that above from the LiveUSB
  3. Arrange for the DATA partition to be mounted automatically when the system boots. This only involves collecting some information and adding one line to a file.
  4. Change to a separate partition for the home folders. This is the tricky one.
  5. Maybe create an encrypted partition to hold our Vault.
  6. Maybe encrypt one or more of the home folders

The question now is how to get from a very basic system with a single partition for the Linux system to our optimised partitioning and encryption. As I said earlier my normal way is to do it all during installation, but that would be a considerable challenge for a new user. So we divide it into a series of small challenges, each of which is manageable. Some are relatively straightforward, some are more tricky. Some can be done in a GUI and editor but some are only practical in a terminal. Using a terminal will mostly involve copying and pasting a command into the terminal, sometimes with a minor edit to a file or drive name. Using a terminal is not something to be frightened of - it is often easier to understand what you are doing.

1. Look at the requirements and decide what the final arrangement should be

There are two requirements which are special to Mint and to our particular set up which influence our choice of the amount of space to allocate.

TimeShift Disk Space Requirements

TimeShift has to be considered as it has a major impact on the disk space requirements when partitioning the final system. It was one of the major additions to Mint which differentiates it from other lesser distributions. It is fundamental to the update manager philosophy as Timeshift allows one to go back in time and restore your computer to a previous functional system snapshot. This greatly simplifies the maintenance of your computer, since you no longer need to worry about potential regressions. In the eventuality of a critical regression, you can restore a snapshot and still have the ability to apply updates selectively (as you did in previous releases). This comes at a cost as the snapshots take up space. Although TimeShift is extremely efficient, my experience so far means that one needs to allocate at least a 2 fold and preferably 3 fold extra storage over what one expects the root file system to grow to, especially if you intend to take many extra manual snapshots during early development and before major updates. I have already seen a TimeShift folder reach 21 Gbytes for an 8.9 Gbyte system before pruning manual snapshots.

pCloud Requirements

We make use of pCloud for cloud storage and that has consequences for the amount of space required in the home folders. pCloud uses a large cache to provide fast response; the default and minimum is 5 GB in each user's home folder. In addition it needs a further 2 GB of headroom (presumably for buffering) on each drive. With three users that gives an additional requirement of ~17 GB (3 × 5 GB of cache plus 2 GB of headroom) over a system not using cloud storage. Other cloud storage implementations will also have consequences for storage requirements and again these will probably be per user.

Normal Constraints on the filesystem

One must take into account the various constraints from use of an SSD (EXT4 filesystem, partition alignment and overprovisioning) and the space required for the various other components. My normal provisioning goes like this when I have a single boot system, and most of it is the same for the Linux components of a dual boot system:

The first thing to understand is fairly obvious - you cannot and must not make changes to, or back up, a file system or partition which is in use, as files will be continuously changing whilst you do so and some files will be locked and inaccessible.

That is where the LiveUSB is once again very useful if not essential. We cannot change a partition which is in use (mounted in the jargon) so we must make the initial changes to the partitions on the SSD from the LiveUSB. After that we can make many of the changes from within the system provided the partition is unmounted. We can also create an extra user who can make changes to, or back up, a user who is logged out and therefore static. So even if we only have one user set up we will probably need to create a temporary user at times, but that only takes seconds to set up.

Changing partitions which contain information is not something one wants to do frequently so it is best to decide what you will need, sleep on it overnight, and then do it all in one step from the LiveUSB. One should bear in mind that shrinking partitions leaving the starting position fixed is normally not a problem, moving partitions or otherwise changing the start position takes much longer and involves more risk of power failures or glitches. I have never had problems but always take great care to back up everything before starting.

As soon as I was satisfied that the machine was working and I understood what the hardware was, I did the main partitioning of the SSD from the LiveUSB using gparted.

I then shut down the LiveUSB and returned to a normal user.

Before we go any further, I am going to give an alternative view of the same information showing the important part of the output from a terminal command. The reason for this is that it provides the piece of information you will need when you actually come to mount the partitions, namely their UUIDs, which you will need to copy into the single line you add to a file to mount each partition. You can get it from a GUI but this is easier!

Screenshot

Mounting the Data Drive

Now let's look at the simplest of the above to implement. Partitions are mounted at boot time according to the 'instructions' in the file /etc/fstab and each mounting is described in a single line. So all one has to do is to add a single line. I will not go into detail about what each entry means but this is what I copied from my working system - if you have to know, do man fstab in a terminal!

UUID=d8e192cf-0c5a-4bd6-a3a9-96589a68cde5 /media/DATA ext4 defaults 0 2

You do need to edit the file as a root user as the file is a system file owned by root and is protected against accidental changes. There is a simple way to edit it with root permissions in the usual editor (xed). Open the file browser and navigate to the /etc folder. Now right click and on the menu click 'Open as Root'. Another file manager window will open with a big red banner to remind you that you are a superuser with god-like powers in the hierarchy of the system. You can now double click the file to open it and again the editor will show a red border and you will be able to edit and save the file. I like to make a backup copy first and a simple trick is to drag and drop the file with the Ctrl key depressed, which makes a copy with (copy) in the filename.

After you have copied in the string above you MUST change the UUID (Universally Unique IDentifier) to that of your own partition. The header in the file /etc/fstab says to find the UUID using blkid, which certainly works and is what I used to do, but I have found a clearer way using a built-in program accessed by Menu -> Disks. This displays very clearly all the partitions on all the disks in the system including the UUID, which is a random number stored in the metadata of every file system in a partition and is independent of machine, mounting order and anything else which could cause confusion. So you just have to select the partition and copy the UUID which is shown below it. To ensure you copy the whole UUID I recommend right clicking the UUID -> 'Select All' then 'Copy'. You can then paste it into the line above, replacing the example string. Save /etc/fstab and the drive should be mounted next time you boot up. If you make a mistake the machine may not boot but you can use the LiveUSB to either correct or replace the file from your copy.
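For those already comfortable in a terminal, either of the following will also show the UUIDs - a minimal sketch, where /dev/sda3 is only an example device name and should be replaced with your own:

lsblk -f
sudo blkid /dev/sda3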

Transferring to a separate partition for home

This is much more tricky, especially as it requires an understanding of the Linux file system. When you mount a partition you specify where it appears in the single folder structure starting at root. We have already mounted a Data drive which we now access at /media/DATA. When we change to a separate home partition we will be moving from a situation where /home is on the same partition as the root folder structure to one where /home is on a separate partition. This means that the new /home will be mounted on top of the old structure, which will no longer be visible or accessible and is effectively wasted space on the partition with root.

So we need to copy the old user folders to the new partition in some way and then mount it. Eventually we will need to recover the wasted space. Just to compound the problems, a simple drag and drop, or even a move in a terminal, may not correctly transfer the folders as they are likely to contain what are called symbolic links, which point to a different location rather than actually being the file or folder, and a simple copy may copy the item pointed to rather than the link, so special procedures are required. Fortunately there is a copy command using the terminal which will work for all normal circumstances, and a nuclear option of creating a compressed archive, which is what I use for backups and transferring users between machines and which has never failed me.

As a diversion for those of an enquiring mind I found an article on how to see if symlinks are present at https://stackoverflow.com/questions/8513133/how-do-i-find-all-of-the-symlinks-in-a-directory-tree and used the following terminal command on my home folder:

find . -type l -ls

Explanation: find, from the current directory . onwards, all items of -type l (link) and list (-ls) those in detail. The output confirmed that my home folder had gained dozens of symlinks, with the Wine and Opera programs being the worst offenders. It also confirmed that the copy command and option I am going to use does transfer standard symlinks in the expected way.

The method that follows minimises use of the terminal and is quite quick although it may seem to have many stages. First we need to mount the new partition for home somewhere so we can copy the user folders to it. Seeing we will be editing /etc/fstab anyway, an easy way is to edit fstab to mount the new partition at, say, /media/home just like above, reboot and we are ready to copy. We must do the copies with the users logged out; if there is only one user we need to create a temporary user just to do the transfer and we can then delete it. So we are going to be forced once more to use the terminal so we can do the copies using the cp command with the special option -a, which is short for --archive, which never follows symbolic links whilst still preserving the links, and copies folders recursively.

sudo cp -a /home/user1/ /media/home/
and repeat for the other users

Remember that you must log in and out of users in a way that ensures you never copy an active (logged in) user.

Now we edit /etc/fstab again so the new partition mounts at /home instead of /media/home and reboot and the change should be complete.

xed admin:///etc/fstab
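As a concrete sketch of the change (the UUID shown is only a placeholder and must be that of your own home partition), the existing line

UUID=your-home-partition-uuid /media/home ext4 defaults 0 2

simply becomes

UUID=your-home-partition-uuid /home ext4 defaults 0 2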

I suggest you leave it at that for a few days to make sure everything is OK after which you can get out your trusty LiveUSB and find and delete the previous users from the home folder in the root partition.

Encrypting a partition for our Vault

You should now be in a good position but for those who really want to push their limits I will explain how I have encrypted partitions and USB drives. This can only be done from the terminal and I am only going to give limited explanation (because my understanding is also limited). Consider it to be like baking a cake - the recipe can be comprehensive but you do not have to understand the details of the chemical reactions that turn the most unlikely ingredients into a culinary masterpiece.

Firstly you do need to create a partition to fill the space but it does not need to be formatted with a specific filesystem. All you need to know is the device name when you partition or examine it in gparted, in my case /dev/sdc5. Anything in the partition will be completely overwritten. If your partition has a different device name, edit the commands which follow. There are several stages: first we add encryption directly to the drive at block level, then we add a filesystem, and finally we arrange for the encrypted drive to be mounted when a user logs in.

Add LUKS encryption to the partition

sudo cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 2000 --use-random luksFormat /dev/sdc1

The options do make some sense except perhaps for --iter-time, which sets the time in milliseconds spent deriving the key from the passphrase and hence the time taken to unlock it. Each user may have a different password, so the search through the key slots means the total time spent during login can get quite long, hence choosing 2 seconds rather than the 5 seconds I used to use. During the process you will first be asked for your normal login password, as it is a root activity, and then for the secure passphrase.

Next you have to open your new encrypted drive - you will be asked for the passphrase you gave above.

sudo cryptsetup open --type luks /dev/sdc1 sdc1e

Now we format the device with an ext4 filesystem, and it is important to give it the name VAULT at this time:

sudo mkfs.ext4 /dev/mapper/sdc1e -L VAULT

There is one further important step, and that is to back up the LUKS header, which can get damaged, by:

sudo cryptsetup -v luksHeaderBackup /dev/sda5 --header-backup-file LuksHeaderBackup_gemini.bin

We now need to mount it when a user logs in. This is not done in /etc/fstab but by a utility called pam-mount which first has to be installed:

It is in the list I provided earlier, but if it is already installed the following command will just tell you so; alternatively you can use the Synaptic Package Manager.

sudo apt-get install libpam-mount

We now need to find its UUID by use of lsblk -f, which will reveal that there are two associated UUIDs: one for the crypto_LUKS partition, which is the one we need, and another for the filesystem within it. Before you run lsblk you should widen the terminal window as it does some clever formatting which depends on the width when it is run.
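As a minimal sketch (sdc1 is just the example device used above):

lsblk -f
# copy the UUID shown on the sdc1 line where the type is crypto_LUKS,
# not the UUID of the ext4 filesystem on the mapper line beneath it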

Now we edit its configuration file using the information from above. The following magic incantation will, after a couple of passwords open the configuration file for editing.

xed admin:///etc/security/pam_mount.conf.xml

So the middle bit gains the single line shown below, but make sure you change the UUID to that of your own crypto_LUKS partition:

...

<!-- Volume definitions -->

<volume fstype="crypt" path="/dev/disk/by-uuid/e1898a1c-491d-455c-8aab-09eaa09cc74b" mountpoint="/media/VAULT" user="*" />

<!-- pam_mount parameters: General tunables -->

..

Save the file and reboot (a logout and login should also work) and we are almost finished.

When you have rebooted you will see the folder but will not be able to add anything to it as it is owned by root!

The following incantation goes further than required as it can also be used to set the owner, group and permissions for everything in VAULT, which is useful for synchronising in the future. They are set to the 'primary' owner 1000 and group adm, which all my users belong to. The permission 770 gives read, write and execute access to the owner and everyone in group adm but not to anybody else.

sudo chown -R 1000:adm /media/VAULT && sudo chmod -R 770 /media/VAULT

Eight terminal commands and adding a line to a file was all it needed.

The final step only applies if your users have different passwords or a password has been changed. The LUKS system has 8 slots for different passwords so you can have seven users with different passwords as well as the original passphrase you set up. So if you change a login password you also have to change the matching password slot in LUKS, by first unmounting the encrypted partition then using a combination of:

sudo cryptsetup luksAddKey /dev/sda5
sudo cryptsetup luksRemoveKey /dev/sda5
# or
sudo cryptsetup luksChangeKey /dev/sda5

You will be asked for a password/passphrase, which can be a valid one for any of the keyslots, when adding or removing keys, i.e. any current user password will allow you to make the change.

Warning: NEVER remove every password or you will never be able to access the LUKS volume

Mounting of LUKS volume from SSH - Workaround for bug/feature of pam_mount

I have suffered a problem for a long time that when using unison to synchronise between machines my LUKS encrypted common partition, which is mounted as VAULT, has been unmounted (closed) when unison is closed. The same goes for any remote login, except when the user accessed remotely and the user on the machine are the same. This is bad news if another user was using VAULT or tries to access it.

I finally managed to get sufficient information from the web to understand a little more about how the mechanism (pam_mount) works to mount an encrypted volume when a user logs in remotely or locally. It keeps a running total of the logins in separate files for each user in /var/run/pam_mount and decrements them when a user logs out. When a count falls to zero the volume is unmounted REGARDLESS of other mounts which are in use with counts of one or greater in their files. One can watch the count incrementing and decrementing as local and remote users log in and out. So one solution is to always keep a user logged in, either locally or remotely, to prevent the count decrementing to zero and automatic unmounting taking place. This is possible but a remote user could easily be logged out if the remote machine is shut down or other mistakes take place. A local login needs access to the machine and is still open to mistakes. One early thought was to log into the user locally in a terminal then move it out of sight and mind to a different workspace!

The solution I plan to adopt uses a low level command which forms part of the pam_mount suite. It is called pmvarrun and can be used to increment or decrement the count. If used from the same user it does not even need root privileges. So before I use unison or a remote login to, say, helios as user pcurtis, I do a remote login from wherever using ssh, then call pmvarrun to increment the count by 1 for user pcurtis, and exit. The following is the entire terminal activity required.

peter@helios:~$ ssh pcurtis@defiant
pcurtis@defiant's password:

27 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Last login: Sat Oct 24 03:54:43 2020
pcurtis@defiant:~$ pmvarrun -u pcurtis -o 1
2
pcurtis@defiant:~$ exit
logout
Connection to defiant closed.
peter@helios:~$

The first time you do an ssh remote login you may be asked to confirm that the 'keys' can be stored. Note how the user and machine change in the prompt.

I can now use unison and remote logins as user pcurtis to machine helios until helios is halted or rebooted. Problem solved!

I tried adding the same line to the end of the .bashrc file in the pcurtis home folder. The .bashrc is run when a user opens a terminal and is used for general and personal configuration such as aliases. That works but gets called every time a terminal is opened, and I found a better place is .profile, which only gets called at login; the count still keeps increasing but at a slower rate. You can check the count at any time by:

pmvarrun -u pcurtis -o 0
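For reference, the addition to ~/.profile is just that same increment line - a sketch, assuming pmvarrun is on the path and the user is pcurtis:

pmvarrun -u pcurtis -o 1   # bump the pam_mount login count at every login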

 

28th October 2020

Useful Information on the Chillblast WAP Pro v2

Screenshot

Screenshot

Note - because the system is UEFI the USB or DVD must be UEFI bootable for it to show up in the list

Q. What windows drivers are required after a clean reinstall?

A. As of our testing of Windows v1903: none are 'required' for functionality in Windows 10. The system should work on a 'clean' install of Windows 10 once all Windows Updates are completed. Any driver 'warnings' in Device Manager are just 'labels' that are missing from the devices and it will not negatively affect the functionality of the system at all.

 

11th November 2020

Backing Up - Draft for Gemini or separate document

This is arguably the most essential activity on our machines and one I carry out frequently and am now adding to a once monthly schedule. There are several parts to backing up and routine maintenance. The first is not really a backup activity but this is the time to do it - that is to check the machine has been updated through the Update Manager - so including that, the monthly activities are:

It takes little effort and user time to carry out, and although the elapsed time is 20 - 40 minutes the time taken by the user is less. It does involve logging in and out of users, plugging in one of the hard drives and entering a password if it is encrypted, and the home folder backup activity involves opening a terminal, running a single line command and leaving it to complete (~15-20 minutes), then repeating once or twice more. The commands will be saved in history if you use the same drive as before.

This is a self contained simplification of the extensive sections in All Together - Sharing, Networking, Backup, Synchronisation and Encryption

Firstly I am going to add a section on using the terminal; there is some essential use of it in the following sections so a reminder seems in order.

What is a Terminal? Why use it?

Up to now I have managed to avoid use of the terminal. Most people these days do all their interaction via a mouse and the keyboard is only used to enter text, addresses, searches etc., and never to actually do things. Historically it was very different and even now most serious computer administration uses what is known as a terminal. It is horses for courses. There is nothing difficult or magic about using a "terminal" and there are often ways to avoid its use, but they are slow, clumsy and often far from intuitive, and some things which need to be done are near impossible to do any other way. I will not expect much understanding of the few things we have to do in a terminal; think of the things you cut and paste in as magic incantations. I am not sure I could explain the details of some of the things I do.

The terminal can be opened by Menu -> Terminal; it is also so important that it is already in favourites and in the panel along with the file manager. When it has opened you just get a window with a prompt. You can type commands and activate them with return. In almost all cases in this document you will just need to cut and paste into the line before hitting return. Sometimes you need to edit the line to match your system, i.e. a different disk UUID or user name.

You need to know that there are differences in cut and paste, and even in entering text, when using the terminal - for example paste is usually Ctrl+Shift+V rather than Ctrl+V.

On the positive side

What is sudo

We have spoken briefly about permissions and their protection of changes to system files etc.

When you use a normal command you are limited to changing things you own or are in a group with rights over them. All system files belong to root and you cannot change them or run any utilities which affect the system.

If you put sudo in front of a command you take on the mantle of root and can do anything, hence the name, which stands for SUperuser DO. Before the command is carried out you are asked for your password to make sure you do have those rights. Many machines are set up to retain the superuser status for 15 minutes so they do not ask repeatedly while you carry out successive commands. Take care when using sudo - you can destroy a whole system in a couple of keystrokes.
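A trivial illustration - listing the root user's folder, which a normal user is not allowed to read:

ls /root     # gives 'Permission denied' for a normal user
sudo ls /root     # works after you give your password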

Tutorials

There are several good tutorials including a five minute one from Mint here and a longer one from Ubuntu here

Can I avoid the terminal

There are ways to reduce the use of the terminal. One I use a lot when I need to edit or execute as root is to start in the file manager, then right click on empty space and click Open as Root. I now get another file manager window with a red banner and I can now see and do anything as I am a superuser - I can also destroy everything, so beware. I can also now double click on a file to open it for editing as root, and when editing in Xed there will again be a red banner. You can also Right Click -> Open in a Terminal, which will open a terminal at the appropriate position in the folder tree.

Warning: Do not move anything into the Recycle Bin as root; it will go into the root trash, which is near impossible to empty. If you need to delete something use a right click Delete, which deletes instantly.

There are however many activities which demand use of sudo in a terminal so you will have to use it on occasion.

Firstly we will look at the two changes required because we are using an SSD; these are best done at an early stage, say during the first few days.

Appendix to Gemini

Overall Backup Philosophy for Mint

My thoughts on backing up have evolved considerably over time and now take much more into account the use of several machines and sharing between them and within them, giving redundancy as well as security of the data. They now look much more at the ways the backups are used; they are not just a way of restoring a situation after a disaster or loss but also about cloning and sharing between users, machines and multiple operating systems. They continue to evolve to take into account the use of data encryption.

So firstly let's look at the broad areas that need to be backed up on our machines:

  1. The Linux operating system(s), mounted at root. This area contains all the shared built-in and installed applications but none of the configuration information for the applications or the desktop manager which is specific to users. Mint has a built-in utility called TimeShift which is fundamental to how potential regressions are handled - this does everything required for this area's backups and can be used for cloning. TimeShift will be covered in detail in a separate section.
  2. The users' home folders, which are mounted within /home and contain the configuration information for each of the applications as well as for the desktop manager settings which are specific to users, such as themes, panels, applets and menus. They also contain all the data belonging to a specific user, including the Desktop and the standard folders such as Documents, Video, Music and Photos etc. They will probably also contain the email 'profiles' if an SSD is in use. This is the most challenging area with the widest range of requirements so is the one covered in the greatest depth here.
  3. Shared DATA. The above covers the minimum areas but I have an additional DATA area which is available to all operating systems and users and is periodically synchronised between machines as well as being backed up. This is kept independent and has a separate mount point. In the case of machines dual booted with Windows it uses a file system format compatible with Windows and Linux such as NTFS. The requirement for easy and frequent synchronisation means Unison is the logical tool for DATA between machines, with associated synchronisation to a large USB hard drive for backup.
  4. Email and Browsers (profiles). I am also going to mention email specifically as it has particular issues: it needs to be collected on every machine as well as on pads and phones, and some track kept of replies regardless of source. All incoming email is retained on the servers for months if not years, and all outgoing email is copied either to a separate account accessible from all machines or, where that is not possible automatically such as on Android, a copy is sent back to the sender's inbox. It is also a huge security risk as saved passwords have limited or no security. Thunderbird has a self contained 'profile' where all the local configuration and the filing system for emails are retained, and that profile, along with the matching one for the Firefox browser, needs to be backed up; how depends on where they are held. The obvious places are the DATA area, allowing sharing between operating systems and users; each user's home folder, which offers more speed if an SSD is used and better security if encryption is implemented; or a securely mounted encrypted partition.

Physical Implications of Backup Philosophy - Partitioning

I am not going to go into this in great depth as it has already been covered in other places but my philosophy is:

  1. There are advantages in having two partitions for linux systems so new versions can be run for a while before committing to them.
  2. The folder containing all the users home folders should be a separate partition mounted as /home. This separates the various functions and makes backup, sharing and cloning easier.
  3. When one has an SSD the best speed will result from having the linux systems and the home folder using the SSD especially if the home folders are going to be encrypted.
  4. Shared DATA should be in a separate partition mounted at /media/DATA. If one is sharing with a Windows system it should be formatted as ntfs which also reduces problems with permissions and ownership with multiple users. DATA can be on a separate slower but larger hard drive.
  5. If you have an SSD, swapping should be minimised and the swap partition should be on a hard drive if one is available, to maximise SSD life.
  6. Encryption should be considered on laptops which leave the home. Home folder encryption and encrypted drives are both possible and, if drive capacity allows, one should allocate space for an encrypted partition even if not implemented initially - in our systems it is mounted at /VAULT. It is especially important that email is in an encrypted area.

The Three Parts to Backing Up

1. System Backup - TimeShift - Scheduled Backups and more.

TimeShift is now fundamental to the update manager philosophy of Mint and makes backing up the linux system very easy. To quote: "The star of the show in Linux Mint 19 is Timeshift. Thanks to Timeshift you can go back in time and restore your computer to the last functional system snapshot. If anything breaks, you can go back to the previous snapshot and it's as if the problem never happened. This greatly simplifies the maintenance of your computer, since you no longer need to worry about potential regressions. In the eventuality of a critical regression, you can restore a snapshot (thus canceling the effects of the regression) and you still have the ability to apply updates selectively (as you did in previous releases)." The best information I have found about TimeShift and how to use it is by the author.

TimeShift is similar to applications like rsnapshot, BackInTime and TimeVault but with different goals. It is designed to protect only system files and settings. User files such as documents, pictures and music are excluded. This ensures that your files remain unchanged when you restore your system to an earlier date. Snapshots are taken using rsync and hard-links. Common files are shared between snapshots, which saves disk space. Each snapshot is a full system backup that can be browsed with a file manager. TimeShift is efficient in its use of storage but it still has to store the original and all the additions/updates over time. The first snapshot seems to occupy slightly more disk space than the root filesystem and six months of additions added approximately another 35% in my case. I run with a root partition / and separate partitions for /home and DATA. Using Timeshift means that one needs to allocate an extra 2 fold storage over what one would have expected the root file system to grow to.

In the case of the Defiant the root partition has grown to about 11 Gbytes and 5 months of Timeshift added another 4 Gbytes, so the partition with the /timeshift folder needs to have at least 22 Gbytes spare if one intends to keep a reasonable span of scheduled snapshots over a long time period. After three weeks of testing Mint 19 my TimeShift folder had reached 21 Gbytes for an 8.9 Gbyte system!

These space requirements for TimeShift obviously have a big impact on the partition sizes when one sets up a system. My Defiant was set up to allow several systems to be employed with multiple booting. I initially had the timeshift folder on the /home partition, which had plenty of space, but that does not work with a multiboot system sharing the /home folder. Fortunately two of my partitions for Linux systems are plenty big enough for use of TimeShift and the third, which is 30 Gbytes, is acceptable if one is prepared to prune the snapshots occasionally. With Mint 20 and a lot of installed programs I suggest the minimum root partition is 40 Gbytes.

It should be noted that it is also possible to use timeshift to clone systems between machines - that is covered in All Together - Sharing, Networking, Backup, Synchronisation and Encryption

2. Users - Home Folder Archiving using Tar.

The following covers my preferred and well tested mechanism for backing up the home folders. I have been doing this on a monthly basis on 3 machines for many years. An untested procedure is of little use, and I have also done a good number of restorations, usually when moving a complete user from one machine to another (cloning), and that will also be covered.

The only catch is that the only mechanism I trust does require a single command in a terminal, and likewise two commands to restore an archive. Once the commands are tailored to include your own username and the drive name you use they can just be repeated - the final version I provide even adds a date stamp to the filename!

Tar is a very powerful command line archiving tool, around which many of the GUI tools are built, and it should work on most Linux distributions. In many circumstances it is best to access it directly to back up your system. The resulting files can also be accessed (or created) by the archive manager, reached by right clicking on a .tgz, .tar.gz or .tar.bz2 file. Tar is an ideal way to back up many parts of our system, in particular one's home folder. A big advantage of tar is that (with the correct options) it is capable of making copies which preserve all the linkages within the folders - simple copies do not preserve symlinks correctly and even archive copies (cp -aR) are not as good as a tar archive.

The backup process is slow (15 mins plus) and the archive file will be several Gbytes even for the simplest system - mine are typically 15 Gbytes and our machines achieve between 0.7 and 1.5 Gbytes/minute. After it is complete the file should be moved to a safe location, preferably an external device which is encrypted or kept in a very secure place. You can access parts of the archive using the GUI Archive Manager by right clicking on the .tgz file - again slow on such a large archive - or restore (extract in the jargon) everything, usually to the same location.

Tar, in the simple way we will be using it, takes a folder and compresses all its contents into a single 'archive' file. With the correct options this can be what I call an 'exact' copy where all the subsidiary information such as timestamp, owner, group and permissions is stored without change. Soft (symbolic) links and hard links can also be retained. Normally one does not want to follow a link out of the folder and put all of the target into the archive, so one needs to take care. Tar also handles 'sparse' files but I do not know of any in a normal user's home folder.

As mentioned above the objective is to back up each user's home folder so it can be easily replaced on the existing machine or on a replacement machine. The ultimate test is: can one back up the user's home folder to the tar archive, delete it (or, safer, rename it) and restore it exactly, so the user cannot tell in any way? The home folder is, of course, continually changing when the user is logged in, so backing up and restoring must be done when the user is not logged in, i.e. from a different user, a LiveUSB or a console. Our systems reserve the first installed user for such administrative activities.

You can create a user very easily and quickly using Users and Groups: Menu -> Users and Groups -> Add Account, set Type: Administrator -> Add -> click Password to set a password (otherwise you cannot use sudo)
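If you prefer the terminal, something like the following achieves the same - a sketch, where tempuser is just an example name:

sudo adduser tempuser
# answer the prompts to set a password, then give it admin (sudo) rights
sudo usermod -aG sudo tempuser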

Firstly we must consider what is, arguably, the most fundamental decision about backing up: the way we specify the location being saved when we create the tar archive and when we extract it - in other words the paths must usually restore the folder to the same place. If we store absolute paths we must extract using absolute paths; if they are relative we must extract the same way. So we will always have to consider pairs of commands depending on what we chose. In All Together - Sharing, Networking, Backup, Synchronisation and Encryption we look at several options but in practice I have only ever used one, which is occasionally still referred to as Method 1.

My preferred method (Method 1 in the full document) has absolute paths and shows home when we open the archive, with just a single user folder below it. This is what I have always used for my backups and the folder is always restored to /home on extraction.

sudo tar --exclude=/home/*/.gvfs -cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/

sudo mv -T /home/pauline /home/pauline-bak
sudo tar xvpfz "/media/LUKS_G/mybackup1.tgz" -C /

Notes:

Rename: The rename command somewhat confusingly uses the mv (move) command with option -T

Archive creation options: The options used when creating the archive are: create archive, verbose mode (you can leave this out after the first time), retain permissions, -P do not strip the leading '/', gzip the archive, and file output. Then follows the name of the file to be created, mybackup1.tgz, which in this example is on an external USB drive called 'USB_DATA' - the backup name should include the date for easy reference. Next is the directory to back up. There are objects which need to be excluded - the most important of these is your backup file if it is in your /home area (not needed in this case) or it would be recursive! It also excludes the folder (.gvfs) which is used dynamically by a file mounting system and is locked, which stops tar from completing. The problems with files which are in use can be removed by creating another user and doing the backup from that user - overall that is a cleaner way to work.

Other exclusions: There are other files and folders which should be excluded. In our specific case that includes the cache area for the pCloud cloud service, as it is best to let that recreate itself and avoid potential conflicts. ( --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" )

Archive restoration uses the options: extract, verbose, retain permissions, from file and gzip. This will take a while. The -C / ensures that the directory is Changed to a specified location; in case 1 this is root so the files are restored to the original locations. In case 2 you can choose, but it is normally /home. Case 3 is useful if you mount an encrypted home folder independently of login using ecryptfs-recover-private --rw, which mounts to /tmp/user.8random8.

tar options style used: The options are in the old options style, written together as a single clumped set without spaces separating them; the one exception is that recent versions of tar (>1.28) require exclusions to immediately follow the tar command in the format shown. Mint 20 has version 1.30 as of November 2020 so that ordering applies.

Higher compression: If you want to use a higher compression method the option -j can be used in place of the -z option and .tar.bz2 should be used in place of .tgz for the backup file extension. This will use bzip2 to do the compressing. It takes longer but gives a smaller file - I have never bothered so far.

Deleting files: If the old system is still present, note that tar only overwrites files; it does not delete files from the old version which are no longer needed. I normally restore from a different user and rename the user's home folder before running tar as above; when I have finished I delete the renamed folder. This needs root/sudo and the easy way is to right click on a folder in Nemo and 'Open as Root' - make sure you use a right click Delete to avoid items going into a root deleted items folder.

Deleting archive files: If you want to delete the archive file then you will usually find it is owned by root, so make sure you delete it in a terminal - if you use a root browser then it will go into a root Deleted Items which you cannot easily empty, so it takes up disk space for evermore. If this happens then read http://www.ubuntugeek.com/empty-ubuntu-gnome-trash-from-the-command-line.html and/or load the trash-cli command line trash package using the Synaptic Package Manager and type:

sudo trash-empty

Archiving a home folder and restoring

Everything has really been covered above, so this is just a slight expansion for a specific case and the addition of some suggested naming conventions.

This uses 'Method 1' where all the paths are absolute, so the folder you are running from is not an issue. This is the method I have always used for my backups so it is well proven. The folder is always restored to /home on extraction, so you need to remove, or preferably rename, the user's folder before restoring it. If a backup already exists delete it or use a different name. Both creation and retrieval must be done from a different or temporary user to avoid any changes taking place during the archive operations.

sudo tar --exclude=/home/*/.gvfs -cvpPzf "/media/USB_DATA/backup_machine_user_a_$(date +%Y%m%d).tgz" /home/user_a/

sudo mv -T /home/user_a /home/user_a-bak
sudo tar xvpfz "/media/USB_DATA/backup_machine_user_YYYYmmdd.tgz" -C /

Note the automatic inclusion of the date in the backup file name and the suggestion that the machine and user are also included. In this example the intention is to send the archive straight to a plugged-in USB hard drive, encrypted for preference.

Cloning between machines and operating systems using a backup archive.

It is possible that you want to clone a machine, for example when you buy a new machine. This is easy provided the users you are cloning were installed in the same order and you make the new usernames the same as the old - I have done that many times. There is however a catch which you need to watch for, and that is that the way Linux stores user names is a two stage process. If I set up a system with the user pcurtis when I install, that is actually just an 'alias' to the numeric user id 1000 in the Linux operating system - the user ids in Mint start at 1000. If I then set up a second user peter that will correspond to user id 1001. If I have a disaster and reinstall and this time start with pauline, she will have user id 1000 and peter 1001. I then get my carefully backed up folders and restore them, and the folders now have all the owners etc. incorrect as they use the underlying numeric value, apart, of course, from where the name is used in hard coded scripts. This is why I said above the users must not only have the same names but be installed in the same order.

You can check all the relevant information for the machine you are cloning from by use of id in a terminal:

pcurtis@gemini:~$ id
uid=1000(pcurtis) gid=1000(pcurtis) groups=1000(pcurtis),4(adm),6(disk),
20(dialout),21(fax),24(cdrom),25(floppy),26(tape),29(audio),30(dip),
44(video),46(plugdev),104(fuse),109(avahi),110(netdev),112(lpadmin),
120(admin),121(saned),122(sambashare)

So when you install on a new machine you should always use the same usernames and passwords as on the original machine and then create an extra user with admin (sudo) rights for convenience for the next stage, or do what we do and always save the first user for this purpose.

Change to your temporary user, rename the first user's folder (you need to be root) and replace it from the archived folder from the original machine. Now log in to the user again and that should be it. At this point you can delete the temporary user. If you have multiple users to clone the user names must obviously be the same and, more importantly, the numeric id must be the same as that is what is actually used by the kernel; the username is really only a convenient alias. This means that the users you clone must always be installed in the same order on both machines or operating systems so they have the same numeric UID. I have repeated myself because it is so important!

So we first make a backup archive in the usual way and take it to the other machine or switch to the other operating system and restore as usual. It is prudent to backup the system you are going to overwrite just in case.

So first check the id on both machines for the user(s) by use of

id user

If and only if the ids are the same can we proceed

On the first machine and from a temporary user:

sudo tar --exclude=/home/*/.gvfs -cvpPzf "/media/USB_DATA/backup_machine_user1_$(date +%Y%m%d).tgz" /home/user1/

On the second machine or after switching operating system and from a temporary user:

sudo mv -T /home/user1 /home/user1-bak # Rename the user.

sudo tar xvpfz "/media/USB_DATA/mybackup_with_date.tgz" -C /

When everything is well tested you can delete the renamed folder

Moving a Mint Home Folder to a separate Partition

This is slightly more complex than above as you are moving information to an initial mount point in a new partition and then changing the automatic mounting so it becomes /home and covers up the previous information (which then has to be deleted from a LiveUSB or other system).

The problem of exactly copying a folder is not as simple as it seems - see https://stackoverflow.com/questions/19434921/how-to-duplicate-a-folder-exactly. You not only need to preserve the contents of the files and folders but also the owner, group, permissions and timestamp. You also need to be able to handle symbolic links and hard links. I initially used a complex procedure using cpio but am no longer convinced that covers every case, especially if you use Wine, where .wine is full of links and hard coded scripts. The stackoverflow thread has several sensible options for 'exact' copies. However we already have a well proven way of creating and restoring backups of home folders exactly using tar, and we would create a backup before proceeding in any case!

When we back up normally (Method 1 above) we use tar to create a compressed archive which can be restored exactly and to the same place; even during cloning we are still restoring the user home folders to be under /home. If you are moving to a separate partition you want to extract to a different place, which will eventually become /home after the new mount point is set up in the file system mount point list in /etc/fstab. It is convenient to always use the same backup procedure, so you need to get at least one user in place in the new home folder before changing the mount point. I am not sure I trust any of the copy methods for my real users, but I do believe it is possible to move a basic user (created only for the transfers) that you can use to do the initial login after changing the location of /home, and which can then be used to extract all the real users from their backup archives.

An 'archive' copy using cp is good enough in the case of a very basic user which has recently been created and little used; such a home folder may only be a few tens of kbytes in size and not have a complex structure with links. So we first mount the target partition at, say, /media/wherever, then copy to it:

sudo cp -ar /home/basicuser /media/wherever

The -a option is an archive copy preserving most attributes and the -r is recursive to copy sub folders and contents.

So the procedure to move /home to a different partition is to:

  1. Create tar backup archives of all the real users' home folders in the usual way (Method 1 above).
  2. Mount the new partition at a temporary mount point and copy a basic (temporary) user to it with cp -ar as above.
  3. Add an entry to /etc/fstab so that the new partition is mounted at /home in future, then reboot.
  4. Log in to the basic user and restore the real users' home folders from their backup archives as in the cloning procedure above.
  5. When everything is well tested, delete the old home folders now hidden under the mount point from a LiveUSB or other system.
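
The fstab change in step 3 is a single extra line; the following is only a hedged sketch in which the UUID is a placeholder - replace it with the real one reported by blkid for your new partition:

sudo blkid # list the partitions and their UUIDs
# line added to /etc/fstab so the new partition is mounted at /home at boot (UUID is illustrative)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /home ext4 defaults 0 2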

3. DATA Synchronisation and Backup - Unison

This is a long section because it contains a lot of background, but in practice I just have one central machine with Unison set up, offering a set of profiles each of which will check and then do all the synchronisations to a drive or machine very quickly. I do it once a month as a backup and when required if, for example, I need to edit on different machines. This is what Unison looks like when editing or setting up a synchronisation profile.

Screenshot

So how does it work? Linux has a very powerful tool available called Unison to synchronise folders, and all their subfolders, either between drives on the same machine or across a local network using a secure transport called SSH (Secure SHell). At its simplest you can use a Graphical User Interface (GUI) to synchronise two folders which can be on any of your local drives, a USB external hard drive or on a networked machine which also has Unison and SSH installed. Versions are even available for Windows machines, but one must make sure that the Unison version numbers are compatible, even between Linux versions. That has caused me a lot of grief in the past and was largely instrumental in causing me to upgrade some of my machines to Mint 20 from 19.3 earlier than I would have done.

If you are using the graphical interface, you just enter or browse for two local folders and it will give you a list of differences and recommended actions which you can review; it is a single keystroke to change any you do not agree with. Unison uses a very efficient mechanism to transfer/update files which minimises the data flow, based on the algorithm used by rsync. The initial synchronisation can be slow, but after it has made its lists it is quite quick even over a slow network between machines because it is running on both machines and transferring the minimum of data - it is actually slower synchronising to another hard drive on the same machine.

The Graphical Interface (GUI) has become much more comprehensive than it was when I started using it and can now handle most of the options you may need; however it does not allow you to save a configuration under a different name. You can find the configurations very easily as they are all stored in a folder in your home folder called .unison, so you can copy and rename them to allow you to edit each separately - for example, you may want similar configurations to synchronise with several hard drives or other machines. The format is simple and largely self-explanatory.

For more complex synchronisation with multiple folders and perhaps exclusions you set up a more complex configuration file for each Synchronisation Profile and then select and run it from the GUI as often as you like. It is easier to do than describe - a file to synchronise my four important folders My Documents, My Web Site, Web Sites, and My Pictures is only 10 lines long and contains under 200 characters yet synchronises 25,000 files!

After Unison has done the initial comparison it lists all the potential changes for you to review. The review list is intelligent and if you have made a new folder full of sub folders of pictures it only shows the top folder level which has to be transferred. You have the option to put off, agree or reverse the direction of any files with differences. Often the differences are in the metadata such as date or permissions rather than the file contents, and enough information is supplied to resolve those differences - other times, when changes have been made independently in different places, the decisions may be more difficult, but there is an ability to show the differences between simple (text) files.

Both Unison and SSH are available in the Mint repositories but need to be installed: either use System -> Administration -> Synaptic Package Manager to search for and load the unison-gtk and ssh packages, or install them directly from a terminal by:

sudo apt-get install unison-gtk ssh

The first is the GUI version of Unison which also installs the terminal version if you need it. The second is a meta package which installs the SSH server and client, blacklists and various library routines. A third package, which used to be needed to allow ssh to use host names rather than just absolute IP addresses on an entirely Linux network, is no longer essential.

=======================================================================

23 December 2020

Modifications to Unison Backup [WIP]

The following is the draft of a modified section for grab_bag.htm and gemini.htm etc

The first is the GUI version of Unison which also installs the terminal version if you need it. The second is a meta package which installs the SSH server and client, blacklists and various library routines.

The procedures initially written in grab_bag.htm and gemini.htm differ slightly from what I have done in the past in an attempt to get round a shortcoming in the Unison Graphical User Interface (GUI). In looking deeper into some of the aspects whilst writing up the monthly backup procedures I have realised there were some unintended consequences, which has caused me to look again at this section and update it.

Unison is then accessible from the Applications Menu and synchronising can be tried out immediately using the GUI. There are some major cautions if the defaults are used - the creation/modification dates are not synchronised by default, so you lose valuable information related to files although the contents remain unchanged. The behaviour can easily be changed in the configuration files or by setting the option times to true in the GUI. The other defaults also pose a risk of loss of associated data. For example, the user and group are not synchronised by default - they are less essential than the file dates and I normally leave them with their default values of false. Permissions, however, are controlled by perms, which is a mask; read, write and execute are preserved by default, which in our case is not required. What one is primarily interested in is the contents of the file and the date stamps; the rest is not required if one is interested in a common data area, although permissions are normally essential to the security of a Linux system.

WARNINGs about the use of the GUI version of Unison:

I find it much easier and more predictable to do any complex setting up by editing the configuration/preference files (.prf files) stored in .unison. My recommendation is that you do a number of small scale trials until you understand what is going on and are happy, then edit the configuration files to set up the basic parameters and only edit the details of the folders etc. you are synchronising in the GUI. I have put an example configuration file with comments further down this write up; the following is a basic file, which you can use as a template, synchronising changes in the local copy of My Web Site and this year's pictures with backup drive LUKS_H:

# Unison preferences - example file recent.prf in .unison
label = Recent changes only
times = true
perms = 0o0000
fastcheck = true
ignore = Name temp.*
ignore = Name *~
ignore = Name .*~
ignore = Name *.tmp
path = My Pictures/2020
path = My Web Site
root = /media/DATA
root = /media/LUKS_H

Note that the use of root and path gives considerable flexibility and they are both safe to edit in the GUI, unlike label and perms. Also note the letter o designating octal in perms - do not mistake it for an extra zero!

Choice of User when running Unison

There is, however, a very important factor which must be taken into account when synchronising using the GUI and wishing to preserve the file dates and times. When using the GUI you are running unison as a user, not as root, and there are certain actions that can only be carried out by the owner or by root. It is obvious that, for security, only the owner (or root) can change the group and permissions of a file or folder, and only root can change its owner. What is less obvious is that only the owner or root can change the time stamp. Synchronisation therefore fails if the time stamp needs to be updated for files you do not own, which is common during synchronisation. I therefore carry out all synchronisations from the first user installed (id 1000) and set the owners of all files to that user (id 1000) and the group to adm (because all our users who can use sudo are also in group adm). This can be done very quickly with recursive commands in the terminal for the folders DATA and VAULT on the machine and any USB drives which are plugged in (assuming they are formatted with a Linux file system). If Windows formatted USB drives or sticks are plugged in they will pick up the same owner as the machine at the time, regardless of history.

sudo chown -R 1000:adm /media/DATA

sudo chown -R 1000:adm /media/VAULT

sudo chown -R 1000:adm /media/USB_DRIVE

New: Ownership requirements for Trash (Recycle Bins) to work

There is however an essential extra step required that I have missed out in the past. In changing the ownership of the entire partition we have also changed the ownership of the .Trash-nnnn folders, and the result is that one can no longer use the recycle bin (trash) facilities. I have only recently worked that one out! There is an explanation of how trash is implemented on drives which are on separate partitions at http://www.ramendik.ru/docs/trashspec.html; one can see on each drive that there is a trash folder for each user, which looks like .Trash-1000, .Trash-1001 etc, and these need to be owned and grouped by their matching user. So with two extra users we need to do:

sudo chown -R 1001:1001 /media/DATA/.Trash-1001 && sudo chown -R 1002:1002 /media/DATA/.Trash-1002

sudo chown -R 1001:1001 /media/VAULT/.Trash-1001 && sudo chown -R 1002:1002 /media/VAULT/.Trash-1002

Note: All the above applies to Linux file systems such as ext4 - old fashioned Windows file systems, which do not implement owners, groups or permissions, have to be mounted with the user set to 1000 and the group to adm in a different way (using fstab), which is beyond the scope of this section.

Considerations about permissions and synchronisation.

I initially tried to set all the permissions for the drives being synchronised in bulk when I found the GUI did not allow permissions to be ignored. I now find it is better to just edit the preference files for each synchronisation to ignore permissions.

If you do want to set the permissions recursively for a whole partition or folder you have to recall that folders need the execution bit set otherwise they can not be accessed. The most usual permissions used are read and write for both the owner and group and read only for others. As we are considering shared Data one can usually ignore preserving or synchronising the execution bits although sometimes one has backups of executable programs and scripts which might need it to be reapplied if one retrieves them for use.

The permissions are best set in a terminal; the File Manager looks as if it could be used via the "set permissions for all enclosed files" option, but the recursion down the file tree does not seem to work in practice.

The following will set all the owners, groups and permissions in DATA, VAULT and a USB drive if you need to; it is not essential. The primary user is referred to by id 1000 to make this more general. This uses the octal way of designation and sets the owner and group to read and write and other to read for files, and read, write and execute for folders, which always require execute permission. In brief, find selects either the files (-type f) or the folders (-type d) and xargs feeds the list to chmod, using null separators (-print0 and -0) to cope with spaces in names. It works, and very fast, with no unrequired writes to wear out your SSD or waste time. These need to be run from user id 1000 to change the permissions (adding an extra sudo in front of every chmod will also work from other users).

find /media/DATA -type f -print0 | xargs -0 chmod 664 && find /media/DATA -type d -print0 | xargs -0 chmod 775

find /media/VAULT -type f -print0 | xargs -0 chmod 664 && find /media/VAULT -type d -print0 | xargs -0 chmod 775

find /media/USB_DRIVE -type f -print0 | xargs -0 chmod 664 && find /media/USB_DRIVE -type d -print0 | xargs -0 chmod 775

This only takes seconds on a SSD and appears instantaneous after the first time. USB backup drives take longer as they are hard drives.

 

Windows File Systems (fat32, vfat and ntfs) (most users can ignore this section so it is commented out in the final version)

The old fashioned Windows file systems can be mounted with the same user 1000 and group adm.

Ownership of fat and ntfs drives mounted permanently using fstab: One 'feature' of mounting using the File System Table in the usual way is that the owner is root and only the owner (root) can set the time codes - this means that any files or directories that are copied by a user have the time of the copy as their date stamp, which can cause problems with Unison when synchronising.

A solution for a single user machine is to find out your user id and mount the partition with option uid=user-id, then all the files on that partition belong to you - even the newly created ones. This way when you copy you keep the original file date.

# /dev/sda5
UUID=706B-4EE3 /media/DATA vfat iocharset=utf8,uid=yourusername,gid=adm,umask=000 0 0

In the case of multiple user machines you should not mount at boot time and instead mount the drives from Places.

The uid can also be specified numerically and the first user created has user id 1000.
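
For example, the same fstab line written with the numeric uid rather than the username would be:

UUID=706B-4EE3 /media/DATA vfat iocharset=utf8,uid=1000,gid=adm,umask=000 0 0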

End of Section with 23rd December Modifications

=========================================================================

SSH (Secure SHell)

In the introduction I spoke of using the ssh protocol to synchronise between machines.

When you set up the two 'root' directories for synchronisation you get four options when you come to the second one - we have used the local option but if you want to synchronise between machines you select the SSH option. You then fill in the hostname which can be an absolute IP address or a hostname. You will also need to know the folder on the other machine as you can not browse for it. When you come to synchronise you will have to give the password corresponding to that username, often it asks for it twice for some reason.

Setting up ssh and testing logging into the machine we plan to synchronise with.

I always check that ssh has been correctly set up on both machines and initialise the connection before trying to use Unison. In its simplest form ssh allows one to log in to a remote machine using a terminal. Both machines must have ssh installed (and the ssh daemons running, which is the default after you have installed it). The first time you use SSH to a user you will get a warning that it cannot authenticate the connection, which is not surprising, and it will ask for confirmation - you have to type yes rather than y. It will then tell you it has saved the authentication information for the future and you will get a request for a password, which is the start of your log in on the other machine. After providing the password you will get a few more lines of information and be back to a normal terminal prompt, but note that it is now showing the name of the other machine. You can enter some simple commands such as a directory list (ls) if you want.

pcurtis@gemini:~$ ssh pcurtis@lafite
The authenticity of host 'lafite (192.168.1.65)' can't be established.
ECDSA key fingerprint is SHA256:C4s0qXX9GttkYQDISV1fXpNLlXlXQL+CXjpNN+llu0Y.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'lafite,192.168.1.65' (ECDSA) to the list of known hosts.
pcurtis@lafite's password: **********

109 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Last login: Sun Oct 18 03:29:24 2020 from 192.168.1.64
3
pcurtis@lafite:~$ exit
logout
Connection to lafite closed.
pcurtis@defiant:~$

This is how computing started with dozens or even hundreds of users logging into machines less powerful than ours via a terminal to carry out all their work.

The hostname resolution seems to work reliably with Mint 20 on my network; however, if username@hostname does not work try username@hostname.local. If neither works you will have to use the numeric IP address, which can be found by clicking the network manager icon in the tool tray -> Network Settings -> the settings icon on the live connection. The IP addresses can vary if the router is restarted but can often be fixed in the router's internal setup, but that is another story.
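
If you prefer the terminal to the network manager applet, the following standard commands (a hedged aside - the output layout varies between systems) will show the machine's numeric IP address:

hostname -I # just the IP address(es)
ip -4 addr show # more detail for each network interface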

More complex profiles for Unison with notes on the options:

The profiles live in /home/username/.unison and can be created or edited with xed.

# Profile to synchronise from gemini to helios
# with username pcurtis on helios
# Note: if the hostname is a problem then you can also use an absolute address
# such as 192.168.1.4 on helios
#
# Roots for the synchronisation
root = /media/DATA
root = ssh://pcurtis@helios//media/DATA
#
# Paths to synchronise
path = My Backups
path = My Web Site
path = My Documents
path = Web Sites
#
# Some typical regexps specifying names and paths to ignore
ignore = Name temp.*
ignore = Name *~
ignore = Name .*~
ignore = Name *.tmp
#
# Some typical Options - only times is essential
#
# When fastcheck is set to true, Unison will use the modification time and length of a
# file as a ‘pseudo inode number’ when scanning replicas for updates, instead of reading
# the full contents of every file. Faster for Windows file systems.
fastcheck = true
#
# When times is set to true, file modification times (but not directory modtimes) are propagated.
times = true
#
# When owner is set to true, the owner attributes of the files are synchronized.
# owner = true
#
# When group is set to true, the group attributes of the files are synchronized.
# group = true
#
# The integer value of this preference is a mask indicating which permission bits should be synchronized other than set-uid.
perms = 0o1777

The above is a fairly comprehensive profile file to act as a framework and the various sections are explained in the comments.

17 November 2020

Wansview 1080p Webcam with Microphone for Zoom meetings

Now we are back in lockdown and making more use of Zoom it seemed sensible to purchase a discrete webcam. I chose a Wansview 1080p Webcam with Microphone from Amazon solely because it was giving the best performance of all the cameras in the Zoom meetings we were attending. We were also fortunate as the prices had dropped and it was an Amazon favourite, down to £21.99 from close to £40 a few months ago. It clips easily to a monitor or laptop and was detected immediately.

Manual settings of Webcams v4l2-ctl

The video was good out of the box and there seems to be plenty of sensitivity indoors. There is no way in Zoom to adjust the brightness, contrast or saturation, so I looked for programs to adjust the defaults and eventually found there is a way to do so, but only from the terminal. Most programs using the cameras allow such adjustments. The original source of my information, which led to my gaining control of some features, was https://www.kurokesu.com/main/2016/01/16/manual-usb-camera-settings-in-linux/ which led me on to https://wiki.kurokesu.com/books/recipes/page/v4l2. These, combined with a lot of reading of the man page (which is incomplete) and the various --help options, gave me a glimmering of the power of the v4l2-ctl program and how to extract the capabilities.

First we need to install v4l-utils

sudo apt-get install v4l-utils

Now let’s see what we have connected on USB port

peter@gemini:~$ v4l2-ctl --list-devices
HD Web Camera: HD Web Camera (usb-0000:00:15.0-2):
/dev/video0
/dev/video1

There seem to be two interfaces for the HD Web Camera - this is dual stream output. By trial and error I found the stream on /dev/video0 seems to be the standard H264 output
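
To see exactly which pixel formats, resolutions and frame rates each interface offers (a useful check, particularly before assuming anything about a different camera):

v4l2-ctl -d /dev/video0 --list-formats-ext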

Video for Linux (V4L2) can report all the available controls as a single list. The list is self-explanatory, with the possible value ranges.

peter@gemini:~$ v4l2-ctl -d /dev/video0 --list-ctrls
brightness 0x00980900 (int) : min=1 max=255 step=1 default=128 value=100
contrast 0x00980901 (int) : min=1 max=255 step=1 default=128 value=128
saturation 0x00980902 (int) : min=1 max=255 step=1 default=128 value=100
peter@gemini:~$

It seems that there are only a limited number of options which can be changed

You can read a value like this:

peter@gemini:~$ v4l2-ctl -d /dev/video0 --get-ctrl=brightness
brightness: 100
peter@gemini:~$

and set using a short form of the options by

peter@gemini:~$ v4l2-ctl -d0 --set-ctrl=brightness=80
peter@gemini:~$ v4l2-ctl -d /dev/video0 --get-ctrl=brightness
brightness: 80
peter@gemini:~$

Note the check at the end

I found that the ideal settings were brightness=100 and saturation=100, and I left the contrast at the default of 128.
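
Several controls can also be set in one command if preferred - a hedged example using the comma separated form accepted by --set-ctrl:

v4l2-ctl -d /dev/video0 --set-ctrl=brightness=100,saturation=100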

Looking at the main settings we now have:

peter@gemini:~$ v4l2-ctl -d0 --all
Driver Info:
Driver name : uvcvideo
Card type : HD Web Camera: HD Web Camera
Bus info : usb-0000:00:15.0-2
Driver version : 5.4.65
Capabilities : 0x84a00001
Video Capture
Metadata Capture
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04200001
Video Capture
Streaming
Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
Width/Height : 1280/720
Pixel Format : 'MJPG' (Motion-JPEG)

Field : None
Bytes per Line : 0
Size Image : 1048576
Colorspace : Default
Transfer Function : Default (maps to Rec. 709)
YCbCr/HSV Encoding: Default (maps to ITU-R 601)
Quantization : Default (maps to Full Range)
Flags :
Crop Capability Video Capture:
Bounds : Left 0, Top 0, Width 1280, Height 720
Default : Left 0, Top 0, Width 1280, Height 720
Pixel Aspect: 1/1
Selection Video Capture: crop_default, Left 0, Top 0, Width 1280, Height 720, Flags:
Selection Video Capture: crop_bounds, Left 0, Top 0, Width 1280, Height 720, Flags:
Streaming Parameters Video Capture:
Capabilities : timeperframe
Frames per second: 30.000 (30/1)
Read buffers : 0
brightness 0x00980900 (int) : min=1 max=255 step=1 default=128 value=100
contrast 0x00980901 (int) : min=1 max=255 step=1 default=128 value=128
saturation 0x00980902 (int) : min=1 max=255 step=1 default=128 value=100

peter@gemini:~$

I have marked some of the settings of interest. The resolution has been set to 1280 x 720 by Zoom.

It can be set like this and checked afterwards

peter@gemini:~$ v4l2-ctl -v width=1920,height=1080
peter@gemini:~$ v4l2-ctl -d0 --all| grep -i Width/Height
Width/Height : 1920/1080
peter@gemini:~$

Look at https://wiki.kurokesu.com/books/recipes/page/v4l2 before trying any of this with a different camera, as the settings have to be supported by the camera. Also look at v4l2-ctl --help-all in detail.

GUI testing and setting of Webcam - guvcview

Having spent a lot of time working out how to use the low level commands, I have discovered that there is a program called guvcview, installable via Synaptic, which allows one to test the webcam and also set the contrast, brightness and saturation, and the settings stick when Zoom is subsequently loaded, just like the low level routines. Duh.
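
For completeness, guvcview can also be installed from a terminal (assuming the package name is guvcview, as it is in the Mint repositories):

sudo apt-get install guvcview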

26th November 2020

Backup Schedule [WIP]

This section is undergoing some extensive rewriting

Background

I set out at the end of November to write a check list of monthly activities - mainly backing up but also various maintenance activities and checks. The idea was to test and validate the overall procedures at the start of December. It was intended to have sufficient background and supporting information to enable Pauline to be able to carry out the backing up etc. to the level required for the procedures in the Linux Grab Bag page to be used to rebuild a machine when disaster strikes. I found that not to be as easy as I expected as there were a number of quirks (aka bugs) and potential problems that really needed to be addressed before a satisfactory and foolproof procedure and check list could be finalised. This led to considerable background work including developing some scripts for the routine activities.

I went back to basics and looked at what counted as critical. I worked much of my life in the space game on satellites, where reliability was paramount, and one component in many of the reviews was called a FMECA (Failure Mode, Effects and Criticality Analysis); I used a similar approach to look at the overall system of back-ups and redundancy that was in place, starting with the importance of particular information including its timeliness. This showed a couple of failure or loss points which would have very serious impacts. It also revealed there was a false sense of security as what seemed to be multiple redundancy in many areas was actually quite the opposite and allowed a single mistake or system error to propagate near instantaneously to all copies.

I am going to look at an example you possibly have. Passwords and passphrases have become much more critical as hacks have increased, so they need to be more complex, difficult to remember and should not be repeated. Use of password managers has become common, and loss of the information in a password manager would certainly count as a critical failure; even a monthly backup could result in loss of access to many things, especially if you routinely change passwords as recommended. A quick check showed we have over 250 passwords in our manager, of which I would guess 200 are current, and a month of changes could easily be over 10. You might think it is not that risky because they are present on several machines, but that is an illusion: they are linked through the cloud, which means that a mistake on one machine could lose a key on all machines and a serious fault could destroy your database on all machines. This makes the configuration and reliability of one's cloud systems another source of critical single point failures.

So by the end I had decided that I am going to have to complete the following before coming back to finish a section on a backup schedule:

  1. Backup a number of critical pieces of data on a daily basis independent of the cloud
  2. Investigate the robustness of pCloud and monitor its performance on a daily basis
  3. Convince myself the procedures for the remainder of the critical information were robust.
  4. Understand and work round problems in mounting encrypted drives
  5. Write some scripts to make various backup activities easier and more repeatable

 

Started December 4 2020

Backing up Todo and Keepass files locally.

This section addresses a major concern that the existing back-up procedures were vulnerable to the propagation of mistakes or system errors through the cloud, which could affect a number of critical files where information was changing on a daily basis. Loss of a month's password information, for example, would be a major disaster. It became clear that synchronising such information through the cloud only provides an illusion of redundancy because of its presence on multiple machines, and that independent local daily backups were even more important; these could then be further backed up on a monthly basis to an external drive along with all the other information. It turned out to be a small additional step to add log information on the pCloud synchronising relevant to these files. The overall activity has now been implemented on all our machines running Mint 20 and Mint 20.1, including the additional code to check and log pCloud activity. The backup files (daily, weekly and monthly) go back to the start of December 2020 and the log files to late January 2021.

The daily backup can be carried out by a short script which copies the files to a folder with the date appended to their names. The script can be run by one of several system facilities. Cron is the one commonly used for scheduled activities and runs as a daemon, but it has a big problem for laptops as jobs can only be run at a fixed time, and if the machine is off or suspended that backup is missed. Anacron is a much better choice as it runs the next time the machine is started, checks the interval since the last run and, if necessary, runs immediately. Anacron never misses a scheduled task and is used in Mint for all important scheduled tasks. Better still, it can be scheduled with a variable delay so the tasks do not all run the moment the machine is turned on, with the added bonus that this gives time for the machine to become stable, for encrypted drives to be mounted and for cloud drives to synchronise. Mint has folders set up for daily, weekly and monthly task scripts to live in or, if you want extra flexibility over timing, you can run scripts directly from anacron.

Anacron jobs are scheduled by adding a single line to /etc/anacrontab using the format below:

period delay job-identifier command

where period is how often the job should run in days (or @monthly), delay is the number of minutes anacron waits after starting before running the job, job-identifier is the name used for the job's timestamp file in /var/spool/anacron, and command is the command or script to be run.

Anacron itself is run at startup and subsequently every hour and will check if a job has been executed within the specified period in the period field. If not, it executes the command after waiting the number of minutes specified in the delay field. Once the job has been executed, it records the date in a timestamp file in the /var/spool/anacron directory with the name specified in the job-identifier field. It seems to wait until the next day rather than 24 hours. There is more useful information in https://kifarunix.com/scheduling-tasks-using-anacron-in-linux-unix/ and the man page (man anacron). In particular note the differences in Debian where anacron is only activated during certain times and not when running on battery. I got caught as it is not activated from 2359 to 0730 and I do much of my development in the early hours and could not understand why it was failing to call the scripts! I found the log files accessed by Menu -> Logs very useful, as one can look at 'All' and search for anacron and it shows the times it is started and the number of scripts run each time.

You can check if the anacrontab file is correctly formulated by

anacron -T

which returns blank if the file is OK
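
While testing, anacron can also be told to run any outstanding jobs immediately, ignoring the delay field (a hedged aside - both flags are described in man anacron):

sudo anacron -f -n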

I found that the command parameter was not as obvious as the above indicates: the script would not run when given as command.sh, or even just command, as I expected. I had to use /bin/bash command.

So what I have in my /etc/anacrontab is shown below (my additions are the three syncBackup lines at the end):

# /etc/anacrontab: configuration file for anacron

# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
HOME=/root
LOGNAME=root

# These replace cron's entries
1 5 cron.daily run-parts --report /etc/cron.daily
7 10 cron.weekly run-parts --report /etc/cron.weekly
@monthly 15 cron.monthly run-parts --report /etc/cron.monthly

# Run backup scripts to save todo and keepass2 information daily
1 14 syncBackup.daily /bin/bash /home/peter/runSyncBackupDaily
7 16 syncBackup.weekly /bin/bash /home/peter/runSyncBackupWeekly
@monthly 18 syncBackup.monthly /bin/bash /home/peter/runSyncBackupMonthly

and in the initial trial the script runSyncBackupDaily, which has been made executable, looked like:

#!/bin/sh
cp /home/peter/AndroidSync/todo/todo.txt /home/peter/Backups/todo_peter_$(date +%Y%m%d).txt
cp /home/peter/AndroidSync/Keypass2droid/PandP.kdbx /home/peter/Backups/PandP_peter_$(date +%Y%m%d).kdbx
exit

and it runs 14 minutes after the machine is started for the first time during the day and every 24 hours if the machine stays on, otherwise the next time the machine is started. The dated copies are now saved in .syncBackup/daily rather than Backups and take up about 5 Mbytes a month at present.

The script can obviously be extended to back up more than one user's files, provided the user is active and the folder unencrypted. It is probably safest to run a separate anacron job for each user in that case, as one of them will presumably fail if its folder is unavailable.

An alternative is to add scripts to /etc/cron.daily, where the scripts are run by run-parts in the default anacrontab file. run-parts is responsible for running the scripts in the specified directory (/etc/cron.daily and its siblings) and expects the file names to "consist entirely of ASCII upper- and lower-case letters, ASCII digits, ASCII underscores, and ASCII minus-hyphens" (man page). So the script file names, or link names, you put under this directory should not contain special characters like ".". I have not tried that with my script, but it is used for logrotate. This reminds me to check how trim is organised.
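
run-parts can also be asked which scripts it would actually run, which is a quick way to check that a file name is acceptable (a hedged aside):

run-parts --test /etc/cron.daily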

Pruning Backups

The basic system above has been tested and works, but created a lot of backups. What we really need is a progressive prune of backups much like we have available in Timeshift, where one can keep a chosen number of days, weeks and months. I thought that would be a common activity and algorithm but have found little in my internet searches; perhaps the most useful were:

The last seems to be the most appropriate and the favoured solution is clearly expounded. In summary the code comes down to a single line:

ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {}

Many thanks to mklement0 for this excellent code and explanation - I have removed a couple of checks which I do not require, but consult his solution if you need ultimate flexibility or code efficiency.

This does exactly what is required as it keeps a number of backups regardless of missed days etc. In brief, ls -tp lists the contents newest first with a / appended to folders, grep -v '/$' drops the folders, tail -n +6 passes on everything except the newest five, and xargs hands each remaining name to rm.

However, when it comes to the weekly and monthly backups, I see no easy alternative to doing the whole process again into a separate folder where we just keep the most recent - that is not a huge overhead in code, time or overlapping backups. So I use three scripts, runSyncBackupDaily, runSyncBackupWeekly and runSyncBackupMonthly, feeding into three folders (/home/peter/.syncBackup/daily etc) and each folder is pruned separately.

Over the next month there were a number of small refinements and the new script runSyncBackupDaily just leaves the newest 5 pairs of copies.

This was extended on January 20th to also keep a log file of activity into three pCloudDrive synced folders which enables a quick confirmation they are being updated and from which machines. This was done because there were suspicions that pCloud Syncs were getting broken when pCloud updated and I wanted a quick means to check on a daily basis.

#!/bin/sh
# Script run as root from /etc/anacrontab in users home folder
cp /home/peter/AndroidSync/todo/todo.txt /home/peter/.syncBackup/daily/todo_peter_$(hostname)_$(date +%Y%m%d).txt
cp /home/peter/AndroidSync/Keypass2droid/PandP.kdbx /home/peter/.syncBackup/daily/PandP_peter$(hostname)_$(date +%Y%m%d).kdbx
# Prune number of backups and correct ownership
ls -tp /home/peter/.syncBackup/daily | grep -v '/$' | tail -n +11 | xargs -I {} rm /home/peter/.syncBackup/daily/{}
chown peter:peter -R /home/peter/.syncBackup
#
# Log activity which will also confirm pCloud synchronising AndroidSync, Phonebox and Shoebox
echo "$(hostname) $(date +%Y-%m-%d) $(date +%H:%M:%S)" >> /home/peter/Desktop/Phonebox/DailySyncLog_$(hostname)
chown peter:peter /home/peter/Desktop/Phonebox/DailySyncLog_$(hostname)
echo "$(hostname) $(date +%Y-%m-%d) $(date +%H:%M:%S)" >> /home/peter/Desktop/Shoebox/DailySyncLog_$(hostname)
chown peter:peter /home/peter/Desktop/Shoebox/DailySyncLog_$(hostname)
echo "$(hostname) $(date +%Y-%m-%d) $(date +%H:%M:%S)" >> /home/peter/AndroidSync/DailySyncLog_$(hostname)
chown peter:peter /home/peter/AndroidSync/DailySyncLog_$(hostname)
exit

There are similar scripts for the weekly and monthly backups which omit the logging activity.

Note 1: The script is run as root and I was not sure how that worked out for the resulting ownership when appending to a file, so as belt and braces I have set the ownership and group after the append - this may not be strictly necessary for an existing file but ensures the first run is correct.

Note 2: Likewise ownership of .syncBackup folder and contents is now corrected (27th January 2021)

Conclusions: This has now been in use for two months and works well.

I also carried out a number of further tests which were never put into 'production'.

The example above could be made more compact by combining the chown peter:peter commands and by using | tee -a after echo, but this makes it less clear, e.g.

# Log activity which will also confirm pCloud is synchronising AndroidSync, Phonebox and Shoebox
echo "$(hostname) $(date +%Y-%m-%d) $(date +%H:%M:%S)" | tee -a /home/peter/AndroidSync/DailySyncLog_$(hostname) /home/peter/Desktop/Shoebox/DailySyncLog_$(hostname) /home/peter/Desktop/Phonebox/DailySyncLog_$(hostname)
chown peter:peter /home/peter/AndroidSync/DailySyncLog_$(hostname) /home/peter/Desktop/Shoebox/DailySyncLog_$(hostname) /home/peter/Desktop/Phonebox/DailySyncLog_$(hostname)

Conclusion: Idle fingers gain but with less clarity.

Trial of cp -p option

Using cp with the -p option when making the dated copies preserves:

  1. The owner, which would otherwise be changed to root.
  2. The timestamp, which would otherwise be changed to the copy time. This avoids lots of identical copies under different names when the list is pruned.

BUT: If implemented, pruning needs to be done on each series rather than the whole folder, so another level of folders would be required in .syncBackup.

Conclusions: Overly confusing output so not used currently. Concept may be worth exploring further

More about a possible pCloud issue and logging

pCloud is my choice for cloud storage, which is used for data sharing and synchronisation between machines. It works slightly differently to Dropbox and has two main modes: a virtual drive (pCloudDrive in the home folder) where the files live in the cloud with a local cache, and 'syncs' which keep selected local folders synchronised with folders in the cloud.

I have noticed that on a very few occasions these syncs (linkages) have seemed to stop functioning although they still show in the pCloud Preferences. I think this occurred after a major update of the pCloud version; they are easily remade if required, however the failure is easy to miss. I have therefore tried to work out a simple way to check they are working. The scheme I came up with is to create a sort of daily log file in each synchronised folder for each machine, which just has the current date and time appended to it and has the machine name as part of the file name. This is run daily by a few additional lines added to the existing daily backup script run for me by anacron, which explains the extra lines in the listing shown above.

It has been in use since late January and allows a very quick check by checking the file presence and dates in the synced folders.
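
The check itself can be as simple as listing the dates of the log files in one of the synced folders; the paths below are the ones used in the script above:

ls -l --time-style=long-iso /home/peter/Desktop/Phonebox/DailySyncLog_*
tail -n 1 /home/peter/Desktop/Phonebox/DailySyncLog_*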

Mounting USB Backup Drives Encrypted using LUKS

All the USB Backup drives we currently have in use have been encrypted with LUKS and need a passphrase when they are mounted (plugged in).

There is a bug in the implementation of gnome-keyring under the latest versions of Mint. It occurs if you have been switching users before plugging in a LUKS encrypted drive. It is described in detail in Part 34 and I have now registered an issue on GitHub, 'LUKS encrypted removable USB drives not mounting in expected manner #343', but there has been little response to date (2 February 2021).

It can be avoided by:

  1. Rebooting before plugging in the drive then keeping it mounted through any subsequent changes of user.
  2. or Mounting at any time using the Forget the Password Immediately option rather than the default of Remember the Password until you logout and again keeping it mounted until you have completely finished.

If you make a mistake you will get an error message. The drive will be unlocked but not mounted; the only way to use it is then to open the file manager and click on it under Devices, which will mount it so it can be used.
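
A quick way to confirm from a terminal whether a LUKS drive has been unlocked and where it has been mounted is (a hedged aside):

lsblk -f # shows the crypto_LUKS partition, the unlocked mapper device and any mount point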

Remember - you should always eject/un-mount the drive using the Removable Drives applet in the tray when you have completely finished. This may need a password if you are logged into a different user.

The eventual Monthly Backup Procedure must take these constraints into consideration.

Working with remote logins and Unison over a network and Switching users

A feature of the PAM (Pluggable Authentication Modules) mechanism used to automatically mount LUKS encrypted volumes such as VAULT at login is that it also unmounts the volume (for security) if any user logs out. This is implemented as a security feature so is unlikely to change. It is however an obvious problem if any user is using files in VAULT, or tries to, before logging out and back in after the final remote access has terminated.

It therefore needs cooperation when using ssh logins or Unison to access a machine remotely for back-up, and the process cannot be automated easily. Normally the synchronisations using Unison will be done from the prime user (user id 1000) in any case, but I often used to use an SSH login from a remote machine before the actual use of Unison, for example to set up permissions ready to create home folder backups on my wife's machine, or for general maintenance.

This behaviour also means that you should not Switch between users whilst VAULT is in use.

There is a fudge which can be used, making use of one of the low level PAM utilities (pmvarrun), to avoid the final logout being detected. It is however more suitable for power users, as there may be unintended consequences, although I have used it extensively without problems to access an active machine and, to a lesser extent, to Switch users.

Scripts used for maintenance and housekeeping (latest versions in Diary part 34)

During development of the action list it became clear that use of a few scripts would make life much easier and more predictable. They fall into two classes, one of which we have already covered:

It turns out that it is simplest for the user to combine the second pair of activities and backup_users.sh (run as root) currently does both.

The script below has been developed to the point that it completely automates the activities once one has logged into the prime user and has one or more of my backup drives mounted. It now detects the machine name to use in the backup file names, so it is machine independent and can easily be edited to add extra drives or change the list.

#!/bin/sh
echo "This script is intended to be run as root from the prime user id 1000 (pcurtis) on $(hostname)"
echo "It expects one of our 6 standard backup drives (D -> I) to have been mounted"
#
echo "Adjusting Ownership and Group of DATA contents"
chown -R 1000:adm /media/DATA
test -d /media/DATA/.Trash-1001 && chown -R 1001:1001 /media/DATA/.Trash-1001
test -d /media/DATA/.Trash-1002 && chown -R 1002:1002 /media/DATA/.Trash-1002
#
echo "Adjusting Ownership and Group of VAULT contents"
test -d /media/VAULT && chown -R 1000:adm /media/VAULT
test -d /media/VAULT/.Trash-1001 && chown -R 1001:1001 /media/VAULT/.Trash-1001
test -d /media/VAULT/.Trash-1002 && chown -R 1002:1002 /media/VAULT/.Trash-1002
#
echo "Adjusting Ownership and Group of any Backup Drives present"
# Now check for most common 2TB backup Drives
test -d /media/LUKS_D && chown -R 1000:adm /media/LUKS_D
test -d /media/SEXT_E && chown -R 1000:adm /media/SEXT_E
test -d /media/SEXT4_F && chown -R 1000:adm /media/SEXT4_F
test -d /media/LUKS_G && chown -R 1000:adm /media/LUKS_G
test -d /media/LUKS_H && chown -R 1000:adm /media/LUKS_H
test -d /media/LUKS_I && chown -R 1000:adm /media/LUKS_I
echo "All Adjustments Complete"
#
echo "Starting Archiving home folders for users peter and pauline to any Backup Drives present"
echo "Be patient, this can take 10 - 40 min"
echo "Note: Ignore any Messages about sockets being ignored - sockets should be ignored!"
#
test -d /media/LUKS_D && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_D/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_D && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_D/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/SEXT_E && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT_E/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/SEXT_E && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/SEXT_E/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/SEXT4_F && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT4_F/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/SEXT4_F && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/SEXT4_F/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/LUKS_G && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_G/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_G && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_G/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/LUKS_H && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_H/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_H && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_H/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/LUKS_I && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_I/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_I && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_I/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
echo "Archiving Finished"
echo "List of Archives now present on any backup drives follows, latest at top"
test -d /media/LUKS_D && ls -hst /media/LUKS_D | grep "backup_"
test -d /media/SEXT_E && ls -hst /media/SEXT_E | grep "backup_"
test -d /media/SEXT4_F && ls -hst /media/SEXT4_F | grep "backup_"
test -d /media/LUKS_G && ls -hst /media/LUKS_G | grep "backup_"
test -d /media/LUKS_H && ls -hst /media/LUKS_H | grep "backup_"
test -d /media/LUKS_I && ls -hst /media/LUKS_I | grep "backup_"
#
echo "Summary of Drive Space on Backup Drives"
df -h --output=size,avail,pcent,target | grep 'Avail\|LUKS\|SEXT'
echo "Delete redundant backup archives as required"
exit
#
# 20th January 2021

Notes:

  1. The script sets the ownership to be that of the prime user and the group to be adm, then corrects the ownership of each user's recycle bin (.Trash-nnnn) to its matching user to enable trash to work correctly
  2. It adjusts DATA and VAULT and creates home folder archives for both of the normal users (peter and pauline)
  3. The script has my six most common backup drives hard wired in and will set the ownership etc. of any and all that are mounted and back up in turn to each
  4. It lists all the backup archives on the drives and the spare space available.

It is currently called backup_users.sh and is in the prime user's (id 1000) home folder; it needs to be made executable and run as root, i.e. by

sudo ./backup_users.sh
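
If the script has only just been created it will need to be made executable first, for example:

chmod +x backup_users.sh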

You should also back up the prime user (id 1000), pcurtis in our case, from a different user. The script can be modified to do this as below and is currently called backup_pcurtis.sh. The setting of ownership is not required so it is much shorter.

#!/bin/sh
echo "This script is intended to be run as root from any other user and backs up the prime user pcurtis (id 1000) on $(hostname)"
echo "It expects one of our 6 standard backup drives (D -> I) to have been mounted"
#
echo "Starting Archiving home folder for users the prime user pcurtis on $(hostname)"
echo "Be patient, this can take 5 - 20 min"
echo "Note: Ignore any Messages about sockets being ignored - sockets should be ignored!"
#
test -d /media/LUKS_D && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_D/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/SEXT_E && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT_E/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/SEXT4_F && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT4_F/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/LUKS_G && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_G/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/LUKS_H && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_H/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/LUKS_I && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_I/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
echo "Archiving Finished"
echo "List of Archives now present on any backup drives follows, latest at top"
test -d /media/LUKS_D && ls -hst /media/LUKS_D | grep "backup_"
test -d /media/SEXT_E && ls -hst /media/SEXT_E | grep "backup_"
test -d /media/SEXT4_F && ls -hst /media/SEXT4_F | grep "backup_"
test -d /media/LUKS_G && ls -hst /media/LUKS_G | grep "backup_"
test -d /media/LUKS_H && ls -hst /media/LUKS_H | grep "backup_"
test -d /media/LUKS_I && ls -hst /media/LUKS_I | grep "backup_"
#
echo "Summary of Drive Space on Backup Drives"
df -h --output=size,avail,pcent,target | grep 'Avail\|LUKS\|SEXT'
echo "Delete redundant backup archives as required"
exit
#
# 3 February 2021

Notes:

28 December 2020

Useful Terminal Commands

ps -efl | grep gnome-keyring # check whether the gnome-keyring daemon is running
grep -r gnome_keyring /etc/pam.d # see where gnome-keyring is hooked into PAM

ls -lhAt # long, human readable, include .files, time order

date && sudo du -h --max-depth 1 /media/DATA | sort -rh # disk usage of each top level folder in DATA, largest first

29 December 2020

Helios - Fresh install of Mint 20.1 beta

I have had a number of problems booting Helios so I decided to do a re-install with the beta of Linux Mint 20.1, which seemed to work fine during my initial tests using a LiveUSB. I used the procedures I have just written in the Linux Grab Bag as much as possible, except that I used the 'other' setting for partitioning the drive and only reformatted and installed the / root partition, leaving /home and all the other partitions unchanged.

Fresh Install preserving Users with Encrypted home folders

I have three users' home folders and two were encrypted. As far as I can recall the initial install (user 1000) went fine, although that user was encrypted and I could set up the password during the install. However, see the section below on what I have done in the past to set up a password for encrypted and unencrypted home folders where the password setting is greyed out for other users.

The gist of it is that you have to force the addition of a password using sudo passwd user in a terminal, and that MUST be the old password otherwise you will still not be able to unlock the home folder. This is the only time you should ever use passwd with sudo, to avoid getting passwords out of step. The exact details still need to be checked and Grab Bag and other pages modified to take this into account.

Subsequent 'standard' setup:

  1. Swappiness Changes as per Grab Bag
  2. Grub Changes as per Grab Bag
  3. Automount Changes as per Grab Bag
  4. Hibernation Changes - /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla did not exist any more so not needed.
  5. Programs were reinstalled using the procedure in Grab Bag from the big list.
  6. It seems that one of the programs (ubuntu-restricted-extras) now has ttf-mscorefonts-installer as a dependency, so that was held back until the licence agreements had been accepted; Grab Bag has been modified to reflect this.
  7. Other programs installed from links to debs rather than backup drive.
  8. Wine programs still present as in .wine in home folders.
  9. Thunderbird changed back using Grab Bag procedures
  10. Make additions to /etc/anacrontab to run backups of keypass and jdotxt files and provide log files of pCloud syncs

BIOS problems: The solutions are unique to the Helios BIOS and so do not need to be in Grab Bag. The options are documented in Diary Part 32 and the existing Helios write up. I took the brute force approach of stopping the logging services as I had not found any show-stoppers with that solution.

All went very smoothly and quickly

I subsequently overlaid the existing user 1000 with that from gemini by restoring an archive of a very basic user. The unison profiles were the only items saved and were copied back from .unison in the backup copy. Again quick and smooth.

31 December 2020

Notes on Reinstalling a user with an encrypted home folder in a system

This covers the first major deviation from the procedures in Grab Bag, which does not currently cover re-installing a new version of Linux Mint when you have encrypted home folders. The installation system on the LiveUSB recognises that you have encrypted folders but does not seem to handle a mix correctly. The following is based on the section in Diary Part 32 and covers what happened last time I did a fresh install from Mint 19.3 to 20; some of this needs to be simplified and then patched into or referenced within Grab Bag.

Up to now my approach has been to decrypt users' home folders before reinstalling as part of a dual boot system, moving between machines or transferring to a new machine, then re-encrypting if appropriate. This section first covers more background about how my machines are set up, then my attempts to avoid decrypting every folder when doing a fresh install, before looking at how best to proceed in the future.

All my systems have /home mounted on a separate partition and several can be dual booted as they have two partitions containing root file systems for different Linux systems. This enables me to do major updates from, say, Mint 19.x to Mint 20 by a fresh install yet preserve the individual users' home folders, provided the users are installed into the new system in the same order, with only minor changes in configuration. There are also partitions for shared data mounted at /media/DATA and sensitive data at /media/VAULT. Currently all filesystems are ext4.

The complexity is increased when encryption is used for an additional volume for sensitive data, and my laptops now all have a LUKS volume mounted at login as /media/VAULT. Each LUKS volume has 8 slots for different passwords which can unlock it, so it can be mounted by up to seven users with different passwords. When the new system is installed the new users now have to not only be installed in the same order but also use the same passwords as before to enable the LUKS volume to be mounted.

It is also desirable to encrypt the home folders of selected users using ecryptfs, either during the initial installation or at a later stage. This poses an additional set of problems when upgrading, which I avoided on the Defiant and the Helios by removing the encryption before reinstalling the new Mint 20 system. That worked but has disadvantages, as the users' home folders need to be re-encrypted. That is not difficult but has the major problem that it needs a lot of workspace - up to 2.5 times the size of the user's home folder.

On the Lafite I removed the encryption on the primary user and fully expected I would be able to re-install using the same home partition and keep user 1000, but that did not turn out to work. When I went through the install from the LiveUSB and got to providing information on the user, it had the button for using an encrypted home checked and greyed out, so it had looked at the existing set up and found some of the users had encrypted home folders, although the actual user I was adding was not encrypted.

This meant I had to back out and start again without mounting the existing partition, creating a new user. I then had to change the system to mount my old partition at /home at boot by adding it to /etc/fstab and rebooting, not something to inflict on a newbie. This left a few Mbytes of wasted space in / but not enough to be worth chasing. These were all things I had done before, but it adds to the work and learning curve for someone new to Linux. I had already removed the encryption from user 1001 (peter) as well, so I could then add peter as a user in 'Users and Groups' and only a little tidying of the configurations was needed to have two users transferred to Mint 20 and one left accessible by booting into Mint 19.2, as originally intended, to allow time to fully configure the system and install all the programs without risk to the main user of the machine. Configuration includes such activities as reducing the use of swap, setting up battery saving by timeouts for hard drives, automounting the VAULT using PAM mount and setting up Timeshift. These are all best completed before the main users are transferred.

This took a good part of a day of elapsed time, but not all of it was spent actually working on the installation. At this point I decided to try to add the final user 1002 with an ecryptfs home folder. I found the password change was greyed out and I tried with and without the option of no password. Without a password it just failed; with a password it asked for my LUKS password for the /media/VAULT drive and let me in, but without a password functionality was severely limited (no sudo for example) and again the GUI did not allow me to set a password. So I forced a reset to the old password using sudo and passwd in a terminal:

sudo passwd user1002

which just prompts twice for the new password for user1002 without requesting the old password

and after a reboot I could then use sudo etc

It is essential the password you set is the same as the old password otherwise the unwrapping in ecryptfs will fail. NOTE: this is the only time sudo should be used with passwd when you have an encrypted home folder.
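
If the login and mount passphrases do get out of step, the ecryptfs-utils package includes ecryptfs-rewrap-passphrase, which re-wraps the mount passphrase under a new login password; I have not needed it in this procedure, so treat the following as an untested note rather than part of the method:

ecryptfs-rewrap-passphrase ~/.ecryptfs/wrapped-passphrase # prompts for the old and new wrapping passphrases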

 

Before you leave

I would be very pleased if visitors could spare a little time to give us some feedback - it is the only way we know who has visited the site, if it is useful and how we should develop its content and the techniques used. I would be delighted if you could send comments or just let me know you have visited by sending a quick Message.


Link to W3C HTML5 Validator Copyright © Peter & Pauline Curtis
Content revised: 28th February, 2021