Gemini - Our Chillblast WAP Pro Microcomputer

Summary of Progress to Date

This page covers the setting up of my Chillblast WAP Pro with Linux Mint 20 (referred to as Gemini in this document). Mint is built on top of Ubuntu and both sit at the top of the popularity stakes among Linux distributions. The WAP Pro is a tiny silent machine aimed at media centre activities and is supplied with Windows 10 Pro included.

So far no problems have emerged in building a dual boot system using an additional 1 TB SSD which is mounted in an easily accessible bay on the underside of the box. This has resulted in a capable machine at a total cost of under £300 which provides an excellent introduction to Linux. The write-up contains absolute and comparative performance measurements.

This document has been written in parallel with "A Linux Grab Bag - what you need for a rebuild or to set up a new machine away from home". There is almost total overlap in Appendices 1 - 7. Currently this document is stand-alone but that may change.


Contents

Why another Computer?

The requirements for this machine were very different to those for previous purchases. It was not intended to be a high performance machine for use when travelling, but it did need to be capable of handling pictures and video when required. It was intended as a cheap reliable workhorse replacing my only fixed machine, which provided access to a scanner, many old fixed 3.5 inch drives etc. In parallel it seemed sensible to have at least one dual booted machine with Windows 10 as there are some programs, for example those for updating car GPSs, which cannot be run under Linux even with Wine. It was also intended as a way to bring Pauline up to speed on how to set up a new system and continue to access all our encrypted information if, for example, I was struck down with Covid.

Although there was no fixed budget, the sort of laptops we have would have been an expensive solution and probably less reliable. So when I saw that Chillblast were offering a Microsystem with Windows 10 Pro at well under £200 I saw interesting possibilities. All of our laptops still have more than adequate performance for the use I envisaged, so the first selection criterion was that the performance only needed to be close to the Helios. Even so, that ruled out the lowest specification machine with a Celeron N3060 processor, but an extensive look at the 4-core Celeron J4105 in the WAP Pro Microsystem indicated it should provide at least 60% of the processor power on single-threaded applications and the same or better multi-threaded performance compared to the Core i5 6200 processor (2 cores, 4 hyper-threads) in the Helios.

The 4 Gbytes of RAM in the WAP Pro should be adequate for normal use in a 64 bit Linux Mint system, but I will need to beware of having large numbers of tabs open in browsers and Thunderbird as each uses extra RAM and it quickly builds up. It is desirable, if not essential, to install the Multi-Core System Monitor applet so you can keep an eye on RAM use and, in particular, whether you are using swap, which means memory is overloaded.
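If you prefer the terminal, a quick way to check memory and swap use at any time is the standard free command (the -h flag just prints human readable units):

free -h

If the Swap line shows anything more than a token amount in use then memory is overloaded and it is time to close some tabs.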

The basic WAP Pro only has 32 Gbytes of storage provided by an embedded Multi Media Card (eMMC), as did many early phones. An eMMC is similar to an SD card; this machine uses a SanDisk type DA4032 eMMC. eMMC storage is significantly slower than standard SSDs and has less sophisticated error correction. The 30 Gbytes available will barely be enough to handle Windows 10 after a few updates. The Windows partition shows as an SD card in Linux Mint. See https://www.howtogeek.com/196541/emmc-vs.-ssd-not-all-solid-state-storage-is-equal/ for a simple comparison of eMMC versus SSD storage. There is a slot for a micro SD card to be added, as on a phone, which could make use of the Windows Pro very practical for the Work and Play applications it is targeted at. I have permanently inserted a SanDisk Ultra 32 Gbyte microSDHC UHS-1 (~80 MB/sec spec) card to act as a shared Windows and Linux drive.
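If you want to see the difference for yourself once Linux is running, hdparm gives a rough buffered read speed for any drive. This is only a sketch - the device names below are examples and need to be checked with lsblk on your own machine first:

sudo hdparm -t /dev/mmcblk0

sudo hdparm -t /dev/sda

Each run prints the MB/sec achieved over a few seconds of sequential reading, which is enough to show the gap between the eMMC and a SATA SSD.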

However the big draw for me was that there is an easily accessible slot underneath for a standard 2.5 inch SSD, which allows fast storage to be added for the Linux system. Adding a 250 Gbyte SSD would give a very practical machine only a tad short of my Helios in storage and performance for under £200, and adding the Crucial MX500 1 TB SSD (~550 MB/sec read) that I chose gives a total of just over £250.

It is all in a silent fan-less box little more in volume than a VHS cassette which can even clip to the back of a monitor. It seemed almost too good to be true when one considers that the additional cost of a Windows 10 Home licence alone on my last machine would have been £90!

Screenshot

Screenshot

Specification of the Chillblast WAP Pro v2

Note from Chillblast: As of Windows v1903 no drivers are 'required' for full functionality in Windows 10. The system should work on a 'clean' install of Windows 10 once all Windows Updates are completed. Any driver 'warnings' in Device Manager are just 'labels' that are missing from the devices and it will not negatively affect the functionality of the system at all.

Why is my machine called Gemini

Every computer needs a name which is used on a network. The processor in this case is based on the Gemini Lake architecture, but Gemini is also one of the constellations and Zodiac signs, known as the Twins because of the shape of the stars which make it up. It was also the two-person space capsule flown in the NASA programme that preceded Apollo. With that background it seemed appropriate for my only current dual booted system to have the name gemini on the network. The other computers on the network have mostly been called after the manufacturer's model name, or the motherboard for home-builds - lafite is bad enough but WAP Pro is totally inappropriate. The other alternative was matrix, which had always been my computer name at work and then the fixed machine at the centre of the home network, but I am not sure Gemini is powerful enough to take on that role and name, and the existing matrix is still alive if somewhat tired!

Approach to Installation

I adopted a different approach to the installation from my usual one for Linux-only machines, where I set up a complex partitioning scheme at the start. That involves carrying out the partitioning before starting the installation from the LiveUSB and defining the mount points during installation. The same procedure was used for most of my installs of dual or triple booted machines. That allowed more to be done in a GUI (Graphical User Interface) and a very quick install, but only if one knew exactly what was needed.

This time there were different factors to consider:

This has actually been quite instructive as the dual boot installation it provided is, in several details, different to what I would have done. I am yet to decide which is best!

In summary I installed the new SSD and only added a GPT (GUID Partition Table) partition structure in the LiveUSB but left the whole drive unallocated with no actual partitions added. Under these circumstances the Installer dual boot option is programmed to use the unallocated space to install Linux Mint leaving everything else intact.

Summary of actions before installing.

So before we do anything about installing we need to:

To enter the BIOS Setup on the WAP Pro v2, press F2 repeatedly during the self test period with the logo displaying.

We will now look at those activities in more detail:

Background Review of Optimisation of Disk Drives, in particular Solid State Drives (SSDs)

The first time I used an SSD was in the Helios and at that time I did considerable background reading which revealed that there were more nuances in getting a good setup than I had realised. I initially collated the various pieces of information I had on optimising performance into a checklist. Many areas do not need action on the WAP Pro as they are already set up or are now defaults in Mint and Ubuntu. The in-depth coverage therefore moved to a dedicated page on The Use of Solid State Drives in Linux and only the links in Bold below will be covered in the appropriate sections of this page; the links here all point to in-depth coverage in the dedicated page. It is still worth a quick look through the following.

Checklist for the use of Solid State Drives

  1. The SATA controller mode needs to be set to AHCI in the BIOS. AHCI provides a standard method for detecting, configuring, and programming SATA/AHCI adapters. AHCI is separate from the SATA standards, but it exposes SATA's more advanced capabilities which are required to fully support an SSD. The setting is normally not even available in current BIOSes as it is already the default.
  2. Partition Alignment is essential for optimal performance and longevity but should be correct on a recent system. SSDs are based on flash memory and thus differ significantly from hard drives: while reading remains possible in a random access fashion on pages of typically 4 KByte, erasure is only possible for blocks which are much larger, typically 512 KByte, so it is necessary to align the absolute start of every partition to a multiple of the erase block size, i.e. 1 MByte.
  3. Use a file system supporting TRIM: in practice this means EXT4 in Linux (a file system not normally supported by Windows).
  4. Automate TRIM. An SSD system needs some form of automatic TRIM enabled to assist garbage collection, otherwise the speed decreases and the number of writes also increases at the expense of SSD life (a quick check is shown below).
  5. Check Queued TRIM is Blacklisted. A number of drives do not support TRIM reliably and, in particular, the queued TRIM command may need to be inhibited. The latest kernels take care of this but it is a real issue if updating an existing machine or using an LTS (Long Term Support) distribution with an elderly kernel.
  6. Overprovision. This is the reserving of some areas of the disk, leaving them unformatted. It is desirable for similar reasons to TRIM, namely maintaining speed and decreasing disk writes - a certain amount is already reserved by manufacturers (~7%) but it is best increased by another 10 to 20%.
  7. Control use of Swapping to disk. SSDs have a large but finite number of write cycles and frequent swapping uses that up. The use of swap is not optimised for desktop machines in Linux for SSDs (or even hard drives) and needs to be changed.
  8. Inhibit Hibernation (suspend to disk). This should be inhibited as it causes a large number of write actions, which is very bad for an SSD. If you are dual booting make sure Windows also has hibernation inhibited - in any case it is catastrophic if both hibernate to the same disk.
  9. Avoid Defragmentation. It is not required in Linux and is never done automatically. It must be avoided because the many write actions it causes will wear an SSD rapidly - make sure a dual booted system does not kill your SSD by defragmentation, and avoid the need for it by maintaining at least 20% spare capacity on each partition; even in Linux this has benefits.
  10. Consider changes to the file access options. Changes can be made to reduce the number of 'writes' by options in the configuration files such as noatime (relatime is the default in Ubuntu and Mint and is the best compromise).
  11. Optimise the disk access scheduler. The scheduler may be optimised for hard drives rather than SSDs. The default scheduler for Ubuntu/Mint is 'deadline', which is acceptable for both, but noop may be better if only SSDs are in use. No Action Planned.

Only four of the above are likely to need addressing in a dedicated Linux system using Ubuntu or Mint - they are in bold. The first two are factors in the initial partitioning of the SSD and the other two are carried out during the setting up procedure.
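As a quick check of item 4, recent Ubuntu and Mint releases ship with a systemd timer which runs fstrim weekly, so in most cases no action is needed. A minimal way to confirm it, using standard commands with nothing specific to this machine assumed:

systemctl status fstrim.timer

sudo fstrim -av

The first shows whether the weekly timer is active; the second trims all mounted filesystems that support it immediately and reports how much was trimmed.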

Burning a LiveUSB to run Linux Mint

This is based on the instructions on the Mint web site and is in many ways the most important step towards running Linux on a new or old machine, as everything can then be done in Linux even on a new machine without any existing operating system. This can save £45 to £90 in the future by not having Windows on your purchase of a specialist machine.

In Linux Mint

In Windows, Mac OS, or other Linux distributions

Download balenaEtcher, install it and run it.

In Windows the portable version does not even install anything on your machine and can live on a USB drive. I do not have a Windows system to test it but it ran under Wine, the Windows 'emulator', under Mint.
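For completeness, on any Linux system the ISO can also be written from a terminal with dd. This is only a sketch - the ISO filename and the /dev/sdX device are placeholders and the target must be double-checked with lsblk first, as dd will silently overwrite whatever it is pointed at:

sudo dd if=linuxmint-20-cinnamon-64bit.iso of=/dev/sdX bs=4M status=progress && sync

Etcher is safer for a first-timer because it verifies the write and makes it harder to pick the wrong drive.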

Using Etcher

Access the BIOS and Set it up for Linux.

To enter BIOS Setup, turn on the computer and press F2 repeatedly during the POST (Power On Self Test), while the Chillblast logo is being displayed. It is possible that the logo and options will not display if the BIOS has been set into Quiet Mode and you just have to keep tapping F2 until it enters the BIOS. If you get a "Keyboard Error" (usually because you pressed F2 too quickly) just press F2 again.

The BIOS is a fairly standard one, probably by American Megatrends Inc (AMI), but with a very reduced set of options available. There are 5 tabs and you have to navigate using the keyboard. Help instructions are always shown on the right. There are only three settings you need to check and mine were all correctly set in the WAP Pro BIOS as supplied. They do need to be checked and correct before you start.

Note: It seems that the WAP Pro BIOS only supports UEFI .

In summary, the only setting you are likely to need to change is to set Secure Boot to [Disabled], but it is also important to ensure Fast Boot is [Disabled] as it is often used in Windows systems and may end up being set during testing before the machine is delivered.

Booting the LiveUSB

The boot menu is accessed on the WAP Pro by pressing F7 during the time the BIOS is doing the POST checks, ie when the initial Logo is being displayed.

You will then see a menu with the internal drive at the top followed by two entry points for the USB stick which correspond to a conventional and a UEFI configuration. You need to select the UEFI version in the USB boot options as the Gemini BIOS only supports UEFI. If you use the wrong entry it will probably work as a LiveUSB, but when you come to do an install you will end up installing the wrong version which will almost certainly not work.

It will take a couple of minutes to boot up from a LiveUSB, the faster the USB drive the quicker it is. Mine was a Kingston G100 USB3.1 Drive. You should now have a Mint Desktop and a fully working system although much slower than the installed version will be.

Checks on what is working: At this point I always do a quick check to see what is working, the obvious things to check are:

Existing Partitioning: We can now check the current partitioning of the existing drive and decide what to do on the new SSD. The partition editor is called gparted and can be run by Menu -> gparted. When first opened it will look like this:

Screenshot

At the top right the drop down allows you to choose the device, and the machine as delivered has the one solid state drive divided up into 4 partitions. The first is part of the UEFI booting system. The second is a tiny partition reserved by Microsoft for its own purposes. The third contains the Windows system and the fourth is a recovery partition which can be used to rebuild a Windows system supplied without any DVDs or the like. We want to leave all these alone. It is obvious that there is no space for a useful Linux system without adding an extra drive.

You can get more information on the Device if you go to the menu at the top and View -> Device Information and another panel will open up. It tells us that the partition table is the new GPT type which is not a surprise. If we select a partition and then Partition -> Information we can get a matching level of information about the partition including one piece of information which we will find useful in the future - the UUID which is a Universal Unique IDentifier for that partition - we can take that device to a different machine and the UUID will remain the same. The device names eg /dev/sda1 can change depending on the order they are plugged in. If and when we need the UUID in the future this is a way to find it in a GUI.
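If you are comfortable with the terminal the same UUIDs can be listed without opening gparted; these are standard utilities present on the LiveUSB:

sudo blkid

lsblk -o NAME,FSTYPE,UUID,SIZE

Either command lists every partition with its file system type and UUID; the second one is used again later in this page.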

If we plug in an extra USB stick and look for it in the drop down menu we will find it has the old MBR partitioning, which shows as msdos, and it has a single partition with an old fashioned FAT32 file system.

If you are new to Linux or Linux Mint this is a good time to explore the system. Just remember that when you leave the LiveUSB everything you install or configure will be lost.

Background on General Linux File Systems

I have recently realised how much I take for granted in the implementation of our system and it is quite difficult to explain to someone else how to install it and the logic behind the 'partitioning' and encryption of various sections. So I have decided to try to go back to basics and explain a little about the underlying structure of Linux.

Firstly, Linux is quite different to Windows, which more people understand. In Windows individual pieces of hardware are given different drive designations: A: was in the old days a floppy disk, C: was the main hard drive, D: might be a hard drive for data and E: a DVD or CD drive. Each would have a file system and directory and you would have to specify which drive a file lived on. Linux is different - there is only one file system with a hierarchical directory structure. All files and directories appear under the root directory /, even if they are stored on different physical or virtual devices. Most of these directories are standard between all Unix-like operating systems and are generally used in much the same way; however, there is a File System Hierarchy Standard which explicitly defines the directory structure and directory contents in Linux distributions. Some of these directories only exist on a particular system if certain subsystems, such as the X Window System, are installed, and some distributions follow slightly different implementations, but the basics are identical whether it be a web server, supercomputer or Android phone. Wikipedia has a good and comprehensive Description of the Linux Standard Directory Structure and contents.

That however tells us little about how the basic hardware (hard drives, solid state drives, external USB drives, USB sticks etc.) is linked (mounted in the jargon) into the filesystem, or how and why parts of our system are encrypted. Nor does it tell us enough about users or about the various filesystems used. Before going into that in detail we also need to look at another major feature of Linux which is not present in Windows in the same way, and that is ownership and permissions for files. Without going into details, every file and folder in Linux has an owner and belongs to a group. Each user can belong to many groups. Each folder and file has associated permissions for the owner, the group and the rest of the world, which comprise read, write and execute permissions. This is considerably more transparent and secure than the way access to information is controlled on a typical Windows home system.
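A concrete illustration using standard commands - the file, owner and group names here are just examples:

ls -l document.txt
# -rw-rw-r-- 1 peter adm 4096 Jan 10 10:15 document.txt

The string -rw-rw-r-- means the owner (peter) and the group (adm) can read and write the file while the rest of the world can only read it. Permissions can be changed with chmod, for example chmod 664 document.txt sets exactly that pattern using the octal notation which appears again later in this page.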

So coming back to devices, how does a storage device get 'mapped' to the Linux file system? There are a number of stages. Large devices such as hard disks and solid state drives are usually partitioned into a number of sections before use. Each partition is treated by the operating system as a separate 'logical volume', which makes it function in a similar way to a separate physical device. A major advantage is that each partition has its own file system, which helps protect data from corruption. Another advantage is that each partition can use a different type of file system - a swap partition will have a completely different format to a system or data partition. Different partitions can be selectively encrypted when block encryption is used. Older operating systems only allowed you to partition a disk during a formatting or reformatting process. This meant you would have to reformat a hard drive (and erase all of your data) to change the partition scheme. Linux disk utilities now allow you to resize partitions and create new partitions without losing all your data - that said, it is not a process without risks and important data should be backed up. Once the device is partitioned, each partition needs to be formatted with a file system which provides a directory structure and defines the way files and folders are organised on the disk, and hence each partition becomes part of the overall Linux directory structure once mounted.

If we have several different disk drives we can choose which partitions to use to maximise reliability and performance. Normally the operating system (kernel/root) will be on a Solid State Drive (SSD) for speed, large amounts of data such as pictures and music can be on a slower hard disk drive, whilst the areas used for actually processing video and perhaps pictures are best on a high speed drive. Sensitive data may be best encrypted, so one's home folder and part of the data areas will be encrypted. Encryption may however slow down performance with a slow processor. There are many obvious advantages in separating the operating system from areas holding user specific information and configuration, and from data areas which may be shared. Other areas may need to be encrypted.

But, as I said earlier, there is no need to do everything at the start. For the experienced user it is quickest and actually safest to do or prepare everything at the beginning, but that is daunting to somebody just starting and I am planning to use the system defaults this time, safe in the knowledge that I have already explored ways to improve the system in the future.

Installing the new SSD Drive

So it is now time to bite the bullet, get out a screwdriver and install the new expensive Solid State Drive for extending the system for Linux or transforming it for Windows.

Shut down and turn the machine off and unplug it. There is a large panel on the bottom secured by 4 small crosspoint screws. The panel is a tight fit but there is a space to get a nail under to get it out. The SSD just plugs in at a very slight angle then drops down the last mm to be retained by the metal bracket. It looks as if it could be adjusted if necessary. SSD drives come in two depths and my Crucial MX500 is the 7mm slim line but has an extender which can be fixed with self adhesive tape to increase the depth. It then looked as if it would be well supported by the lid, but you may wish to add a bit of thin single sided self adhesive foam to keep it firmly in place when the lid is replaced. The cover needs care when replacing as it is a tight fit and needs to be aligned exactly and pressed right down everywhere before replacing the 4 screws. You should take care not to touch the connectors as static can damage electronics and ideally you should be earthed. The two longest activities were finding a small crosspoint screwdriver and getting a satisfactory picture of a shiny drive in a matt black hole.


Screenshot

The connector is to the right and you can see the frame it drops into - it did not need adjustment but there is obviously scope by loosening the screws. The adaptor can be seen glued on the top to increase it from slimline to standard depth. Overall the construction standard looks good.

Partitioning the new drive

Before starting we need to choose the partitioning table scheme for the new drive. There are two choices, the old MBR scheme familiar to many with its various restrictions, or the more modern GPT scheme which I recommend; not only does it have fewer restrictions but it is more secure as critical information has extra redundancy.

A little background may be in order on all this AS (Alphabet Soup). Wikipedia tells us "A GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard disk, using globally unique identifiers (GUID). Although it forms a part of the Unified Extensible Firmware Interface (UEFI) standard (the Unified EFI Forum proposed replacement for the PC BIOS), it is also used on some BIOS systems because of the limitations of Master Boot Record (MBR) partition tables, which use 32 bits for storing logical block addresses (LBA) and size information. Most current operating systems support GPT. Some, including OS X and Microsoft Windows on x86, only support booting from GPT partitions on systems with EFI firmware, but FreeBSD and most Linux distributions can boot from GPT partitions on systems with either legacy BIOS firmware interface or EFI."

So we will be configuring the new SSD to have a GUID Partition Table (GPT) with an unformatted partition occupying the whole space using gparted.

So start up the LiveUSB again and open gparted. You should now see an extra device on the drop down menu at top right. Select it and go to Device -> Create Partition Table . You will have a screen with dire warnings so make sure it is the correct drive then select GPT from the tiny box in the middle and Apply and that is it. The install should use the empty space and add the filesystems if you use the "Install Linux Mint alongside Windows 10" option.
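The same result can be obtained from a terminal if you prefer. This is a sketch only - /dev/sdX is a placeholder and must be replaced with the new drive's actual device name (check with lsblk), since mklabel destroys any existing partition table:

sudo parted /dev/sdX mklabel gpt

gparted remains the safer route for a first install because it shows exactly which drive is selected before anything is written.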

The actual Install

Having completed the partitioning I started the install from the LiveUSB using the "Install Linux Mint alongside Windows 10" option and followed the instructions. There is only one point to take into account and that is choosing the user. The first user seems to have certain additional administrative rights which are needed when it comes to synchronising between machines. It may all seem very much in the future, but it is sensible for the first user to be a family or firm name rather than a personal user so that it can be used for tasks where a user must not be active. Backing up a user whilst all the files are changing leads to disasters. If you already have a machine with multiple users you should install them in the same order, as there is a number associated with every user and group - for the moment just trust me on this.

If you want to see what will happen during the install there are lots of examples on the internet which have screen shots of every step although most concentrate on the hard way where you specify the formatting yourself. Search for "Install Linux Mint alongside Windows 10" but look at a couple just in case.

If all has gone well you will have a dual boot system where you can choose which system to use each time you boot up - initially you have 10 seconds to make the choice but we can shorten that to a couple of seconds.

Choices: We are at a break point.

We have reached the point where you have a fully working and usable Linux Mint system and are at the equivalent point to where you are on the Windows 10 Pro system the machine was delivered with. With the Windows system you still need to install a number of programs to make it useful, a virus checker to make it secure and drivers for any printers or other peripherals. Likewise there are a number of things on this Linux system which are desirable but you do not have to do them immediately. You do not need virus checkers as you do with Windows, but there are a few things, some of which have already been mentioned, which will improve the life and reliability, so we will first look at a few of those. If you want to explore, install programs, printers and generally use the machine for a few weeks it is not a problem. In general you will find many activities are easier than in Windows.

I have pondered over the best way to proceed and finally decided that the remaining activities are largely independent and can be carried out in almost any order. Several of the sections are general and can be applied to other Linux machines. Some, like the performance evaluation I have carried out, are specific but largely for information. I am therefore going to provide the remaining information as a series of appendices. The Appendices are written to be as generally applicable to all our machines as possible and could be used as the basis to rebuild any machine after a catastrophic failure such as a disk failure. The order of the Appendices is roughly the order we used to build Gemini. The time taken to set up Gemini was measured in hours as opposed to the write up which has taken many days! The final Appendix contains a performance review which includes comparisons to our other past and current machines.

Appendix 1 What you should do in the short term to get the best from the system

So I am now going to cover some of the things I have mentioned which should be done on a timescale of weeks rather than years to keep the machine running reliably. Most of this has been lifted from earlier documents and the essential parts brought together and simplified. First however I need to introduce new users to the use of the terminal. Experienced users should skip this or send suggestions on how to improve the section!

What is a Terminal? Why use it?

Up to now I have managed to avoid use of the terminal. Most people these days do all their interaction via a mouse, and the keyboard is only used to enter text, addresses, searches etc., and never to actually do things. Historically it was very different and even now most serious computer administration uses what is known as a terminal. It is horses for courses. There is nothing difficult or magic about using a "terminal" and there are often ways to avoid its use, but they are slow, clumsy and often far from intuitive, and some things which need to be done are near impossible to do any other way. I will not expect much understanding of the few things we have to do in a terminal; think of the things you cut and paste in as magic incantations. I am not sure I could explain the details of some of the things I do.

The terminal can be opened by Menu -> Terminal; it is also so important that it is already in favourites and in the panel along with the file manager. When it has opened you just get a window with a prompt. You can type commands and activate them with return. In almost all cases in this document you will just need to cut and paste into the line before hitting return. Sometimes you need to edit the line to match your system, i.e. a different disk UUID or user name.

You need to know that there are differences in cut and paste and even entering text when using the terminal.

On the positive side

What is sudo

We have spoken briefly about permissions and avoiding changes to system files etc. When you use a normal command you are limited to changing things you own or are in a group with rights over them. All system files belong to root and you cannot change them or run any utilities which affect the system.

If you put sudo in front of a command you take on the mantle of root and can do anything, hence the name sudo which stands for SUperuser DO. Before the command is carried out you are asked for your password to make sure you do have those rights. Many machines are set up to retain the superuser status for 15 minutes so you are not asked repeatedly while you carry out successive commands. Take care when using sudo, you have divine powers and can wreak destruction with a couple of keystrokes.
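As a harmless first example, the following pair of standard Mint/Ubuntu commands refreshes the package lists and then applies any available updates - both need sudo because they change system files:

sudo apt update

sudo apt upgrade

The first time you run them you will be asked for your password; for the next few minutes further sudo commands will not ask again.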

Tutorials

There are many good tutorials including a five minute one from Mint here and a longer one from Ubuntu here

Can I avoid the terminal?

There are ways to reduce the use of the terminal. One I use a lot when I need to edit or execute as root is to start in the file manager then right click on empty space and click Open as Root. I now get another file manager window with a Red Banner and I can now see and do anything as I am a Superuser, I can also destroy everything so beware. I can now double click on a file to open it for editing as root and when editing in Xed there will again be a Red Banner. You can also Right Click -> Open in a Terminal which will open a terminal at the appropriate position in the folder tree.

Warning about Deleting as a superuser: Do not 'Move into the Recycle Bin' as root, it will go into the root trash which is near impossible to empty, if you need to Delete use right click -> Delete which deletes instantly.

There are however some activities which will still demand use of sudo in a terminal so you will have to use it on occasion.

Changes to the System which ought to be done at an early stage

Firstly we will look at the two changes required because we are using an SSD, which are best done at an early stage, say during the first few days; we will then look at some changes to speed up the boot process and finally at how to inhibit Hibernation, which is bad for SSDs and disastrous on a dual boot system.

Reduce Swapping to Disk.

The changes described here are desirable for all disk drives and I have already implemented them on all my systems. They are even more important when it comes to SSDs. A primary way to reduce disk access is to reduce the use of Swap Space, which is the area on a disk that forms part of the virtual memory of your machine - a combination of accessible physical memory (RAM) and the swap space. Swap space temporarily holds memory pages that are inactive. Swap space is used when your system decides that it needs physical memory for active processes and there is insufficient unused physical memory available. If the system happens to need more memory resources or space, inactive pages in physical memory are then moved to the swap space, freeing up that physical memory for other uses. This is rarely required these days as most machines have plenty of real memory available. If swapping is required the system tries to optimise this by making moves in advance of their becoming essential. Note that the access time for swap is much slower than RAM, even with an SSD, so it is not a complete replacement for physical memory. Swap space can be a dedicated swap partition (normally recommended), a swap file, or a combination of swap partitions and swap files. The swap space is also used for hibernating the machine if that feature is implemented.

It is normally suggested that the swap partition size is the same as the physical memory, it needs to be if you ever intend to Hibernate (Suspend to disk by copying the entire memory to a file before shutting down completely). It is easy to see how much swap space is being used by using the System Monitor program or by using one of the system monitoring applets. With machines with plenty of memory like my Defiant, Helios and Lafite which all have 8 Gbytes you will rarely see even a few percent of use if the system is set up correctly which brings us to swappiness.

There is a parameter called swappiness which controls the tendency of the kernel to move processes out of physical memory and onto the swap disk. See Performance tuning with ''swappiness''. As even SSDs are much slower than RAM, this can lead to slower response times for the system and applications if processes are too aggressively moved out of memory, and it also causes wear on solid state disks.

Reducing the default value of swappiness will improve overall performance for a typical installation. There is a consensus that a value of swappiness=10 is recommended for a desktop/laptop and 60 for a server with a hard disk. I have been using a swappiness of 10 on my two MSI U100 Wind computers for many years - they used to have 2 Gbyte of RAM and swap was frequently used. In the case of the Defiant I had 8 Gbytes of memory and Swap was much less likely to be used. The consensus view is that optimum value for swappiness is 1 or even 0 in these circumstances. I have set 1 at present on both the Helios and the Lafite with an SSD to speed them up and minimise disk wear and 0 on the Gemini as the swap is small and so is the memory.

To check the swappiness value

cat /proc/sys/vm/swappiness

or open the file from the file manager.

For a temporary change (lost on reboot) to a swappiness value of 1:

sudo sysctl vm.swappiness=1

To make a change permanent you must edit a configuration file as root:

xed admin:///etc/sysctl.conf

Or use Files and right click -> Open as Root, Navigate and double click on file

Search for vm.swappiness and change its value as desired. If vm.swappiness does not exist, add it to the end of the file like so:

vm.swappiness=1

Save the file and reboot for it to take effect.

Change auto-mount point for USB drives back to /media

Ubuntu (and therefore Mint) changed the mount point for USB drives from /media/USB_DRIVE_NAME to /media/USERNAME/USB_DRIVE_NAME. There is a sort of logic as it makes it clear who mounted the drive and protects the drive from other users. It is however an irritation or worse if you have drives permanently plugged in, as they move depending on the user who logs in first, and when you change user or come to synchronise, back up etc. I therefore continue to mount mine at /media/USB_DRIVE_NAME. One can change the behaviour by using a udev feature in udisks version 2.0.91 or higher, which covers all the distributions you will ever use.

Create and edit a new file /etc/udev/rules.d/99-udisks2.rules - the easiest way to create a new file as root is by using the right click menu in the file browser and Open as Root, then right click -> New File and double click to open it for editing. You will have red toolbars to remind you.

and cut and paste into the file

ENV{ID_FS_USAGE}=="filesystem", ENV{UDISKS_FILESYSTEM_SHARED}="1"

then activate the new udev rule by restarting or by

sudo udevadm control --reload

When the drives are now unplugged and plugged back in they will mount at /media/USB_DRIVE_NAME.

What the line does is set the udisks2 flag UDISKS_FILESYSTEM_SHARED for anything recognised as a filesystem, which tells udisks2 to mount it in the shared /media directory rather than in a per-user directory.

Inhibit Hibernation [Now hidden from most Mint menus so less of a problem, and possibly already inhibited in the latest versions]

Hibernation (suspend to disk) should be inhibited as it causes a huge amount of write actions, which is very bad for an SSD. If you are dual booting you should also make sure Windows also has hibernation inhibited - in any case it is catastrophic if both hibernate to the same disk when you start up. Ubuntu has inhibited Hibernation for a long time but Mint did [/does] not and I prefer to change it. An easy way is to, in a terminal, do:

sudo mv -v /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla /

Note this is a single line and is best copied and pasted into the terminal.

It moves the settings file that enables hibernation, to the main directory / (root) rendering it ineffective. The new location is a safe storage, from which you can retrieve it again, should you ever wish to restore hibernation. Thanks to https://sites.google.com/site/easylinuxtipsproject/mint-cinnamon-first for this idea. (I have not checked and have no views on any of the other information on that page.)
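A quick way to check whether the file is present on your version before attempting the move is simply to list the directory used above:

ls /etc/polkit-1/localauthority/50-local.d/

If com.ubuntu.enable-hibernate.pkla is not listed there is nothing to move.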

Note: The file is missing on the latest version 20.1 of Mint so I assume Hibernation is already inhibited

One needs to reboot before this is active. After the reboot Hibernation should now no longer be one of the options when you close the computer. Applets or Keyboard Shortcuts which try to hibernate may still display but will demand root access.

Appendix 2

Additional Software Requirements during setup and use

It is convenient to start loading programs at this point as some of the utilities will be needed in the next stages. There are two sorts of programs required in addition to those loaded during the install, in summary they are:

Several of the programs are needed for the next stage or will make it much more convenient so I recommend doing an install of most of our standard programs before continuing. It also removes a few anomalies where programs which are available on the LiveUSB are not after a full install! gparted seems to be one of them.

Installing our standard set of programs after a fresh install of the operating system.

Before we can re-install our backed-up user folders into a freshly installed system we also need to install any programs which are not installed by default. Our system has quite a number of extra programs and utilities installed.

Normal Programs available from Mint Repositories

Most of the programs we use can be installed from the Software Manager or the Synaptic Package Manager, but both take time to find and install programs, and usually it is done one by one. The Synaptic Package Manager also needs one utility (apt-xapian-index) to be installed before the indexing is enabled to allow the quick search to be used.

There are therefore good reasons to do the installs via the command line where the list of our standard programs can be copied in and installed in a single batch. Other users may want to check the list and leave out programs they do not need. The end result is that the programs are downloaded and installed as quickly as possible. Over time the list will change, but it can be rerun at any time and any programs not installed or needing updating will be installed or updated. Missing programs will however cause the install to terminate so the output should be watched carefully. This is our current list, ready to paste as a single line into a terminal.

sudo apt-get install gparted vnstat vnstati zenity sox libsox-fmt-mp3 ssh unison-gtk gdebi hardinfo apt-xapian-index libpam-mount ripperx sound-juicer lame filezilla keepass2 udftools libudf0 libudf-dev i7z meld mousepad sublime-text git gitk cheese skypeforlinux v4l-utils guvcview gimp jhead wine wine64 wine-binfmt

Programs only able to be installed from terminal

If you are using wine (the program that enables you to run Windows programs) then you have to install extra fonts using a Microsoft installer. This is the only program we use (and maybe the only one altogether) that has to be installed in the terminal. Unfortunately this may appear as a dependency of ubuntu-restricted-extras (initially included above because it includes useful codecs) when the install is done on the latest versions of Mint. It is not installed on all my machines, but if you find you need ubuntu-restricted-extras it should be installed after ttf-mscorefonts-installer has been installed in a terminal.

sudo apt install ttf-mscorefonts-installer

You need to agree to the T&C during the install of ttf-mscorefonts-installer before the fonts will download, and an undesirable feature is that you must install in a terminal to be able to read and agree to them. Use the cursor up/down keys to get to the bottom, then left/right to highlight OK and press Enter. If you get it wrong, search my web site for instructions on what to do.

Programs installed from PPAs (Personal Package Archives)

None in use at present - Chrome/Chromium may have to be installed that way from Mint 20 but is available directly in Mint 20.1

Programs which have to be installed from .deb (installer files) or appimages

Most programs installed this way do not automatically update, although there are a few exceptions, so there has to be a very good reason why any were installed that way - usually old programs or where a PPA had not been updated for the latest version of Cinnamon. You can see what has been installed manually from .deb files in the Synaptic Package Manager by selecting Status and then Installed (local or obsolete) on the left.

This section also includes installs from appimages, the most essential of which is pCloud, which will be covered separately.

pCloud - download - find it - right click -> Properties -> Permissions and tick the box to allow execute - close Properties, double click and fill in credentials. It is an appimage and runs in the home folder so a copy is needed for each user. Once installed it will occasionally put up a notification asking you to update. Eventually you will need to log in and also set up a folder to sync for keepass2 and jdotxt. Saved copy in My Programs.

jdotxt - I downloaded the .deb file as the repository has not been updated to support the latest releases of Ubuntu/Mint. Also saved a copy in My Programs.

Opera - https://www.opera.com/download - choose open with gdebi - wait - Install - accept updating automatically with the system - job done except Run from menu - right click on the icon in the toolbar and 'Pin to panel' - finished. Integrated with the system so manual updates are not needed. Also saved a copy in My Programs.

Veracrypt - downloaded from https://www.veracrypt.fr/en/Downloads.html. There is a saved copy in My Programs.

Zoom from https://zoom.us/download and zoom_amd64.deb with a Copy in My Programs

Virtualgl - not essential, only used to run a video test. Copy in My Programs.

Old Windows programs still in use under Wine

Wine programs are specific to each user and each wine ecosystem is in .wine so if you restore an archive of your home folder it will include all the wine programs. Those we use are:

The next phase involves separating the functions of the system into partitions and restoring the home folders from the backups on the USB disk in the grab bag - in other words getting back to our 'Standard Installation'. It is best to do it before you end up with things to undo. However if you are away and just do not want to get involved then you can probably just restore a single user folder without any partitioning etc., but all the DATA and VAULT would be missing. Before doing anything you may want to look back at the section Introduction to the General Linux File System and read the next section carefully.

Appendix 3

Adding the Users, Allocating to Groups and a bit about permissions.

The first thing to do once the Install is complete is to set up the Users on the system in the correct order.

Once the three users are set up they need to be put into certain groups. Groups fall into two quite different sorts. The first are groups which are in effect users - every user is in a group of the same name. The second sort are to do with rights, e.g. users in the sudo group have superuser powers and can use sudo to assume all the rights of root, and those in lpadmin have the right to set up printers.

The first user, which was set up during the install, has, you will notice, membership of more groups than subsequent users, which have to be manually added to those groups. This is easy in Users and Groups by clicking on the list of groups, which brings up the full list with tick boxes. You either need a very good memory or it becomes iterative as you go back to have a look.

This is what the process looks like after clicking in the middle of the list of groups

Screenshot

You can also cheat if you are happy with the terminal as id username brings up a full list including the numeric values. Here is a current list from gemini

peter@gemini:~$ id
uid=1001(peter) gid=1001(peter) groups=1001(peter),4(adm),24(cdrom),
27(sudo), 30(dip),46(plugdev),114(lpadmin),134(sambashare),
1000(pcurtis),1002(pauline)
peter@gemini:~$

Now another very important step. We need to allow all of the users access to each other's files (we have nothing to hide) and the shared areas. This means all our users must be in each other's groups. You will find the list of groups includes all the current users, so add pauline and peter to pcurtis's groups, peter and pcurtis to pauline's groups, etc.
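The same can be done from a terminal with usermod if you prefer; the user and group names below are just the ones on our machines and should be changed to match yours:

sudo usermod -a -G pcurtis,pauline peter

sudo usermod -a -G pcurtis,peter pauline

sudo usermod -a -G peter,pauline pcurtis

The -a (append) flag matters - without it the user would be removed from any groups not listed. The changes take effect at the next login.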

Recall: every file and folder has permissions associated with it. The permissions are read, write and execute (i.e. run as a program). The permissions are separately set for the owner, the group and the rest of the world. The owner always has read, write and execute as required. The group in our case will have the same as we trust each other. The rest of the world will certainly not have permission to write. For various reasons to do with synchronisation the shared files in DATA and VAULT are changed in bulk to have owner pcurtis and group adm, with owner and group having read, write and execute permissions. The end result of all this is that we all have access to everything as if we had created and owned it.

There is only one thing that a group member cannot do that the owner can do, and that is change a date stamp on the file or its permissions. That is why we set all the files in DATA and VAULT to be owned by pcurtis and use Unison to synchronise from user pcurtis, as the process requires the ability to restore original timestamps.

The following will set all the owners, groups and permissions in DATA. The primary user is referred to by id 1000 to make this more general. This uses the octal way of designating permissions and sets the owner and group to read and write and others to read for files, and to read, write and execute for folders, which always require execute permission. I am not going to try to explain the following magic incantation at this time, or maybe ever, but it works and is very fast, with no unrequired writes to wear out your SSD or waste time. It needs to be run as user 1000 (pcurtis) to change the permissions, or put an extra sudo in front of each xargs.

sudo chown -R 1000:adm /media/DATA && find /media/DATA -type f -print0 | xargs -0 chmod 664 && find /media/DATA -type d -print0 | xargs -0 chmod 775

Background on the 'Standard Installations' we have implemented on our machines

We currently have four machines in everyday use; three are laptops, of which two are lightweight ultrabooks. The oldest machine still has the edge in performance, being a Core i7 nVidia Optimus machine with a discrete video card and an 8-thread processor, but it is heavy and has a limited battery. The other two are true ultrabooks - light, small, powerful and giving many hours of use on battery power. We also have a desktop, which has scanners, printers, old external drives and a big screen connected, and which is in the process of being replaced by Gemini, a lightweight microsystem dual booted with Windows 10, although we have rarely used Windows.

All the machines are multi-user so either of us can, in an emergency, use any machine for redundancy or whilst travelling. Most of the storage is shared between users and periodically synchronized between machines. The users home folders are also periodically backed up and the machines are set up so backups can be restored to any of the three machines. In fact the machines actually have three users. The first user installed with id 1000 is used primarily as an administrative convenience for organising backups and synchronisation for users from a 'static' situation when they are logged out.

There is no necessity for the id 1000 user to have an encrypted home folder as no sensitive information is accessed by that user and there is no email with saved logins or saved information in the browsers. In contrast the other users have email and browsers where the saved logins are the weak point in most systems, as most passwords can be recovered via email - an incredibly weak point for many web sites, where the minimum for password recovery should be two factor using a txt or other extra information. The other users therefore need encrypted home folders where email and browser information is kept safe. I favour that over a separate encrypted folder holding email and browser 'profiles' as I have had problems with encrypted folders mounted at login getting unmounted while email and browsers have been active, with undesirable results and real potential for considerable data loss. An encrypted folder to save sensitive documents and backups is however desirable if not essential, and it is preferable for it to be mounted at login.

So the end result is that each machine has:

The rationale for this comparatively complex arrangement is that it gives good separation between the various components which can be backed up and restored independently.

This has been proven over the years. The transfer of users between machines has been carried out many times and the synchronisation of DATA and VAULT is a regular monthly process. New LTS versions of Linux are usually done as fresh installs preserving Users and DATA/VAULTS. Most recently the same procedures have been used to set up the latest machine this is being written on.

Next Step: It is however intended to go slightly further and have a set timetable for comprehensive backups and synchronisation, to include off-site storage and the hardware in the grab-bag, and to check that LiveUSBs and the backups of programs are up-to-date.

Advanced Partitioning to Separate System, Users and Data

We spoke briefly about the Linux file system above. To reiterate, separating the various parts onto different devices obviously makes the system more robust to hardware failures. The best would be to have each component on a physically different device; almost as good is to use different areas of a device, each with a separate file system - if one file system is destroyed then the other file systems remain intact and functional. This is where partitioning comes in. Each physical device can be divided into a number of virtual devices (partitions), each with its own file system and mounted at different parts of the overall file system.

The following shows the actual partitions I have set up on Gemini. I should perhaps mention that I have been using the setting up of Gemini to check that this document reflects what I have done. I have named the partitions specifically to make it clear - normally one would not bother. Also note this is a view of a single hardware device; there is another which holds the Windows system and which could be displayed using the dropdown menu at top right.

Screenshot

Implementing our Standard Installation

We will start by working out exactly what we need in some detail, as we do not want to repeat any actual partitioning, then start with the easiest change, which is also arguably the most important as it separates our data from the system and users. So the overall plan is:

  1. Look at the requirements and decide what the final arrangement should be, even if we do not implement it all at once. The main decision is how much space to allocate to each partition.
  2. Repartition the device from the LiveUSB to have a final partition layout similar to that above.
  3. Arrange for the DATA partition to be mounted automatically when the system boots. That only involves collecting some information and adding one line to a file.
  4. Change to a separate partition for the home folders. This is the tricky one.
  5. Maybe create an encrypted partition to hold our Vault, or just mount the VAULT leaving encryption for later.
  6. Maybe encrypt one or more of the home folders.

The question now is how to get simply from a very basic system with a single partition for the Linux system to our optimised partitioning and encryption. As I said earlier my normal way is to do it all during the install, but that would be a considerable challenge to a new user. So we divide it into a series of small challenges, each of which is manageable. Some are relatively straightforward, some are more tricky. Some can be done in a GUI and editor but some are only practical in a terminal. Using a terminal will mostly involve copying and pasting a command into the terminal, sometimes with a minor edit to a file or drive name. Using a terminal is not something to be frightened of - it is often easier to understand what you are doing.

Looking at the requirements and deciding what the final arrangement should be

There are two requirements which are special to Mint and one for our particular set up which influence our choice of the amount of space to allocate.

Disk Space Requirements when using TimeShift

TimeShift has to be considered as it has a major impact on the disk space requirements during partitioning of the final system. It was one of the major additions to Mint which differentiates it from other lesser distributions. It is fundamental to the update manager philosophy as Timeshift allows one to go back in time and restore your computer to a previous functional system snapshot. This greatly simplifies the maintenance of your computer, since you no longer need to worry about potential regressions. In the eventuality of a critical regression, you can restore a snapshot and still have the ability to apply updates selectively (as you did in previous releases). This comes at a considerable cost as the snapshots take up space. Although TimeShift is extremely efficient, my experience so far is that one needs to allocate at least a 2-fold and preferably 3-fold extra storage allowance over what one expects the root file system to grow to, especially if you intend to take many extra manual snapshots during early development and before major updates. I have already seen a TimeShift folder reach 21 Gbytes for an 8.9 Gbyte system before pruning manual snapshots.

Special Requirements of pCloud

We make use of pCloud for cloud storage and that has consequences for the amount of space required in the home folders. pCloud uses a large cache to provide fast response and the default and minimum is 5 GB in each user's home folder. In addition it needs another 2 GB of headroom (presumably for buffering) on each drive. With three users that gives an additional requirement of ~17 GB over a system not using cloud storage. Other cloud storage implementations will also have consequences for storage requirements and again these will probably be per user.

Normal Constraints on the filesystem

One must take into account the various constraints from use of an SSD (EXT4 Filesystem, Partition Alignment and Overprovision) and the space required for the various other components. My normal provisioning goes as follows when I have a single boot system and most of it is the same for the Linux components of a Dual Boot System:

The first thing to understand is fairly obvious - you cannot and must not make changes to or back up a [file] system or partition which is in use as files will be continuously changing whilst you do so and some files will be locked and inaccessible.

That is where the LiveUSB is once more very useful if not essential. We cannot change a partition which is in use (mounted in the jargon) so we must make the initial changes to the partitions on the SSD drive from the LiveUSB. After that we can make many of the changes from within the system provided the partition is unmounted, which rules out the root or home partitions which will always be in use. It is best to create an extra user who can make changes or back up a user who is logged out and therefore static; we have already set up an administrative user with id 1000 for such purposes. So even if we only have one user set up we will probably need to create a temporary user at times, but that only takes seconds to set up.

Changing partitions which contain information is not something one wants to do frequently so it is best to decide what you will need, sleep on it overnight, and then do it all in one step from the LiveUSB. One should bear in mind that shrinking a partition while leaving its starting position fixed is normally not a problem; moving partitions or otherwise changing the start position takes much longer and carries more risk, including the risk of power failures or glitches part way through. I have never had problems but always take great care to back up everything before starting.

As soon as I was satisfied that the machine was working and I understood what the hardware was, I did the main partitioning of the SSD from the LiveUSB using gparted. This text belongs to a dual booted system with a new extra 1 TB SSD drive added before carrying out a default Mint install.

I then shut down the LiveUSB and returned to a normal user.

Mounting the DATA Drive

Earlier on we looked at the output of gparted and saw where we could get the UUIDs of the drives. I am going to give an alternative view of the same information, again showing the important part you will need, namely the UUIDs. The output follows for Gemini after partitioning and was obtained by running the command lsblk -o NAME,FSTYPE,UUID,SIZE in a terminal:

peter@gemini:~$ lsblk -o NAME,FSTYPE,UUID,SIZE
NAME          FSTYPE      UUID                                   SIZE
sda                                                            931.5G
├─sda1        vfat        3003-2779                              512M
├─sda2        ext4        4c6d1f4d-49c6-40df-a9e9-b335fbc4cafe  97.7G
├─sda3        ext4        6c5da67e-94b8-4a50-aece-e1c02cc0c6fe 696.4G
├─sda4        ext4        95df9c27-f804-4cbb-8af1-567b87060c82  78.1G
└─sda5        crypto_LUKS e1898a1c-491d-455c-8aab-09eaa09cc74b  58.9G
  └─_dev_sda5 ext4        2a7ee96e-e97f-4cc5-becd-34f9753bc946  58.9G
mmcblk0                                                         29.1G
├─mmcblk0p1   vfat        6A15-BF9D                              100M
├─mmcblk0p2                                                       16M
├─mmcblk0p3   ntfs        AED216D6D216A29F                        28G
└─mmcblk0p4   ntfs        0086171086170636                       999M
mmcblk0boot0                                                       4M
mmcblk0boot1                                                       4M
mmcblk2                                                         29.7G
└─mmcblk2p1   vfat        3232-3630                             29.7G
peter@gemini:~$ 

You can get it from a GUI but this is easier with everything in a single place!

Now let's look at the simplest of the above to implement. Partitions are mounted at boot time according to the 'instructions' in the file /etc/fstab and each mounting is described in a single line. So all one has to do is to add a single line. I will not go into detail about what each entry means but this is based on one of my working systems - if you want to know what they all mean try man fstab in a terminal!

UUID=6c5da67e-94b8-4a50-aece-e1c02cc0c6fe /media/DATA ext4 defaults 0 2

You do need to edit the file as a root user as the file is a system file owned by root and is protected against accidental changes. There is a simple way to get to edit it with root permissions in the usual editor (xed). Open the file browser and navigate to the /etc folder. Now right click and on the menu click 'Open as Root'. Another file manager window will open with a big red banner to remind you that you are a superuser with god like powers in the hierarchy of the system. You can now double click the file to open it and again the editor will show a red border and you will be able to edit and save the file. I like to make a backup copy first and a simple trick is to drag and drop the file with the Ctrl key depressed and it will make a copy with (copy) in the filename.

After you have copied in the string above you MUST change the UUID (Universally Unique IDentifier) to that of your own partition. The header in the file /etc/fstab says to find the UUID using blkid, which certainly works, or you can use the lsblk incantation above. That is what I used to do but I have found a clearer way using a built in program accessed by Menu -> Disks. This displays very clearly all the partitions on all the disks in the system including the UUID, which is a random number stored in the metadata of every file system in a partition and is independent of machine mounting order and anything else which could cause confusion. So you just have to select the partition and copy the UUID which is displayed below it. To ensure you copy the whole UUID I recommend right clicking the UUID -> 'select all' then 'copy'. You can then paste it into the line above replacing the example string. Save /etc/fstab and the drive should be mounted next time you boot up. If you make a mistake the machine may not boot but you can use the LiveUSB to either correct or replace the file from your copy.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda2 during installation
UUID=4c6d1f4d-49c6-40df-a9e9-b335fbc4cafe / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/mmcblk0p1 during installation
UUID=6A15-BF9D /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
#/dev/sda3
UUID=6c5da67e-94b8-4a50-aece-e1c02cc0c6fe /media/DATA ext4 defaults 0 2
#/dev/sda4
UUID=95df9c27-f804-4cbb-8af1-567b87060c82 /home ext4 defaults 0 2
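One point which is easy to overlook is that the mount point folder itself must exist before fstab can mount anything onto it. A minimal terminal sketch covering that, a safety copy of fstab and a test without rebooting (the folder name /media/DATA is just the one used in the example line above) is:

sudo mkdir -p /media/DATA                # create the mount point folder if it does not already exist
sudo cp /etc/fstab /etc/fstab.backup     # keep a copy in case of a typing mistake
sudo mount -a                            # mounts everything in fstab which is not already mounted
findmnt /media/DATA                      # confirms the partition is now mounted

The same applies to any further mount points you add later, such as /media/home in the next section.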

Transferring to a separate partition for home

This is much more tricky, especially as it requires an understanding of the Linux file system. When you mount a partition you specify where it appears in the single folder structure starting at root. We have already mounted a Data drive which we now access at /media/DATA. When we change to a separate home folder we will be moving from a situation where /home is on the same partition as the root folder structure to one where /home is on a separate partition. This means that the new /home will be overlaid on top of the old structure, which will no longer be visible or accessible and is effectively wasted space on the partition with root. You need to think about and understand this before proceeding.

So we need to copy the old user folders to the new partition in some way and then mount it. Eventually we will need to recover the wasted space. Just to compound the problems, a simple drag and drop or even a move in a terminal may not correctly transfer the folders as they are likely to contain what are called symbolic links, which point to a different location rather than actually being the file or folder, and a simple copy may move the item pointed to rather than the link, so special procedures are required. Fortunately there is a copy command using the terminal which will work for all normal circumstances, and a nuclear option of creating a compressed archive which is what I use for backups and transferring users between machines and which has never failed me.

As a diversion for those of an enquiring mind I found an article on how to see if symlinks are present at https://stackoverflow.com/questions/8513133/how-do-i-find-all-of-the-symlinks-in-a-directory-tree and used the following terminal command on my home folder:

find . -type l -ls

Explanation: find from the current directory . onwards all references of -type link and list -ls those in detail. The output confirmed that my home folder had gained dozens of symlinks with the Wine and Opera programs being the worst offenders. It also confirmed that the copy command and option I am going to use does transfer standard symlinks in the expected way. It found 67 in my home folder in .wine, .pcloud, firefox, menus and skypeforlinux.

The method that follows minimises use of the terminal and is quite quick although it may seem to have many stages. First we need to mount the new partition for home somewhere so we can copy the user folders into it. Seeing that we will be editing /etc/fstab anyway, an easy way is to edit fstab to mount the new partition at, say, /media/home just like above, reboot and we are ready to copy. The line we add at this stage will be

UUID=95df9c27-f804-4cbb-8af1-567b87060c82 /media/home ext4 defaults 0 2

We must do the copies with the users logged out; if there is only one user we need to create a temporary user just to do the transfer and we can then delete it. So we are going to be forced once more to use the terminal so we can do the copies using the cp command with the special option -a, which is short for --archive, which never follows symbolic links whilst still preserving them and copies folders recursively:

sudo cp -a /home/user1/ /media/home/
and repeat for the other users

Remember that you must log in and out of users in a way that ensures you never copy an active (logged in) user.
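Before changing the mount point it is worth a quick check that the copies look right; a possible check, assuming a user called user1 as in the copy command above, is:

ls -ln /home/user1 /media/home/user1       # the numeric owner and group ids should match exactly
sudo du -sh /home/user1 /media/home/user1  # the total sizes should be very close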

Now we edit /etc/fstab again so the new partition mounts at just /home instead of /media/home, reboot, and the change should be complete. I will show another way to open fstab for editing, or you can use the previous way. Again I am not going to explain fully how it works but just state that it is actually the recommended way to open the text editor as root:

xed admin:///etc/fstab

And change the mount point to /home in fstab

UUID=95df9c27-f804-4cbb-8af1-567b87060c82 /home ext4 defaults 0 2

I suggest you leave it at that for a few days to make sure everything is OK, after which you can get out your trusty LiveUSB and find and delete the previous users' folders from the home folder on the root partition. Always use 'Delete' rather than a move to the trash when using root privileges or it will go to root's trash which is very difficult to empty.

Howto mount an ntfs partition to allow sharing with Windows

I have not implemented this [yet] on Gemini but used to do the equivalent on machines in the past. It is very similar to adding our data partition but the string added to fstab is different in several ways. Here I am going to assume we will call the mount point SHARED and that the partition has been formatted as ntfs. The mount point for the SHARED partition (/media/SHARED) should be set up along with all the other partitions, or space left for it during the planning, if you intend to try this.

Windows filesystems do not support Linux permissions and the concept of owners and groups is unknown to them, but we do need to arrange them on the drive when it is mounted. The reason this is important is that only the owner can reset time stamps on files, which is required by some synchronisation programs such as Unison. This also makes the choice of initial user important. The owner and group can be set in the options for mounting in fstab as below, where uid is the owner and gid is the group. The uid below can either be the 'prime' user, pcurtis in our case, or more generally it is a good idea to give it the matching id of 1000 and the gid of the adm group.

# /media/DATA was on /dev/sda5 during installation
UUID=2FBF44BB538624C0 /media/DATA ntfs defaults,umask=000,uid=1000,gid=adm 0 0

The umask setting means that the mask which sets the file permissions allows everyone read, write and execute access rather than just the owner and group (adm is a group which most users who are administrators will belong to or can be added to) - you may want to keep that tighter. This is a mask so 007 would prevent access by anyone other than the owner and those in the adm group.
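As an illustration of the tighter setting just mentioned, the same sort of line with umask=007 keeps the shared area to the owner and the adm group only; this is just a sketch - the UUID is still the example one and the mount point is the SHARED name assumed in this section:

UUID=2FBF44BB538624C0 /media/SHARED ntfs defaults,umask=007,uid=1000,gid=adm 0 0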

Gemini has a slot for a microSD card which could be used as a shared area with Windows, especially if a fast microSD card was used. I will probably experiment.

Appendix 5

Encrypting a partition for our Vault (or an external USB Drive for Back-up)

You should now be in a good position but for those who really want to push their limits I will explain how I have encrypted partitions and USB drives. This can only be done from the terminal and I am only going to give limited explanation (because my understanding is also limited!). Consider it to be like baking a cake - the recipe can be comprehensive but you do not have to understand the details of the chemical reactions that turn the most unlikely ingredients into a culinary masterpiece.

Firstly you do need to create a partition to fill the space but it does not need to be formatted with a specific filesystem. All you need to know is the device number, in my case /dev/sda5; if yours is different when you examine it in gparted or using lsblk -o NAME,FSTYPE,UUID,SIZE then change the commands which follow. Anything in the partition will be completely overwritten. There are several stages: first we add encryption directly to the drive at block level, then we add a filesystem, and finally we arrange for the encrypted drive to be mounted when a user logs in.

Add LUKS encryption to the partition

sudo cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 2000 --use-random luksFormat /dev/sda5

The options do make some sense except for --iter-time, which is the time in milliseconds allocated to deriving the key from a passphrase and likewise to checking a passphrase when unlocking. Each user may have a different password, so the search through the key slots means the total time spent during login can get quite long, hence choosing 2 seconds rather than the 5 seconds I used to use. During the dialog you will first be asked for your normal login password, as it is a root activity, and then for the secure passphrase for the drive.

Now we mount and format the drive with an ext4 filesystem

Next you have to open your new drive and mount it - you will be asked for the passphrase you gave above:

sudo cryptsetup open --type luks /dev/sda5 sda5e

Now we format the device with an ext4 filesystem and it is important to name it at this point - VAULT here, or your chosen name in the case of an external USB drive; mine are called LUKS_A etc.

sudo mkfs.ext4 /dev/mapper/sda5e -L VAULT

There is one further important step and that is to back up the LUKS header, which can get damaged, by:

sudo cryptsetup -v luksHeaderBackup /dev/sda5 --header-backup-file LuksHeaderBackup_gemini.bin

and save the file somewhere safe.
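Should the header ever be damaged, the matching restore command writes the backup straight back; a sketch using the same device and file name as above (only to be used in a real emergency) is:

sudo cryptsetup luksHeaderRestore /dev/sda5 --header-backup-file LuksHeaderBackup_gemini.bin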

If you were formatting an external USB Drive the job is done, if it is an internal drive we need to mount it automatically.

Set up to mount the drive when a user logs in.

We now need to mount it when a user logs in. This is not done in /etc/fstab for an encrypted partition but by a utility called pam_mount which first has to be installed if you did not do it earlier.

It is in the list I provided earlier, but if you use the following it will just tell you there is nothing to do if it is already installed, or you can use the Synaptic Package Manager:

sudo apt-get install libpam-mount

We now need to find its UUID by use of lsblk -o NAME,FSTYPE,UUID,SIZE which will reveal that there are two associated UUIDs, one for the partition, which is the one we need, and another for the filesystem within it. There is already an example of this output higher up in this document for the example system.

Now we edit its configuration file using the information from above. The following magic incantation will, after a couple of passwords, open the configuration file for editing.

xed admin:///etc/security/pam_mount.conf.xml

So the middle bit gains the single line as below, but make sure you change the UUID to that of your own crypto_LUKS partition. Note this is the UUID for the partition, not for the ext4 filesystem within it. See the lsblk output above in the document and check the UUIDs.

...

<!-- Volume definitions -->

<volume fstype="crypt" path="/dev/disk/by-uuid/e1898a1c-491d-455c-8aab-09eaa09cc74b" mountpoint="/media/VAULT" user="*" />

<!-- pam_mount parameters: General tunables -->

..

Save the file and reboot (a logout and in should also work) and we are almost finished.

When you have rebooted you will see the folder but will not be able to add anything to it as it is owned by root!

The following will set all the owners, groups and permissions in VAULT. The primary user, pcurtis here, could equally be referred to by id 1000 to make this more general. This uses the octal notation and sets files to 664 (owner and group read/write, others read) and folders to 775 (folders always require execute permission to be entered). I am not going to try to explain the following magic incantation at this time, or maybe ever, but it works and is very fast with no unnecessary writes to wear out your SSD or waste time. It needs to be run as user pcurtis to change the permissions, or put an extra sudo in front of each xargs.

sudo chown -R pcurtis:adm /media/VAULT && find /media/VAULT -type f -print0 | xargs -0 chmod 664 && find /media/VAULT -type d -print0 | xargs -0 chmod 775

This may have looked daunting but about eight terminal commands and adding one line to a file was all it needed.
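A quick way to confirm everything has worked after the next login is to check that the volume really is mounted and writable as a normal user, for example:

findmnt /media/VAULT                                      # shows the mapped device and mount point if pam_mount has done its job
touch /media/VAULT/test.txt && rm /media/VAULT/test.txt   # confirms a normal user can create and remove a file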

Changing and adding Passwords

This final step only applies if your users have different passwords or a password has been changed. The LUKS filesystem has 8 slots for different passwords so you can have seven users with different passwords as well as the original passphrase you set up. So if you change a login password you also have to change the matching password slot in LUKS, by first unmounting the encrypted partition then using a combination of:

sudo cryptsetup luksAddKey /dev/sda5
sudo cryptsetup luksRemoveKey /dev/sda5
# or
sudo cryptsetup luksChangeKey /dev/sda5

You will first be asked for your login password as you are using sudo, then you will be asked for a password/passphrase which can be a valid one for any of the key slots when adding or removing keys, i.e. any current user password will allow you to make the change. Do not get confused with all the passwords and passphrases! It looks like this:

peter@gemini:~$ sudo cryptsetup luksAddKey /dev/sda5
[sudo] password for peter:
Enter any existing passphrase:
Enter new passphrase for key slot:
Verify passphrase:
peter@gemini:~$

Warning: NEVER remove every password or you will never be able to access the LUKS volume again, and I always check that the new password works before removing the old one.
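Before and after juggling passwords it is worth checking which key slots are actually occupied; cryptsetup can list them, for example:

sudo cryptsetup luksDump /dev/sda5    # shows the header details including which key slots are in use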

Mounting of LUKS volumes from SSH, a Workaround for a bug/feature of pam_mount

I am not sure if this should be in this document but it might affect you if you use SSH from a phone. I have suffered a problem for a long time from what is either a feature or a bug depending on your viewpoint. When any user is logged out the LUKS VAULT is closed. That seems to apply even when multiple users are logged in, as a 'security' feature. The same applies when using unison to synchronise between machines, so my LUKS encrypted common partition which is mounted as VAULT is unmounted (closed) when unison is closed. The same goes for any remote login, except when the user accessed remotely and the user logged in on the machine are the same. This is bad news if another user was using VAULT or tries to access it - it certainly winds my wife up when a lot of work is potentially lost as her storage disappears.

I finally managed to get sufficient information from the web to understand a little more about how the mechanism (pam_mount) works to mount an encrypted volume when a user logs in remotely or locally. It keeps a running total of the logins in separate files for each user in /var/run/pam_mount and decrements them when a user logs out. When a count falls to zero the volume is unmounted REGARDLESS of other mounts which are in use with counts of one or greater in their files. One can watch the count incrementing and decrementing as local and remote users log in and out. So one solution is to always keep a user logged in either locally or remotely to prevent the count decrementing to zero and the automatic unmounting taking place. This is possible but a remote user could easily be logged out if the remote machine is shut down or other mistakes take place. A local login needs access to the machine and is still open to mistakes. One early thought was to log into the user locally in a terminal then move it out of sight and mind to a different workspace!

The solution I plan to adopt uses a low level command which forms part of the pam_mount suite. It is called pmvarrun and can be used to increment or decrement the count. If used from the same user it does not even need root privileges. So before I use unison or a remote login, say to defiant as user pcurtis, I do a remote login from wherever using ssh, then call pmvarrun to increment the count by 1 for user pcurtis, and exit. The following is the entire terminal activity required.

peter@helios:~$ ssh pcurtis@defiant
pcurtis@defiant's password:

27 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Last login: Sat Oct 24 03:54:43 2020
pcurtis@defiant:~$ pmvarrun -u pcurtis -o 1
2
pcurtis@defiant:~$ exit
logout
Connection to defiant closed.
peter@helios:~$

The first time you do an ssh remote login you may be asked to confirm that the 'keys' can be stored. Note how the user and machine change in the prompt.

I can now use unison and remote logins as user pcurtis to machine defiant until defiant is halted or rebooted, which should unmount VAULT cleanly. Problem solved!

I tried adding the same line to the end of the .bashrc file in the pcurtis home folder. The .bashrc file is run when a user opens a terminal and is used for general and personal configuration such as aliases. That works but gets called every time a terminal is opened, and I found a better place is .profile which only gets called at login; the count still keeps increasing but at a slower rate. You can check the count at any time by doing an increment of zero:

pmvarrun -u pcurtis -o 0
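For completeness, the line added near the end of .profile is just the increment command; a minimal sketch, with the output (the new count) discarded so the login stays quiet, is:

pmvarrun -u pcurtis -o 1 > /dev/null   # bump pam_mount's login count for pcurtis at every login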

Appendix 6

Encrypting your home folder

The ability to encrypt your home folder has been built into Mint for a long time and it is an option during installation for the initial user. It is well worth investigating if you have a laptop, but there are a number of considerations and it becomes far more important to back up your home folder in the mounted (un-encrypted) form to a securely stored hard drive, as it is possible to get locked out in a number of less obvious ways such as changing your login password incorrectly.

There is a passphrase generated for the encryption which can in theory be used to mount the folder but the forums are full of issues with fewer solutions! You should generate it for each user by:

ecryptfs-unwrap-passphrase

Now we will find there is considerable confusion about what is being produced and what is being asked for in many of the ecryptfs options and utilities, as it will request your passphrase to give you your passphrase! I will try to explain. When you log in as a user you have a login password or passphrase. The folder is encrypted with a much longer randomly generated passphrase which is looked up when you log in with your login password, and that is what you are being given here and what is needed if something goes dreadfully wrong. These are [should be] kept in step if you change your login password using the GUI Users and Groups utility but not if you do it in a terminal. It is often unclear which password is required as both are often just referred to as the passphrase in the documentation.

Encrypting an existing user's home folder.

It is possible to encrypt an existing user's home folder provided there is at least 2.5 times the folder's size available in /home - a lot of workspace is required and a backup is made.

You also need to do it from another user's account. If you do not already have one, an extra basic user with admin (sudo) privileges is required and the user must be given a password otherwise sudo can not be used.

You can create this basic user very easily and quickly using Users and Groups by Menu -> Users and Groups -> Add Account, set Type to Administrator, provide username and Full name... -> Create -> Highlight User, Click Password to set a password, otherwise you can not use sudo.

Restart and log in to your new basic user. You may get errors if you just log out as the gvfs file system may still have files in use.

Now you can run this command to encrypt a user:

sudo ecryptfs-migrate-home -u user

You'll have to provide your user account's login password. After you do, your home folder will be encrypted and you should be presented with some important notes. In summary, the notes say:

  1. You must log in as the other user account immediately – before a reboot!
  2. A copy of your original home directory was made. You can restore the backup directory if you lose access to your files. It will be of the form user.8random8
  3. You should generate and record the recovery passphrase (aka Mount Passphrase).
  4. You should encrypt your swap partition, too.

The highlighting is mine and I reiterate that you must log out and log in to the user whose account you have just encrypted before doing anything else.

Once you are logged in you should also create and save somewhere very safe the recovery phrase (also described as a randomly generated mount passphrase). You can repeat this any time whilst you are logged into the user with the encrypted account like this:

user@lafite ~ $ ecryptfs-unwrap-passphrase
Passphrase:
randomrandomrandomrandomrandomra
user@lafite ~ $

Note the confusing request for a Passphrase - what is required is your login password/passphrase. This will not be the only case where you will be asked for a passphrase which could be either your Login passphrase or your Mount passphrase! The Mount Passphrase is important - it is what actually unlocks the encryption. There is an intermediate stage when you log into your account where your login password is used to temporarily regenerate the actual mount passphrase. This linkage needs to be updated if you change your login password, and for security reasons this is not done if you change your login password in a terminal using passwd user, which could be done remotely. If you get the two out of step the mount passphrase may be the only way to retrieve your data, hence its great importance. It is also required if the system is lost and you are accessing backups on disk.

The documentation in various places states that the GUI Users and Groups utility updates the linkage between the Login and Mount passphrases, but I have found that the password change facility is greyed out in Users and Groups for users with encrypted home folders. In a single test I used just passwd from the actual user and that did seem to update both, and everything kept working and allowed me to log in after a restart.
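If the two ever do get out of step, ecryptfs-utils includes a command which re-wraps the stored mount passphrase with a new login password; a sketch of its use (run as the user concerned, and it prompts for the old and new wrapping passphrases) would be:

ecryptfs-rewrap-passphrase ~/.ecryptfs/wrapped-passphrase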

Mounting an encrypted home folder independently of login.

A command line utility ecryptfs-recover-private is provided to mount the encrypted data but it currently has several bugs when used with the latest Ubuntu or Mint.

  1. You have to specify the path rather than let the utility search.
  2. You have to manually link keychains with a magic incantation which I do not completely understand namely sudo keyctl link @u @s after every reboot. A man keyctl indicates that it links the User Specific Keyring (@u) to the Session Keyring (@s). See https://bugs.launchpad.net/ubuntu/+source/ecryptfs-utils/+bug/1718658 for the bug report

The following is an example of using ecryptfs-recover-private and the mount passphrase to mount a home folder as read/write (--rw option), doing a ls to confirm and unmounting and checking with another ls.

pcurtis@lafite:~$ sudo keyctl link @u @s
pcurtis@lafite:~$ sudo ecryptfs-recover-private --rw /home/.ecryptfs/pauline/.Private
INFO: Found [/home/.ecryptfs/pauline/.Private].
Try to recover this directory? [Y/n]: y
INFO: Found your wrapped-passphrase
Do you know your LOGIN passphrase? [Y/n] n
INFO: To recover this directory, you MUST have your original MOUNT passphrase.
INFO: When you first setup your encrypted private directory, you were told to record
INFO: your MOUNT passphrase.
INFO: It should be 32 characters long, consisting of [0-9] and [a-f].

Enter your MOUNT passphrase:
INFO: Success! Private data mounted at [/tmp/ecryptfs.8S9rTYKP].
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
Desktop Dropbox Pictures Templates
Documents Videos Downloads Music Public
pcurtis@lafite:~$ sudo umount /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$

The above deliberately took the long way rather than use the matching LOGIN passphrase as a demonstration.

I have not bothered yet with encrypting the swap partition as it is rarely used if you have plenty of memory and swappiness set low as discussed earlier.
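For completeness, the ecryptfs-utils package does provide a helper to convert an existing swap partition to an encrypted one; I have not tested it on these machines (and it applies to a swap partition rather than the swap file used on this install) so treat it only as a pointer:

sudo ecryptfs-setup-swap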

Once you are happy you can delete the backup folder to save space. Make sure you Delete it (right click -> Delete) if you use nemo as root - do not risk it ending up in a root trash which is a pain to empty!

Feature or Bug - home folders remain encrypted after logout?

In the more recent versions of Ubuntu and Mint the home folders remain mounted after logout. This also occurs if you log in at a console or remotely over SSH. This is useful in many ways and you are still protected fully if the machine is off when it is stolen. You have little protection in any case if the machine is turned on and just suspended. Some people however log out and suspend expecting full protection, which is not the case. In exchange it makes backing up and restoring a home folder easier.

Backing up an encrypted folder.

A tar archive can be generated from a mounted home folder in exactly the same way as before, as the folder stays unencrypted when you change user to ensure the folder is static. If that were not the case you could use a console (by Ctrl Alt F2) to log in, then switch back to the GUI by Ctrl Alt F7, or log in via SSH, to make sure it was mounted to allow a backup. Either way it is best to log out at the end.

Another and arguably better alternative is to mount the user via ecryptfs-recover-private and back up using Method 3 from the mount point like this:

sudo ecryptfs-recover-private --rw /home/.ecryptfs/user1/.Private

cd /tmp/ecryptfs.8S9rTYKP && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

Appendix 7

Backing-up and Restoring to an Old or New system.

Introduction

This is a chicken and egg situation. It will be easy to grow the chicken but first we must create the egg. When I started to put my system together on Gemini my intention was to take backups from my other machines and restore them on Gemini, thus avoiding a large amount of configuration, and I quickly realised that it would provide an excellent cross check of my existing back-up procedures and a chance to refine them. In addition Gemini would be the fixed machine and, if considered capable, would be the centre of the backing up, i.e. machines would all be synchronised via Gemini and then back-ups would be made to hard disk.

In thinking it all through I started to consider when restoration from back-up would really be needed and I came up with the Grab Bag concept which I am writing about separately. In this case the Grab Bag has to contain the minimum that enables one to get back in operation when away from base if one's only machine is dropped in the ocean, abandoned due to earthquakes (yes we spend a lot of time sailing and in New Zealand and have real grab bags), or just has a hard disk failure. So this section has turned out to be about much more than a token statement that one should always back up. It does tend to concentrate on our own, somewhat complex, set-up but that was also designed to provide a robust system.

From the point of view of quickly getting back to business, restoring a user's folder is arguably the most important, so we need to have procedures in place which back up the users' home folders on a regular basis and make sure they are duplicated onto a drive which we have with us in the grab-bag. Home folders are complex, typically contain hundreds of thousands of files, most from email and browsers, and are large, ~15 Gbytes in our case - the actual machine I am writing on has my home folder at 30 Gbytes with 138,000 files, so backups to compressed archives and restores are not quick. The speed depends on the processor for compressing and the drives for fetching and storing, but on our machines with SSD storage and Seagate USB 3.1 hard drives encrypted with LUKS it takes about 1 minute per Gbyte to backup or restore using a tar archive, say 20 minutes to backup or restore each user from our hard drive.

The restoring of DATA and VAULT is also slow but less urgent as the important parts can always be accessed whilst still on the backup drive. Few of our machines are big enough to hold everything simultaneously; usually old video folders only exist on the backup USB drives until we want to edit them. But as an indication, on the machine I am currently using DATA is about 550 Gbytes which includes our web site, our music (70 Gbytes), our digital pictures from nearly 20 years (320 Gbytes) and 10 years of unedited video (150 Gbytes). In contrast VAULT only contains 4 Gbytes of information and the rest is backups of home folders etc. which, of course, also need to be kept secure. Looking at DATA, a direct copy would take at least 3 hours.

What should be clear is that having a recent back-up with one is crucial, and it should also be clear that there is a huge security risk if some of the backup is un-encrypted. So we will first look at the back-up requirements and procedures.

Overall Backup Philosophy for Mint

My thoughts on backing up have evolved considerably over time and now take much more into account the use of several machines and the sharing between them and within them, giving redundancy as well as security of the data. They have been documented for many years at Sharing, Networking, Backup, Synchronisation and Encryption under Linux Mint. That page now looks much more at the ways the backups are used; they are not just a way of restoring a situation after a disaster or loss but also about cloning and sharing between users, machines and multiple operating systems. The thinking continues to evolve to take into account the use of data encryption. We have already looked at some of this from a slightly different perspective on the overall system design of our machines.

So firstly let's look more specifically at the areas that need to be backed up on our machines:

  1. The Linux operating system(s), mounted at root ( / ). This area contains all the shared built-in and installed applications but none of the configuration information for the applications or the desktop manager which is specific to users. Mint has a built in utility called TimeShift which is fundamental to how potential regressions are handled - it does everything required for this area's backups and can be used for cloning. Cloning is not easy for rebuilding away from home and a reinstall and reload of programs will be much safer on a different machine. TimeShift will be covered in detail in a separate section.
  2. The Users' Home Folders, which are folders mounted within /home and contain the configuration information for each of the applications as well as the desktop manager settings which are specific to users such as themes, panels, applets and menus. They also contain all the data belonging to a specific user including the Desktop and the standard system folders such as Documents, Video, Music and Photos etc. They will probably also contain the email 'profiles'. This is the most challenging area with the widest range of requirements so is the one covered in the greatest depth here.
  3. Shared DATA including encrypted data in VAULT. The above covers the minimum areas required but I have an additional DATA area which is available to all operating systems and users and is periodically synchronised between machines as well as being backed up. This is kept independent and has a separate mount point. In the case of machines dual booted with Windows it could use a filesystem format compatible with Windows and Linux such as NTFS. The requirement for easy and frequent synchronisation means Unison is the logical tool, both between machines and for the associated synchronisation to large USB hard drives for backup. Most of our machines also have a similar encrypted partition called VAULT for sensitive data.
  4. Email and Browsers (profiles). I am also going to mention email specifically as that has particular issues: it needs to be collected on every machine as well as pads and phones, and some track kept of replies regardless of source. All incoming email is retained on the POP servers we use for months if not years, and all outgoing email is copied to either a separate account accessible from all machines or, where that is not possible automatically such as on Android, a copy is sent back to the sender's inbox.

    Email is also a major security risk as saved passwords for the email servers have limited or no protection. Thunderbird has a self contained 'profile' where all the local configuration and the filing system for emails is retained, and that profile, along with the matching one for the Firefox browser, needs to be backed up; how best to do that depends on where they are held.

Physical Implications of Backup Philosophy - Partitioning

I am not going to go into this in great depth as it has already been covered in other places but to allow this section to be self contained I will reiterate the philosophy which is:

  1. The Linux system should be in a separate partition and there are advantages in having two partitions available for Linux systems so new versions can be run for a while before committing to them.
  2. The folder containing all the users' home folders should be a separate partition mounted as /home. This separates the various functions and makes backup, sharing and cloning easier.
  3. When one has an SSD the best speed will result from having the Linux systems and the home folder on the SSD, especially if the home folders are going to be encrypted.
  4. Shared DATA should be in a separate partition mounted at /media/DATA. If one is sharing with a Windows system it should be formatted as ntfs which also reduces problems with permissions and ownership with multiple users. DATA can be on a separate, slower but larger, hard drive.
  5. If you have an SSD swapping should be minimised and the swap partition should be on a hard drive if one is available, to maximise SSD life. Swap obviously does not require any back-up as it is transient.
  6. Encryption should be considered on laptops which leave the home. Home folder encryption and encrypted drives are both possible and, if drive capacity allows, one should allocate space for an encrypted partition even if not implemented initially - in our systems it is mounted at /media/VAULT. It is especially important that email is in an encrypted area.

The Three Parts to Backing Up

1. System Backup - TimeShift - Scheduled Backups and more.

TimeShift, which is now fundamental to the update manager philosophy of Mint, makes backing up the Linux system very easy. To quote: "The star of the show in Linux Mint 19, is Timeshift. Thanks to Timeshift you can go back in time and restore your computer to the last functional system snapshot. If anything breaks, you can go back to the previous snapshot and it's as if the problem never happened. This greatly simplifies the maintenance of your computer, since you no longer need to worry about potential regressions. In the eventuality of a critical regression, you can restore a snapshot (thus cancelling the effects of the regression) and you still have the ability to apply updates selectively (as you did in previous releases)." The best information I have found about TimeShift and how to use it is by the author.

TimeShift is similar to applications like rsnapshot, BackInTime and TimeVault but with different goals. It is designed to protect only system files and some settings common to all users. User files such as documents, pictures and music are excluded. This ensures that your files remain unchanged when you restore your system to an earlier date. Snapshots are taken using rsync and hard-links. Common files are shared between snapshots which saves disk space. Each snapshot is a full system backup that can be browsed with a file manager. TimeShift is efficient in its use of storage but it still has to store the original and all the additions/updates over time. The first snapshot seems to occupy slightly more disk space than the root filesystem and six months of additions added approximately another 35% in my case. I run with a root partition / and separate partitions for /home and DATA. Using Timeshift means that one needs to allocate at least an extra two fold storage over what one would expect the root file system to grow to.

In the case of the Defiant the root partition has grown to about 11 Gbytes and 5 months of Timeshift added another 4 Gbytes, so the partition with the /timeshift folder needs to have at least 22 Gbytes spare if one intends to keep a reasonable span of scheduled snapshots over a long time period. After three weeks of testing Mint 19 my TimeShift folder had reached 21 Gbytes for an 8.9 Gbyte system!

These space requirements for TimeShift obviously have a big impact on the partition sizes when one sets up a system. My Defiant was set up to allow several systems to be employed with multiple booting. I initially had the timeshift folder on the /home partition which had plenty of space but that does not work with a multiboot system sharing the /home folder. Fortunately two of my partitions for Linux systems are plenty big enough for use of TimeShift and the third, which is 30 Gbytes, is acceptable if one is prepared to prune the snapshots occasionally. With Mint 20 and a lot of installed programs I suggest the minimum root partition is 40 Gbytes and preferably 50 Gbytes.
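Pruning can be done from the TimeShift GUI but it also has a command line interface which is convenient when space gets tight; a couple of illustrative commands (the snapshot name is just an example) are:

sudo timeshift --list                                      # list the existing snapshots
sudo timeshift --delete --snapshot '2020-10-01_10-00-00'   # delete a single named snapshot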

It should be noted that it is also possible to use timeshift to clone systems between machines - that is covered in All Together - Sharing, Networking, Backup, Synchronisation and Encryption but we do not need it here.

2. Backing up Users - Home Folder Archiving using Tar.

The following covers my preferred and well tested mechanism for backing up the home folders. I have been doing this on a monthly basis on 3 machines for many years. An untested procedure is of little use, so you will be pleased to know I have also done a good number of restorations, usually when moving a complete user from one machine to another (cloning), and that will also be covered.

The only catch is that the only mechanism I trust does require use of a terminal, although it is only a single command in a terminal to create the archive and likewise two commands to restore an archive. Once the commands are tailored to include your own username and the drive name you use they can just be repeated - the final version I provide even adds a date stamp to the filename!

Tar is a very powerful command line archiving tool around which many of the GUI tools are based and it works on most Linux distributions. In many circumstances it is best to access it directly to back up your system. The resulting files can also be accessed (or created) by the archive manager by right clicking on a .tgz, .tar.gz or .tar.bz2 file. Tar is an ideal way to backup many parts of our system, in particular one's home folder. A big advantage of tar is that (with the correct options) it is capable of making copies which preserve all the linkages within the folders - simple copies do not preserve symlinks correctly and even an archive copy (cp -a) is not as good as a tar archive.

The backup process is slow (15 mins plus) and the archive file will be several Gbytes even for the simplest system. Mine are typically 15 Gbytes and our machines achieve between 0.7 and 1.5 Gbytes/minute. After it is complete the file should be moved to a safe location, preferably an external device which is encrypted or kept in a very secure place. You can access parts of the archive using the GUI Archive Manager by right clicking on the .tgz file - again slow on such a large archive - or restore (extract in the jargon) everything, usually to the same location.

Tar, in the simple way we will be using it, takes a folder and compresses all its contents into a single 'archive' file. With the correct options this can be what I call an 'exact' copy where all the subsidiary information such as timestamps, owner, group and permissions are stored without change. Soft (symbolic) links and hard links can also be retained. Normally one does not want to follow a link out of the folder and put all of the target into the archive so one needs to take care. Tar also handles 'sparse' files but I do not know of any in a normal user's home folder.

As mentioned above, the objective is to back up each user's home folder so it can be easily replaced on the existing machine or on a replacement machine. The ultimate test is: can one back up the user's home folder to the tar archive, delete it (or, safer, rename it) and restore it exactly so the user can not tell in any way? The home folder is, of course, continually changing when the user is logged in so backing up and restoring must be done when the user is not logged in, i.e. from a different user, a LiveUSB or from a console. Our systems reserve the first installed user for such administrative activities. For completeness I note:

You can create a user very easily and quickly using Users and Groups by Menu -> Users and Groups -> Add Account, Set Type: Administrator -> Add -> Click Password to set a password (otherwise you can not use sudo)
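The same can be done with a couple of terminal commands if you prefer; a minimal sketch (the username tempadmin is just an example) is:

sudo adduser tempadmin            # prompts for a password and creates the home folder
sudo usermod -aG sudo tempadmin   # give the new user admin (sudo) rights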

Firstly we must consider what is, arguably, the most fundamental decision about backing up: the way we specify the location being saved when we create the tar archive and when we extract it - in other words the paths must be complementary and restore the folder to the same place. If we store absolute locations we must extract in the same way. If they are relative we must extract the same way. So we will always have to consider pairs of commands depending on what we chose. In All Together - Sharing, Networking, Backup, Synchronisation and Encryption I looked at several options but in practice I have only ever used one, which is occasionally referred to as Method 1 in this document.

My preferred method (Method 1 in the full document) has absolute paths and shows home when we open the archive with just a single user folder below it. This is what I have always used for my backups and the folder is always restored to /home on extraction. In its simplest form (there will be a better one later) it looks like this:

sudo tar --exclude=/home/*/.gvfs -cvpPzf "/media/USB_DATA_DRIVE/mybackup1.tgz" /home/user1/

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA_DRIVE/mybackup1.tgz" -C /

Notes:

Archive creation options: The options used when creating the archive are: create archive, verbose mode (you can leave this out after the first time), retain permissions, -P do not strip the leading slash, gzip the archive, and output to a file. Then follows the name of the file to be created, mybackup1.tgz, which in this example is on an external USB drive called 'USB_DATA_DRIVE' - the backup name should include the date for easy reference. Next is the directory to back up. There are objects which need to be excluded - the most important of these is your back up file if it is in your /home area (not needed in this case) or it would be recursive! It also excludes the folder (.gvfs) which is used dynamically by a file mounting system and is locked, which stops tar from completing. The problems with files which are in use can be removed by creating another user and doing the backup from that user - overall that is a cleaner way to work.

Other exclusions: There are other files and folders which should be excluded. In our specific case that includes the cache area for the pCloud cloud service as it will be best to let that recreate and avoid potential conflicts. ( --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" )

Archive Restoration uses the options - extract, verbose, retain permissions, from file and gzip. This will take a while. The -C / option ensures that the directory is Changed to a specified location; in Method 1 this is root so the files are restored to their original locations.

tar options style used: The options are in the old options style written together as a single clumped set, without spaces separating them; the one exception is that recent versions of tar (>1.28) require exclusions to immediately follow the tar command in the format shown. Mint 20 has version 1.30 as of November 2020 so that ordering applies.

Higher compression: If you want to use a higher compression method the option -j can be used in place of -z option and .tar.bz2" should be used in place of .tgz for the backup file extension. This will use bzip2 to do the compressing - j option. This method will take longer but gives a smaller file - I have never bothered so far.

Deleting Files: If the old system is still present note that tar only overwrites files, it does not delete files from the old version which are no longer needed. I normally restore from a different user and rename the user's home folder before running tar as above; when I have finished I delete the renamed folder. This needs root/sudo and the easy way is to right click on a folder in Nemo and 'open as root' - make sure you use a right click delete to avoid going into a root deleted items folder.

Rename: The rename command somewhat confusingly uses the mv (move) command with option -T

Deleting Archive files: If you want to delete the archive file then you will usually find it is owned by root so make sure you delete it in a terminal - if you use a root browser then it will go into a root Deleted Items which you can not easily empty so it takes up disk space for ever more. If this happens then read http://www.ubuntugeek.com/empty-ubuntu-gnome-trash-from-the-command-line.html and/or load the trash-cli command line trash package using the Synaptic Package Manager and type:

sudo trash-empty

Summary - Archiving a home folder and restoring

Everything has really been covered above so this is really just a slight expansion of the above with the addition of some suggested naming conventions.

This uses 'Method 1' where all the paths are absolute so the folder you are running from is not an issue. This is the method I have always used for my backups so it is well proven. The folder is always restored to /home on extraction so you need to remove or preferably rename the users folder before restoring it. If a backup already exists delete it or use a different name. Both creation and retrieval must be done from a different or temporary user to avoid any changes taking place during the archive operations.

sudo tar --exclude=/home/*/.gvfs -cvpPzf "/media/USB_DATA_DRIVE/backup_machine_user_a_$(date +%Y%m%d).tgz" /home/user_a/

sudo mv -T /home/user_a /home/user_a-bak
sudo tar xvpfz "/media/USB_DATA_DRIVE/backup_machine_user_YYYYmmdd.tgz" -C /

Note the automatic inclusion of the date in the backup file name and the suggestion that the machine and user are also included. In this example the intention is to send the archive straight to a plug-in USB hard drive, encrypted is best.
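Before renaming or deleting anything it is reassuring to confirm the archive is readable; listing the first few entries is enough, for example:

sudo tar -tzf "/media/USB_DATA_DRIVE/backup_machine_user_YYYYmmdd.tgz" | head   # lists the first few paths stored in the archive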

Cloning between systems using a backup archive.

This is what we are, in effect, doing when we are rebuilding our system on the new machine. This is easy provided the users you are cloning were installed in the same order and you make the new usernames the same as the old. I have done that many times. There is however a catch which you need to watch for, and that is that the way Linux stores user names is a two stage process. If I set up a system with the user pcurtis when I install, that is actually just an 'alias' to the numeric user id 1000 in the Linux operating system - the user ids in Mint start at 1000. If I then set up a second user peter that will correspond to user id 1001. If I have a disaster and reinstall, and this time start with pauline, she will have user id 1000 and peter 1001. If I then get my carefully backed up folders and restore them, the folders will have all the owners etc. incorrect as they use the underlying numeric value, apart, of course, from where the name is used in hard coded scripts etc. This is why I keep reiterating in different places that the users must not only have the same names but be installed in the same order.

You can check all the relevant information for the machine you are cloning from by use of id in a terminal:

pcurtis@gemini:~$ id
uid=1000(pcurtis) gid=1000(pcurtis) groups=1000(pcurtis),4(adm),6(disk),
20(dialout),21(fax),24(cdrom),25(floppy),26(tape),29(audio),30(dip),
44(video),46(plugdev),104(fuse),109(avahi),110(netdev),112(lpadmin),
120(admin),121(saned),122(sambashare)

So when you install on a new machine you should always use the same usernames and passwords as on the original machine and then create an extra user with admin (sudo) rights for convenience for the next stage, or do what we do and always save the first user for this purpose.

Change to your temporary user, rename the first user's folder (you need to be root) and replace it from the archived folder from the original machine. Now login to the user again and that should be it. At this point you can delete the temporary user. If you have multiple users to clone the user names must obviously be the same and, more importantly, the numeric id must be the same as that is what is actually used by the kernel; the username is really only a convenient alias. This means that the users you clone must always be installed in the same order on both machines or operating systems so they have the same numeric UID. I have repeated myself about this several times because it is so important!

So we first make a backup archive in the usual way and take it to the other machine or switch to the other operating system and restore as usual. It is prudent to backup the system you are going to overwrite just in case.

So first check the id on both machines for the user(s) by use of

id user

If and only if the ids are the same can we proceed

On the first machine and from a temporary user:

sudo tar --exclude=/home/*/.gvfs -cvpPzf "/media/USB_DATA/backup_machine_user1_$(date +%Y%m%d).tgz" /home/user1/

On the second machine or after switching operating system and from a temporary user:

sudo mv -T /home/user1 /home/user1-bak # Rename the user.

sudo tar xvpfz "/media/USB_DATA/mybackup_with_date.tgz" -C /

When everything is well tested you can delete the renamed folder

 

3. DATA Synchronisation and Backup - Unison

This is a long section because it contains a lot of background, but in practice I just have one central machine which has Unison set up and which offers a set of profiles, each of which will check and then do all the synchronisation to a drive or machine very quickly. I do it once a month as a backup and when required if, for example, I need to edit on different machines. When one is away one will want to do a selective synchronisation of the areas of data which are being changed or added to - this would include pictures, video and possibly any music you had ripped.

This is what Unison looks like when editing or setting up a synchronisation profile.

Screenshot

So how does it work? Linux has a very powerful tool available called Unison to synchronise folders, and all their subfolders, either between drives on the same machine or across a local network using a secure transport called SSH (Secure SHell). At its simplest you can use a Graphical User Interface (GUI) to synchronise two folders which can be on any of your local drives, a USB external hard drive or on a networked machine which also has Unison and SSH installed. Versions are even available for Windows machines, but one must make sure that the Unison version numbers are compatible, even between Linux versions. That has caused me a lot of grief in the past and was largely instrumental in causing me to upgrade some of my machines from Mint 19.3 to Mint 20 earlier than I would otherwise have done.

If you are using the graphical interface, you just enter or browse for two local folders and it will give you a list of differences and recommended actions which you can review; it is a single keystroke to change any you do not agree with. Unison uses a very efficient mechanism to transfer and update files which minimises the data flows, based on a utility called rsync. The initial synchronisation can be slow, but after it has made its lists it is quite quick, even over a slow network between machines, because it is running on both machines and transferring the minimum of data - it is actually slower synchronising to another hard drive on the same machine.

The Graphical Interface (GUI) has become much more comprehensive than it was when I started using it and can now handle most of the options you may need; however it does not allow you to save a configuration under a different name. You can find the configurations very easily though, as they are all stored in a folder in your home folder called .unison, so you can copy and rename them to allow you to edit each separately - for example, you may want similar configurations to synchronise with several hard drives or other machines. The format is simple and largely self-explanatory.

For more complex synchronisation with multiple folders and perhaps exclusions you set up a more complex configuration file for each Synchronisation Profile and then select and run it from the GUI as often as you like. It is easier to do than describe - a file to synchronise my four important folders My Documents, My Web Site, Web Sites, and My Pictures is only 10 lines long and contains under 200 characters yet synchronises 25,000 files!

After Unison has done the initial comparison it lists all the potential changes for you to review. The review list is intelligent: if you have made a new folder full of sub folders of pictures it only shows the top folder level which has to be transferred. You have the option to put off, agree or reverse the direction of any file with differences. Often the differences are in the metadata, such as date or permissions, rather than the file contents, and enough information is supplied to resolve those differences - other times, when changes have been made independently in different places, the decisions may be more difficult, but there is an ability to show differences between simple (text) files.

Both Unison and SSH are available in the Mint repositories but need to be installed using System -> Administration -> Synaptic Package Manager and search for and load the unison-gtk and ssh packages or they can be directly installed from a terminal by:

sudo apt-get install unison-gtk ssh

The first is the GUI version of Unison, which also installs the terminal version if you need it. The second is a meta package which installs the SSH server and client, blacklists and various library routines.
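Because the version compatibility mentioned above matters, it is worth confirming that the Unison versions on the two machines match before attempting a network synchronisation. A quick check, assuming the other machine is reachable as pcurtis@helios (an illustrative hostname), is:

unison -version # version on this machine
ssh pcurtis@helios unison -version # version on the remote machine - the two should agree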

The procedures initially written in grab_bag.htm and gemini.htm differ slightly from what I have done in the past, in an attempt to get round a shortcoming in the Unison Graphical User Interface (GUI). In looking deeper into some of the aspects whilst writing up the monthly backup procedures I realised there were some unintended consequences, which has caused me to look again at this section and update it.

Unison is then accessible from the Applications Menu and synchronising can be tried out immediately using the GUI. There are some major cautions if the defaults are used - the creation/modification dates are not synchronised by default, so you lose valuable information related to files even though the contents remain unchanged. The behaviour can easily be set in the configuration files, or by setting the option times to true in the GUI. The other defaults also pose a risk of loss of associated data. For example, the user and group are not synchronised by default - they are less essential than the file dates and I normally leave them with their default values of false. Permissions, however, are set by perms, which is a mask, and read, write and execute are preserved by default, which in our case is not required. What one is primarily interested in is the contents of the file and the date stamps; the rest is not required if one is interested in a common data area, although they are normally essential to the security of a Linux system.

WARNINGs about the use of the GUI version of Unison:

I find it much easier and more predictable to do any complex setting up by editing the configuration/preference files (.prf files) stored in .unison. My recommendation is that you do a number of small scale trials until you understand and are happy, then start to edit the configuration files to set up the basic parameters, and only edit the details of the folders etc you are synchronising in the GUI. I have put an example configuration file with comments further down this write up; the following is a basic file, which synchronises changes in the local copy of My Web Site and this year's pictures with backup drive LUKS_H, and which you can use as a template.

# Unison preferences - example file recent.prf in .unison
label = Recent changes only
times = true
perms = 0o0000
fastcheck = true
ignore = Name temp.*
ignore = Name *~
ignore = Name .*~
ignore = Name *.tmp
path = My Pictures/2020
path = My Web Site
root = /media/DATA
root = /media/LUKS_H

Note that the use of root and path gives considerable flexibility and they are both safe to edit in the GUI, unlike label and perms. Also note the letter o designating octal in perms - do not mistake it for an extra zero!
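Once a profile such as recent.prf is in ~/.unison it appears in the list the GUI offers at startup; it can also, as a sketch, be run by name from a terminal using the text-mode client installed alongside the GUI:

unison recent # runs ~/.unison/recent.prf interactively in text mode
unison -batch recent # runs it non-interactively, accepting the default actions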

Choice of User when running Unison

There is, however, a very important factor which must be taken into account when synchronising using the GUI and wishing to preserve the file dates and times. When using the GUI you are running unison as a user, not as root, and there are certain actions that can only be carried out by the owner or by root. It is obvious that, for security, only the owner can change the group and owner of a file or folder and the permissions. What is less obvious is that only the owner or root can change the time stamp. Synchronisation therefore fails if the time stamp needs to be updated for files you do not own, which is common during synchronisation. I therefore carry out all synchronisations from the first user installed (id 1000) and set the owners of all files to that user (id 1000) and the group to adm (because all users who can use sudo are automatically in group adm). This can be done very quickly with recursive commands in the terminal for the folders DATA and VAULT on the machine and for the USB drives which are plugged in (assuming they are formatted with a Linux file system). If Windows-formatted USB drives or sticks are plugged in they will pick up the same owner as the machine at the time, regardless of history.

sudo chown -R 1000:adm /media/DATA

sudo chown -R 1000:adm /media/VAULT

sudo chown -R 1000:adm /media/USB_DRIVE

Ownership requirements for Trash (Recycle Bins) to work.

There is however an essential extra step required. In changing the ownership of the entire partition we have also changed the ownership of the .Trash-nnnn folders, and the result is that one can no longer use the recycle bin (trash) facilities. I have only recently worked that one out! There is an explanation of how trash is implemented on drives which are on separate partitions at http://www.ramendik.ru/docs/trashspec.html and one can see on each drive that there is a trash folder for each user, named .Trash-1000, .Trash-1001 etc, and these need to be owned and grouped by their matching user. So with two extra users we need to do:

sudo chown -R 1001:1001 /media/DATA/.Trash-1001 && sudo chown -R 1002:1002 /media/DATA/.Trash-1002

sudo chown -R 1001:1001 /media/VAULT/.Trash-1001 && sudo chown -R 1002:1002 /media/VAULT/.Trash-1002

Note: All the above applies to Linux file systems such as ext4 - old fashioned Windows file systems, which do not implement owners, groups or permissions, have to be mounted with the user set to 1000 and the group to adm in a different way (using fstab), which is beyond the scope of this document.
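Purely as an illustration of what such a mount would look like (a hypothetical line, not something covered further here), an /etc/fstab entry for an NTFS data partition might read as follows, 4 being the numeric id of the adm group and the UUID and mount point being made up:

UUID=0123456789ABCDEF /media/WINDATA ntfs-3g defaults,uid=1000,gid=4,umask=002 0 0 # hypothetical example only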

Considerations about permissions and synchronisation.

I initially tried to set all the permissions for the drives being synchronised in bulk when I found the GUI did not allow permissions to be ignored. I now find it is better to just edit the preference file for each synchronisation to ignore permissions.

If you do want to set the permissions recursively for a whole partition or folder you have to recall that folders need the execution bit set otherwise they can not be accessed. The most usual permissions used are read and write for both the owner and group and read only for others. As we are considering shared Data one can usually ignore preserving or synchronising the execution bits although sometimes one has backups of executable programs and scripts which might need it to be reapplied if one retrieves them for use.

The permissions are best set in a terminal; the File Manager looks as if it could be used via its "set permissions for all enclosed files" option, but the recursion down the file tree does not seem to work in practice.

The following will set all the owners, groups and permissions in DATA, VAULT and a USB drive if you need to; it is not essential. The primary user is referred to by id 1000 to make this more general. This uses the octal way of designating permissions and sets the owner and group to read and write and others to read for files, and to read, write and execute for folders, which always require execute permission. I am not going to try to explain the following magic incantations in detail, but they work, and very fast, with no unrequired writes to wear out your SSD or waste time. These need to be run from user id 1000 to change the permissions (adding an extra sudo in front of every chmod will also make them work from other users).

find /media/DATA -type f -print0 | xargs -0 chmod 664 && find /media/DATA -type d -print0 | xargs -0 chmod 775

find /media/VAULT -type f -print0 | xargs -0 chmod 664 && find /media/VAULT -type d -print0 | xargs -0 chmod 775

find /media/USB_DRIVE -type f -print0 | xargs -0 chmod 664 && find /media/USB_DRIVE -type d -print0 | xargs -0 chmod 775

This only takes seconds on an SSD and appears instantaneous after the first time. USB backup drives take longer as they are hard drives.
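For anyone who does want the incantation unpicked, here is the first pair broken down with comments; it behaves exactly like the one-liner above:

# files: owner and group get read/write, others read only (664)
find /media/DATA -type f -print0 | xargs -0 chmod 664
# folders: owner and group get read/write/execute, others read/execute (775)
# the execute bit is what allows a folder to be entered
find /media/DATA -type d -print0 | xargs -0 chmod 775
# -print0 and xargs -0 pass the names NUL-separated so spaces in file names are safe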

SSH (Secure SHell)

In the introduction I spoke of using the ssh protocol to synchronise between machines.

When you set up the two 'root' directories for synchronisation you get four options when you come to the second one - we have used the local option but if you want to synchronise between machines you select the SSH option. You then fill in the hostname which can be an absolute IP address or a hostname. You will also need to know the folder on the other machine as you can not browse for it. When you come to synchronise you will have to give the password corresponding to that username, often it asks for it twice for some reason.

Setting up ssh and testing logging into the machine we plan to synchronise with.

I always check that ssh has been correctly set up on both machines and initialise the connection before trying to use Unison. In its simplest form ssh allows one to log on to a remote machine using a terminal. Both machines must have ssh installed (and the ssh daemons running, which is the default after you have installed it).

The first time you use SSH to a user you will get some warnings that it can not authenticate the connection, which is not surprising as you have not used it before, and it will ask for confirmation - you have to type yes rather than y. It will then tell you it has saved the authentication information for the future and you will get a request for a password, which is the start of your log in on the other machine. After providing the password you will get a few more lines of information and be back to a normal terminal prompt, but note that it is now showing the name of the other machine. You can enter some simple commands such as a directory list (ls) if you want to try it out.

pcurtis@gemini:~$ ssh pcurtis@lafite
The authenticity of host 'lafite (192.168.1.65)' can't be established.
ECDSA key fingerprint is SHA256:C4s0qXX9GttkYQDISV1fXpNLlXlXQL+CXjpNN+llu0Y.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'lafite,192.168.1.65' (ECDSA) to the list of known hosts.
pcurtis@lafite's password: **********

109 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Last login: Sun Oct 18 03:29:24 2020 from 192.168.1.64
3
pcurtis@lafite:~$ exit
logout
Connection to lafite closed.
pcurtis@defiant:~$

This is how computing started with dozens or even hundreds of users logging into machines less powerful than ours via a terminal to carry out all their work. How things change!

The hostname resolution seems to work reliably with Mint 20 on my network; however if username@hostname does not work, try username@hostname.local. If neither works you will have to use the numeric IP address, which can be found by clicking the network manager icon in the tool tray -> network settings -> the settings icon on the live connection. The IP addresses can vary if the router is restarted but can often be fixed in the router's internal setup, but that is another story.
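A quick way to find the numeric address from a terminal on the machine you want to connect to is:

hostname -I # prints the machine's current IP address(es)
ip -4 addr show # more detail, including which interface each address belongs to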

Example of a more complex profile for Unison with notes on the options:

The profiles live in /home/username/.unison and can be created or edited with xed.

# Example Profile to synchronise to helios with username pcurtis on helios
# Note: if the hostname is a problem then you can also use an absolute address
# such as 192.168.1.4 on helios
#
# Optional label used in a profile to provide a descriptive string documenting its settings.
label = Recent changes only
#
# Roots for the synchronisation
root = /media/DATA
root = ssh://pcurtis@helios//media/DATA
#
# Paths to synchronise
path = My Backups
path = My Web Site
path = My Pictures/2020
path = My Video/2020
#
# Some typical regexps specifying names and paths to ignore
ignore = Name temp.*
ignore = Name *~
ignore = Name .*~
ignore = Name *.tmp
#
# Some typical Options - only times is essential
#
# When fastcheck is set to true, Unison will use the modification time and length of a
# file as a ‘pseudo inode number’ when scanning replicas for updates, instead of reading
# the full contents of every file. Faster for Windows file systems.
fastcheck = true
#
# When times is set to true, file modification times (but not directory modtimes) are propagated.
times = true
#
# When owner is set to true, the owner attributes of the files are synchronized.
# owner = true
#
# When group is set to true, the group attributes of the files are synchronized.
# group = true
#
# The integer value of this preference is a mask indicating which permission bits should be synchronized.
# In general we do not want (or need) to synchronise the permission bits (or owner and group)
# Note the letter o designating octal - do not mistake for an extra zero
perms = 0o0000

The above is a fairly comprehensive profile file to act as a framework and the various sections are explained in the comments.

Appendix 8 - Performance Testing

Comparative Performance Tests

These have been steadily updated and now include comparison between the Lafite, Helios, Defiant, AMD Athlon 5000+ x64 and MSI Wind U100 performance including some 32 versus 64 bit performance differences and variations between Mint versions.

Ubuntu initially recommended running with 32 bit systems, certainly for systems of 2 Gbytes memory or under as the additional memory requirements cancel out any gains from the richer instruction set and faster processing. These tests include some comparisons of 32 and 64 bit performance to help separate the differences. Mint 20 no longer supports 32 bit.

I first did some quick and dirty checks on my, now seriously old, AMD Athlon 64 dual processor 5000+ with 2 Gbytes of memory, which is on the boundary. Speed in rendering videos (which is a good processor benchmark) was only 5% faster, almost within the measurement noise. Memory use was up 40% on average with quite a wide variation (28% for Firefox with many tabs on startup and 54% for Thunderbird with many accounts on startup). This is in line with tests by Phoronix on Ubuntu 13.10 amd64 and i32 where video and audio processing showed ~15% gain and only FFTs showed a lot more; in most cases it was a very marginal or non-existent gain.

I also used the benchmarks at the end of the System Information program - install hardinfo to access it.
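Assuming hardinfo is in the repositories (it is packaged for Ubuntu-based distributions such as Mint), it can be installed and launched with:

sudo apt-get install hardinfo
hardinfo # the benchmarks are at the bottom of the left-hand panel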

Benchmarks - columns: MSI Wind U100 | AMD Athlon 5000+ i32 Mint 16 | AMD Athlon 5000+ x64 Mint 17.2 (Mint 20) | Defiant i32 Mint 17.2 | Defiant x64 Mint 20 | Helios x64 Mint 20 | Lafite x64 Mint 19.0 | WAP Pro (Gemini) Mint 20

CPU Blowfish (lower is better): 28.16 | 8.39 | 8.07 (7.42) | 1.79 | 1.92 | 3.16 | 1.49 | 3.58
CPU CryptoHash (higher is better): 57.7 | 125.2 | 145 (162) | 776 | 774 | 366 | 679 | 351
CPU Fibonacci (lower is better): 8.49 | 2.93 | 3.53 (1.36) | 1.22 | 0.62 | 0.63 | 0.50 | 1.29
CPU N-Queens (lower is better): 17.78 | 16.09 | 13.55 (6.67) | 0.48 | 6.96 | 5.99 | 6.50 | 18.92
FPU FFT (lower is better): 18.69 | 6.75 | 8.16 (8.80) | 0.69 | 1.13 | 1.85 | 0.97 | 2.55
CPU zlib (higher is better): - | - | (0.19) | - | 0.93 | 0.5 | - | 0.35
FPU Ray tracing (lower is better): 33.82 | - | 10.28 (8.21) | 10.92 | 1.35 | 1.85 | 1.80 | 6.05
GPU Drawing (higher is better): - | - | (1061) | - | 7401 (Intel) | 4528 | 4572 | 1638
GLXSpheres 1920x1080: N/A | N/A | 11 fps (both quite variable), 18 Mpix/sec (1680 x 1050), Intel Graphics | 120 fps, 237 Mpix/sec | Intel: 192 fps, 375 Mpix/sec; Nvidia: 420 fps, 830 Mpix/sec | - | 100 fps, 198 Mpix/sec | 52 fps, 87 Mpix/sec (1680 x 1050)

There are a lot of inconsistencies, but the double number of cores and threads of the Defiant's Core i7 4700MQ 2.4 GHz (3.4 on Turbo) quad-core (8 thread) processor gives it about a 50% advantage over the Helios's Core i5 6200U dual-core, four-thread 2.3 GHz (2.8 on Turbo) processor. The 8th generation Kaby Lake processor in the Lafite, a quad-core (8 thread) i5 8250, comes close to or often betters the 45 watt Core i7 4700MQ in these tests.

There are some big changes in some cases between Mint 17.2 and Mint 20, presumably the newer kernel and better handling of multiple cores.

One surprise was how slow the graphics seem to be in the Gemini, almost back to the ATI R300 dating from 2002 in the GPU drawing test. The FPU tests also seemed unexpectedly poor.

Vertical blanking (vsync) was switched off for the GLXSpheres tests by running with:

vblank_mode=0 /opt/VirtualGL/bin/glxspheres64

Turbo Mode Performance and CPU temperature using i7z

i7z is a utility which has to be installed and then run as root with sudo i7z, as sketched below. The first screenshot which follows shows the output under medium load, processing a large archive which uses a single core.
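Assuming the i7z package is in the repositories (it is packaged for Ubuntu-based distributions such as Mint), installing and running it is just:

sudo apt-get install i7z
sudo i7z # must be run as root; Ctrl+C quits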

Screenshot

and the second screenshot shows a high load with 4 cores active. Observe that the CPU is running slower and halt states are active, yet the temperature is close to the maximum, which seems to be 80 degrees. The core voltage has also been considerably lowered.

Screenshot

When the processor is idling the CPU frequencies are typically between 850 and 1100 MHz and the temperature is about 46 deg. When the load is increased by carrying out a large archive compression as above, the speed initially goes to a steady 2400 MHz for all the cores, then keeps dropping back to under 1600 MHz as the temperature rises to ~70C, after which the temperature falls back as it should. The cooling, or lack of it, probably determines performance in tasks such as compression, video processing, encoding and possibly image processing.

Swap and memory usage (observed by using the Multicore System Monitor Applet)

The machine has limited memory compared to my other machines - 4 Gbytes as opposed to 8 Gbytes. Swap is provided in a fixed file rather than a partition and is set to 2 Gbytes in my default type of installation. I never see significant swap use with 8 Gbytes of memory, but I have seen it as high as 50% with the 4 Gbytes built into the WAP Pro, although I have minimised its use with a low setting for swappiness - initially set to 1 and then 0. I will keep watching as I run different programs - I expect that once browsers with large numbers of tabs are opened in parallel with Thunderbird the usage may rise, but at present it looks a sensible compromise. It is possible to increase the swap file size - fiddly, but still easier than changing the partitioning - but excessive use of swap is bad for SSD life.
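For reference, this is how the swappiness value can be checked and lowered; the sysctl.conf line makes the setting survive a reboot:

cat /proc/sys/vm/swappiness # show the current value (the default is 60)
sudo sysctl vm.swappiness=1 # change it for the current session
echo "vm.swappiness=1" | sudo tee -a /etc/sysctl.conf # make the change permanent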

Read/Write Performance tests on SSDs

I was interested to find out how much performance gain was coming from the SSD and, in particular, the differences between the hybrid SATA hard drive in the Defiant, the Samsung 850 EVO M.2 in the Helios, the Samsung 970 EVO NVMe M.2 in the Lafite and the Crucial MX500 in the Gemini.

systemd-analyze utility

A first check can be made by measuring the boot time using systemd-analyze critical-chain, which measures the time taken by the kernel to reach a graphical interface (for login). This is a very real world test, although the time here is often masked by the time spent in the BIOS before booting starts.
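The measurements themselves are easy to reproduce:

systemd-analyze # overall boot time, split between firmware, loader, kernel and userspace where available
systemd-analyze critical-chain # the chain of units on the critical path to the graphical target
systemd-analyze blame # individual units ordered by how long they took to start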

The Defiant took close to 30 seconds under Mint 19 and the 4.15 kernel, the Helios running Mint 18.1 with a 4.05 kernel took just under 2 seconds and the Lafite under Mint 19 with a 4.15 kernel was down to about 1.25 seconds. I have always been disappointed with the Hybrid drive in practice and this just confirms that claims it offers similar performance to an SSD are optimistic.

Since making the initial measurements I have added an SSD to the Defiant and the boot time has dropped to 1.51 seconds, although part of the drop came from disabling the NetworkManager-wait-online.service which was causing an unrequired 7 second wait. I can only think it had got turned on by use of wireless and Bluetooth mice for potential login. So like for like may be an improvement from 22 secs to 1.5 secs. I have added an encrypted home folder to the Lafite and the boot time has increased to 1.95 seconds, possibly because of that.
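For the record, the sort of commands involved are the standard systemd ones; disable the wait-online service only if nothing on the machine needs the network to be up before login:

systemd-analyze blame | grep -i wait # confirm the service really is the one costing time
sudo systemctl disable NetworkManager-wait-online.service # stop it delaying boot
systemd-analyze critical-chain # re-check the boot time afterwards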

The WAP Pro (Gemini) takes 3.31 secs, again roughly 60% longer than the Helios.

Disks utility benchmark

There are few good benchmark programs for Linux, but the inbuilt Disks utility has a useful benchmark built in, which I used, but only for read performance testing.

Previous Tests: Here the Samsung 970 EVO M2 250 Gbyte Read Benchmark recorded an incredible 3.2 Gbyte/sec and very close to the manufacturers specification whilst the older Samsung 850 EVO M2 250 Gbyte achieved 522 Mbyte/sec, only a little less than the specification of up to 540 Mbytes/sec. In contrast the 2 TByte 5400 rpm SATA hard drive achieved an average of 98 Mbytes/sec and seemed to be falling steadily during the test, possibly due to the internal cache.

Gemini (WAP Pro) tests:

hdparm utility

Another useful approach is to use the hdparm command line utility (run as root) which has some useful options and enables one to separate the effects of caching.

-t Perform timings of device reads for benchmark and comparison purposes. This displays the speed of reading through the buffer cache to the disk without any prior caching of data. This measurement is an indication of how fast the drive can sustain sequential data reads under Linux, without any filesystem overhead. To ensure accurate measurements, the buffer cache is flushed during the processing.

-T Perform timings of cache reads for benchmark and comparison purposes. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test.

@lafite:~$ sudo hdparm -Tt /dev/nvme0n1p7 //Lafite Samsung EVO 970 M2 SSD
/dev/nvme0n1p7:
Timing cached reads: 21302 MB in 1.99 seconds = 10698.25 MB/sec
Timing buffered disk reads: 5658 MB in 3.00 seconds = 1885.73 MB/sec

@helios:~$ sudo hdparm -tT /dev/sda3 //Helios Samsung 850 EVO M2 SSD
/dev/sda3:
Timing cached reads: 8590 MB in 2.00 seconds = 4303.98 MB/sec
Timing buffered disk reads: 984 MB in 3.00 seconds = 327.89 MB/sec

@lafite:~$ sudo hdparm -Tt /dev/sda //lafite 5400 rpm 2.5" Hard Drive
/dev/sda:
Timing cached reads: 22072 MB in 1.99 seconds = 11073.43 MB/sec
Timing buffered disk reads: 364 MB in 3.01 seconds = 120.87 MB/sec


peter@gemini:~$ sudo hdparm -tT /dev/mmcblk0p3 //Gemini inbuilt DA4032
/dev/mmcblk0p3:
Timing cached reads: 7480 MB in 1.99 seconds = 3756.34 MB/sec
Timing buffered disk reads: 496 MB in 3.01 seconds = 164.91 MB/sec

peter@gemini:~$ sudo hdparm -tT /dev/sda3 //gemini Crucial MX500 1 TB
/dev/sda3:
Timing cached reads: 6774 MB in 1.99 seconds = 3400.13 MB/sec
Timing buffered disk reads: 1414 MB in 3.00 seconds = 471.30 MB/sec

These again show the impressive difference between the Samsung 970 EVO and the Samsung 850 EVO whilst the conventional 2 Tbyte hard drive does better than I would expect.

Video Test

One of my favourite video tests for smooth video and frame speeds is glxspheres.

Installing glxspheres

Download VirtualGL (.deb) from: http://sourceforge.net/projects/virtualgl/files/VirtualGL/

Navigate to the folder containing the deb package and install it with:

sudo dpkg -i VirtualGL_*.deb

Run glxspheres:

cd /opt/VirtualGL/bin/
./glxspheres64

Overall Summary and Conclusions on The Chillblast WAP Pro v2 (aka Gemini)

The Chillblast WAP Pro v2 (aka Gemini) has performed well under Linux and for the small amount of time it has run under Windows. The built-in default storage is far too small for serious Windows use: just the system updates and the installation of AVAST anti-virus reduced the spare disk capacity on C: to 3 Gbytes. I have added a 32 Gbyte microSD card, which shows as D:, to help.

In contrast it provides a very adequate and excellent value machine under Linux, where I have added a 1 Tbyte SSD. The performance measurements seem to indicate a slightly lower performance than I hoped for, but in real life it is very responsive. I suspect that the performance tests may run for long enough for the thermal constraints of a fanless machine to bite and throttle the processor. I will be interested to try it on some really challenging tasks (but ones it was never intended to take on), such as video processing, where rendering stresses all my machines and has the fans howling. I have ended up adding a webcam and a curved 24 inch screen for Zoom meetings and general system work, and it is getting used far more than I ever expected. Minor points: Bluetooth seems to have very little range and the graphics rendering speeds are low but adequate for me; on the plus side there are undocumented features such as a microphone. I am tempted towards buying another to handle media etc, which is what Chillblast aims it at, as it is such excellent value.

Before You Leave

I would be very pleased if visitors could spare a little time to give me some feedback - it is the only way I know who has visited, whether it is useful and how I should develop its content and the techniques used. I would be delighted if you could send comments or just let us know you have visited by sending a quick message to me.

Copyright © Peter & Pauline Curtis