Sharing, Networking, Backup, Synchronisation and Encryption under Linux Mint and Ubuntu


This page has just received its first major update for many years, in particular in the Backing Up section, where a new philosophy is laid out and the section is completely restructured. It also takes into account the addition of TimeShift to the standard Mint installs. There is more emphasis on multiple boot and multiple user systems and on the use of encryption. The emphasis is now firmly on Mint, in particular the latest LTS version 19 Tara.

Index

Introduction

This page has been extended from my original page covering standard backing up and now covers the prerequisites of mounting drives, file sharing between computers running both Linux and Windows, secure access to remote machines and synchronisation. It also covers other techniques in the armoury for preservation of data, namely separation of system, user data and shared data, as well as encryption. When this page was originally written we were using Ubuntu; now we have switched to Mint, which uses much of the infrastructure of Ubuntu, which in turn is built on Debian. We believe Mint has a better desktop in Cinnamon and is more user friendly. Most of what is written here applies to both Mint and Ubuntu, although some of the facilities in the Nemo file manager are accessed differently or do not exist in Nautilus, and the examples use Xed as the text editor. Mint 19 is based on the Ubuntu 18.04 LTS (Bionic) release and brings significant improvements such as TimeShift but also a number of differences. If you are using earlier Mint versions or Ubuntu versions earlier than 18.04 there is an older version of this page available.

The ability to back up has always been considered essential to any system. The amount of data involved has increased dramatically from the days when we started this web site 20 years ago - data was preserved on floppy disks and our internal drive was 850 Mbytes. Now our latest laptop has a 2 Tbyte hard drive and a 250 Gbyte SSD and backups use 2 Tbyte external USB drives, yet we still have important data from 20 years ago accessible and even emails from 1998. 18 years of digital pictures totalling 135,000 images take up 200 Gbytes and our music collection another 70 Gbytes. We will not even talk about video!

Backing up takes many forms now that the norm is for machines to be networked. This article covers many ways of achieving redundancy and preserving data: sharing drives between different operating systems on multi-booted computers, networking of both Linux and Windows machines, techniques to separate system and data areas, conventional backing up to internal and external drives and, most important in this time of mobility, synchronisation both between machines and to external hard drives.

File Systems and Sharing

Preliminaries - Background on the Linux File System

I am not going to extol the virtues of Linux, in particular the Debian based distributions, here, but I need to explain a little about the ways that file systems and disks differ between Windows and Linux before talking about backing up. In Windows, physical disk drives and the 'virtual' partitions they are divided up into show up in the top level of the file system as drives with names such as C:, a floppy disk is almost always A: and the system drive is C:. This sort of division does not appear at all in Linux, where you have a single file system starting from root ( / ), which is where the operating system is located and booted. Any additional disk drives which are present or are 'mounted' at a later time, such as a USB disk drive, will by default in most distributions appear in the file tree in /media and the naming will depend on several things, but expect entries such as /media/cdrom, /media/floppy and /media/disklabel. If you create a special partition for a different part of the filesystem, such as home where all the users live, then it can be 'mounted' at /home. In theory you could mount partitions for all sorts of other parts of the file system. If, for example, you add a new disk and choose to mount a partition to just contain your home directory, it only needs the addition of a single line in a file, although you need to exactly copy the old contents to the new partition first - I will cover that in detail later in this article.

There is a nearly perfect separation of the kernel, installed programs and users in Linux. The users each have a folder within the home folder which holds all the configuration of the programs they use and the data which is specific to them - the configuration is set up the first time they run a program. A user's folder only contains a few tens of kbytes until programs are used, and all the program settings are hidden (hidden files and folders start with a dot) and by default do not appear in the file browser until you turn them on (Ctrl h). There are usually a number of folders generated by default, including Desktop, Documents, Music, Pictures, Videos, Templates and Public, which are used by programs as their defaults for user data. This makes backing up very easy - an exact copy of the home folder allows the system to be restored after a complete reload of the operating system and programs. Note the word exact, as the 'copy' has to preserve timestamps, symbolic links and the permissions of all the files - permissions are key to the security of Linux, so special archiving utilities are best employed as a normal copy is usually not good enough.

Networks

Permanently Mounting a shared drive in Ubuntu (Advanced)

If we are going to use a shared drive for data then we must ensure that it is permanently mounted. The mounting points in all major flavours of Linux are defined in a file system table in /etc/fstab and the convention is that the mount points are in /media. We therefore need to modify /etc/fstab to set up the mount points in /media, and we must also create the folders for them, using sudo so they are owned by root or the primary user, and set the permissions so they are accessible to all users for read and write.

A standard approach was to open the file browser as root in a terminal by gksudo nemo; however gksudo is no longer available in the Ubuntu 18.04 and higher code base. Instead Nemo has the ability to open any folder as root using the right click menu, and in this mode shows a very visible red banner saying 'Elevated privileges'.

This allows me to use the graphical file browser to create the folders and set permissions by standard navigation and right click menus for Create Folder and Properties -> Permissions. It is best to give these folders the same names as those assigned by mounting from 'Places', which are derived from the partition label if it is set – see below. Do not continue to use this root file browser after setting up the shared folder, as running anything as root brings many dangers of accidental damage, although the aware reader will realise that the terminal can be avoided in the next stage by also opening fstab from within the root file browser - but do take care!

You will in any case have to use a terminal for some of the other actions, so I now feel the easiest way is to use a terminal with the following two commands:

sudo mkdir /media/DRIVE_NAME
sudo chmod 777 /media/DRIVE_NAME

I have found that fstab does not like the folder name to include any spaces, even if it is enclosed in quotes, so it is best to use short single word drive names or to join words with underscores. I also seem to recall an early Windows restriction of 13 characters maximum.
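If an existing partition has an unsuitable label you can change it before setting up the mount. As a sketch, assuming the NTFS data partition is /dev/sda5 and an ext4 partition is /dev/sdb6 (check with blkid first), the labels can be set from a terminal by:

sudo ntfslabel /dev/sda5 DATA
sudo e2label /dev/sdb6 DATA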

It is desirable to back up /etc/fstab and then make the changes using the editor xed. This is done in a terminal by:

sudo cp /etc/fstab /etc/fstab_backup
xed admin:///etc/fstab

Note the special way that xed is used to get root privileges now that gksudo is no longer available. Before proceeding we need some information on the identifiers for the file systems. The UUIDs can be found by typing blkid in a terminal – typical output looks like:

pcurtis@matrix:~$ blkid
/dev/sda1: LABEL="VISTA" UUID="2E6121A91F3592E4" TYPE="ntfs"
/dev/sda2: LABEL="HP_RECOVERY" UUID="0B9C003E5EA964B2" TYPE="ntfs"
/dev/sda5: LABEL="DATA" UUID="47859B672A5D9CD8" TYPE="ntfs"
/dev/sdb5: UUID="a7c16746-1424-4bf5-980e-1efdc4500454" TYPE="swap"
/dev/sdb6: UUID="432c69bd-105c-454c-9808-b0737cab2ab3" TYPE="ext4"
/dev/sdb7: UUID="a411101c-f5c6-4409-a3a1-5a66de372782" SEC_TYPE="ext2" TYPE="ext3"

The recommended procedure for modifying /etc/fstab is to use the drive's UUID rather than the device's location, i.e. append lines to /etc/fstab looking like:

# /dev/sda3: LABEL="DATA" UUID="47859B672A5D9CD8" TYPE="ntfs"
UUID=47859B672A5D9CD8 /media/DATA ntfs nls=utf8,uid=pcurtis,gid=pcurtis,umask=0000 0 0

Note – this is for an NTFS partition. It provides a read/write mount (umask=0000 gives read/write for owner, group and everyone else). If you want a read only mount use umask=0222. The uid=pcurtis and gid=pcurtis are optional and define the user and group for the mount - see below for the reason for defining the user. These options do not work for ext3 or ext4 partitions, only vfat and ntfs, and it seems the only way for ext3 and ext4 is to set the permissions after every boot in a script.
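Before rebooting it is worth checking that the new entry is valid, as a mistake in /etc/fstab can stop the system from booting cleanly. Assuming the mount point folder has already been created as above, the following will mount everything in fstab which is not already mounted and then confirm the mount:

sudo mount -a
findmnt /media/DATA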

After modifying /etc/fstab and rebooting, the chosen 'drives' are mounted and appear on the desktop by default as well as in the file browser - they can not be unmounted without root privileges, which is just what we want.

Ownership and permissions of Windows type Filesystems mounted by fstab

There is one 'feature' of this way of mounting which seems to be totally universal: only root (the owner) can set the time codes - this means that any files or directories that are copied by a user have the time of the copy as their date stamp.

A solution for a single user machine is to find out your user id and mount the partition with the option uid=user-id; then all the files on that partition belong to you - even newly created ones. This way when you copy you keep the original file date. This is important if you have a file synchronisation program such as Unison which checks file creation and modification dates.

# /dev/sda5 - note this is from a vfat (FAT32) filesystem
UUID=706B-4EE3 /media/DATA vfat utf8,uid=yourusername,gid=yourusername,umask=0000 0 0

You must change yourusername to your own user name.

The uid can also be specified numerically and the first user created has user id 1000.

More about UIDs

This brings us to a point you need to understand for the future, which is that user names are a two stage process. If I set up a system with the initial user peter when I install, that is actually just an 'alias' to the numeric user id 1000 in the Linux operating system. I then set up a second user pauline, who will correspond to 1001. If I have a disaster and reinstall, and this time start with pauline, she is then 1000 and peter is 1001. I get my carefully backed up folders and restore them, and they now have all the owners etc incorrect, as ownership uses the underlying numeric value - apart, of course, from where the name is used in hard coded scripts etc.

You can check all the relevant information in a terminal by use of id:

pcurtis@defiant:~$ id
uid=1000(pcurtis) gid=1000(pcurtis) groups=1000(pcurtis),4(adm),27(sudo),30(dip)
,46(plugdev),115(lpadmin),127(sambashare),999(bumblebee),1001(pauline)

pcurtis@defiant:~$ id pauline
uid=1002(pauline) gid=1002(pauline) groups=1002(pauline),24(cdrom),27(sudo),30(dip)
,46(plugdev),115(lpadmin),1000(pcurtis),127(sambashare),999(bumblebee)

So when you install on a new machine you should always use the same username and password as on the machine you wish to 'clone' from, and set up any further users in the same order so they have the same numeric ids, using the Users and Groups utility.

In summary: a user ID (UID) is a unique positive integer assigned within the kernel to each user. Each user is identified to the system by its UID, and user names are generally used only as an interface for humans; in the kernel only UIDs are used. User ids are allocated starting at 1000 in Debian and hence in Mint.
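If you do need to create a user with a specific numeric id to match another machine, it can be done from a terminal rather than the Users and Groups utility. A sketch, assuming pauline should have uid 1001:

sudo adduser --uid 1001 pauline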

We will return to this when we consider backups in more detail.

Set up file system to mount an ext4 'DATA' partition to allow sharing by all users using groups. (Advanced)

This is still work in progress and is only required for multiple users or for synchronising between machines.

Drives of type ext3 and ext4 mounted by fstab are by default owned by root. You can set the owner to the main user, but not the group and access rights so that other users can access a shared area, as you can with an NTFS file system.

You can largely get round this by making sure that all the users belong to all the other users' groups, if you are prepared to share everything rather than just the DATA drive.
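As a sketch of doing this from a terminal, assuming just the two users pcurtis and pauline, each can be added to the other's group (the -aG option appends to the existing list of groups, and the change only takes effect at the next login):

sudo usermod -aG pauline pcurtis
sudo usermod -aG pcurtis pauline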

The alternative is to set the group for all files and folders on the DATA drive so it can be accessed by all users. I chose to use the adm group as most users with administrator rights will be a member. An alternative is the sudo group - all sudoers would be able to mount anyway so there is no point in keeping them out.

I have a script on the Helios which is run at the correct point during a reboot and sets all the owners, groups [and permissions] after ensuring the file system has been mounted. This is very fast on a small SSD and works well.

My script file is /usr/bin/shareDATAadm which must have execute permission set and should contain:

#!/bin/bash
# shareDATAadm file to set ownership, group [and permissions] of the media/DATA partition mount point
# after a delay to allow the mount to be complete.
# sleep 10
chown 1000:adm -R /media/DATA
# chmod 770 -R /media/DATA
exit 0

The best way is to use the power of systemd to create a service to run our script waiting until the partition has been mounted. The initial idea came from: https://forum.manjaro.org/t/systemd-services-start-too-soon-need-to-wait-for-hard-disk-to-mount/37363/4

First we need to find out the mount points of the drives, so the service can be set up to wait for the /media/DATA drive by:

systemctl list-unit-files | grep .mount

In my case the output looked like:

$ systemctl list-unit-files | grep .mount
proc-sys-fs-binfmt_misc.automount static
-.mount generated
boot-efi.mount generated
dev-hugepages.mount static
dev-mqueue.mount static
home.mount generated
media-DATA.mount generated
proc-sys-fs-binfmt_misc.mount static
sys-fs-fuse-connections.mount static
sys-kernel-config.mount static
sys-kernel-debug.mount static
clean-mount-point@.service static
systemd-remount-fs.service static
umountfs.service masked
umountnfs.service masked
umountroot.service masked
umount.target static
$

and the relevant mount point is media-DATA.mount

We can now create our Unit file for the service which will run our script after the /media/DATA partition has been mounted. So, create a new file in /etc/systemd/system/ as sharedata.service and add the following contents:

[Unit]
Description=Runs script to set owner and group for /media/DATA
Requires=media-DATA.mount
After=media-DATA.mount

[Service]
Type=oneshot
ExecStart=/usr/bin/shareDATAadm

[Install]
WantedBy=multi-user.target

Note: You need a #!/bin/sh or #!/bin/bash in the first line of the script.

Enable the service to be started on bootup by

sudo systemctl enable sharedata.service

You can check it is all working by

$ systemctl status sharedata.service
Loaded: loaded (/etc/systemd/system/sharedata.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2018-07-09 01:43:39 BST; 10min ago
Process: 789 ExecStart=/usr/bin/shareDATAadm (code=exited, status=0/SUCCESS)
Main PID: 789 (code=exited, status=0/SUCCESS)
Jul 09 01:43:01 lafite systemd[1]: Starting Runs script to set owner and group for /media/DATA...
Jul 09 01:43:39 lafite systemd[1]: Started Runs script to set owner and group for /media/DATA.
$

Note: This procedure of using a script works very well on an SSD as everything is very fast, but it can take quite a few seconds on a large hard drive with tens of thousands of files - see above. It also requires a reboot to trigger it when changing users or before a synchronisation using Unison.

You can easily disable the service from running every boot by

sudo systemctl disable sharedata.service

In practice, making sure every user is in every other user's groups allows almost everything to work other than some uses of Unison, so with large hard drives it may be better to set the owner and group in a terminal, or by running the above script, just before synchronising with Unison.

Afterword: The use of a systemd service is a very easy way to start any program at startup which requires root access and applies to all users, and it has wide applicability.

Ownership and Permissions of USB Drives with Unix filesystems (ext3, ext4, jfs, xfs etc)

The mount behaviour of Unix file systems such as ext*, jfs and xfs is different to Windows file systems such as NTFS and FAT32. When a USB drive with a 'Windows' type file system, which has no internal knowledge of each folder and file's permissions, is plugged in, it is auto-mounted with the owner and group corresponding to the user who mounted it. Unix filesystems have the permissions built in and don't (and shouldn't) get their ownership and permissions changed simply by mounting them. If you want the ownership/permissions to be different, then you change them, and the changes persist across unplug/replugs.
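So if, for example, a USB drive with an ext4 filesystem turns out to be owned by root, a one-off change of ownership in a terminal is all that is needed and it will persist. A sketch, assuming the drive has auto-mounted at /media/pcurtis/USB_EXT4:

sudo chown -R pcurtis:pcurtis /media/pcurtis/USB_EXT4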

Defined Mounting of USB Drives with Non Unix filesystems or mounting early in the boot process using fstab

It is sometimes necessary to mount a USB drive early in the boot process, but this procedure should only be used for drives which are permanently connected, as the boot process will halt and need intervention if the drive is not present. Even so, it does allow a mount with defined ownership and permissions at an early enough stage for programs started automatically and run in the background as daemons to use them.

Change auto-mount point for USB drives back to /media

Ubuntu (and therefore Mint) changed the mount point for USB drives from /media/USB_DRIVE_NAME to /media/USERNAME/USB_DRIVE_NAME. One can change the behaviour back by using a udev feature in Ubuntu 13.04 and higher based distributions (it needs udisks version 2.0.91 or higher). This has been tested with the latest Mint 19.

Create and edit a new file /etc/udev/rules.d/99-udisks2.rules

sudo xed /etc/udev/rules.d/99-udisks2.rules

and cut and paste into the file

ENV{ID_FS_USAGE}=="filesystem", ENV{UDISKS_FILESYSTEM_SHARED}="1"

then activate the new udev rule by restarting or by

sudo udevadm control --reload

When the drives are now unplugged and plugged back in they will mount at /media/USB_DRIVE_NAME

Accessing Files on Windows Machines over the Windows Network

I originally considered this would be an important step but the order of difficulty was not what I expected - I could access files on Windows machines over the network immediately, although the creation date is only displayed correctly for files on NTFS partitions. I have not found a workaround yet, or any reference to it, despite extensive web searches. You will usually be asked for a password the first time.

Accessing Files on your Linux machine from Windows machines (and other Linux machines on your local network).

This is rarely required with Mint as Nemo has the ability to mount a remote file system built in. It can be accessed from the File menu as Connect to Server. You should use SSH for security and the server should be hostname.local. You will often get a series of checks and questions but it will work in the end!

Warning - the Samba GUI does not seem to work under Mint 19

If you wish to use Samba to share with Windows see the previous version of the page for Ubuntu Xenial LTS

Printing over the Network

This was again much easier than I expected. Use System -> Administration -> Printing -> New Printer -> Network Printer: Windows Printer, which takes one on to identify the printer on the network and has a selection procedure covering a huge range of printers with drivers available, including the members of the Epson Stylus series - I have done this with many printers including an Epson C66 and a Wifi linked Epson SX515.

Secure access to a Remote File System using SSH and SSHFS (Advanced)

No longer used by me; if you need it, follow it up in the previous version of this page.

Using ‘Connect to Server’ in the file browser with the SSH option

An alternative way to mount a secure remote file system is to open the Nemo file browser, then File -> Connect to Server and select SSH from the drop down menu. Fill in the server name and other parameters as required. This way will give you a mount on the desktop and you can unmount it by a right click -> Unmount Volume. You still need SSH on both machines.

If this fails with a timeout error try deleting or renaming ~/.ssh/known_hosts - you will get a message saying that it is a new host you are connecting to and everything will then work. It seems to be caused by the IP address changing for the remote hostname without you getting the warning message.
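Rather than deleting or renaming the whole known_hosts file you can remove just the entry for the offending host, which preserves the stored keys for your other machines - assuming the remote machine is lafite.local:

ssh-keygen -R lafite.local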

Backing up

Overall Backup Philosophy for Mint

My thoughts on backing up have evolved considerably over time and now take much more account of the use of several machines and of sharing between and within them, giving redundancy as well as security of the data. They also look much more at the ways the backups are used: they are not just a way of restoring a situation after a disaster or loss, but also about cloning and sharing between users, machines and multiple operating systems. They continue to evolve to take into account the use of data encryption.

So firstly let's look at the broad areas that need to be backed up:

  1. The Linux operating system(s), mounted at root. This area contains all the shared built-in and installed applications but none of the configuration information for the applications or the desktop manager, which is specific to users. Mint has a built in utility called TimeShift which is fundamental to how potential regressions are handled - it does everything required for this area's backups and can also be used for cloning. TimeShift will be covered in detail in a separate section.
  2. The users' home folders, which are folders within /home and contain the configuration information for each of the applications as well as for the desktop manager, which is specific to users, such as themes, panels, applets and menus. Each also contains all the data belonging to a specific user including the Desktop and the standard folders such as Documents, Videos, Music and Pictures etc. It will probably also contain the email 'profiles' if an SSD is in use. This is the most challenging area with the widest range of requirements, so it is the one covered in the greatest depth here.
  3. Shared DATA. The above covers the minimum areas but I have an additional DATA area which is available to all operating systems and users and is periodically synchronised between machines as well as being backed up. This is kept independent and has a separate mount point. In the case of machines dual booted with Windows it uses a file system format compatible with Windows and Linux such as NTFS. The requirement for easy and frequent synchronisation means Unison is the logical tool for DATA between machines, with associated synchronisation to a large USB hard drive for backup. Unison is covered in detail elsewhere in this page.

Email and browsers. I am also going to mention email specifically as it has particular issues: it needs to be collected on every machine as well as on pads and phones, and some track kept of replies regardless of source. All incoming email is retained on the servers for months if not years, and all outgoing email is copied either to a separate account accessible from all machines or, where that is not possible automatically such as on Android, a copy is sent back to the sender's inbox. Thunderbird has a self contained 'profile' where all the local configuration and the filing system for emails is retained, and that profile, along with the matching one for the Firefox browser, needs to be backed up in a way that depends on where it is held. The obvious places are the DATA area, allowing sharing between operating systems and users, or each user's home folder, which offers more speed if an SSD is used and better security if encryption is implemented. I use both.
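As a sketch of backing up an email profile held in a user's home folder, the whole hidden .thunderbird folder (which contains the profile and its mail filing system) can be archived with tar in the same 'exact' way described for home folders below - Thunderbird should be closed first and the path to the USB drive is an assumption:

cd ~ && tar cvpzf "/media/USB_DATA/thunderbird_$(date +%Y%m%d).tgz" .thunderbird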

Physical Implications of Backup Philosophy - Partitioning

I am not going to go into this in great depth as it has already been covered in other places but my philosophy is:

  1. The folder containing all the users home folders should be a separate partition mounted as /home. This separates the various functions and makes backup, sharing and cloning easier.
  2. There are advantages in having two partitions for Linux systems so new versions can be run for a while before committing to them. A separate partition for /home is required if different systems are going to share it.
  3. When one has an SSD the best speed will result from having the linux systems and the home folder using the SSD especially if the home folders are going to be encrypted.
  4. Shared DATA should be in a separate partition mounted at /media/DATA. If one is sharing with a Windows system it should be formatted as ntfs which also reduces problems with permissions and ownership with multiple users. DATA can be on a separate slower but larger hard drive.
  5. If you have an SSD, swapping should be minimised and the swap partition should be on a hard drive if one is available, to maximise SSD life.

The Three Parts to Backing Up

1. System Backup - TimeShift - Scheduled Backups and more.

TimeShift is now fundamental to the update manager philosophy of Mint and makes backing up the Linux system very easy. To quote: "The star of the show in Linux Mint 19 is Timeshift. Thanks to Timeshift you can go back in time and restore your computer to the last functional system snapshot. If anything breaks, you can go back to the previous snapshot and it's as if the problem never happened. This greatly simplifies the maintenance of your computer, since you no longer need to worry about potential regressions. In the eventuality of a critical regression, you can restore a snapshot (thus canceling the effects of the regression) and you still have the ability to apply updates selectively (as you did in previous releases)." The best information I have found about TimeShift and how to use it is by the author.

TimeShift is similar to applications like rsnapshot, BackInTime and TimeVault but with different goals. It is designed to protect only system files and settings. User files such as documents, pictures and music are excluded. This ensures that your files remain unchanged when you restore your system to an earlier date. Snapshots are taken using rsync and hard-links. Common files are shared between snapshots, which saves disk space. Each snapshot is a full system backup that can be browsed with a file manager. TimeShift is efficient in its use of storage but it still has to store the original and all the additions/updates over time. The first snapshot seems to occupy slightly more disk space than the root filesystem and six months of additions added approximately another 35% in my case. I run with a root partition / and separate partitions for /home and DATA. Using TimeShift means that one needs to allocate extra storage of around twice what one expects the root file system to grow to.

In the case of the Defiant the root partition has grown to about 11 Gbytes and 5 months of TimeShift added another 4 Gbytes, so the partition with the /timeshift folder needs to have at least 22 Gbytes spare if one intends to keep a reasonable span of scheduled snapshots over a long time period. After three weeks of testing Mint 19 my TimeShift folder has reached 21 Gbytes for an 8.9 Gbyte system!
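You can keep an eye on how much space the snapshots are taking before deciding whether to prune them. Assuming the snapshots are stored in a /timeshift folder on the root partition, the overall usage and the list of snapshots can be checked by:

sudo du -sh /timeshift
sudo timeshift --list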

These space requirements for TimeShift obviously have a big impact on the partition sizes when one sets up a system. My Defiant was set up to allow several systems to be employed with multiple booting. I initially had the timeshift folder on the /home partition, which had plenty of space, but that does not work with a multiboot system sharing the /home folder. Fortunately two of my partitions for Linux systems are plenty big enough for use of TimeShift, and the third, which is 30 Gbytes, is acceptable if one is prepared to prune the snapshots occasionally.

Cloning your System using TimeShift

TimeShift can also be used for 'cloning' as you can choose which partition you restore to. For example, I recently added an SSD to the Defiant and I just created the partition scheme on the SSD, took a fresh snapshot and restored it to the appropriate partition on the SSD. It will not be available until Grub is alerted by a sudo update-grub, after which it will be in the list of operating systems available at the next boot. Assuming you have a separate /home it will continue to use the existing one, and you will probably want to also move the home folder - see the section on Moving a Mint Home Folder to a different Partition (Expert level) below for the full story.

Warning about deleting system partitions after cloning.

When you come to tidy up your partitions after cloning you must do a sudo update-grub after any partition deletions and before any reboot. If Grub can not find a partition it expects, it hangs and you will not be able to boot your system at all - you will drop into a grub rescue prompt.

I made this mistake and used the procedure in https://askubuntu.com/questions/493826/grub-rescue-problem-after-deleting-ubuntu-partition by Amr Ayman and David Foester, which I reproduce below:

grub rescue > ls
(hd0) (hd0,msdos5) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1) (hd1) (hd1,msdos1)
grub rescue > ls (hd0,msdos1) # try to recognize which partition is this
grub rescue > ls (hd0,msdos2) # let's assume this is the linux partition
grub rescue > set root=(hd0,msdos2)
grub rescue > set prefix=(hd0,msdos2)/boot/grub # or wherever grub is installed
grub rescue > insmod normal # if this produced an error, reset root and prefix to something else ..
grub rescue > normal

For a permanent fix run the following after you successfully boot:

sudo update-grub
sudo grub-install /dev/sdX

where /dev/sdX is your boot drive.

It was not a pleasant activity and involved far too much trial and error, so make sure you do the update-grub.

2. Users - Home Folder Archiving using Tar.

Tar is a very powerful command line archiving tool, on which many of the GUI tools are based, and it should work on most Linux distributions. In many circumstances it is best to use it directly to back up your system. The resulting files can also be accessed (or created) by the Archive Manager, reached by right clicking on a .tgz, .tar.gz or .tar.bz2 file. Tar is an ideal way to back up many parts of a system, in particular one's home folder.

The backup process is slow (15 mins plus) and the file over a Gbyte for the simplest system. After it is complete the file should be moved to a safe location, preferably a DVD or external device.

You can access parts of the archive using the GUI Archive Manager by right clicking on the .tgz file - again slow on such a large archive.

Tar, in the simple way we will be using it, takes a folder and compresses all its contents into a single 'archive' file. With the correct options this can be what I call an 'exact' copy where all the subsidiary information such as timestamp, owner, group and permissions is stored without change. Symbolic (soft) links and hard links can also be retained. Normally one does not want to follow a link out of the folder and put all of the target into the archive, so one needs to take care.

We want to back up each user's home folder so it can be easily replaced on the existing machine or on a replacement machine. The ultimate test is: can one back up the user's home folder, delete it (safer is to rename it) and restore it exactly, so the user can not tell in any way? The home folder is, of course, continually changing while the user is logged in, so backing up and restoring should really be done when the user is not logged in, i.e. from a different user, a LiveUSB or from a console.

You can create this basic user very easily and quickly using Users and Groups: Menu -> Users and Groups -> Add Account, Type: Administrator ... -> Add -> click Password to set a password (otherwise you can not use sudo).
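The same temporary user can be created from a terminal if you prefer - a sketch, assuming it is to be called tempuser and needs sudo rights:

sudo adduser tempuser
sudo usermod -aG sudo tempuser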

Firstly we must consider what is arguably the most fundamental decision about backing up: the way we specify the location being saved when we create the tar archive and when we extract it - the paths must usually combine to restore the folder to the same place. If we store absolute paths we must extract accordingly; if we store relative paths we must again extract to match. So we will always have to consider pairs of commands depending on what we chose.

Method 1 has absolute paths and shows home when we open the archive with just a single user folder below it. This is what I have always used for my backups and the folder is always restored to /home on extraction.

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /

Method 2 shows the user's folder at the top level when we open the archive. This is suitable for extracting to a different partition or place, but here the extraction is back to the correct folder.

cd /home && sudo tar cvpzf "/media/USB_DATA/mybackup2.tgz" user1 --exclude=user1/.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackup2.tgz" -C /home

Method 3 shows the folders within the user's folder at the top level when we open the archive. This is also suitable for extracting to a different partition or place and has been added to allow backing up and restoring encrypted home folders, where the encrypted folder may be mounted at a different place at the time.

cd /home/user1 && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz" -C /home/user1

These are all single lines if you cut and paste.

 

Notes:

Archive creation options: The options used when creating the archive are: create archive, verbose mode (you can leave this out after the first time), retain permissions, -P do not strip the leading slash, gzip the archive and output to file. Then follows the name of the file to be created, mybackup.tgz, which in this example is on an external USB drive called 'USB_DATA' - the backup name should include the date for easy reference. Next is the directory to back up. Next are the objects which need to be excluded - the most important of these is your backup file if it is in your /home area (not needed in this case) or it would be recursive! It also excludes the folder (.gvfs) which is used dynamically by a file mounting system and is locked, which stops tar from completing. The problems with files which are in use can be removed by creating another user and doing the backup from that user - overall that is a cleaner way to work.

Archive restoration uses the options: extract, verbose, retain permissions, from file and gzip. This will take a while. The -C / ensures that the directory is changed to a specified location; in case 1 this is root so the files are restored to their original locations. In case 2 you can choose, but it is normally /home. Case 3 is useful if you mount an encrypted home folder independently of login using ecryptfs-recover-private --rw, which mounts to /tmp/user.8random8
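If you are unsure which method an existing archive was created with, you can list its contents without extracting anything - the leading paths show whether they are absolute (Method 1) or relative (Methods 2 and 3):

tar -tvzf "/media/USB_DATA/mybackup1.tgz" | head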

tar options style used: The initial set of options are in the old options style; old options are written together as a single clumped set, without spaces separating them or dashes preceding them and appear first on the command line, after the tar program name.

Higher compression: If you want to use a higher compression method the option -j can be used in place of the -z option, and .tar.bz2 should be used in place of .tgz for the backup file extension. This will use bzip2 to do the compressing. This method will take longer but gives a smaller file - I have never bothered so far.

Deleting files: If the old system is still present, note that tar only overwrites files; it does not delete files from the old version which are no longer needed. I normally restore from a different user and rename the user's home folder before running tar as above; when I have finished I delete the renamed folder. This needs root/sudo and the easy way is to right click on a folder in Nemo and 'Open as Root' - make sure you use a right click delete to avoid the files going into a root deleted items folder.

Deleting archive files: If you want to delete the archive file then you will usually find it is owned by root, so make sure you delete it in a terminal - if you use a root file browser then it will go into a root Deleted Items which you can not easily empty, so it takes up disk space for ever more. If this happens then read http://www.ubuntugeek.com/empty-ubuntu-gnome-trash-from-the-command-line.html and/or load the trash-cli command line trash package using the Synaptic Package Manager and type:

sudo trash-empty

Alternative to the multiple methods above - use --strip=n option

The tar manual contains information on an option to strip a given number of leading components from file names before extraction, namely --strip=number.

To quote

For example, suppose you have archived whole `/usr' hierarchy to a tar archive named `usr.tar'. Among other files, this archive contains `usr/include/stdlib.h', which you wish to extract to the current working directory. To do so, you type:

$ tar -xf usr.tar --strip=2 usr/include/stdlib.h

The option --strip=2 instructs tar to strip the two leading components (`usr/' and `include/') off the file name.

If you add the --verbose (-v) option to the invocation above, you will note that the verbose listing still contains the full file name, with the two removed components still in place. This can be inconvenient, so tar provides a special option for altering this behavior:

--show-transformed-names

This should allow an archive saved with the full path information (as we do in Method 1) to be extracted with the /home/user information stripped off. I have only done partial testing but the following seems to work and provides Method 4:

Method 4: This should also enable one to restore to an encrypted home folder where the encrypted folder may be mounted at a different place at the time by ecryptfs-recover-private --rw

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz" --strip=2 --show-transformed-names -C /home/user1

It is likely that new versions of the following sections will only use the existing Method 1 combined with use of --strip=n during extraction.

Archiving a home folder and restoring

Everything has really been covered above, so this is just a slight expansion for a specific case and the addition of some suggested naming conventions.

This uses Method 1 where all the paths are absolute, so the folder you are running from is not an issue. This is the method I have always used for my backups so it is well proven. The folder is always restored to /home on extraction, so you need to remove or preferably rename the user's folder before restoring it. If a backup already exists, delete it or use a different name. Both creation and restoration must be done from a different or temporary user to avoid any changes taking place during the archive operations.

sudo tar cvpPzf "/media/USB_DATA/backup_machine_user_$(date +%Y%m%d).tgz" /home/user1/ --exclude=/home/*/.gvfs

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA/backup_machine_user_YYYYmmdd.tgz" -C /

This is usually the best and most flexible way to create a backup and the resulting archive can be used for almost anything with appropriate use of the --strip=n option when extracting. All my backups to date have used it.

Note the automatic inclusion of the date in the backup file name and the suggestion that the machine and user are also included.

Cloning between machines and operating systems using a backup archive (Advanced)

It is possible that you want to clone a machine, for example when you buy a new one. It is usually easy if you have the home folder on a separate partition, the user you are cloning was the first user installed, and you make the new username the same as the old - I have done that many times. There is however a catch which you need to watch for, and that is that user names are a two stage process. If I set up a system with the user peter when I install, that is actually just an 'alias' to the numeric user id 1000 in the Linux operating system. I then set up a second user pauline, who will correspond to 1001. If I have a disaster and reinstall, and this time start with pauline, she is then 1000 and peter is 1001. I get my carefully backed up folders and restore them, and they now have all the owners etc incorrect, as ownership uses the underlying numeric value - apart, of course, from where the name is used in hard coded scripts etc.

You can check all the relevant information for the machine you are cloning from in a terminal by use of id:

pcurtis@mymachine:~$ id
uid=1000(pcurtis) gid=1000(pcurtis) groups=1000(pcurtis),4(adm),6(disk),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),29(audio),30(dip),44(video),46(plugdev),104(fuse),108(avahi-autoipd),109(avahi),110(netdev),112(lpadmin),120(admin),121(saned),122(sambashare),124(powerdev),128(mediatomb)

So when you install on a new machine you should always use the same username and password as on the original machine, and then create an extra user with admin (sudo) rights for convenience for the next stage. Change to your temporary user, rename the first user's folder (you need to be root) and replace it from the archived folder from the original machine. Now log in to the user again and that should be it. At this point you can delete the temporary user. If you have multiple users to clone, the user names must obviously be the same and, more importantly, the numeric ids must be the same, as that is what is actually used by the kernel - the username is really only a convenient alias. This means that the users you may clone must always be installed in the same order on both machines or operating systems so they have the same numeric UIDs.

So we first make a backup archive in the usual way, take it to the other machine or switch to the other operating system, and restore as usual. It is prudent to back up the system you are going to overwrite just in case.

So first check the id on both machines for the user(s) by use of

id user

If and only if the ids are the same can we proceed

On the first machine and from a temporary user:

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

On the second machine or after switching operating system and from a temporary user:

sudo mv -T /home/user1 /home/user1-bak # Rename the user's home folder.
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /

Moving a Mint Home Folder to a different Partition (Expert level)

This is actually more complex than any of the other sections, as you are moving information to an initial mount point and then changing the automatic mounting so it becomes /home and covers up the previous information, which then has to be deleted from a LiveUSB or other system. I have recently added an SSD to the Defiant computer and the same procedure applies to moving the home folder /home from the existing hard drive partition to a partition on the faster SSD.

The problem of exactly copying a folder is not as simple as it seems - see https://stackoverflow.com/questions/19434921/how-to-duplicate-a-folder-exactly. You not only need to preserve the contents of the files and folders but also the owner, group, permissions and timestamps. You also need to be able to handle symbolic links and hard links. I initially used a complex procedure using cpio but am no longer convinced that covers every case, especially if you use Wine where .wine is full of links and hard coded scripts. The stackoverflow thread has several sensible options for 'exact' copies. I also have a well proven way of creating and restoring backups of home folders exactly using tar, which has advantages as we would create a backup before proceeding in any case!

When we back up normally (Method 1 above) we use tar to create a compressed archive which can be restored exactly and to the same place; even during cloning we are still restoring the user home folders to be under /home. If you are moving to a separate partition you want to extract to a different place, which will become /home eventually after the new mount point is set up in the file system mount point list in /etc/fstab. It is convenient to always use the same backup procedure, so you need to get at least one user in place in the new home folder before changing the mount point. I am not sure I trust any of the copy methods for my real users, but I do believe it is possible to move a basic user (created only for the transfers) that you can use to do the initial login after changing the location of /home and can then use to extract all the real users from their backup archives. The savvy reader will also realise you can use Method 2 (or 4) above to move them directly to the temporary mount point, but what is written here has stood the test of time.

An 'archive' copy using cp is good enough in the case of a very basic user which has recently been created and little used; such a home folder may only be a few tens of kbytes in size and will not have a complex structure with links:

sudo cp -ar /home/basicuser /media/whereever

The -a option is an archive copy preserving most attributes and the -r is recursive to copy sub folders and contents.
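You can make a quick check that the copy really is 'exact' with a dry run of rsync, which will itemise any differences in contents, ownership, permissions or timestamps - if nothing is listed the two match (paths as in the example above):

sudo rsync -a --dry-run --itemize-changes /home/basicuser/ /media/whereever/basicuser/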

So the procedure to move /home to a different partition is to: create the new partition and give it a temporary mount point, copy the basic user across to it, change the entry in /etc/fstab so the partition mounts as /home (see the example below), reboot and log in as the basic user, restore the real users from their backup archives, and finally delete the old home folders from a LiveUSB or another system.

Example of changing file system table to auto-mount different partition as /home

I will give an example of the output of blkid and the contents of /etc/fstab after moving my home folder to the SSD drive - the changes are the commented out old /home entry and the new /home entry. Note this is under Mint 19, hence the invocation for the text editor.

pcurtis@defiant:~$ blkid
/dev/sda1: LABEL="EFIRESERVED" UUID="06E4-9D00" TYPE="vfat" PARTUUID="333c558c-8f5e-4188-86ff-76d6a2097251"
/dev/sda2: LABEL="MINT19" UUID="e07f0d65-8835-44e2-9fe5-6714f386ce8f" TYPE="ext4" PARTUUID="4dfa4f6b-6403-44fe-9d06-7960537e25a7"
/dev/sda3: LABEL="MINT183" UUID="749590d5-d896-46e0-a326-ac4f1cc71403" TYPE="ext4" PARTUUID="5b5913c2-7aeb-460d-89cf-c026db8c73e4"
/dev/sda4: UUID="99e95944-eb50-4f43-ad9a-0c37d26911da" TYPE="ext4" PARTUUID="1492d87f-3ad9-45d3-b05c-11d6379cbe74"
/dev/sdb1: LABEL="System Reserved" UUID="269CF16E9CF138BF" TYPE="ntfs" PARTUUID="56e70531-01"
/dev/sdb2: LABEL="WINDOWS" UUID="8E9CF8789CF85BE1" TYPE="ntfs" PARTUUID="56e70531-02"
/dev/sdb3: UUID="178f94dc-22c5-4978-b299-0dfdc85e9cba" TYPE="swap" PARTUUID="56e70531-03"
/dev/sdb5: LABEL="DATA" UUID="2FBF44BB538624C0" TYPE="ntfs" PARTUUID="56e70531-05"
/dev/sdb6: UUID="138d610c-1178-43f3-84d8-ce66c5f6e644" SEC_TYPE="ext2" TYPE="ext3" PARTUUID="56e70531-06"
/dev/sdb7: UUID="b05656a0-1013-40f5-9342-a9b92a5d958d" TYPE="ext4" PARTUUID="56e70531-07"
/dev/sda5: UUID="47821fa1-118b-4a0f-a757-977b0034b1c7" TYPE="swap" PARTUUID="2c053dd4-47e0-4846-a0d8-663843f11a06"
pcurtis@defiant:~$ xed admin:///etc/fstab

and the contents of /etc/fstab after modification

# <file system> <mount point> <type> <options> <dump> <pass>

UUID=e07f0d65-8835-44e2-9fe5-6714f386ce8f / ext4 errors=remount-ro 0 1
# UUID=138d610c-1178-43f3-84d8-ce66c5f6e644 /home ext3 defaults 0 2
UUID=99e95944-eb50-4f43-ad9a-0c37d26911da /home ext4 defaults 0 2
UUID=2FBF44BB538624C0 /media/DATA ntfs defaults,umask=000,uid=pcurtis,gid=46 0 0
UUID=178f94dc-22c5-4978-b299-0dfdc85e9cba none swap sw 0 0
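After rebooting it is worth confirming that /home really is on the new partition before deleting anything from the old one:

findmnt /home
df -h /home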

In summary: there are many advantages in having one's home directory on a separate partition, but overall this change is not a procedure to be carried out unless you are prepared to experiment a little. It is much better to get it right and create one when installing the system.

Cloning into a different username - Not Recommended but somebody will want to try!

I have also tried to clone into a different username but do not recommend it. It is possible to change the folder name and set up all the permissions, and everything other than Wine should be OK on a basic system. The .desktop files for Wine contain the user hard coded so these will certainly need to be edited, and all the configuration for the Wine programs will have been lost. You will also have to change any of your scripts which have the user name 'hard coded'. I have done it once but the results were far from satisfactory. If you want to try, you should do the following before you log in for the first time after replacing the home folder from the archive. Change the folder name to match the new user name; the following commands then set the owner and group to the new user and give standard permissions for all the files other than .dmrc which is a special case.

sudo chown -R eachusername:eachusername /home/eachusername
sudo chmod -R 755 /home/eachusername
sudo chmod 644 /home/eachusername/.dmrc

This needs to be done before the username is logged into for the first time, otherwise many desktop settings will be lost and the following warning message appears:

Users $Home/.dmrc file is being ignored. This prevents the default session and language from being saved. File should be owned by user and have 644 permissions.
Users $Home directory must be owned by user and not writable by others.

If this happens it is best to start again, remembering that the archive extraction does not delete files, so you need to get rid of the folder first!

What about backing up encrypted home folders?

This is covered at the end of the Encrypting Your Home Folder section.

3. DATA Synchronisation and Backup - Unison

Linux has a very powerful tool available called Unison to synchronise folders, and all their subfolders, between drives on the same machine and across a local network using a secure transport called SSH (Secure Shell). At its simplest you can use a Graphical User Interface (GUI) to synchronise two folders which can be on any of your local drives, a USB external hard drive or a networked machine which also has Unison and SSH installed. Versions are even available for Windows machines, but one must make sure that the Unison version numbers are compatible, even between Linux versions.

You just enter or browse for the two folders if local and it will give you a list of differences and recommended actions, and it is a single keystroke to change any you do not agree with. Unison uses a very efficient mechanism to transfer/update files which minimises the data flows, based on a utility called rsync. The initial synchronisation can be slow but after it has made its lists it is quite quick, even over a slow network between machines, because it is running on both machines and transferring minimum data - it is actually slower synchronising to another hard drive on the same machine.

For more complex synchronisation with multiple folders and perhaps exclusions you set up a very simple configuration file for each synchronisation profile and then select and run it from the GUI as often as you like. It is easier to do than describe - a file to synchronise my four important folders My Documents, My Web Site, Web Sites, and My Pictures is only 10 lines long and contains under 200 characters, yet synchronises 25,000 files! The review list is intelligent and if you have made a folder full of sub folders of pictures whilst you are away it only shows the top folder level which has to be transferred, which is a good job as we often come back with several thousand new pictures!

Both Unison and SSH are available in Ubuntu and Mint but need to be installed using System -> Administration -> Synaptic Package Manager; search for and load the following packages:

unison-gtk ssh winbind sshfs

or they can be directly installed from a terminal by:

sudo apt-get install unison-gtk ssh winbind sshfs

The first is the GUI version of Unison, which also installs the terminal version if you need it. The second is a meta package which installs the SSH server and client, blacklists and various library routines. The third is required to allow ssh to use host names rather than just absolute IP addresses, which also requires some extra configuration described below - you may find with some versions of Ubuntu that winbind is already installed, but it will still need the configuration. The last is not needed yet but will prove useful as it allows you to mount a remote file system on your machine using ssh.

Unison is then accessible from the Applications menu and can be tried out immediately. There is one caution - it is initially configured so that the date of any transferred file is the current date, i.e. the file creation date is not preserved, although that can easily be set in the configuration files - the same goes for preserving the user and group.

Ownership of Drives Mounted permanently using fstab: One 'feature' of mounting using the file system table in the usual way is that the owner is root, and only the owner (root) can set the time codes - this means that any files or directories that are copied by a user have the time of the copy as their date stamp, which can cause problems with Unison when synchronising.

A solution for a single user machine is to find out your user id and mount the partition with option uid=user-id, then all the files on that partition belong to you - even the newly created ones. This way when you copy you keep the original file date.

# /dev/sda5
UUID=706B-4EE3 /media/DATA vfat iocharset=utf8,uid=yourusername,umask=000 0 0

In the case of multiple user machines you should not mount at boot time and instead mount the drives from Places.

The uid can also be specified numerically and the first user created has user id 1000.

SSH (Secure Shell)

Using SSH to synchronise between machines over a network

So far we have looked at the set up needed for Unison to synchronise files on the same machine. We will now look at synchronising between machines using ssh. When you set up the two 'root' directories for synchronisation you get four options when you come to the second one - we have used the local option, but if you want to synchronise between machines you select the SSH option. You then fill in the hostname, which can be an absolute IP address or a hostname (but see below on the need to set this up). You should then specify the username you will use to log into the other machine, i.e. the sharing username as set up for Windows sharing, and leave the port blank. You will also need to know the folder on the other machine as you can not browse for it. When you come to synchronise you will have to give the password corresponding to that username.

Setting up ssh

I have always checked that ssh has been correctly set up on both machines before trying to use Unison. In its simplest form ssh allows one to log in using a terminal on a remote machine. Both machines must have ssh installed (and the ssh daemons running, which is the default after you have installed it). For reasons we will come to (and sort out) the set up of Ubuntu only allows one to use an absolute IP address, so you need to know the address of the other machine on your network. This can be found, within a mass of other information, by opening a terminal on the machine whose IP address you need and typing:

ifconfig

which will give a lot of information on all your connections including the active network - the IP address you are looking for will follow an inet addr: . There will be two addresses, one for the local loopback (127.0.0.1) and the other the external address, which is what we want.

If you want to reduce the amount of information to just the lines containing inet addr then you can pipe the output through a program called grep - I leave it as a task for the reader to find out more about pipes and grep once you have tried it and can see the power of the command line!

ifconfig | grep "inet addr"
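Note that on Ubuntu 18.04 and Mint 19 the output format of ifconfig has changed and the package providing it may not even be installed by default; the newer ip command gives the same information, and in both cases the address label is now just inet rather than inet addr, so a suitable alternative is:

ip addr | grep "inet "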

It is now time to try ssh and I assume that the machine we want to access has an IP address 192.168.1.2

ssh 192.168.1.2

You will get a warning that it can not authenticate the connection, which is not surprising as it is the first time, and it will ask for confirmation - you have to type yes rather than y. It will then tell you it has saved the authentication information for the future and you will get a request for a password, which is the start of your login on the other machine. After providing the password you will get a few more lines of information and be back to a normal terminal prompt, but note that it is now showing the name of the other machine. You can enter some simple commands such as a directory list (ls) if you want. When you have tired of this, type exit to return to your own machine.

Enabling Hostname Resolution under Ubuntu: Use hostname.local (or see the previous version of this page for the Ubuntu Xenial LTS version). You will get various warnings but everything connects fine, ie:

pcurtis@defiant:~$ ssh lafite.local
The authenticity of host 'lafite.local (192.168.2.151)' can't be established.
ECDSA key fingerprint is SHA256:khcDIB60+p0dFGoH0BQCVdqOLX3m0LGYauS+656tqpU.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/pcurtis/.ssh/known_hosts).
pcurtis@lafite.local's password:

packages can be updated.
0 updates are security updates.

Last login: Sun Aug 5 07:06:23 2018 from 192.168.2.140
pcurtis@lafite:~$ exit
logout
Connection to lafite.local closed.
pcurtis@defiant:~$

I got round the message 'Failed to add the host to the list of known hosts (/home/pcurtis/.ssh/known_hosts)'

by two actions:

  1. I changed ownership of /home/pcurtis/.ssh/known_hosts to my username
  2. I did a single ssh with sudo, at which point the host was added.

After that it works for ssh and unison (still asks for password which is fine)
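A minimal sketch of those two actions, using the username and hostname from the example above (your own will differ):

sudo chown pcurtis:pcurtis /home/pcurtis/.ssh/known_hosts
sudo ssh lafite.local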

Some sample Profiles for Unison:

The profiles live in /home/username/.unison

# Profile to synchronise from desktop triton-ubuntu to netbook vortex-ubuntu
# with username pcurtis on vortex-ubuntu
# Note: if the hostname is a problem then you can also use an absolute address
# such as 192.168.1.4 on vortex-ubuntu
#
# Roots for the synchronisation
root = /media/DATA
root = ssh://pcurtis@vortex-ubuntu//media/DATA
#
# Paths to synchronise
path = My Backups
path = My Web Site
path = My Documents
path = Web Sites
#
# Some typical patterns specifying names and paths to ignore
ignore = Name temp.*
ignore = Name *~
ignore = Name .*~
ignore = Name *.tmp
#
# Some typical Options - only times is essential
#
# When fastcheck is set to true, Unison will use the modification time and length of a
# file as a ‘pseudo inode number’ when scanning replicas for updates, instead of reading
# the full contents of every file. Faster for Windows file systems.
fastcheck = true
#
# When times is set to true, file modification times (but not directory modtimes) are propagated.
times = true
#
# When owner is set to true, the owner attributes of the files are synchronized.
owner = true
#
# When group is set to true, the group attributes of the files are synchronized.
group = true
#
# The integer value of this preference is a mask indicating which permission bits should be synchronized other than set-uid.
perms = 0o1777

The above is a fairly comprehensive profile file to act as a framework and the various sections are explained in the comments.
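Assuming the profile above were saved as, say, /home/username/.unison/data-sync.prf (the name is just an illustration), it can also be run with the text-mode version of Unison from a terminal, adding -batch if you want non-conflicting changes propagated without prompting:

unison data-sync
unison -batch data-sync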

Using Public Key Authentication with ssh, sshfs and unison to avoid password requests

This needs domain name resolution, which is not working reliably - if you need information see the earlier version of the page.

Encryption

VeraCrypt (a current and improved fork of TrueCrypt) - very useful

I have used TrueCrypt on all my machines and, despite the well documented shock withdrawal by its authors, it is still regarded as safe by many - see https://www.grc.com/misc/truecrypt/truecrypt.htm. There are many conspiracy theories about the sudden withdrawal, based round the fact that the security services could not crack it. Fortunately it has now been forked and continues as Open Source, with enhanced security, as VeraCrypt. There is a transcript of a podcast by Steve Gibson which covers the security testing and his views on changing to VeraCrypt at https://www.grc.com/sn/sn-582.htm - he now supports the shift. VeraCrypt is arguably both the best and the most popular disk encryption software across all platforms and I have shifted on most of my machines. VeraCrypt can continue to use TrueCrypt vaults and also has a slightly improved format of its own which addresses one of the security concerns; changing format is as simple as changing the vault password - see this article.

TrueCrypt and its replacement VeraCrypt create a Virtual Disk with the contents encrypted into a single file, onto a disk partition or onto removable media such as a USB stick. The encryption is all on the fly: you have a file, you mount it as a disk, and from then on it is used just like a real disk with everything decrypted and re-encrypted invisibly in real time. The virtual drive is unmounted automatically at close down, and one should have closed all the open documents using the virtual drive by that point, just as when you shut down normally. The advantage is that the files are never copied onto a real disk, so there are no shadows or temporary files left behind and one does not have to do a secure delete.

TrueCrypt and its replacement VeraCrypt obviously install deep into the operating system in order to encrypt and decrypt invisibly on the fly. This has meant in the past that it was specific to a Linux kernel and had to be recompiled/installed every time a kernel was updated. Fortunately it can now be downloaded as an installer in both 32 and 64 bit versions – make sure you get the correct version.

The VeraCrypt installers for Linux are now packed into a single compressed file, typically veracrypt-1.21-setup.tar.gz - just download it, double click to open the archive, drag the appropriate installer (say veracrypt-1.21-setup-gui-x64) to the desktop, double click it and then click 'Run in Terminal' to run the installer script.
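If you prefer to stay in a terminal, the same thing can be sketched along the following lines, using the file names from the example above - the installer script asks for confirmation and for your administrative password as it needs them:

tar xzf veracrypt-1.21-setup.tar.gz
chmod +x veracrypt-1.21-setup-gui-x64
./veracrypt-1.21-setup-gui-x64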

The linux version of Vera/TrueCrypt has a GUI interface almost identical to that in Windows. It can be run from the standard menu although with Cinnamon you may need to do a Cinnamon restart before it is visible. It can also be run by just typing veracrypt in a terminal. It opens virtual disks which are placed on the desktop. Making new volumes (encrypted containers) is now trivial – just use the wizard. This is now a very refined product under Linux.

The only issue I have found is that one has to have administrative privileges to mount one's volumes. This means that one is asked for one's administrative password on occasions as well as the volume password. There is a way round this by giving additional 'rights', specific to just this activity, to a user (or group) by additions to the sudoers file. There is information on the sudoers file and editing it at:

https://help.ubuntu.com/community/Sudoers

Because sudo is such a powerful program you must take care not to put anything incorrectly formatted in the file. To prevent any incorrect formatting getting into the file you should edit it using the command visudo run as root or by using sudo ( sudo visudo ). On Mint and Ubuntu visudo opens the file in nano, a simple terminal editor, so fortunately we only have a single line to add!

You launch visudo in a terminal

sudo visudo

There are now two ways to proceed. If you have a lot of users then it is worth creating a group (say veracrypt) and adding all the users that need VeraCrypt to that group; you then use the sudoers file to give group members the 'rights' to use it without a password. See:

http://wiki.archlinux.org/index.php/Truecrypt
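A minimal sketch of setting up such a group from a terminal (the group name matches the sudoers line below; USERNAME is a placeholder):

sudo groupadd veracrypt
sudo usermod -aG veracrypt USERNAME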

If you only have one or two users then it is easier to give them individual rights by adding a line (or lines) to the configuration file by launching visudo in a terminal and appending one of the following lines, for either a single user (replace USERNAME with your username) or a group called veracrypt (the last option is brute force and gives everyone access):

USERNAME ALL = (root) NOPASSWD:/usr/bin/veracrypt
or
%veracrypt ALL = (root) NOPASSWD:/usr/bin/veracrypt
or
ALL ALL=NOPASSWD:/usr/bin/veracrypt

Type the line carefully and CHECK - there is no cut and paste into Visudo

Make sure there is a return at the end.

Save with Ctrl O and exit with Ctrl X - if there are errors there will be a message and a request for what to do in the terminal.

I have used it both the simple way and by creating a group.

Encrypting your home folder - new

The ability to encrypt your home folder has been built into Mint for a long time and it is an option during installation for the initial user. It is well worth investigating if you have a laptop, but there are a number of considerations: in particular it becomes far more important to back up your home folder in the mounted (unencrypted) form to a securely stored hard drive, as it is possible to get locked out in a number of less obvious ways, such as changing your login password incorrectly.

A long passphrase is randomly generated for the encryption and can in theory be used to mount the folder, but the forums are full of issues with few solutions! You should record it for each user by running

ecryptfs-unwrap-passphrase

Now we find there is considerable confusion over what is being produced and what is being asked for in many of the ecryptfs options and utilities, as it will request your passphrase in order to give you your passphrase! I will try to explain. When you log in as a user you have a login password or passphrase. The folder is encrypted with a much longer, randomly generated passphrase which is unwrapped when you log in with your login password - that is what you are being given above and what is needed if something goes dreadfully wrong. The two are [should be] kept in step if you change your login password using the GUI Users and Groups utility, but not if you do it in a terminal. It is often unclear which password is required as both are often just referred to as the passphrase in the documentation.

Encrypting an existing user's home folder.

It is possible to encrypt an existing user's home folder provided there is at least 2.5 times the folder's size available in /home - a lot of workspace is required and a backup is made.

You also need to do it from another user's account. If you do not already have one, an extra basic user with admin (sudo) privileges is required, and that user must be given a password otherwise sudo can not be used.

You can create this basic user very easily and quickly using Users and Groups: Menu -> Users and Groups -> Add Account, set Type to Administrator, provide a username and Full Name -> Create -> then highlight the user and click Password to set a password, otherwise you can not use sudo.
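If you prefer a terminal, a minimal sketch which achieves the same thing on Mint or Ubuntu (the username is just an illustration; adduser prompts for the password and details, and membership of the sudo group gives admin rights):

sudo adduser tempadmin
sudo usermod -aG sudo tempadmin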

Restart and log in to your new basic user. You may get errors if you just log out, as the gvfs file system may still have files in use.

Now you can run this command to encrypt a user:

sudo ecryptfs-migrate-home -u user

You will have to provide the login password of the user whose folder is being encrypted. After you do, the home folder will be encrypted and you will be presented with some important notes. In summary, the notes say:

  1. You must log in as the other user account immediately – before a reboot!
  2. A copy of your original home directory was made. You can restore the backup directory if you lose access to your files. It will be of the form user.8random8
  3. You should generate and record the recovery passphrase (aka Mount Passphrase).
  4. You should encrypt your swap partition, too.

The highlighting is mine and I reiterate: you must log out and log back in to the user whose account you have just encrypted before doing anything else.

Once you are logged in you should also display and save somewhere very safe the recovery passphrase (also described as a randomly generated mount passphrase). You can repeat this at any time whilst you are logged into the user with the encrypted account, like this:

user@lafite ~ $ ecryptfs-unwrap-passphrase
Passphrase:
randomrandomrandomrandomrandomra
user@lafite ~ $

Note the confusing request for a Passphrase - what is required here is your login password/passphrase. This will not be the only case where you are asked for a passphrase which could be either your login passphrase or your mount passphrase! The mount passphrase is important - it is what actually unlocks the encryption. There is an intermediate stage when you log into your account where your login password is used to temporarily regenerate (unwrap) the actual mount passphrase. This linkage needs to be updated if you change your login password, and for security reasons this is not done if you change your login password in a terminal using passwd user, which could be done remotely. If you get the two out of step the mount passphrase may be the only way to retrieve your data, hence its great importance. It is also required if the system is lost and you are accessing backups on disk.

The documentation in various places states that the GUI Users and Groups utility updates the linkage between the login and mount passphrases, but I have found that the password change facility is greyed out in Users and Groups for users with encrypted home folders. In a single test I used just passwd from within the actual user's account and that did seem to update both - everything kept working and allowed me to log in after a restart.
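If the two do get out of step there is also a utility in the ecryptfs-utils package which re-wraps the mount passphrase with a new login passphrase - I have not needed it myself, so treat this as a pointer rather than a tested recipe; it prompts for the old and then the new wrapping (login) passphrase, and the path shown assumes the usual location:

ecryptfs-rewrap-passphrase ~/.ecryptfs/wrapped-passphrase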

Mounting an encrypted home folder independently of login.

A command line utility ecryptfs-recover-private is provided to mount the encrypted data but it currently has several bugs when used with the latest Ubuntu or Mint.

  1. You have to specify the path rather than let the utility search.
  2. You have to manually link keyrings with a magic incantation which I do not completely understand, namely sudo keyctl link @u @s after every reboot. man keyctl indicates that it links the User Specific Keyring (@u) to the Session Keyring (@s). See https://bugs.launchpad.net/ubuntu/+source/ecryptfs-utils/+bug/1718658 for the bug report

The following is an example of using ecryptfs-recover-private and the mount passphrase to mount a home folder as read/write (the --rw option), doing an ls to confirm, then unmounting and checking with another ls.

pcurtis@lafite:~$ sudo keyctl link @u @s
pcurtis@lafite:~$ sudo ecryptfs-recover-private --rw /home/.ecryptfs/pauline/.Private
INFO: Found [/home/.ecryptfs/pauline/.Private].
Try to recover this directory? [Y/n]: y
INFO: Found your wrapped-passphrase
Do you know your LOGIN passphrase? [Y/n] n
INFO: To recover this directory, you MUST have your original MOUNT passphrase.
INFO: When you first setup your encrypted private directory, you were told to record
INFO: your MOUNT passphrase.
INFO: It should be 32 characters long, consisting of [0-9] and [a-f].

Enter your MOUNT passphrase:
INFO: Success! Private data mounted at [/tmp/ecryptfs.8S9rTYKP].
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
Desktop Dropbox Pictures Templates
Documents Videos Downloads Music Public
pcurtis@lafite:~$ sudo umount /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$

The above deliberately took the long way rather than use the matching LOGIN passphrase as a demonstration.

I have not yet bothered with encrypting the swap partition as it is rarely used if you have plenty of memory and swappiness set low, as discussed earlier.

Once you are happy you can delete the backup folder to save space. If you do so in nemo running as root, make sure you Delete it (right click -> Delete) - do not risk it ending up in a root trash, which is a pain to empty!

Feature or Bug - home folders remain encrypted after logout?

In the more recent versions of Ubuntu and Mint the home folders remain mounted after logout. This also occurs if you log in at a console or remotely over SSH. This is useful in many ways and you are still fully protected if the machine is off when it is stolen - you have little protection in any case if the machine is turned on and just suspended. Some people, however, log out and suspend expecting full protection, which is not the case. In exchange it makes backing up and restoring a home folder easier.

Backing up an encrypted folder.

A tar archive can be generated from a mounted home folder in exactly the same way as before, since the folder stays mounted (unencrypted) when you change user to ensure the folder is static. If that were not the case you could log in at a console (Ctrl Alt F2) and switch back to the GUI with Ctrl Alt F7, or log in via SSH, to make sure the folder was mounted to allow a backup. Either way it is best to log out at the end.

Warning: I have found dangers in using a console to log in to a different user when folders are already mounted - I have not lost data but it becomes very confusing as some folders seem to be mounted multiple times.

Another, and arguably better, alternative is to mount the user's folder via ecryptfs-recover-private and back up using Method 3 from the mount point like this:

sudo ecryptfs-recover-private --rw /home/.ecryptfs/user1/.Private

cd /tmp/ecryptfs.8S9rTYKP && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs
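When the backup is complete, unmount the temporary mount point as in the earlier example (the randomly named folder under /tmp is the one reported by ecryptfs-recover-private):

sudo umount /tmp/ecryptfs.8S9rTYKP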

Restoring to an encrypted folder - Untested

Mounting via ecryptfs-recover-private --rw seems the most promising way but it is not tested yet. The mount point corresponds to the user's home folder (see the example above) so you have to use Method 3 (or 4) to create and retrieve your archive in this situation, namely:

cd /home/user1 && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs
# or
cd /tmp/ecryptfs.8S9rTYKP && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz" -C /tmp/ecryptfs.randomst

These are all single lines if you cut and paste. The . (dot) means everything at that level goes into the archive.

Solving Problems with Dropbox after encrypting home folders

The day after I encrypted the last home folder I got a message from Dropbox saying that in 3 months' time they would only support EXT4 folders under Linux and that encryption would not be supported. They also noted that the folders should be on the same drive as the operating system.

My solution has been to move the Dropbox folders to a new EXT4 partition on the SSD. What I actually did was to make space on the hard drive for a swap partition and move the swap off the SSD to make space for the new partition. It is more sensible to have the swap on the hard drive as it is rarely used, and when it is used it tends to reduce the life of the SSD. Moving the swap partition needed several steps, and some had to be repeated for both operating systems to avoid errors in booting. The stages in summary were:

  1. Use gparted to make the space by shrinking the DATA partition (moving its end).
  2. Format the free space as a swap partition.
  3. Right click on the new partition and turn it on with swapon.
  4. Add it in /etc/fstab, using blkid to identify the UUID, so that it is auto-mounted.
  5. Check you now have two swaps active with cat /proc/swaps
  6. Reboot and check again to ensure the auto-mount is correct.
  7. Use gparted to turn off swap on the SSD partition - Right Click -> swapoff.
  8. Comment out the SSD swap partition in /etc/fstab to stop it auto-mounting.
  9. Reboot and check there is only one active swap partition with cat /proc/swaps
  10. Reformat the ex-swap partition to EXT4.
  11. Set up a mount point of /media/DROP in /etc/fstab and set the partition label to DROP (see the example entry after this list).
  12. Reboot and check it is mounted and visible in nemo.
  13. Open a root browser in nemo and set the owner of /media/DROP from root to 1000, the group to adm, and allow read/write access to everyone.
  14. Create folders called user1, user2 etc in DROP for the Dropbox folders to live in. It may be possible to share a folder but I did not want to risk it.
  15. Move the Dropbox folders using Dropbox Preferences -> Sync tab -> Move: /media/DROP/user1
  16. Check it all works.
  17. Change the folders in KeePass2, VeraCrypt, jdotxt and any others that use Dropbox.
  18. Repeat from 15 for the other users.
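A minimal sketch of the fstab entry from step 11 and the ownership changes from step 13 done in a terminal rather than in nemo - the device comment and UUID are placeholders which you would replace with the values reported by blkid:

# /dev/sda6 - new EXT4 partition for the Dropbox folders
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /media/DROP ext4 defaults 0 2

sudo chown 1000:adm /media/DROP
sudo chmod 777 /media/DROP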

Dropbox caused me a lot of time-wasting work but it did force me to move the swap partition to the correct place.

Secure deletion of data

Encrypting data is very important but one also needs a way to erase files without traces. It is no good being able to encrypt a file if you can not delete the original or working copies.

Secure Delete Files

shred: Linux has a built-in command shred which does a multiple pass overwrite of the selected data, making recovery based on residual information at the edges of the magnetic tracks almost impossible, before the file is deleted. This is not foolproof for all file systems and programs, as temporary copies may be made and modern file systems do not always write data in the same place; however on an ext2, ext3 or ext4 system with the default settings in Ubuntu or Mint it is acceptable:

shred -u sensitive.text

Do a man shred to find out more.

srm: This is part of the secure-delete package, which needs to be installed. You can then use srm everywhere you could use rm - see man srm.
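For example, to securely delete a whole folder (assuming the package is installed; the folder name is just an illustration):

srm -r ./old-accounts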

Secure Wipe Unused Space

See https://superuser.com/questions/19326/how-to-wipe-free-disk-space-in-linux

zerofree looks the way to go but I have not tried it yet; I have installed it using the Synaptic Package Manager.
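For reference, zerofree is normally run against an unmounted (or read-only mounted) ext2/3/4 partition - I have not tried this yet and the device name is purely illustrative:

sudo zerofree -v /dev/sdb1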

System Housekeeping to reduce TimeShift storage requirements.

Note: This area is still under investigation and there may be more information in my System Diary part 29.

Note 2: There is currently a major problem if you have done a Restore using TimeShift, which will hopefully be solved by the time you read this - see System Diary part 29. At present many utilities that should run daily, weekly and monthly are not doing so, including log file rotation.

The new update philosophy of Mint - presenting every possible update and using TimeShift if there are problems - may lead to a more secure system, but it does lead to a large build-up of cached update downloads in the apt cache and of old kernels. There are possibilities for some automatic reduction in cached downloads, but kernels have to be deleted manually.

Another area where there can be large amounts of little used data is log files, which are normally transient in the filesystem but have long term copies in TimeShift. On one of my machines I had about 7 Gbytes of system and systemd log files in /var/log . A good way to look for problems is to use the built-in Disk Usage Analyser in the menu, which allows one to zoom in to find unexpectedly large folders. The main areas to look into are:

  1. The apt cache of downloaded updates ~ 1 Gbyte
  2. Old unused kernels ~ 450 Mbytes per kernel and ~ 5 kernels
  3. Systemd Journal Files ~ 3 Gbytes on one of my machines.
  4. Log Files ~ 2.1 Gbytes reduction on one machine.

Overall I quadrupled the spare space on the Defiant from 4.2 Gbytes to 16.4 Gbytes on a 42 Gbyte root partition and that should improve further as the excess rolls out of the saved snapshots.

The last two may have been higher than would be expected as there was a problem with log file rotation at the time. You need not make the changes for item 4 (log files) unless you find you frequently have large files - it is also much more complex, involving extensive use of the terminal.

We will now look at the most likely problem areas in detail:

1. The apt cache of downloaded updates.

All updates which are downloaded are, by default, stored in /var/cache/apt/archives until they are no longer current, ie until a more up-to-date version has been downloaded. As time goes on and more and more updates take place this will grow in size, and TimeShift will back up not only the current files but also the ones which have been replaced. Fortunately it is easy to change the default to delete downloaded files after installation, which will also save space in TimeShift. The only reason to retain them is to copy them to another machine to update it without an extra download, which is most useful if you are on mobile broadband - the default can be changed when you need to do that, and the files can then be cleared manually before TimeShift finds them.

The best and very simple way is to use the Synaptic Package Manager

Synaptic Package Manager -> Settings -> Preferences -> Tick box for 'Delete download files after installation' and click 'Delete Cached Package Files'.

This freed 1.0 Gbyte in my case and they will slowly fall out of the TimeShift Snapshots - a good start
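The same one-off clear-out can also be done from a terminal at any time:

sudo apt-get clean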

2. Out of date Kernels

Use

Update Manager -> View -> Linux Kernels - You will get a warning which you continue past to a list of kernels.

I had 7 installed. You need at least one well tried earlier kernel, perhaps two, but not seven, so I deleted all but three of the earlier kernels. You have to remove each one separately, and it seems that the grub menu is updated automatically. Each kernel takes up about 72 Mbytes in /boot, 260 Mbytes in /lib/modules and 140 Mbytes in /usr/src as kernel headers, so removing 4 old kernels saved a useful 1.9 Gbytes.
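If you prefer to check from a terminal, this lists the installed kernel image packages, and uname -r shows the kernel currently running, which you should obviously keep (an old version can then be removed with sudo apt purge on its package name):

dpkg --list | grep linux-image
uname -r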

There is a script being considered for 19.1 https://github.com/Pjotr123/purge-old-kernels-2 which may be used to limit the number of kernels.

3. Limiting the maximum systemd journal size

The journal is a special case of a raw log file type and comes directly from systemd. I sought information on limiting the size and found https://unix.stackexchange.com/questions/130786/can-i-remove-files-in-var-log-journal-and-var-cache-abrt-di-usr which gave a way to check and limit the size of the systemd journal files in /var/log/journal/

You can query using journalctl to find out how much disk space it's consuming by:

journalctl --disk-usage
You can control the total size of this folder using a parameter in your /etc/systemd/journald.conf . Edit from the file manager (as root) or from the terminal by:

xed admin:///etc/systemd/journald.conf

and set SystemMaxUse=100M

Note - do not forget to remove the # from the start of the line!

After you have saved it, it is best to restart the systemd-journald service by:

sudo systemctl restart systemd-journald.service
This immediately saved me 3.5 Gbytes and that will increase as Timeshift rolls on.
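If you want to trim the journal immediately rather than wait for the new limit to take effect, journalctl has a built-in vacuum option (the size here simply matches the limit set above):

sudo journalctl --vacuum-size=100M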

4. Limiting the size of Log files [Advanced]

The system keeps a lot of log files which are also saved by TimeShift, and these were causing me to run out of space on my Defiant. The main problem was identified by running the Disk Usage Analyser as being in the /var/log area, where about 7 Gbytes was in use! Of this, 3.6 Gbytes was in /var/log/journal/ , which I have already covered, and the single syslog file was over 1.4 Gbytes. In contrast the Lafite had only 385 Mbytes in journal, 60 Mbytes in the cups folder and 57 Mbytes in syslog out of a total of 530 Mbytes. I have a very easy way to pick out the worst offenders using a single terminal command which looks for the largest folders and files in /var/log where they are all kept. You may need to read man du to understand how it works! It needs sudo as some of the log files have unusual owners.

sudo du -hsxc --time /var/log/* | sort -rh | head -30

which on my system now produces

pcurtis@defiant:~$ sudo du -hsxc --time /var/log/* | sort -rh | head -20
217M 2018-08-31 05:11 total
128M 2018-08-31 05:11 /var/log/journal
28M 2018-06-14 12:02 /var/log/installer
27M 2018-08-31 05:11 /var/log/syslog
20M 2018-08-31 05:11 /var/log/kern.log
6.1M 2018-08-31 04:48 /var/log/syslog.1
3.1M 2018-08-31 05:11 /var/log/Xorg.0.log
1.6M 2018-08-31 05:00 /var/log/timeshift
480K 2018-08-26 16:00 /var/log/auth.log.1
380K 2018-07-28 11:27 /var/log/dpkg.log.1
336K 2018-08-26 11:15 /var/log/boot.log
288K 2018-08-30 04:06 /var/log/apt
288K 2018-08-16 08:12 /var/log/lastlog
192K 2018-08-26 11:37 /var/log/kern.log.1
176K 2018-08-26 16:02 /var/log/samba
172K 2018-08-30 04:06 /var/log/dpkg.log
152K 2018-08-31 05:00 /var/log/cups
148K 2018-06-29 03:40 /var/log/dpkg.log.2.gz
128K 2018-08-26 16:02 /var/log/lightdm
120K 2018-08-26 11:37 /var/log/wtmp

Note: The above listing does not distinguish between folders (where the total of all subfolders is shown) and single files. For example journal is a folder while syslog and kern.log are files. We have considered journal already; note that although journal is in /var/log it is NOT rotated by logrotate. installer, timeshift and Xorg are further special cases.

When I started I had two huge log files (syslog and kern.log) which did not seem to be rotating out and totalled nearly 3 Gbytes. The files are automatically regenerated, so I first checked there were no errors filling them continuously by using tail to watch what was being added and to try to understand what had filled them:

tail -fn100 /var/log/syslog

tail -fn100 /var/log/kern.log

The n100 means the initial 'tail' is 100 lines instead of the default 10. I could see nothing obviously out of control so I finally decided to delete them and keep a watch on the log files from now on.

Note: Although the system should be robust enough to regenerate the files, I have seen warnings that it is better to truncate. This ensures the correct operation of any programs still writing to the file in append mode (which is the typical case for log files).

sudo truncate -s 0 /var/log/syslog
sudo truncate -s 0 /var/log/kern.log

Warning: If your log files are huge there is probably a reason - something is not working properly and is filling them. You should try to understand why before just limiting the size or deleting them. See https://askubuntu.com/questions/515146/very-large-log-files-what-should-i-do. In my case the main reason was that they had not been rotating for about 6 weeks but just growing, although there were other errors I found as well.

Once you are sure you do not have a problem you can avoid future problems by limiting the size of each log file by editing /etc/logrotate.conf and adding a maxsize 10M parameter.

xed admin:///etc/logrotate.conf

From man logrotate

maxsize size: Log files are rotated when they grow bigger than size bytes even before the additionally specified time interval (daily, weekly, monthly, or yearly). The related size option is similar except that it is mutually exclusive with the time interval options, and it causes log files to be rotated without regard for the last rotation time. When maxsize is used, both the size and timestamp of a log file are considered.

If size is followed by k, the size is assumed to be in kilobytes. If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G are all valid

My Note: The maxsize parameter only comes into play when logrotate is called which is currently once daily so its effect is limited unless you increase the frequency.

The easiest way to increase the frequency is to move the call of logrotate from /etc/cron.daily to /etc/cron.hourly.
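A minimal sketch of that move, assuming the standard location of the logrotate cron script on Mint and Ubuntu:

sudo mv /etc/cron.daily/logrotate /etc/cron.hourly/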

If you look at the bottom of /etc/logrotate.conf you will see that it has the potential to bring in configuration files for specific logs which inherit these parameters but can also overrule them and add additional parameters - see networkworld article 3218728 - so we also need to look at the specific configuration files in /etc/logrotate.d . As far as I can see none of them use any of the size parameters, so they should all inherit from /etc/logrotate.conf and you only need to add a value if you want a different or lower one. /etc/logrotate.d/rsyslog is the most important to check as it controls the rotation of syslog and many of the other important log files such as kern.log.

cat /etc/logrotate.d/rsyslog

Most of the other file names in /etc/logrotate.d are obvious, except that cups seems to be handled by /etc/logrotate.d/cups-daemon .

Last rotation times are a very valuable diagnostic and are available by:

cat /var/lib/logrotate/status

Copyright © Peter and Pauline Curtis
Layout revised: 2nd September, 2018