Diary of System and Website Development
Part 29 (January 2018 - August 2018)

2nd April 2018

Using Office 365 email under Linux

The Open University has started to provide access to Microsoft Office 365 to students as well as tutors. This has a big advantage even for a Linux user in that it provides an academic email address, i.e. one ending in .ac.uk, which is the standard check firms make before providing academic discounts.

The way it works is that one first goes to the Office 365 site office365.com where there should be a sign in link at the top right. Entering an email address of the form ouusername@ou.ac.uk takes one to an OU specific login page and then one is into the online Office 365, which has a list of Apps you can access, so -> Outlook and you have webmail access to your emails. Note that the forwarding option does not seem to work.

POP3, IMAP and SMTP Access to Office 365 Email Accounts

Webmail access is useful but it is much better to be able to access the account from Thunderbird, like all one's other accounts under Linux, or to use an Android phone or tablet. The setup information is found in Settings (the wheel at top right) -> Settings -> Mail (under your app settings) -> Accounts -> POP and IMAP, which gives all the information you need:

POP Setting

Server name: outlook.office365.com
Port: 995
Encryption method: TLS

IMAP Setting

Server name: outlook.office365.com
Port: 993
Encryption method: TLS

SMTP Setting

Server name: smtp.office365.com
Port: 587
Encryption method: STARTTLS

You only need this information; although there are some other tick boxes I left everything else alone. It may well be that you never need to sign into Office 365 online, but I suggest you do as it may be required to initiate everything.

We now have the information to set up Thunderbird as usual when adding a new POP3 or IMAP account, which I will not go into in detail, although I have put some screen shots below to show what has been set or changed from the defaults.

8 May 2018

Use of systemd

The applets need to access various system functions such as suspend, and some of the applets need programs running in the background as daemons, e.g. the vnstat daemon. The following is a very brief introduction to systemd, sufficient to act as a reminder for me, and is based on https://wiki.archlinux.org/index.php/Systemd

Basic systemctl usage

The main command used to introspect and control systemd is systemctl. Some of its uses are examining the system state and managing the system and services. See systemctl(1) for more details.

Tip: You can use all of the following systemctl commands with the -H user@host switch to control a systemd instance on a remote machine. This will use SSH to connect to the remote systemd instance.

Analyzing the system state

Show system status using:

systemctl status

List running units:

systemctl list-units

List failed units:

systemctl --failed

The available unit files can be seen in /usr/lib/systemd/system/ and /etc/systemd/system/ (the latter takes precedence). List installed unit files with:

systemctl list-unit-files

Using units

Units can be, for example, services (.service), mount points (.mount), devices (.device) or sockets (.socket).

When using systemctl, you generally have to specify the complete name of the unit file, including its suffix, for example sshd.socket. There are however a few short forms when specifying the unit in the following systemctl commands:

See systemd.unit(5) for details.
Note: Some unit names contain an @ sign (e.g. name@string.service): this means that they are instances of a template unit, whose actual file name does not contain the string part (e.g. name@.service). string is called the instance identifier, and is similar to an argument that is passed to the template unit when called with the systemctl command: in the unit file it will substitute the %i specifier.

To be more accurate, before trying to instantiate the name@.suffix template unit, systemd will actually look for a unit with the exact name@string.suffix file name, although by convention such a "clash" happens rarely, i.e. most unit files containing an @ sign are meant to be templates. Also, if a template unit is called without an instance identifier, it will just fail, since the %i specifier cannot be substituted.
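As an illustration, here is a sketch of what a template unit might look like; the name backup@.service and the script it runs are invented for this example, not part of any distribution:

```ini
# Hypothetical /etc/systemd/system/backup@.service (sketch only).
# 'systemctl start backup@user1.service' substitutes user1 for every %i below.

[Unit]
Description=Home folder backup for user %i

[Service]
Type=oneshot
# backup-home is an invented script name for this illustration
ExecStart=/usr/local/bin/backup-home %i

[Install]
WantedBy=multi-user.target
```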

Tip: Most of the following commands also work if multiple units are specified, see systemctl(1) for more information.
Tip: The --now switch can be used in conjunction with enable, disable, and mask to respectively start, stop, or mask immediately the unit rather than after the next boot.

Start a unit immediately:

systemctl start unit

Stop a unit immediately:

systemctl stop unit

Restart a unit:

systemctl restart unit

Ask a unit to reload its configuration:

systemctl reload unit

Show the status of a unit, including whether it is running or not:

systemctl status unit

Check whether a unit is already enabled or not:

systemctl is-enabled unit

Enable a unit to be started on bootup:

systemctl enable unit

Enable a unit to be started on bootup and start it immediately:

systemctl enable --now unit

Disable a unit to not start during bootup:

systemctl disable unit

Mask a unit to make it impossible to start it:

systemctl mask unit

Unmask a unit:

systemctl unmask unit

Show the manual page associated with a unit (this has to be supported by the unit file):

systemctl help unit

Reload systemd, scanning for new or changed units:

systemctl daemon-reload

Power management

polkit is necessary for power management as an unprivileged user. If you are in a local systemd-logind user session and no other session is active, the following commands will work without root privileges. If not (for example, because another user is logged into a tty), systemd will automatically ask you for the root password.

Shut down and reboot the system:

systemctl reboot

Shut down and power-off the system:

systemctl poweroff

Suspend the system:

systemctl suspend

Put the system into hibernation:

systemctl hibernate

Put the system into hybrid-sleep state (or suspend-to-both):

systemctl hybrid-sleep

I found the above information when looking at how to set up vnstat on systems which, unlike Mint, do not do so automatically.

Setting up vnstat on systems such as Arch Linux (WORK IN PROGRESS and untested)

vnStat is a lightweight (command line) network traffic monitor. It monitors selectable interfaces and stores network traffic logs in a database for later analysis.

Install the vnstat package

Distribution dependent


Start/Enable the vnstat.service daemon. For systemd systems the section above indicates that one does:

systemctl enable --now vnstat.service

Pick a preferred network interface and edit the Interface variable in /etc/vnstat.conf accordingly. To list all interfaces available to vnstat, use vnstat --iflist.
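For reference, the setting being edited is a single line in /etc/vnstat.conf; eth0 here is just an example interface name:

```ini
# Fragment of /etc/vnstat.conf - the default interface vnstat reports on
Interface "eth0"
```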

To start monitoring a particular interface you must initialize a database first. Each interface needs its own database. The command to initialize one for the eth0 interface is:

vnstat -u -i eth0


Query the network traffic:

vnstat -q

Viewing live network traffic usage:

vnstat -l

To find more options, use:

vnstat --help


Instructions based on https://www.howtoforge.com/tutorial/vnstat-network-monitoring-ubuntu/

Monitoring network traffic or bandwidth usage is an important task in an organisation or even for developers. It is sometimes required to monitor traffic on various systems which share the internet bandwidth. There might also be situations where network statistics are required for decision making in networking areas, or where the logged information on network traffic is needed for analysis tasks.

vnStat and vnStati are command line utilities which are very useful tools to help a user monitor, log and view network statistics over various time periods. vnStat provides summaries for various network interfaces, be it wired like "eth0" or wireless like "wlan0". It allows the user to view hourly, daily and monthly statistics in the form of a detailed table or a command line statistical view. To store the results in graphical form, we can use vnStati to obtain a visual display of the statistics as graphs and store them as images for later use.

This section deals with the procedure to install and use vnStat and vnStati. It also details the options and usage methods required to view and store the type of information you want. vnStat does most of the logging and updating, whereas vnStati is used to provide a graphical display of the statistics.

Installing vnStat and vnStati

System dependent

vnStat setup and running

Once the installation is complete, vnStat has to be set up or configured as it does not start on its own. vnStat has to be told explicitly which interfaces have to be monitored. We then start the vnStat daemon, called "vnstatd", which starts vnStat and keeps monitoring for as long as it is not stopped explicitly.

The first thing to do here is tell vnStat the network interfaces to monitor. Here we look at a wired interface "eth0" and a wireless interface "wlan0". Type the following commands in the terminal.

vnstat -u -i eth0

The above command activates monitoring of that interface. The first time you run this command on any interface you will get an error saying 'Unable to read database "/var/lib/vnstat/eth0"'. Ignore this, as you will also see it says it has set up a suitable database for the future!

Similarly to the above, we also set up the wireless network interface using the command:

vnstat -u -i wlan0

To view all the network interfaces available in your system, use the command:

vnstat --iflist

Once you know all the interfaces that you want to be monitored, use the command above with that interface name to monitor the traffic on it.

Once the above steps are complete, we can now start the vnStat daemon as shown earlier with systemctl enable --now vnstat.service.


7 August 2018

Adding an SSD to the Defiant

Timeshift can also be used for 'cloning' as you can choose which partition you restore to. For example I have recently added an SSD to the Defiant and I just created the partition scheme on the SSD, took a fresh snapshot and restored it to the appropriate partition on the SSD. It will not be available until Grub is alerted by a sudo update-grub, after which it will be in the list of operating systems available at the next boot. Assuming you have a separate /home it will continue to use the existing one, and you will probably want to also move the home folder - see the previous section on Moving a Home Folder to a dedicated Partition or different partition (Expert level) for the full story.

The majority of the section has been incorporated into the Defiant page.

12th August 2018

New Overall Backup Philosophy for Mint

My thoughts on backing up have evolved considerably over time and now take much more account of the use of several machines and of sharing between and within them, giving redundancy as well as security of the data. They now look much more at the ways the backups are used: they are not just a way of restoring a situation after a disaster or loss but also about cloning and sharing between users, machines and multiple operating systems. They continue to evolve to take into account the use of data encryption.

So first let's look at the broad areas that need to be backed up:

  1. The Linux operating system(s), mounted at root. This area contains all the shared built-in and installed applications but none of the configuration information for the applications or the desktop manager, which is specific to users. Mint has a built-in utility called TimeShift which is fundamental to how potential regressions are handled - it does everything required for this area's backups and can be used for cloning. TimeShift will be covered in detail in a separate section.
  2. The users' home folders, which are folders mounted within /home and contain the configuration information for each of the applications as well as for the desktop manager, which is specific to each user, such as themes, panels, applets and menus. A home folder also contains all the data belonging to a specific user including the Desktop and the standard folders such as Documents, Video, Music and Photos etc. It will probably also contain the email 'profiles' if an SSD is in use. This is the most challenging area with the widest range of requirements, so it is the one covered in the greatest depth here.
  3. Shared Data. The above covers the minimum areas but I have an additional DATA area which is available to all operating systems and users and is periodically synchronised between machines as well as being backed up. This is kept independent and has a separate mount point. In the case of machines dual booted with Windows it uses a file system format compatible with Windows and Linux such as NTFS. The requirement for easy and frequent synchronisation means Unison is the logical tool for DATA between machines, with associated synchronisation to a large USB hard drive for backup. Unison is covered in detail elsewhere in this page.

Email and Browsers. I am also going to mention email specifically as it has particular issues: it needs to be collected on every machine as well as on pads and phones, and some track kept of replies regardless of source. All incoming email is retained on the servers for months if not years, and all outgoing email is copied either to a separate account accessible from all machines or, where that is not possible automatically such as on Android, a copy is sent back to the sender's inbox. Thunderbird has a self-contained 'profile' where all the local configuration and the filing system for emails is retained, and that profile, along with the matching one for the Firefox browser, needs to be backed up in a way which depends on where they are held. The obvious places are the DATA area, allowing sharing between operating systems and users, or each user's home folder, which offers more speed if an SSD is used and better security if encryption is implemented.

Physical Implications of Backup Philosophy - Partitioning

I am not going to go into this in great depth as it has already been covered in other places but my philosophy is:

  1. The folder containing all the users home folders should be a separate partition mounted as /home. This separates the various functions and makes backup, sharing and cloning easier.
  2. There are advantages in having two partitions for linux systems so new versions can be run for a while before committing to them. A separate partition for /home is required if different systems are going to share it.
  3. When one has an SSD the best speed will result from having the linux systems and the home folder using the SSD especially if the home folders are going to be encrypted.
  4. Shared DATA should be in a separate partition mounted at /media/DATA. If one is sharing with a Windows system it should be formatted as ntfs which also reduces problems with permissions and ownership with multiple users. DATA can be on a separate slower but larger hard drive.
  5. If you have an SSD, swapping should be minimised and the swap partition should be on a hard drive if one is available, to maximise SSD life.

The Three Parts to Backing Up

System Backup - TimeShift - Scheduled Backups and more.

TimeShift is now fundamental to the update manager philosophy of Mint and makes backing up the Linux system very easy. To quote: "The star of the show in Linux Mint 19 is Timeshift. Thanks to Timeshift you can go back in time and restore your computer to the last functional system snapshot. If anything breaks, you can go back to the previous snapshot and it's as if the problem never happened. This greatly simplifies the maintenance of your computer, since you no longer need to worry about potential regressions. In the eventuality of a critical regression, you can restore a snapshot (thus canceling the effects of the regression) and you still have the ability to apply updates selectively (as you did in previous releases)." The best information I have found about TimeShift and how to use it is by the author.

TimeShift is similar to applications like rsnapshot, BackInTime and TimeVault but with different goals. It is designed to protect only system files and settings. User files such as documents, pictures and music are excluded. This ensures that your files remain unchanged when you restore your system to an earlier date. Snapshots are taken using rsync and hard-links. Common files are shared between snapshots, which saves disk space. Each snapshot is a full system backup that can be browsed with a file manager. TimeShift is efficient in its use of storage but it still has to store the original and all the additions/updates over time. The first snapshot seems to occupy slightly more disk space than the root filesystem, and six months of additions added approximately another 35% in my case. I run with a root partition / and separate partitions for /home and DATA. Using Timeshift means that one needs to allocate roughly twice the storage one expects the root file system to grow to.

In the case of the Defiant the root partition has grown to about 11 Gbytes and 5 months of Timeshift added another 4 Gbytes, so the partition with the /timeshift folder needs to have at least 22 Gbytes spare if one intends to keep a reasonable span of scheduled snapshots over a long time period. After three weeks of testing Mint 19 my TimeShift folder has reached 21 Gbytes for an 8.9 Gbyte system!

These space requirements for TimeShift obviously have a big impact on the partition sizes when one sets up a system. My Defiant was set up to allow several systems to be employed with multiple booting. I initially had the timeshift folder on the /home partition, which had plenty of space, but that does not work with a multiboot system sharing the /home folder. Fortunately two of my partitions for Linux systems are plenty big enough for use of TimeShift, and the third, which is 30 Gbytes, is acceptable if one is prepared to prune the snapshots occasionally.

Cloning your System using TimeShift

Timeshift can also be used for 'cloning' as you can choose which partition you restore to. For example I have recently added an SSD to the Defiant and I just created the partition scheme on the SSD, took a fresh snapshot and restored it to the appropriate partition on the SSD. It will not be available until Grub is alerted by a sudo update-grub, after which it will be in the list of operating systems available at the next boot. Assuming you have a separate /home it will continue to use the existing one, and you will probably want to also move the home folder - see the previous section on Moving a Home Folder to a dedicated Partition or different partition (Expert level) for the full story.

Warning about deleting system partitions after cloning.

When you come to tidy up your partitions after cloning you Must do a sudo update-grub after any partition deletions and before any reboot. If Grub can not find a partition it expects, it hangs and you will not be able to boot your system at all; you will drop into a grub rescue prompt.

I made this mistake and used the procedure in https://askubuntu.com/questions/493826/grub-rescue-problem-after-deleting-ubuntu-partition by Amr Ayman and David Foester, which I reproduce below:

grub rescue > ls
(hd0) (hd0,msdos5) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1) (hd1) (hd1,msdos1)
grub rescue > ls (hd0,msdos1) # try to recognize which partition is this
grub rescue > ls (hd0,msdos2) # let's assume this is the linux partition
grub rescue > set root=(hd0,msdos2)
grub rescue > set prefix=(hd0,msdos2)/boot/grub # or wherever grub is installed
grub rescue > insmod normal # if this produced an error, reset root and prefix to something else ..
grub rescue > normal

For a permanent fix run the following after you successfully boot:

sudo update-grub
sudo grub-install /dev/sdX

where /dev/sdX is your boot drive.

It was not a pleasant activity and involved far too much trial and error, so make sure you do update-grub.

Users - Home Folder Archiving using Tar.

Tar is a very powerful command line archiving tool, on which many of the GUI tools are based, and it should work on most Linux distributions. In many circumstances it is best to use it directly to back up your system. The resulting files can also be accessed (or created) by the Archive Manager, reached by right clicking on a .tgz, .tar.gz or .tar.bz2 file. Tar is an ideal way to back up many parts of a system, in particular one's home folder.

The backup process is slow (15 mins plus) and the file over a gigabyte for the simplest system. After it is complete the file should be moved to a safe location, preferably a DVD or external device. If you want a higher compression method the command "tar cvpjf mybackup.tar.bz2" can be used in place of "tar cvpzf mybackup.tgz". This will use bzip2 to do the compressing (the -j option); it will take longer but gives a smaller file.

You can access parts of the archive using the GUI Archive Manager by right clicking on the .tgz file - again slow on such a large archive.

Tar, in the simple way we will be using it, takes a folder and compresses all its contents into a single 'archive' file. With the correct options this can be what I call an 'exact' copy where all the subsidiary information such as timestamp, owner, group and permissions is stored without change. Soft (symbolic) links and hard links can also be retained. Normally one does not want to follow a link out of the folder and put all of the target into the archive, so one needs to take care.

We want to back up each user's home folder so it can be easily replaced on the existing machine or on a replacement machine. The ultimate test is: can one back up the user's home folder, delete it (safer is to rename it) and restore it exactly, so the user can not tell in any way? The home folder is, of course, continually changing when the user is logged in, so backing up and restoring should really be done when the user is not logged in, i.e. from a different user, a LiveUSB or from a console.

Firstly we must consider what is, arguably, the most fundamental decision about backing up: the way we specify the location being saved when we create the tar archive, and when we extract it - the paths must usually restore the folder to the same place. If we store absolute locations we must extract in a matching way; likewise if the paths are relative. So we will always have to consider pairs of commands depending on what we chose.
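The difference between the two choices can be seen in a scratch directory; the /tmp/tardemo1 paths below are invented purely for illustration and no sudo is needed:

```shell
# Set up a stand-in for /home/user1 in a scratch area
rm -rf /tmp/tardemo1
mkdir -p /tmp/tardemo1/home/user1
echo hello > /tmp/tardemo1/home/user1/file.txt

# Absolute paths, as in Method 1 (-P keeps the leading '/')
tar cPzf /tmp/tardemo1/abs.tgz /tmp/tardemo1/home/user1

# Relative paths, as in Method 2 (run from the parent directory)
(cd /tmp/tardemo1/home && tar czf /tmp/tardemo1/rel.tgz user1)

# Listing the archives shows what was stored in each case
tar tPzf /tmp/tardemo1/abs.tgz   # names start with /tmp/tardemo1/home/user1/
tar tzf /tmp/tardemo1/rel.tgz    # names start with user1/
```

The first archive goes back to the identical absolute location; the second can be pointed anywhere with -C.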

Method 1 has absolute paths and shows home when we open the archive with just a single user folder below it. This is what I have always used for my backups and the folder is always restored to /home on extraction.

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /

Method 2 shows the user's folder at the top level when we open the archive. This is suitable for extracting to a different partition or place, but here the extraction is back to the correct folder.

cd /home && sudo tar cvpzf "/media/USB_DATA/mybackup2.tgz" user1 --exclude=user1/.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackup2.tgz" -C /home

Method 3 shows the folders within the user's folder at the top level when we open the archive. This is also suitable for extracting to a different partition or place, and has been added to allow backing up and restoring encrypted home folders where the encrypted folder may be mounted in a different place at the time.

cd /home/user1 && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz" -C /home/user1

These are all single lines if you cut and paste.
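The backup-and-restore pair can be rehearsed end to end in a scratch area first; here /tmp/tardemo2 stands in for /home, and no sudo is required since we own everything:

```shell
# Create a stand-in user folder with some content
rm -rf /tmp/tardemo2 /tmp/tardemo2-backup.tgz
mkdir -p /tmp/tardemo2/user1/Documents
echo "important" > /tmp/tardemo2/user1/Documents/notes.txt

# Back up in the style of Method 2: relative paths from the parent folder,
# with the archive kept outside the folder being archived
(cd /tmp/tardemo2 && tar cpzf /tmp/tardemo2-backup.tgz user1)

# Simulate the loss of the folder, then restore it with -C
rm -rf /tmp/tardemo2/user1
tar xpzf /tmp/tardemo2-backup.tgz -C /tmp/tardemo2

cat /tmp/tardemo2/user1/Documents/notes.txt   # prints: important
```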

Archive creation options: The options used when creating the archive are: create archive, verbose mode (you can leave this out after the first time), retain permissions, -P do not strip the leading '/', gzip the archive, and file output. Then follows the name of the file to be created, mybackup.tgz, which in this example is on an external USB drive called 'USB_DATA' - the backup name should include the date for easy reference. Next is the directory to back up. Next are the objects which need to be excluded - the most important of these is your backup file if it is in your /home area (not needed in this case) or the archive would be recursive! It also excludes the folder (.gvfs) which is used dynamically by a file mounting system and is locked, which stops tar from completing. Problems with files which are in use can be avoided by creating another user and doing the backup from that user - overall that is a cleaner way to work.

Archive restoration uses the options: extract, verbose, retain permissions, from file and gzip. This will take a while. The "-C /" ensures that the directory is Changed to a specified location; in Method 1 this is root so the files are restored to their original locations. In Method 2 you can choose, but it is normally /home. Method 3 is useful if you mount an encrypted home folder independently of login using ecryptfs-recover-private --rw, which mounts to /tmp/user.8random8

Deleting Files: If the old system is still present, note that tar only overwrites files; it does not delete files from the old version which are no longer needed. I normally restore from a different user and rename the user's home folder before running tar as above; when I have finished I delete the renamed folder. This needs root/sudo and the easy way is to right click on a folder in Nemo and 'open as root' - make sure you use a right click delete to avoid items going into a root deleted-items folder.
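The point that tar only overwrites and never deletes is easy to demonstrate in a scratch directory (invented /tmp paths again):

```shell
# Archive a stand-in user folder
rm -rf /tmp/tardemo3
mkdir -p /tmp/tardemo3/user1
echo old > /tmp/tardemo3/user1/kept.txt
(cd /tmp/tardemo3 && tar czf backup.tgz user1)

# A file created after the backup was taken
echo stray > /tmp/tardemo3/user1/stray.txt

# Restoring over the top rewrites kept.txt but leaves stray.txt in place
tar xzf /tmp/tardemo3/backup.tgz -C /tmp/tardemo3
ls /tmp/tardemo3/user1   # kept.txt and stray.txt are both present
```

This is why renaming the old folder first, as described above, gives a genuinely clean restore.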

Higher compression: If you want to use a higher compression method the option -j can be used in place of the -z option, and .tar.bz2 should be used in place of .tgz for the backup file extension. This will use bzip2 to do the compressing; it will take longer but gives a smaller file - I have never bothered so far.

Deleting Archive files: If you want to delete the archive file then you will usually find it is owned by root so make sure you delete it in a terminal - if you use a root browser then it will go into a root Deleted Items which you can not easily empty so it takes up disk space for ever more. If this happens then read http://www.ubuntugeek.com/empty-ubuntu-gnome-trash-from-the-command-line.html and/or load the trash-cli command line trash package using the Synaptic Package Manager and type

sudo trash-empty

Alternative to multiple methods: The tar manual contains information on an option to strip a given number of leading components from file names before extraction, namely --strip-components=number

To quote

For example, suppose you have archived whole `/usr' hierarchy to a tar archive named `usr.tar'. Among other files, this archive contains `usr/include/stdlib.h', which you wish to extract to the current working directory. To do so, you type:

$ tar -xf usr.tar --strip=2 usr/include/stdlib.h

The option --strip=2 instructs tar to strip the two leading components (`usr/' and `include/') off the file name.

If you add the --verbose (-v) option to the invocation above, you will note that the verbose listing still contains the full file name, with the two removed components still in place. This can be inconvenient, so tar provides a special option for altering this behavior, --show-transformed-names.


This should allow an archive saved with the full path information to be extracted with the /home/user information stripped off. I have only done partial testing but invocations of the following sort seem to work:

Method 4 This should also enable one to restore to an encrypted home folder where the encrypted folder may be mounted in a different place at the time by ecryptfs-recover-private --rw

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" --strip=2 --show-transformed-names -C /home/user1
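A scratch run shows the stripping in action; the /tmp/tardemo4 layout below is invented for the demonstration:

```shell
# An archive whose members carry two leading components, home/user1/
rm -rf /tmp/tardemo4
mkdir -p /tmp/tardemo4/home/user1 /tmp/tardemo4/target
echo data > /tmp/tardemo4/home/user1/file.txt
(cd /tmp/tardemo4 && tar czf arch.tgz home/user1)

# Extract with the two leading components stripped, to a different target
tar xzf /tmp/tardemo4/arch.tgz --strip-components=2 -C /tmp/tardemo4/target

ls /tmp/tardemo4/target   # file.txt, with the home/user1/ prefix removed
```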

Archiving a home folder and restoring

Everything has already been covered above, so this is really just a slight expansion for a specific case.

This uses Method 1 where all the paths are absolute so the folder you are running from is not an issue. This is the method I have always used for my backups so it is well proven. The folder is always restored to /home on extraction so you need to remove or preferably rename the users folder before restoring it. If a backup already exists delete it or use a different name. Both creation and retrieval must be done from a temporary user.

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /

Cloning between machines and operating systems using a backup archive (Advanced)

It is possible that you want to clone a machine, for example when you buy a new machine. It is usually easy if you have the home folder on a separate partition, the user you are cloning was the first user installed and you make the new username the same as the old - I have done that many times. There is however a catch which you need to watch for: user names are a two stage process. If I set up a system with the user peter when I install, that is actually just an 'alias' to the numeric user id 1000 in the Linux operating system. I then set up a second user pauline who corresponds to 1001. If I have a disaster and reinstall, and this time start with pauline, she is then 1000 and peter is 1001. If I now restore my carefully backed up folders, all the owners etc. are incorrect as they use the underlying numeric value - apart, of course, from where the name is used in hard coded scripts etc.

You can check all the relevant information for the machine you are cloning from in a terminal by use of id:

pcurtis@mymachine:~$ id
uid=1000(pcurtis) gid=1000(pcurtis) groups=1000(pcurtis),4(adm),6(disk),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),29(audio),30(dip),44(video),46(plugdev),104(fuse),108(avahi-autoipd),109(avahi),110(netdev),112(lpadmin),120(admin),121(saned),122(sambashare),124(powerdev),128(mediatomb)

So when you install on a new machine you should always use the same username and password as on the original machine, and then create an extra user with admin (sudo) rights for convenience for the next stage. Change to your temporary user, rename the first user's folder (you need to be root) and replace it with the archived folder from the original machine. Now log in to the user again and that should be it. At this point you can delete the temporary user. If you have multiple users to clone, the user names must obviously be the same and, more importantly, the numeric ids must be the same, as that is what is actually used by the kernel; the username is really only a convenient alias. This means that the users you may clone must always be installed in the same order on both machines or operating systems so they have the same numeric UID.
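A quick way to compare the numeric ids before restoring is a one-line helper; 'root' is used below only because it exists on every system - substitute the real username:

```shell
# Print the numeric UID behind a username; ownership inside the archive
# is stored by this number, not by the name
uid_of() { id -u "$1"; }

# Run on both machines and compare; e.g. uid_of pcurtis should give the
# same number (1000 for a first-installed user) on each machine
uid_of root   # prints 0 on any Linux system
```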

So we first make a backup archive in the usual way and take it to the other machine or switch to the other operating system and restore as usual. It is prudent to backup the system you are going to overwrite just in case.

So first check the id on both machines for the user by use of

id user

If and only if the ids are the same

On the first machine and from a temporary user:

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

On the second machine or after switching operating system and from a temporary user:

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /
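The UID check this procedure depends on can be wrapped in a small helper, which is handy if you script the restore. This is just a sketch; `user1` and the UID of 1000 are placeholders for whatever `id` reported on the original machine:

```shell
# Succeed (exit 0) only if the given username currently maps to the
# expected numeric UID on this machine.
check_uid() {
    local user="$1" expected="$2"
    [ "$(id -u "$user" 2>/dev/null)" = "$expected" ]
}

# Example: refuse to restore unless user1 is UID 1000 here as well
if check_uid user1 1000; then
    echo "UID matches - safe to restore the archive"
else
    echo "UID mismatch - restoring would leave wrong ownership" >&2
fi
```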

Moving a Mint Home Folder to a different Partition (Expert level)

I have recently added a SSD to the Defiant computer and this covers part of the moving of the Linux System and home folder /home from the existing hard drive to the faster SSD. This uses the same techniques as used for backing up the users home folders and is also appropriate for a move of a home folder to a partition.

The problem of exactly copying a folder is not as simple as it seems - see https://stackoverflow.com/questions/19434921/how-to-duplicate-a-folder-exactly. You not only need to preserve the contents of the files and folders but also the owner, group, permissions and timestamps. You also need to be able to handle symbolic links and hard links. I initially used a complex procedure using cpio but am no longer convinced that covers every case, especially if you use Wine, where .wine is full of links and hard coded scripts. The stackoverflow thread has several sensible options for 'exact' copies. I also have a well proven way of creating and restoring backups of home folders exactly using tar, which has advantages as we would create a backup before proceeding in any case!

When we back up normally we use tar to create a compressed archive which can be restored exactly, and to the same place; even during cloning we are still restoring the user home folders to be under /home. If you are moving to a separate partition you want to extract to a different place, which will eventually become /home once the new mount point is set up in the file system mount point list in /etc/fstab. It is convenient to always use the same backup procedure, so you need to get at least one user in place in the new home folder. I am not sure I trust any of the copy methods totally for my real users, but I do believe it is possible to move a basic user (created only for the transfers) that you can use to do the initial login after changing the location of /home, and which can then be used to extract all the real users from their backup archives. The savvy reader will also realise you can use Method 2 above to move them directly to the temporary mount point.

You can create this basic user very easily and quickly using Users and Groups: Menu -> Users and Groups -> Add Account, Type Administrator ... -> Add -> click Password to set a password, otherwise you cannot use sudo.

An 'archive' copy using cp is good enough in the case of a very basic user which has recently been created and little used; such a home folder may only be a few tens of kbytes in size:

sudo cp -ar /home/basicuser /media/whereever

The -a option makes an archive copy preserving most attributes, and it already implies recursion so sub folders and their contents are copied; the -r is therefore redundant but harmless.
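To gain confidence that the copy really is exact you can compare the metadata of the source and destination trees. This sketch uses GNU find's -printf to list mode, numeric owner, group and relative path for every entry, sorted so the two listings can be diffed; no diff output means they match:

```shell
# Print mode, numeric UID/GID and relative path for every entry under a
# directory, sorted so that two trees can be compared line by line.
tree_meta() {
    (cd "$1" && find . -printf '%M %U %G %p\n' | sort)
}

# Usage (no output from diff means the metadata matches):
#   diff <(tree_meta /home/basicuser) <(tree_meta /media/whereever/basicuser)
```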

So the procedure to move /home to a different partition is to: back up all the real users with tar; create a basic temporary user; copy the basic user's folder to the new partition with cp -a; edit /etc/fstab so the new partition mounts as /home; reboot and log in as the basic user; then restore the real users from their tar archives.

Example of changing file system table to auto-mount different partition as /home

I will give an example of the output of blkid and the contents of /etc/fstab after moving my home folder to the SSD drive; the changes are the two /home lines, where the old entry has been commented out and a new one added. Note this is under Mint 19, which also changes the invocation of the text editor as root.

pcurtis@defiant:~$ blkid
/dev/sda1: LABEL="EFIRESERVED" UUID="06E4-9D00" TYPE="vfat" PARTUUID="333c558c-8f5e-4188-86ff-76d6a2097251"
/dev/sda2: LABEL="MINT19" UUID="e07f0d65-8835-44e2-9fe5-6714f386ce8f" TYPE="ext4" PARTUUID="4dfa4f6b-6403-44fe-9d06-7960537e25a7"
/dev/sda3: LABEL="MINT183" UUID="749590d5-d896-46e0-a326-ac4f1cc71403" TYPE="ext4" PARTUUID="5b5913c2-7aeb-460d-89cf-c026db8c73e4"
/dev/sda4: UUID="99e95944-eb50-4f43-ad9a-0c37d26911da" TYPE="ext4" PARTUUID="1492d87f-3ad9-45d3-b05c-11d6379cbe74"
/dev/sdb1: LABEL="System Reserved" UUID="269CF16E9CF138BF" TYPE="ntfs" PARTUUID="56e70531-01"
/dev/sdb2: LABEL="WINDOWS" UUID="8E9CF8789CF85BE1" TYPE="ntfs" PARTUUID="56e70531-02"
/dev/sdb3: UUID="178f94dc-22c5-4978-b299-0dfdc85e9cba" TYPE="swap" PARTUUID="56e70531-03"
/dev/sdb5: LABEL="DATA" UUID="2FBF44BB538624C0" TYPE="ntfs" PARTUUID="56e70531-05"
/dev/sdb6: UUID="138d610c-1178-43f3-84d8-ce66c5f6e644" SEC_TYPE="ext2" TYPE="ext3" PARTUUID="56e70531-06"
/dev/sdb7: UUID="b05656a0-1013-40f5-9342-a9b92a5d958d" TYPE="ext4" PARTUUID="56e70531-07"
/dev/sda5: UUID="47821fa1-118b-4a0f-a757-977b0034b1c7" TYPE="swap" PARTUUID="2c053dd4-47e0-4846-a0d8-663843f11a06"
pcurtis@defiant:~$ xed admin:///etc/fstab

and the contents of /etc/fstab after modification

# <file system> <mount point> <type> <options> <dump> <pass>

UUID=e07f0d65-8835-44e2-9fe5-6714f386ce8f / ext4 errors=remount-ro 0 1
# UUID=138d610c-1178-43f3-84d8-ce66c5f6e644 /home ext3 defaults 0 2
UUID=99e95944-eb50-4f43-ad9a-0c37d26911da /home ext4 defaults 0 2
UUID=2FBF44BB538624C0 /media/DATA ntfs defaults,umask=000,uid=pcurtis,gid=46 0 0
UUID=178f94dc-22c5-4978-b299-0dfdc85e9cba none swap sw 0 0
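Before rebooting it is worth a quick sanity check that the edited table says what you think it says. The awk helper below is a sketch that extracts the device configured for a given mount point; util-linux 2.31+ (as shipped with Mint 19) can also check the whole table with findmnt --verify:

```shell
# Extract the device field configured for a given mount point from an
# fstab-style file, skipping comment lines.
fstab_source() {
    awk -v mp="$2" '$1 !~ /^#/ && $2 == mp { print $1 }' "$1"
}

# Usage: confirm which device will be mounted as /home after a reboot
#   fstab_source /etc/fstab /home
# and sanity-check the whole table (util-linux 2.31+):
#   sudo findmnt --verify
```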

In summary: there are many advantages in having one's home directory on a separate partition, but overall this change is not a procedure to be carried out unless you are prepared to experiment a little. It is much better to get it right and create one when installing the system.

Cloning into a different username - Not Recommended but somebody will want to try

I have also tried to clone into a different username but do not recommend it. It is possible to change the folder name and set up all the permissions, and everything other than Wine should be OK on a basic system. The .desktop files for Wine contain the user name hard coded so these will certainly need to be edited, and all the configuration for the Wine programs will be lost. You will also have to change any of your scripts which have the user name 'hard coded'. I have done it once but the results were far from satisfactory. If you want to try, you should do the following before you log in for the first time after replacing the home folder from the archive. Change the folder name to match the new user name; the following commands then set the owner and group to the new user and give standard permissions for all the files other than .dmrc, which is a special case.

sudo chown -R eachusername:eachusername /home/eachusername
sudo chmod -R 755 /home/eachusername
sudo chmod 644 /home/eachusername/.dmrc

This needs to be done before the username is logged into for the first time, otherwise many desktop settings will be lost and the following warning message appears.

Users $Home/.dmrc file is being ignored. This prevents the default session and language from being saved. File should be owned by user and have 644 permissions.
Users $Home directory must be owned by user and not writable by others.

If this happens it is best to start again, remembering that the archive extraction does not delete files, so you need to get rid of the folder first!
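The hard-coded paths mentioned above can at least be located and rewritten mechanically. This is a hedged sketch using grep and sed; always review the matches before editing anything, and note that the usernames here are just placeholders:

```shell
# Rewrite /home/olduser/ paths to /home/newuser/ on stdin.
# Arguments: old username, new username.
rewrite_home_path() {
    sed "s|/home/$1/|/home/$2/|g"
}

# Find candidate files first, then rewrite each one after checking it:
#   grep -rl '/home/peter/' /home/pauline/.local/share/applications
#   rewrite_home_path peter pauline < file.desktop > file.desktop.new
```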

Changing the rate of the automated trim in Mint 19

Mint 19 is set up to do an automatic trim every week, which seems to be a bit too long an interval: if one inspects the result of an occasional manual trim with sudo fstrim -av you may well see quite large numbers.

Earlier versions of Ubuntu/Mint used cron to schedule the trims but systemd timers are used in Mint 19. When a service is controlled by a timer there are two linked 'units', which are files with the same name; the service has an extension of .service and the timer an extension of .timer. Only the timer has to be enabled in systemd for the service to be run as prescribed in the timer. In the case of fstrim both are part of the Linux system and are found in /usr/lib/systemd/system/, and they should not be edited directly, as systemctl provides built-in mechanisms for editing and modifying unit files if you need to make adjustments.

Editing Unit Files with a drop-in file using fstrim.timer as an example

This section is loosely based on https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units , https://wiki.archlinux.org/index.php/Systemd/Timers#Timer_units and the manual page systemd.timer(5)

The correct way to change a system service or timer is to add a drop-in (override) file. The edit command, by default, will open a drop-in file for the unit in question, in this case fstrim.timer:

sudo systemctl edit fstrim.timer

This will be a blank file that can be used to override or add directives to the unit definition. A directory will be created within the /etc/systemd/system directory which contains the name of the unit with .d appended. For instance, for the fstrim.timer, a directory called fstrim.timer.d will be created.

Within this directory, a drop-in file will be created called override.conf. When the unit is loaded, systemd will, in memory, merge the override with the full unit file. The override's directives will take precedence over those found in the original unit file.

The override.conf file will be opened in the terminal by sudo systemctl edit fstrim.timer, by default in an editor such as nano or vim, which some find unfriendly; you can set the SYSTEMD_EDITOR environment variable to choose another.

To remove any additions you have made, delete the unit's .d configuration directory and drop-in file from /etc/systemd/system. For instance, to remove our override, we could type:

sudo rm -r /etc/systemd/system/fstrim.timer.d

After deleting the file or directory, you should reload the systemd process so that it no longer attempts to reference these files and reverts back to using the system copies. You can do this by typing:

sudo systemctl daemon-reload

So if we look inside the current fstrim.timer using systemctl cat fstrim.timer we find:

pcurtis@defiant:~$ systemctl cat fstrim.timer
# /lib/systemd/system/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target

We now edit it so we can add a suitable override with

sudo systemctl edit fstrim.timer

and paste in the following to change the title and timing from weekly to daily:

[Unit]
Description=Discard unused blocks once a day

[Timer]
OnCalendar=
OnCalendar=daily

Timers are cumulative, so in the override one needs to reset the existing timer with the initial empty OnCalendar= line, otherwise your OnCalendar=daily setting gets added to the existing weekly one.

In theory we do not need to reload the systemd process if we use systemctl edit, but it can be done by:

sudo systemctl daemon-reload

but I did it anyway, and when I repeated the cat I got:

pcurtis@defiant:~$ systemctl cat fstrim.timer
# /lib/systemd/system/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/fstrim.timer.d/override.conf
[Unit]
Description=Discard unused blocks once a day

[Timer]
OnCalendar=
OnCalendar=daily


When I checked the status of the fstrim service after about 24 hours I got:

pcurtis@defiant:~$ systemctl status fstrim.service

Loaded: loaded (/lib/systemd/system/fstrim.service; static; vendor preset: enabled)
Active: inactive (dead) since Sat 2018-08-25 03:08:38 BST; 11h ago
Main PID: 31821 (code=exited, status=0/SUCCESS)

Aug 25 03:08:37 defiant systemd[1]: Starting Discard unused blocks...
Aug 25 03:08:38 defiant fstrim[31821]: /home: 965.9 MiB (1012805632 bytes) trimmed
Aug 25 03:08:38 defiant fstrim[31821]: /: 1.7 GiB (1760002048 bytes) trimmed
Aug 25 03:08:38 defiant systemd[1]: Started Discard unused blocks.

I have gone through this at some length as it is a useful technique with wider applicability and I had to spend a lot of time before I understood how systemd timers worked and how to correctly change one.

26 August 2018

System Housekeeping to reduce TimeShift storage requirements.

Mint's new update philosophy of presenting every possible update, and relying on TimeShift if there are problems, may lead to a more secure system but does lead to a large build up of cached downloads of updates in the apt cache and of kernels. There are possibilities for some automatic reduction in cached downloads, but kernels have to be deleted manually.

Another area where there can be large amounts of little used data is in log files which are normally transient in the filesystem but have long term copies in TimeShift. On one of my machines I had about 7 Gbytes of system and systemd log files in /var/log . A good way to look for problems is to use the built in Disk Usage Analyser in the menu which allows one to zoom in to find unexpectedly large folders. The main areas to look into are:

  1. The apt cache of downloaded updates ~ 1 Gbyte
  2. Old unused kernels ~ 450 Mbytes per kernel and ~ 5 kernels
  3. Systemd Journal Files ~ 3 Gbytes on one of my machines.
  4. Log Files ~ 2.1 Gbytes reduction on one machine - this should normally be less, as I also found there was a problem with log file rotation at the time.

Overall I quadrupled the spare space on the Defiant from 4.2 Gbytes to 16.4 Gbytes on a 42 Gbyte root partition and that should improve further as the excess rolls out of the saved snapshots.

The last two may have been higher than would normally be expected as there was a problem with log file rotation at the time. You need not make the changes for 4. Log files unless you find you frequently have large files; it is also much more complex, involving extensive use of the terminal.

We will now look at the most likely problem areas in detail:

1. The apt cache of downloaded updates.

All updates which are downloaded are, by default, stored in /var/cache/apt/archives until they are no longer current, ie there is a more up-to-date version which has been downloaded. As time goes on and more and more updates take place this will grow in size, and TimeShift will not only back up the current files but also the ones which have been replaced. Fortunately it is easy to change the default to delete downloaded files after installation, which will also save space in TimeShift. The only reason to retain them is to copy them to another machine to update it without an extra download, which is most appropriate if you are on mobile broadband; the default can be changed when you need to do that, and they can then be cleared manually before TimeShift finds them.

The best and very simple way is to use the Synaptic Package Manager

Synaptic Package Manager -> Settings -> Preferences -> Tick box for 'Delete download files after installation' and click 'Delete Cached Package Files'.

This freed 1.0 Gbyte in my case and they will slowly fall out of the TimeShift Snapshots - a good start
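The same can be done from the terminal if you prefer: du shows how much the cache (the standard Ubuntu/Mint path) currently holds, and apt-get clean empties it, equivalent to Synaptic's 'Delete Cached Package Files':

```shell
# See how much the apt cache currently holds
du -sh /var/cache/apt/archives 2>/dev/null || true

# Empty it (run manually; same effect as the Synaptic button):
#   sudo apt-get clean
```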

2. Out of date Kernels


Update Manager -> View -> Linux Kernels - You will get a warning which you continue past to a list of kernels.

I had 7 installed. You need at least one well tried earlier kernel, perhaps 2, but not 6. So I deleted all but the three most recent kernels. You have to do each one separately, and it seems that the grub menu is updated automatically. Each kernel takes up about 72 Mbytes in /boot, 260 Mbytes in /lib/modules and 140 Mbytes in /usr/src as kernel headers, so removing 4 old kernels saved a useful 1.9 Gbytes.
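You can also see exactly which kernel packages are installed from the terminal before pruning them in the Update Manager. A sketch (the package-name pattern matches the Ubuntu/Mint naming; never remove the kernel reported by uname -r):

```shell
# Filter a `dpkg --list` dump for versioned kernel image/header/module packages.
list_kernel_pkgs() {
    grep -E '^ii +linux-(image|headers|modules)-[0-9]' | awk '{print $2}'
}

# Usage:
#   dpkg --list | list_kernel_pkgs
#   uname -r   # the running kernel - never remove this one
```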

There is a script being considered for 19.1 https://github.com/Pjotr123/purge-old-kernels-2 which may be used to limit the number of kernels.

3. Limiting the maximum systemd journal size

The journal is a special case of a raw log file type and comes directly from systemd. I sought information on limiting the size and found https://unix.stackexchange.com/questions/130786/can-i-remove-files-in-var-log-journal-and-var-cache-abrt-di-usr which gave a way to check and limit the size of the systemd journal files in /var/log/journal/

You can query using journalctl to find out how much disk space it's consuming by:

journalctl --disk-usage

You can control the total size of this folder using a parameter in /etc/systemd/journald.conf. Edit from the file manager (as root) or from the terminal by:

xed admin:///etc/systemd/journald.conf

and set SystemMaxUse=100M

Note - do not forget to remove the # from the start of the line!

After you have saved it it is best to restart the systemd-journald.service by:

sudo systemctl restart systemd-journald.service

This immediately saved me 3.5 Gbytes and that will increase as TimeShift rolls on.
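If you do not want to wait for the new limit to take effect, journalctl can also shrink the journal immediately with its vacuum options. The helper below is a sketch that pulls the size figure out of the --disk-usage output; it assumes the usual "... take up 3.5G in the file system." wording:

```shell
# Extract the size figure from `journalctl --disk-usage` output,
# e.g. "Archived and active journals take up 3.5G in the file system."
journal_usage() {
    sed -n 's/.*take up \([0-9.]*[KMGT]\?i\?B\?\).*/\1/p'
}

# Usage:
#   journalctl --disk-usage | journal_usage
# To shrink the journal at once rather than waiting:
#   sudo journalctl --vacuum-size=100M
```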

4. Limiting the size of Log files [Advanced]

The system keeps a lot of log files which are also saved by TimeShift, and these were causing me to run out of space on my Defiant. The main problem was identified by running the Disk Usage Analyser as being in the /var/log area, where about 7 Gbytes was in use! Of this, 3.6 Gbytes was in /var/log/journal/, which I have already covered, and the single syslog file was over 1.4 Gbytes. In contrast the Lafite had only 385 Mbytes in journal, 60 Mbytes in the cups folder and 57 Mbytes in syslog out of a total of 530 Mbytes. I have a very easy way to pick out the worst offenders using a single terminal command which looks for the largest folders and files in /var/log where they are all kept. You may need to read man du to understand how it works! It needs sudo as some of the log files have unusual owners.

sudo du -hsxc --time /var/log/* | sort -rh | head -20

which on my system now produces

pcurtis@defiant:~$ sudo du -hsxc --time /var/log/* | sort -rh | head -20
217M 2018-08-31 05:11 total
128M 2018-08-31 05:11 /var/log/journal
28M 2018-06-14 12:02 /var/log/installer
27M 2018-08-31 05:11 /var/log/syslog
20M 2018-08-31 05:11 /var/log/kern.log
6.1M 2018-08-31 04:48 /var/log/syslog.1
3.1M 2018-08-31 05:11 /var/log/Xorg.0.log
1.6M 2018-08-31 05:00 /var/log/timeshift
480K 2018-08-26 16:00 /var/log/auth.log.1
380K 2018-07-28 11:27 /var/log/dpkg.log.1
336K 2018-08-26 11:15 /var/log/boot.log
288K 2018-08-30 04:06 /var/log/apt
288K 2018-08-16 08:12 /var/log/lastlog
192K 2018-08-26 11:37 /var/log/kern.log.1
176K 2018-08-26 16:02 /var/log/samba
172K 2018-08-30 04:06 /var/log/dpkg.log
152K 2018-08-31 05:00 /var/log/cups
148K 2018-06-29 03:40 /var/log/dpkg.log.2.gz
128K 2018-08-26 16:02 /var/log/lightdm
120K 2018-08-26 11:37 /var/log/wtmp

Note: The above listing does not distinguish between folders (where the total of all subfolders is shown) and single files. For example journal is a folder while syslog and kern.log are files. We have considered journal already; note that although journal is in /var/log it is not rotated by logrotate. installer, timeshift and Xorg are further special cases.

When I started I had two huge log files (syslog and kern.log) which did not seem to be rotating out and totalled nearly 3 Gbytes. The files are automatically regenerated, so I first checked there were no errors filling them continuously by using tail to watch what was being added and to try to understand what had filled them:

tail -fn100 /var/log/syslog

tail -fn100 /var/log/kern.log

The n100 means the initial 'tail' is 100 lines instead of the default 10. I could see nothing obviously out of control so I finally decided to delete them and keep a watch on the log files from now on.

Note: Although the system should be robust enough to regenerate deleted files, I have seen warnings that it is better to truncate. This ensures the correct operation of all programs still writing to the file in append mode (which is the typical case for log files).

sudo truncate -s 0 /var/log/syslog
sudo truncate -s 0 /var/log/kern.log

Warning: If your log files are huge there is probably a reason: something is not working properly and is filling them. You should try to understand why before just limiting the size or deleting them. See https://askubuntu.com/questions/515146/very-large-log-files-what-should-i-do. In my case the main reason was that the files had not been rotated for about 6 weeks but had just kept growing, though there were other errors I found as well.

Once you are sure you do not have a problem you can avoid future problems by limiting the size of each log file by editing /etc/logrotate.conf and adding a maxsize 10M parameter.

xed admin:///etc/logrotate.conf

From man logrotate

maxsize size: Log files are rotated when they grow bigger than size bytes even before the additionally specified time interval (daily, weekly, monthly, or yearly). The related size option is similar except that it is mutually exclusive with the time interval options, and it causes log files to be rotated without regard for the last rotation time. When maxsize is used, both the size and timestamp of a log file are considered.

If size is followed by k, the size is assumed to be in kilobytes. If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G are all valid

My Note: The maxsize parameter only comes into play when logrotate is called which is currently once daily so its effect is limited unless you increase the frequency.

The easiest way to increase the frequency is to move the call of logrotate from /etc/cron.daily to /etc/cron.hourly:

sudo mv /etc/cron.daily/logrotate /etc/cron.hourly/

If you look at the bottom of /etc/logrotate.conf you will see it has the potential to bring in configuration files for specific logs, which inherit these parameters but can also overrule them and add additional parameters - see networkworld article 3218728 - so we also need to look at the specific configuration files in /etc/logrotate.d . As far as I can see none of them set any of the size parameters, so they should all inherit from /etc/logrotate.conf and one should only need to add maxsize where a different or lower value is wanted. /etc/logrotate.d/rsyslog is the most important to check as it controls the rotation of syslog and many of the other important log files such as kern.log.

cat /etc/logrotate.d/rsyslog

Most of the other file names in /etc/logrotate.d are obvious, except that cups seems to be handled by /etc/logrotate.d/cups-daemon .

Last rotation times are a very valuable diagnostic and are available by:

cat /var/lib/logrotate/status

Errors in syslog showing anacron not starting under Mint 19

In looking at the 'tails' to find out why my syslog had been so large, I found a repeated error in syslog on both machines running Mint 19, namely:

Aug 26 14:00:18 defiant systemd[1]: Started Run anacron jobs.
Aug 26 14:00:18 defiant anacron[5859]: Can't chdir to /var/spool/anacron: No such file or directory
Aug 26 14:00:18 defiant anacron[5859]: anacron: Can't chdir to /var/spool/anacron: No such file or directory
Aug 26 14:00:18 defiant systemd[1]: anacron.service: Main process exited, code=exited, status=1/FAILURE
Aug 26 14:00:18 defiant systemd[1]: anacron.service: Failed with result 'exit-code'.
Aug 26 14:17:01 defiant CRON[21199]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 26 15:00:01 defiant CRON[28454]: (root) CMD (timeshift --check --scripted)
Aug 26 15:00:01 defiant crontab[28487]: (root) LIST (root)
Aug 26 15:00:01 defiant crontab[28488]: (root) LIST (root)

This seemed fairly serious as a number of jobs seem to be started by anacron and may not be running so I tried:

sudo mkdir /var/spool/anacron

after which I found entries like this had started to appear:

Aug 26 16:07:33 defiant anacron[17957]: Job `cron.weekly' started
Aug 26 16:07:33 defiant anacron[27690]: Updated timestamp for job `cron.weekly' to 2018-08-26
Aug 26 16:08:01 defiant anacron[17957]: Job `cron.weekly' terminated

I have finally found the cause of the missing folder. I had done a system restore using TimeShift, and on closer inspection of the Exclude List Summary in the TimeShift configuration Filters tab I found that /var/spool/* is excluded, hence the issue is with TimeShift.

I created a MintSystem issue https://github.com/linuxmint/mintsystem/issues/84 and @gm10 has commented that this was already fixed in timeshift version 18.8 back in June (teejee2008/timeshift@eb2115a), which is not yet in the Mint repo, which still has an old version.

In the meantime, one can consider getting timeshift directly from its developer's PPA:

sudo add-apt-repository -y ppa:teejee2008/ppa
sudo apt-get update
sudo apt-get install timeshift

Before You Leave

I would be very pleased if visitors could spare a little time to give us some feedback - it is the only way we know who has visited the site, if it is useful and how we should develop its content and the techniques used. I would be delighted if you could send comments or just let me know you have visited by sending a quick Message.

Link to W3C HTML5 Validator Copyright © Peter & Pauline Curtis
Content revised: 8th July, 2020