Sharing, Networking, Backup, Synchronisation and Encryption under Ubuntu and Linux Mint
LEGACY VERSION

There is a newer version of this page for Mint

Introduction

This page has been extended from my original page covering standard backing up and now covers the prerequisites of mounting drives, file sharing between computers running both Linux and Windows, secure access to remote machines and synchronisation. It also covers the other techniques in the armoury for preservation of data, namely separation of data and system, and encryption.

The ability to back up the system has always been considered essential. The amount of data involved has increased dramatically from the days when data was preserved on floppy disks and our internal drives were 850 Mbytes (which was adequate for a Windows 95 system with Office 95) - now a minimum size Windows XP or Ubuntu Linux system is about 6 Gbytes and even a 5 year old laptop has a 40 Gbyte drive. Our photographs occupy 40 Gbytes and audio 15 Gbytes - videos are not practical to back up and occupy some 750 Gbytes.

Backing up takes many forms now that the norm is for machines to be networked, so this article covers many ways of achieving redundancy and preserving data. It covers shared drives between different operating systems on dual-booted computers, networking of both Linux and Windows machines, techniques to separate system and data areas, conventional backing up to internal and external drives and, most important in this time of mobility, synchronisation both between machines and to external hard drives.

File Systems and Sharing

Preliminaries - Background on the Linux File System

I am not going to extol the virtues of Linux, in particular the Ubuntu distribution, here, but I need to explain a little about the ways that file systems and disks differ between Windows and Linux before talking about backing up. In Windows, physical disk drives and the 'virtual' partitions they are divided up into show up in the top level of the file system as drives with names such as C: - a floppy disk is almost always A: and the system drive is C: . This sort of division does not appear at all in Linux where you have a single file system starting from root ( / ), which is where the operating system is located. Any additional disk drives which are present or are 'mounted' at a later time, such as a USB disk drive, will by default in most distributions appear in the file tree in /media and the naming will depend on several things, but expect entries such as /media/cdrom , /media/floppy and /media/disk. If you create a special partition for a different part of the filesystem, such as /home where all the users live, then it can be 'mounted' at /home. In theory you could mount partitions for all sorts of other parts of the file system. If, for example, you add a new disk and choose to mount a partition to just contain your home directory, it is only the addition of a single line in a file, although you need to exactly copy the old contents to the new partition first - I will cover that in detail later in this article.

There is a nearly perfect separation of the kernel, installed programs and users in Linux. The users each have a folder within the home folder which holds all the configuration of the programs they use which is specific to them - it is set up the first time they run a program. A user's folder only contains a few tens of kbytes until programs are used, and all the program settings are hidden (hidden files and folders start with a . (dot)). There are usually a number of folders generated by default, including Desktop, Documents, Music, Pictures, Videos, Templates and Public, which are used by programs as their defaults. This makes backing up very easy - an exact copy of the home folder allows the system to be restored after a complete reload of the operating system and programs. Note the word exact, as the 'copy' has to preserve symbolic links and the permissions of all the files - permissions are key to the security of Linux, so special archiving utilities are best employed.
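You can see these hidden configuration files and folders for the current user in a terminal with:

ls -A ~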

Networks

Permanently Mounting a shared drive in Ubuntu (Advanced)

If we are going to use a shared drive for data then we must ensure that it is permanently mounted. The mounting points in all major flavours of Linux are defined in a file system table which is in /etc/fstab, and the convention is that the mount points are in /media. We therefore need to modify /etc/fstab to set up the mount points in /media, and we must also create the folders for them using sudo, so they are owned by root, and set the permissions so they are accessible to all users.

One approach is to open the File Browser as root from a terminal by

gksudo nautilus

This allows me to use the graphical file browser to create the folders and set permissions by standard navigation and right click menus for Create Folder and Properties -> Permissions. It is best to make these folders with the same names as those assigned by mounting from 'Places', which is derived from the partition label if it is set – see below. Do not continue to use this Root File Browser after setting up the shared folder as running anything as root has many dangers of accidental damage, although the aware reader will realise that the terminal can be avoided in the next stage by also opening fstab from within the Root File Browser - but do take care!

You will however have to use a terminal for some of the other actions, so I now feel the easiest way is to use a terminal and the following two commands.

sudo mkdir /media/DRIVE_NAME
sudo chmod 777 /media/DRIVE_NAME

I have found that fstab does not like the folder name to include any spaces, even if it is enclosed in quotes, so it is best to use short single word drive names or join them with underscores. I seem to recall an early Windows restriction of 13 characters maximum.
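For completeness, fstab can in fact represent a space in a mount point as the octal escape \040, so a name with a space is possible at a pinch - a sketch, reusing the DATA partition's UUID from the listing below purely for illustration:

UUID=47859B672A5D9CD8 /media/MY\040DATA ntfs nls=utf8,umask=0000 0 0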

It is desirable to back up /etc/fstab and then make the changes using the editor gedit. This is done in a terminal by:

sudo cp /etc/fstab /etc/fstab_backup
sudo gedit /etc/fstab

Before proceeding we need some information on the identifier for the file systems. The UUID can be found by typing blkid in a terminal – typical output looks like:

pcurtis@matrix:~$ blkid
/dev/sda1: LABEL="VISTA" UUID="2E6121A91F3592E4" TYPE="ntfs"
/dev/sda2: LABEL="HP_RECOVERY" UUID="0B9C003E5EA964B2" TYPE="ntfs"
/dev/sda5: LABEL="DATA" UUID="47859B672A5D9CD8" TYPE="ntfs"
/dev/sdb5: UUID="a7c16746-1424-4bf5-980e-1efdc4500454" TYPE="swap"
/dev/sdb6: UUID="432c69bd-105c-454c-9808-b0737cab2ab3" TYPE="ext4"
/dev/sdb7: UUID="a411101c-f5c6-4409-a3a1-5a66de372782" SEC_TYPE="ext2" TYPE="ext3"

The recommended procedure for modifying /etc/fstab in 10.04 and higher is to use the drive's UUID rather than the device's location, i.e. append lines to /etc/fstab looking like:

# /dev/sda5: LABEL="DATA" UUID="47859B672A5D9CD8" TYPE="ntfs"
UUID=47859B672A5D9CD8 /media/DATA ntfs nls=utf8,uid=pcurtis,gid=pcurtis,umask=0000 0 0

Note – This is for an NTFS partition. It provides a read/write mount (umask=0000 gives read/write for owner, group and everyone else). If you want a read only mount use umask=0222. The uid=pcurtis and gid=pcurtis are optional and define the user and group for the mount - see below for the reason for defining the user. This does not work for ext3 or ext4 partitions, only vfat and ntfs; for ext3 and ext4 it seems the only way is to set the permissions in a script run after every boot.

After modifying /etc/fstab and rebooting, the chosen Windows drives are mounted and appear on the desktop in addition to being in 'Places' - they cannot be unmounted without root privileges, which is just what we want.
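You can check new fstab entries without rebooting by asking the system to mount everything listed in the table - any errors are reported in the terminal:

sudo mount -a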

Ownership and permissions of Windows Filesystems mounted by fstab

There is one 'feature' of this way of mounting which seems to be totally universal: only root (the owner) can set the time codes. This means that any files or directories that are copied by a user have the time of the copy as their date stamp.

A solution for a single user machine is to find out your user id and mount the partition with option uid=user-id, then all the files on that partition belong to you - even the newly created ones. This way when you copy you keep the original file date. This is important if you have a file synchronisation program such as Unison which checks file creation and modification dates.

# /dev/sda5 - note this is from a vfat (FAT32) filesystem
UUID=706B-4EE3 /media/DATA vfat utf8,uid=yourusername,gid=yourusername,umask=0000 0 0

You must change yourusername to your own user name.

In the case of machines with multiple users you should not mount at boot time and instead mount the drives from Places.

The uid can also be specified numerically and the first user created has user id 1000.

Correctly mounting an ext4 'DATA' partition to allow use by all users in the adm group

Partitions of type ext3 and ext4 mounted by fstab are by default owned by root. Unlike with an ntfs file system, you cannot use the mount options to set the owner, group and access rights so that other users can access a shared area.

The only way round this seems to be to have a script which is run at the end of a reboot which sets all the owners, groups and permissions after a slight delay to allow the file system to have been mounted. The location of the 'call' of this script varies depending on whether it is an old upstart/runlevel system (Mint <18 and Ubuntu < 15.04) or a systemd system.

My script file is /etc/setDATA and must have execute permission and looks like:

#!/bin/bash
# setDATA file to set ownership, group and permissions of the media/DATA partition mount point after a delay to allow the mount to be complete.
sleep 10
chown pcurtis:adm -R /media/DATA
chmod 770 -R /media/DATA
exit 0

This sets the owner to my primary user (see the ntfs mounting section above for the reasons) and the group to adm, so everyone added to the adm group can access it but not others who might be guests.
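The script must be given execute permission after it is created:

sudo chmod +x /etc/setDATA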

If one is dealing with a systemd initialisation, the best place to call it seems to be /etc/crontab, where you can use the @reboot format for a special event - see http://www.unixdaemon.net/linux/how-does-cron-reboot-work/

I have modified the /etc/crontab file to end like:

......
@reboot sh /etc/setDATA
exit 0

If one is using an Upstart initialisation it should be called from rc.local which should exist and will look like this after the addition:

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sh /etc/setDATA
exit 0

Ownership and Permissions of USB Drives with Unix filesystems (ext3, ext4, jfs, xfs etc)

The mount behaviour of Unix file systems such as ext*, jfs and xfs is different to Windows file systems such as NTFS and FAT32. When a USB drive with a 'Windows' type file system, which has no internal knowledge of each folder and file's permissions, is plugged in, it is auto-mounted with the owner and group corresponding to the user who mounted it. Unix filesystems have the permissions built in and don't (and shouldn't) get their ownership and permissions changed simply by mounting them. If you want the ownership/permissions to be different, then you change them, and the changes persist across unplug/replugs.

Defined Mounting of USB Drives with Non Unix filesystems or mounting early in the boot process using fstab

It is sometimes necessary to mount a USB drive early in the boot process, but this procedure should only be used for drives which are permanently connected, as the boot process will halt and need intervention if the drive is not present. Even so, it does allow a mount with defined ownership and permissions at an early enough stage for programs started automatically and run in the background as daemons to use them. I plan to do this for an external drive with all my media which will be served by MediaTomb - a UPnP server which is accessed by my TV via a Humax Personal Video Recorder.
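The entry is just like the NTFS ones above - a sketch, with an example UUID which you must replace with your drive's own from blkid, and a mount point created as before:

# permanently connected USB media drive - example UUID only
UUID=0123456789ABCDEF /media/MEDIA ntfs nls=utf8,uid=pcurtis,gid=pcurtis,umask=0000 0 0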

Change auto-mount point for USB drives back to /media

Ubuntu (and therefore Mint) have changed the mount points for USB drives from /media/USB_DRIVE_NAME to /media/USERNAME/USB_DRIVE_NAME. One can change the behaviour by using a udev feature in Ubuntu 13.04 and higher based distributions (needs udisks version 2.0.91 or higher). This has been tested with the latest Mint 18.1.

Create and edit a new file /etc/udev/rules.d/99-udisks2.rules

sudo gedit /etc/udev/rules.d/99-udisks2.rules

and cut and paste into the file

ENV{ID_FS_USAGE}=="filesystem", ENV{UDISKS_FILESYSTEM_SHARED}="1"

then activate the new udev rule by restarting or by

sudo udevadm control --reload

When the drives are now unplugged and plugged back in they will mount at /media/USB_DRIVE_NAME
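You can confirm where a drive has mounted after replugging it by:

mount | grep /media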

Labelling/Relabelling a FAT 32 or NTFS file-system (partition)

Note: The latest versions of GParted, the partition manager provided, have the ability to label/re-label a FAT32 file-system on a USB drive without needing tricks.

New partitions which you create when installing Ubuntu will not have a label and will just show up by their size when mounted on the desktop.

Install gparted and ntfsprogs packages using the Synaptic Package Manager if not already installed. Or you can do this in a terminal by:

sudo apt-get install gparted ntfsprogs

  1. System -> Administration -> GParted
  2. To find the partition you want to re-label, you first have to find the disk drive that contains it, using the drop-down menu in the upper right. It will show a device name like /dev/sdb and the drive's total size in parentheses.
  3. After selecting a drive, you will see a list of all partitions on that drive.
  4. If the partition is mounted (has a key icon next to it), right-click on the partition and select Unmount.
  5. With the key icon gone, right-click on the partition and select Label. If you can't select it, install the ntfsprogs package.
  6. Enter the new partition name and press Ok.
  7. The label change is now pending, but has not been completed.
  8. Press the Apply button near the top of the window.
  9. After confirming, it should say "All operations successfully completed".
  10. The drive now has a new label.
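As an alternative to the GUI, an NTFS partition can also be labelled from a terminal using ntfslabel from the ntfsprogs package installed above - substitute your own device name as shown by GParted:

sudo ntfslabel /dev/sda5 DATA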

Accessing Files on Windows Machines over the Windows Network

I originally considered this would be an important step, but the order of difficulty was not what I expected - I could access files on Windows machines over the network immediately using Places -> Network Servers, although the creation date is only displayed correctly for files on NTFS partitions. I have not found a workaround yet, or any reference to it, despite extensive web searches. You will usually be asked for a password the first time.

Accessing Files on your Linux machine from Windows machines (and other Linux machines on your local network).

  1. Install Samba: A file server called Samba needs to be installed on the Linux machine (using Add/Remove as usual). System -> Administration -> Samba then gives access to a control panel for setting up the shares, workgroup, users, passwords etc.
  2. Set Workgroup: the workgroup used by all the machines on the network is set by System -> Administration -> Samba -> Preferences -> Server Settings -> Basic tab; enter the workgroup name used by the Windows network.
  3. Provide a Username and Password for login from other machines: You then need to set up a username and password to use from the Windows and other machines when you log into this machine, in the Samba control panel by: System -> Administration -> Samba -> Preferences -> Samba Users, and add/edit a user to provide a username and password (I use the same as my main Ubuntu user but you may wish to use a different one if others use the Windows machines) for Windows networking users to log into the machine. NB the password must be set even if you use an existing user name.
  4. Set up Shared Folders: You can now set up the shares you want by Add Share - the screens are clear in their requirements (a sketch of the resulting configuration entry is shown after this list).
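Each share ends up as a stanza in /etc/samba/smb.conf looking something like this - the share name, path and user here are examples only:

# example share as written by the Samba GUI
[DATA]
   path = /media/DATA
   read only = no
   valid users = pcurtis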

All these settings need a restart to become active. To save a full restart you can just restart Samba by typing in a terminal:

sudo /etc/init.d/samba restart

You can now use the Network Browser on another machine to access the machine you have just set up by Places -> Network - you will be asked for the username and password you have just set up and you can specify how long the password remains active.

For information: There are other ways of file sharing between Linux machines but samba is the popular choice as it also covers Windows machines.

Printing over the Network

This was again much easier than I expected. Use System -> Administration -> Printing -> New Printer -> Network Printer: Windows Printer, which takes one on to identify the printer on the network and has a selection procedure offering a huge range of printers with drivers available, including the members of the Epson Stylus series - I have done this with many printers including an Epson C66 and a WiFi linked Epson SX515.

Moving an Ubuntu Home Folder to a dedicated Partition (Expert level)

This is based on Psychocats - Create a separate home partition in Ubuntu, with quite a few modifications to get it to work for me using Ubuntu Hardy Heron. As an aside, the original article is one of an excellent series which are well worth a look. The procedures here are not very well tested as I have only done this for a few machines, and one should make backups of everything of value, but the techniques are used in many places. Both the partitioning and the "copying" and mounting have risks, but the procedures here are designed to minimise the risk and provide some escape routes if it does not work. It is however best to think ahead and provide the separation of system and data when the initial installation is carried out.

The first stage is to make a new partition using Gparted from an Ubuntu LiveCD. This will usually involve shrinking an existing partition, probably the root partition with the home folder you are moving. It is always safest to only reduce/increase the size of a partition leaving the start position the same. In the following I have used as an example a new ext3 partition /dev/sdb8 - you need to substitute what it turns out to be for you from Gparted. I have covered Partitioning elsewhere so I will not do more here than suggest you make sure you have a sensible division leaving space for both the main system and the /home directory and enough working space so you can keep the original /home for a while until everything is debugged.

Assuming you have made a new partition, you now need to go back into your normal system. If you are using Hardy Heron you can mount the new partition using Places -> Removable Disk (check the mount point is /media/disk or modify the following suitably), then use cpio, which is an archiving program, to 'copy' your home directory to it with the following commands in a terminal:

cd /home
sudo find . -depth -print0 | sudo cpio -p0vud /media/disk

I initially thought doing this from a LiveCD would be more sensible as it would avoid problems with files being in use or changing, but it does not work: the permissions end up being incorrect because the original root folder one mounts and uses for the "copy" does not have the original permissions when using the LiveCD. I found that out the hard way as it was not clear in the original article and I had to start again several times! It takes a while to realise that Linux is very different from Windoze and one can work on a live system without problems of files being locked and/or in use. The complexity, and the use of cpio instead of cp, is to maintain permissions and the various links, and comes from a variety of sources. It is worth doing a man cpio and a man find to understand the programs and parameters but I will try to explain some of the basics step by step:

cd /home - this changes to the home directory.

sudo find . -depth -print0 - this provides a list of filenames. I am not sure if the sudo is desirable or essential; it is not needed for a single user system. The -print0 parameter specifies "null terminated strings" which will work even if some of the files have spaces, newlines, or other dubious characters in them. The -depth parameter specifies processing each directory's contents before the directory itself.

The results are written into a pipe by the | and the program reading them must be capable of using this list. The cpio archiving command has this feature.

| cpio -p0vud /media/disk - here's the tricky part. This uses the "passthrough" mode of cpio, an archiving program that normally copies files "in" or "out" but can do "both" using this "passthrough" mode. The -p sets the "passthrough" mode for cpio, which then expects a list of filenames on its standard input (which we are providing from the 'find' command). It then copies the corresponding file "in" from the path specified (as part of the input line) and "out" to the path specified as the final one of cpio's arguments (/media/disk in this case).

The rest of the switches on this cpio command are: -0 - expect the input records (lines) to be null terminated, -v provides verbose output, and -d - make leading directories as needed. -u forces overwriting of any existing files but it is best to make sure that everything is deleted before starting.

We should now have an exact "copy" of the /home folder on the new partition with the correct permissions (at least for the username you used when carrying out the cpio copy). It is prudent to mount the partition and have a look to see if everything is reasonable. It should still be visible in Places -> Removable Media for you to check that the permissions and ownership are correct for each user.

Now we need to mount this in the correct place at boot. Automatic mounting is set up by backing up and then modifying /etc/fstab:

sudo cp /etc/fstab /etc/fstab_backup
sudo gedit /etc/fstab

and adding extra lines at the end

# /dev/sdb8 
UUID=3b50dbce-28c8-46fc-bc63-f89bb06c54e5 /home ext3 nodev,nosuid 0 2

where your UUID can be found by:

sudo blkid

At the next start-up the modified fstab will mount the new home folder from the disk partition over the top of the existing one which is then invisible. A return to the old set up is as simple as restoring the original fstab or commenting out the new final line at which point the old /home is once more visible. At a point in the future when one is completely happy the old contents can be deleted to save space using the LiveCD.

An alternative is to rename /home to /home-backup using the LiveCD and create a new /home ready for the mounting with the appropriate permissions before changing fstab as in the original article. This is more complex and risky than mounting over the top but the 'backup' /home directory can be deleted without having to use the LiveCD.

The procedure above was what finally worked for me with a single user setup; it is possible that permissions may need to be set up for other users.

If the permissions are wrong, as happens if you do the cpio 'copy' using the LiveCD, you get messages when you try to log in such as:

User's $HOME/.dmrc file is being ignored. This prevents the default session and language from being saved. File should be owned by user and have 644 permissions.
User's $HOME directory must be owned by user and not writable by others.

If you can get past this message, perhaps by changing permissions from the LiveCD, the following brute force procedure (which is a refinement of what I did in one of my early attempts) to set the groups and permissions can be tried on the other users:

sudo chown -R eachusername:eachusername /home/eachusername
sudo chmod -R 755 /home/eachusername
sudo chmod 644 /home/eachusername/.dmrc

This needs to be done before each username is logged into the first time otherwise many desktop settings will be lost at the point the warning message appears - this means that the cpio 'copy' will need to be repeated and the permissions set for each additional user as above before logging in the first time.

The best way, I believe now, is to create a new user just before making the change and to log into that user to carry it out. The overhead in size of the new user's directory in /home is only about 28K and it makes sure that there is very little going on and no important files are missed or changed during the copy - I worry about programs with daemons running such as Picasa and Skype. The new user can be removed after the change has taken place. The new user is easy to create by System -> Administration -> Users and Groups. The new user needs a name, to have the Profile of an Administrator set in the box and the passwords input; nothing else matters and can be empty or default. The new user will automatically have the ability to sudo if set up with the profile of an Administrator.

When you delete the user using the GUI it does not delete the /home directory or a few other files - to also remove the home directory it is easiest to instead use the command line and the deluser utility:

sudo deluser --remove-home username

Alternatively all files owned by the user, including those in the user's home directory may be removed as follows:

sudo deluser --remove-all-files username

One possible improvement, which I have not tried, would be to use rsync instead of cpio to do the intelligent copy by:

rsync -avx /home/ /media/disk

you do not need to be in the home directory. rsync is an advanced archiving and copying tool which can work over ssh (Secure Shell) to other machines as well as doing a simple intelligent copy locally. It is also useful for regular 'backups' as it will only copy files which have changed, so it can be run on a regular basis to synchronise two machines, although unison (which uses the rsync algorithm) is a much better way to go.

In summary: there are many advantages in having one's home directory on a separate partition, but overall this change is not a procedure to be carried out unless you are prepared to experiment a little. It is much better to get it right and create one when installing the system.

Backing up.

Simple Backup (sbackup).

There are a number of backup utilities available for Ubuntu Linux but I use Simple Backup, a simple backup solution intended for desktop use, created within Google Summer of Code 2005 for Ubuntu with the mentoring of the Ubuntu team. It is a tiny download and seems very comprehensive in the ways you can choose to include and exclude files based on location, size, type etc. It can run automatically and will do periodic full backups with incremental backups of changed files. By default it saves the backup as a compressed file in /var/backup and the file seems very small at under 400 Mbytes with default settings. You can restore individual files and, in theory, they can be restored to a different location, although that feature has not worked for me. You can also access the various levels of archive folders and use the archive manager to extract directly from the .tar archives in the folders.

It is all very easy with a good GUI interface and is found in System -> Administration -> Simple Backup. Simple Backup is downloaded by Add/Remove and searching for Simple Backup, or by the Synaptic Package Manager and searching for sbackup.

The default settings create a daily incremental backup and a weekly full backup. Although the files are quite small you need to have a policy of a monthly tidy up, otherwise you will end up with a full disk - it has happened to me twice.
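A quick way to keep an eye on how much space the archives are taking, assuming the default location, is:

du -sh /var/backup/*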

Standard Tar Archives

There is also a very powerful command line archiving tool, tar, around which many of the GUI tools are based, which should work on most Linux distributions. In many circumstances it is best to access this directly to back up your system. The resulting files can also be accessed (or created) by the archive manager accessed by right clicking on a .tgz, .tar.gz or .tar.bz2 file. To show the power, the following commands will back up your entire file system but can easily be adapted to just back up your home folder by starting there instead of in the root directory:

In a terminal first change to the root directory:

cd /

then:

sudo tar cvpzf /media/FREEAGENT/mybackup.tgz / --exclude=/mybackup.tgz --exclude=/proc --exclude=/lost+found --exclude=/media --exclude=/sys --exclude=/mnt

This is a single line if you cut and paste. The options cvpzf are: create archive, verbose mode, retain permissions, gzip archive and file output. Then follows the name of the file to be created, mybackup.tgz, which should include the date for easy reference - in this case we are using a USB hard drive called FREEAGENT for the backup. Next is the directory to back up, in this case / the root, so everything will be stored. (Another common possibility would be /home or /home/USERNAME). Next are the objects which need to be excluded - the most important of these is our backup file, or it would be recursive! It also excludes directories which are recreated dynamically.

The backup process is slow (15 mins plus) and the file over a Gbyte for the simplest system. After it is complete the file should be moved to a safe location, preferably a DVD or external device. If you want a higher compression method, the command "tar cvpjf mybackup.tar.bz2" can be used in place of "tar cvpzf mybackup.tgz". This will use bzip2 to do the compressing (the j option). This method will take longer but gives a smaller file.

You can access parts of the archive using the GUI Archive Manager by right clicking on the .tgz file - again slow on such a large archive. A full system backup is most useful if you can effectively restore your entire system or data. This can again be done by a few commands, but do not try this for fun as it will overwrite the entire Ubuntu file system, thus restoring the older image that we took. I have not yet had problems so large I have had to try this! The following assumes the backup image is still on the FREEAGENT drive or has been replaced there from a DVD:

cd /
sudo tar xvpfz /media/FREEAGENT/mybackup.tgz -C /

The restoration uses the options: extract, verbose, retain permissions, from file and gzip. This will take a while because all your files will be overwritten with the versions from the image you previously backed up. The "-C /" ensures that the directory is Changed so the files are restored to the root directory /.

If this has been done onto an empty drive from a LiveCD, one needs to recreate the excluded directories before rebooting the system:

mkdir proc
mkdir lost+found
mkdir sys
mkdir media
mkdir mnt

If the old system is still present, note that it only overwrites files; it does not delete files from the old version which are no longer needed.

This is one of the simplest and fastest methods for backing up. It may take some customizing and tweaking for your purposes but it is powerful, versatile, and free.

Archiving your home folder

Archiving your home directory is very useful if you want to preserve it to allow you to upgrade or 'clone' machines. The following command will back up your home folder /home/yourusername if you change pcurtis to yourusername - all other programs must be closed.

In a terminal type:

sudo tar cvpPzf "/media/DATA/mybackup.tgz" /home/pcurtis/ --exclude=/home/*/*backup*.tgz --exclude=/home/*/.gvfs

This is a single line if you cut and paste. The options cvpPzf are: create archive, verbose mode (leave this out after the first time), retain permissions, -P do not strip the leading slash, gzip archive and file output. Then follows the name of the file to be created, mybackup.tgz, which in this example is on an external USB drive called 'DATA' - the backup name should include the date for easy reference. Next is the directory to back up, in this case /home/pcurtis. Next are the objects which need to be excluded - the most important of these is your backup file if it is in your /home area (not needed in this case) or it would be recursive! It also excludes the folder (.gvfs) which is used dynamically by a file mounting system and is locked, which stops tar from completing. The problems with files which are in use can be removed by creating another user and doing the backup from that user - overall that is a cleaner way to work.

The backup process is slow (15 mins plus) and the file over a Gbyte for the simplest system. After it is complete the file should be moved to a safe location, preferably a DVD if you did not use an external device. If you want a higher compression method, the command "tar cvpjf mybackup.tar.bz2" can be used in place of "tar cvpzf mybackup.tgz". This will use bzip2 to do the compressing (the j option). This method will take longer but gives a smaller file. You can run into problems if the file is over 4 Gbytes as most USB sticks and USB drives are formatted with a file system called FAT32 which has a maximum file size of a tad over 4 Gbytes, so you may need to format an external drive to the more modern ntfs format which will still be compatible with all Windows machines. It should take a while to reach that point and when you do you may wish to exclude some other folders such as Music and back them up separately.
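If reformatting a drive is not convenient, one workaround is to split the archive into FAT32-sized pieces and join them again when needed - a sketch using the standard split utility:

split -b 3900m mybackup.tgz mybackup.tgz.part_
cat mybackup.tgz.part_* > mybackup.tgz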

You can access parts of the archive using the GUI Archive Manager by right clicking on the .tgz file - again slow on such a large archive. A backup is most useful if you can effectively restore your entire system or data. This can again be done by a few commands, but do not try this for fun as it will overwrite the entire part of the Ubuntu file system we archived, thus restoring the older image that we took. The following assumes the backup image is in the root directory of the external drive called 'DATA':

sudo tar xvpfz "/media/DATA/mybackup.tgz" -C /

The restoration uses the options: extract, verbose, retain permissions, from file and gzip. This will take a while because all your files will be overwritten with the versions from the image you previously backed up. The "-C /" ensures that the directory is Changed to the root so the files are restored to their original locations.

If the old system is still present, note that it only overwrites files; it does not delete files from the old version which are no longer needed.

Deleting archive files: If you want to delete the archive file you will find it is owned by root, so make sure you delete it in a terminal - if you use a root browser it will go into a root Deleted Items which you cannot easily empty, so it takes up disk space for ever more. If this happens then read http://www.ubuntugeek.com/empty-ubuntu-gnome-trash-from-the-command-line.html and/or load the trash-cli command line trash package using the Synaptic Package Manager and type

sudo trash-empty

Cloning a user using a backup archive (Advanced)

It is possible that you want to clone a machine, for example when you buy a new one. It is easy if you have the home folder on a separate partition, the user you are cloning was the first user installed (so the uid and gid are the same, 1000) and you make the new username the same as the old. I have done that several times. You can check all the relevant information for the machine you are cloning from in a terminal by use of id:

pcurtis@mymachine:~$ id
uid=1000(pcurtis) gid=1000(pcurtis) groups=1000(pcurtis),4(adm),6(disk),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),29(audio),30(dip),44(video),46(plugdev),104(fuse),108(avahi-autoipd),109(avahi),110(netdev),112(lpadmin),120(admin),121(saned),122(sambashare),124(powerdev),128(mediatomb)

When you do the initial installation on the new machine, use the same username and password as on the original machine and then create an extra user for convenience for the next stage. Change to your temporary user, rename the first user's folder (you need to be root) and replace it from the archived folder from the original machine. Now log in to the user again and that should be it. At this point you can delete the temporary user.
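A sketch of those steps, carried out from the temporary user and assuming the archive was made with the tar command in the section above (so it contains paths starting /home/pcurtis):

sudo mv /home/pcurtis /home/pcurtis_old
sudo tar xvpfz "/media/DATA/mybackup.tgz" -C /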

I have also tried to clone into a different username but do not recommend it. It is possible to change the folder name and set up all the permissions, and everything other than Wine should be OK on a basic system. The .desktop files for Wine contain the user hard coded so these will certainly need to be edited, and all the configuration for the Wine programs will have been lost. You will also have to change any of your scripts which have the user name 'hard coded'. I have done it once but the results were far from satisfactory. If you want to try, you should do the following before you try to log in the first time after replacing the home folder from the archive. Change the folder name to match the new user name; the following commands then set the owner and group to the new user and give standard permissions for all the files other than .dmrc which is a special case.

sudo chown -R newusername:newusername /home/newusername
sudo chmod -R 755 /home/newusername
sudo chmod 644 /home/newusername/.dmrc

This needs to be done before the username is logged into for the first time, otherwise many desktop settings will be lost and the following warning message appears.

User's $HOME/.dmrc file is being ignored. This prevents the default session and language from being saved. File should be owned by user and have 644 permissions.
User's $HOME directory must be owned by user and not writable by others.

If this happens it is best to start again, remembering that the archive extraction does not delete files, so you need to get rid of the folder first!

Synchronisation between machines and hard drives - Unison (Advanced)

I have put this in as an advanced activity because a number of features of the normal way removable and fixed drives are mounted need to be understood and configured to avoid unexpected results. See below under Permissions and Automounted Drives (Removable Drives) and Drives Mounted Permanently at startup using fstab.

Backing up is very important and often the easiest way has been to make copies onto external hard drives and onto other machines. But what does one do when one has multiple copies of important folders and different files have been added or edited on each machine or external hard drive copy? This is a particular problem when one uses a laptop or netbook away from home for a few months and returns, especially when only subsets of files were taken or the two machines were not in sync when one started.

Unison

Linux has a very powerful tool available called Unison to synchronise folders, and all their subfolders, between drives on the same machine and across a local network using a secure transport called SSH (Secure SHell). At its simplest you can use a Graphical User Interface (GUI) to synchronise two folders which can be on any of your local drives, a USB external hard drive or on a networked machine which also has Unison and SSH installed. Versions are even available for Windows machines, but one must make sure that the Unison version numbers are compatible, even between Linux versions.

You just enter or browse for the two folders if local and it will give you a list of differences and recommended actions which you can review, and it is a single keystroke to change any you do not agree with. Unison uses a very efficient mechanism to transfer/update files which minimises the data flows, based on the rsync algorithm. The initial synchronisation can be slow but after it has made its lists it is quite quick, even over a slow network between machines, because it is running on both machines and transferring minimum data - it is actually slower synchronising to another hard drive on the same machine.

For more complex synchronisation with multiple folders and perhaps exclusions, you set up a very simple configuration file for each Synchronisation Profile and then select and run it from the GUI as often as you like. It is easier to do than describe - a file to synchronise my four important folders My Documents, My Web Site, Web Sites, and My Pictures is only 10 lines long and contains under 200 characters, yet synchronises 25,000 files! The review list is intelligent and if you have made a folder full of sub folders of pictures whilst you are away it only shows the top folder level which has to be transferred, which is a good job as we often come back with several thousand new pictures!

Both Unison and SSH are available in Ubuntu but need to be installed using System -> Administration -> Synaptic Package Manager; search for and load the following four packages:

unison-gtk ssh winbind sshfs

or they can be directly installed from a terminal by:

sudo apt-get install unison-gtk ssh winbind sshfs

The first is the GUI version of Unison which also installs the terminal version if you need it. The second is a meta package which installs the SSH server and client, blacklists and various library routines. The third is required to allow ssh to use host names rather than just absolute IP addresses, which also requires some extra configuration described below - you may find with some versions of Ubuntu that winbind is already installed but it will still need the configuration. The last is not needed yet but will prove useful as it allows you to mount a remote file system on your machine using ssh.

Unison is then accessible from the Applications menu and can be tried out immediately. There is one caution - it is initially configured so that the date of any transferred file is the current date, i.e. the file creation date is not preserved, although that can easily be changed in the configuration files (times = true in the profile, as in the sample below) - the same goes for preserving the user and group.

Workround for Unison and Ubuntu Natty 11.04 bug

Unison hangs at the Contacting Server screen under Natty - this is a known bug (' unison -ui graphic' does not show password dialog). The way round this is to use Public Key Authentication without a passphrase as I have covered below for use of ssh and sshfs - I will not duplicate it here, so go to the section Using Public Key Authentication with ssh, sshfs and unison to avoid password requests. I inhibited checking that the IP address had not changed because every time my router is restarted new IP addresses are issued.

Permissions for External USB Drives and Drives Mounted via 'Places': The file systems on External USB Drives are almost always Windows type file systems such as FAT32 which do not support the definitions of owners and groups and their associated permissions. The same applies to such file systems on internal drives which are mounted through Places or permanently mounted when the machine is started up. The Owner, Group and Permissions are therefore set up each time when the drives are mounted. Unison by default checks for matches in the permissions so one has to make sure that USB drives are mounted with suitable and matching permissions - by default they are mounted with 0700 whilst drives permanently mounted in the File System Table at startup usually have the permissions set to 777 or 775 in /etc/fstab. See above for information on permanently mounting drives and /etc/fstab

The permissions for internal drives mounted via Places, or automounted when external drives are plugged in, can and should be set using the Gnome Desktop Configuration Editor to match what is set up for permanently mounted drives in /etc/fstab. The Configuration Editor is normally installed but does not show under Applications -> System -> Configuration Editor (because it is powerful and open to abuse by the unwary). You can set up the system to display it by a right click on the Applications drop down -> Edit Menus, then find and tick the box for it to be displayed. It is a useful program so it is worth making it easily available, but if you are in a hurry it can also be run from a terminal by:

gconf-editor

Once you have the configuration editor running, expand the parameters on the right through system -> storage -> default_options and set the umask for the drive type, which will probably be vfat. Double click down till you get to the existing umask=077 and change it to 000 or 002 to match drives mounted by fstab. Note umask is a mask: you set the inverse of the permissions you want, which are 777 or 775!

Even so, Unison may get some mismatches on the first synchronisation which have to be resolved. I have set up a number of profiles for back-up and synchronisation with my WD Passport portable 120 Gbyte USB drive, which is formatted to FAT32 (vfat), to keep Pauline's Toshiba laptop and MSI Wind netbook synchronised, and a profile to synchronise my MSI Wind netbook and homebuilt desktop over the local network.

Update on Natty

The ability to change the permissions using gconf-editor seems to have disappeared under Natty with Unity, so you have to turn off checking of ownership and permissions when accessing FAT32 drives (via the owner, group and perms options in the profile).

Ownership of Drives Mounted permanently using fstab: One 'feature' of mounting using the File System Table in the usual way is that the owner is root and only the owner (root) can set the time codes - this means that any files or directories that are copied by a user have the time of the copy as their date stamp, which can cause problems with Unison when synchronising.

A solution for a single user machine is to find out your user id and mount the partition with option uid=user-id, then all the files on that partition belong to you - even the newly created ones. This way when you copy you keep the original file date.

# /dev/sda5
UUID=706B-4EE3 /media/DATA vfat iocharset=utf8,uid=yourusername,umask=000 0 0

In the case of multiple user machines you should not mount at boot time and instead mount the drives from Places.

The uid can also be specified numerically and the first user created has user id 1000.

SSH (Secure SHell)

Using SSH to synchronise between machines over a network

So far we have looked at the set-up needed for Unison to synchronise files on the same machine. We will now look at synchronising between machines using ssh. When you set up the two 'root' directories for synchronisation you get four options when you come to the second one - we have used the local option, but if you want to synchronise between machines you select the SSH option. You then fill in the hostname, which can be an absolute IP address or a hostname (but see below on the need to set this up). You should then specify the username you will use to log into the other machine, i.e. the samba sharing username as set up for Windows sharing, and leave the port blank. You will also need to know the folder on the other machine as you cannot browse for it. When you come to synchronise you will have to give the password corresponding to that username.

Setting up ssh

I have always checked that ssh has been correctly set up on both machines before trying to use Unison. In its simplest form ssh allows one to log in to a remote machine using a terminal. Both machines must have ssh installed (and the ssh daemons running, which is the default after you have installed it). For reasons we will come to (and sort out), the default set up of Ubuntu only allows one to use an absolute IP address, so you need to know the address on your network of the other machine. This can be found, within a mass of other information, by opening a terminal on the machine whose IP address you need and typing

ifconfig

which will give a lot of information on all your connections including the active network - the IP address you are looking for will follow an inet addr: . There will be two addresses, one will be for the local loopback (127.0.0.1) and the other is the external address which is what we want.

If you want to reduce the amount of information to just the lines containing inet addr then you can pipe the output through a program called grep - I leave it as a task for the reader to find out more about pipes and grep once you have tried it and can see the power of the command line!

ifconfig | grep "inet addr"

It is now time to try ssh and I assume that the machine we want to access has an IP address 192.168.1.2

ssh 192.168.1.2

You will get some warnings that it cannot authenticate the connection, which is not surprising as it is the first time, and it will ask for confirmation - you have to type yes rather than y. It will then tell you it has saved the authentication information for the future and you will get a request for a password, which is the start of your log in on the other machine. After providing the password you will get a few more lines of information and be back to a normal terminal prompt, but note that it is now showing the address of the other machine. You can enter some simple commands such as a directory list (ls) if you want. When you have tired of this, type exit to return to your own machine.

Enabling Hostname Resolution under Ubuntu: Using absolute IP addresses is not very satisfactory in the long term as they are allocated dynamically from your router or other server on most modern networks. This means that they may change, so you could end up synchronising with the wrong machine. In some ways the consistency and security checks run by ssh to prevent hacking are even more of a problem when the IP address changes, and I have sometimes had to clear out the file containing that information (~/.ssh/known_hosts). It is therefore much better to use the hostname to access the machine. Unfortunately the default set up in Ubuntu does not support hostname resolution without an additional utility being loaded and a change made in a configuration file to enable what is called WINS (Windows Internet Name Service) resolution by:

Installing winbind using System -> Administration -> Synaptic Package Manager and searching for winbind - it is often installed with samba so you may already have it, but it is best to check.

then in a terminal:

sudo gedit /etc/nsswitch.conf

and change this line:

hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

to:

hosts: files wins mdns4_minimal [NOTFOUND=return] dns mdns4

Note - in some cases there may be more or less on the line - if that is the case put wins immediately after files.

Now ssh <hostname> will log you into <hostname> and you can use hostnames in Unison
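You can test the resolution on its own with getent, which consults the same nsswitch.conf mechanism (substitute one of your own hostnames):

getent hosts remotehostname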

Problems with some networks giving slow response, especially after the wins modification for ssh: There is a problem with some ISPs who use DNS redirection for advertising purposes, which is accentuated when the above modification is made. It is a known bug, see https://bugs.launchpad.net/hundredpapercuts/+bug/389909, and the effect is that the ability to click on Places -> Network and browse the local network is essentially unusable. It takes over a minute and a half to just display the GUI and then it fails to display the network properly, if at all. Network places that are displayed often return error messages if clicked on. Sometimes running the network browse GUI several times in a row returns normal function, sometimes it does not. The fix is very simple. Edit

sudo gedit /etc/samba/smb.conf

In the file you need to change the resolve order and remove the leading semicolon so that bcast comes before host - in 12.04 bcast should be first

change this:


# What naming service and in what order should we use to resolve host names
# to IP addresses
; name resolve order = lmhosts host wins bcast

to this:


# What naming service and in what order should we use to resolve host names
# to IP addresses
name resolve order = bcast lmhosts wins host

Notice that this is a change in 12.04, earlier it was sufficient to just move host from 2nd place to 4th place.

Restart and network browsing returns to normal and hostname resolution works better.

Some sample Profiles for Unison:

The profiles live in /home/username/.unison and are plain text files with the extension .prf

# Profile to synchronise from desktop triton-ubuntu to netbook vortex-ubuntu
# with username pcurtis on vortex-ubuntu
# Note: if the hostname is a problem then you can also use an absolute address
# such as 192.168.1.4 on vortex-ubuntu
#
# Roots for the synchronisation
root = /media/DATA
root = ssh://pcurtis@vortex-ubuntu//media/DATA
#
# Paths to synchronise
path = My Backups
path = My Web Site
path = My Documents
path = Web Sites
#
# Some typical regexps specifying names and paths to ignore
ignore = Name temp.*
ignore = Name *~
ignore = Name .*~
ignore = Name *.tmp
#
# Some typical Options - only times is essential
#
# When fastcheck is set to true, Unison will use the modification time and length of a
# file as a ‘pseudo inode number’ when scanning replicas for updates, instead of reading
# the full contents of every file. Faster for Windows file systems.
fastcheck = true
#
# When times is set to true, file modification times (but not directory modtimes) are propagated.
times = true
#
# When owner is set to true, the owner attributes of the files are synchronized.
owner = true
#
# When group is set to true, the group attributes of the files are synchronized.
group = true
#
# The integer value of this preference is a mask indicating which permission bits should be synchronized other than set-uid.
perms = 0o1777

The above is a fairly comprehensive profile file to act as a framework and the various sections are explained in the comments.
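If the profile is saved as, say, ~/.unison/data.prf (the name is just an example) it appears in the GUI's profile list and can also be run from a terminal by giving the profile name:

unison data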

Secure access to a Remote File System using SSH and SSHFS (Advanced)

SSHFS (SSH File System) uses SSH to mount a remote filesystem on a local machine. SSHFS uses FUSE (Filesystem in Userspace), which is included in Ubuntu, and authenticates connections so only those who are authorised can mount them. SSH connections are encrypted so no one can see your files as they are transferred over the network, and once mounted the FUSE filesystem is owned by the user that created it so other users on the local machine cannot access it.

Firstly we must install sshfs using the Synaptic Package Manager by System -> Administration -> Synaptic Package Manager -> Search for sshfs, Right Click and Mark for Installation -> Apply.

Secondly add yourself to the fuse group by System -> Administration -> Users and Groups -> select user -> Properties -> User Privileges, then tick the following option:

Allow use of fuse filesystems ...

Once you have added yourself to the fuse group, you need to either log out and log back in again or reboot for the change to take effect.

There is a risk that your ssh session will automatically log out if it is idle. To keep the connection active add the following parameter to ~/.ssh/config or to /etc/ssh/ssh_config on the client.

ServerAliveInterval 5

This will send a "keep alive" signal to the server every 5 seconds.

Using a Terminal

A local mount point must exist and have permissions set for you to use it before you connect. If you have it in your home directory you can use the file manager to create the new directory which will have the correct permissions.
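From a terminal this is just:

mkdir ~/localfolder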

Assuming that you have ssh installed and the server running on the remote machine you now just need to run the sshfs command to mount the remote directory. In a terminal type:

sshfs -o idmap=user username@remotemachine:/remotefolder ~/localfolder

The idmap=user option ensures that files owned by the remote user are owned by the local user. If you don't use idmap=user, files in the mounted directory might appear to be owned by a different user.

To unmount,

fusermount -u ~/localfolder

This method seems simple and reliable and you should test its use.

Using ‘Connect to Server’ in the Nautilus file browser with the SSH option

An alternative way to mount a secure remote file system is to open the Nautilus file browser, then File -> Connect to Server and select SSH from the drop down menu. Fill in the server name and other parameters as required. This way will give you a mount on the desktop and you can unmount by a right click -> Unmount Volume. You still need SSH on both machines and the fix to the hostname resolution problem using winbind covered above. I am not sure if you also need the keep alive signals as I already had the fix in place.

If this fails with a timeout error try deleting or renaming ~/.ssh/known_hosts - you will get a message saying that it is a new host you are connecting to and everything will then work. It seems to be caused by the IP address changing for the remote hostname and you not getting the warning message.

Using Places menu

This is very similar to the above but is accessed by Places -> Connect to Server and filling in as above. Nautilus is then opened. It seems less reliable than accessing the connection window from within Nautilus on one of my machines with Hardy Heron, whilst it works fine under Jaunty.

Conclusion

I discovered this way of working much later than networking using samba. It has many advantages once the basics have been set up as it is a much more secure solution, especially over WiFi, as snooping is impossible with an encrypted connection. However you still need to have a strong password to prevent a remote login. I plan to expand this part of the document once I have more experience.

Using Public Key Authentication with ssh, sshfs and unison to avoid password requests

It is often inconvenient to have to provide a password when one logs into a remote machine using ssh or wants to use Unison, and impossible if you are automatically mounting a file system at login. The way round this is to use public/private key authentication instead. Only the public key is copied to the remote machine and the private key stays local, but there is obviously a slight security implication if the key has no passphrase: anyone who gains access to one machine has access to the other. There is a good introduction to ssh at http://kimmo.suominen.com/docs/ssh/ which covers this.

The following procedure allows one to set up access via a public/private key pair between two machines running OpenSSH. You should first check that you can log in to the remote machine and are authenticated on it. When running ssh-keygen you will be prompted for a file name and a passphrase. Simply hit the return key to accept the default file name, and enter no passphrase at this time - you can and should enter one at a later stage to improve security. scp is a command for a secure copy using ssh which is part of the ssh package. The command examples below need to be edited to use your own machines and usernames. It assumes you are logged in as username1 on host1 and want remote access to username2 on host2. Run the following commands in a terminal.

cd ~/.ssh
ssh-keygen -t dsa
scp id_dsa.pub username2@host2:/home/username2/.ssh/host1_username1_id_dsa.pub
ssh username2@host2
# the remaining commands run on the remote machine
cd ~/.ssh
cat host1_username1_id_dsa.pub >> authorized_keys
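Once everything works you can, as suggested above, add a passphrase to the existing key without regenerating it; ssh-keygen has a change-passphrase option:

ssh-keygen -p -f ~/.ssh/id_dsa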

You should find that you no longer have to provide a password when using ssh, sshfs, scp etc. If it ever stops working try ssh in a terminal as it is possible the IP addresses have changed and you need to reset things by deleting ~/.ssh/known_hosts and doing a new ssh login. This gets tedious after a few IP address changes (due to DHCP) and I am minded to try the following change to configure the secure shell (ssh) on the local machine not to check the IP. You find the ssh configuration file in /etc/ssh/ssh_config and the section to change is:

# CheckHostIP
#
# If this flag is set to "yes", ssh will additionally check the
# host IP address in the known_hosts file. This allows ssh to
# detect if a host key changed due to DNS spoofing. If the option is
# set to "no", the check will not be executed.
#
CheckHostIP no


Encryption

OpenPGP

We require encryption to protect any sensitive information whilst we are travelling. Linux has GnuPG, an OpenPGP implementation which offers a superset of the usual PGP standard but defaults to be compatible with the encryption levels available in PGP 8 which we are using. It is accessed by the gpg program which operates in terminal mode.

gpg --output ~/Desktop/homewine.htm.pgp --encrypt ~/Desktop/homewine.htm
gpg --output ~/Desktop/homewine.htm --decrypt ~/Desktop/homewine.htm.pgp

There is also a GUI interface called Seahorse, which can be installed by Add/Remove, and which makes the creation and management of keys much easier than using gpg directly. It is also supposed to add facilities to the text editor and file browser. The right click encryption of a file and encryption/decryption in the text editor work fine, as does the GUI program to create and manage keys. In theory double clicking on a PGP file ought to bring up the screens to open it but there seems to be a problem in Seahorse 8.1 or Ubuntu Dapper Drake which prevents that. After a bit of searching and playing about I realised that it worked if the .gpg extension was used, so it was a simple job to add the same program as an option for opening .pgp files, namely seahorse --decrypt, using the right click menu on a .pgp file -> Open with other application -> Use a Custom Command and filling in the box with seahorse --decrypt .

Evolution has built in encryption and signing for emails using keys created in terminal mode or managed by Seahorse. Full details of how to set it up and use it are in the Evolution help files.

Thunderbird does not have secure encryption in the main program; however it can be added by an extension which can be downloaded by the usual Applications -> Add/Remove -> Advanced and searching for mozilla-thunderbird-enigmail. This adds inline style PGP encryption and also provides a key manager interface. It found my existing keys set up by Seahorse and GnuPG and works well.

Encryption on Linux machines - GnuPG

As those who have come to the page from my Fun with Ubuntu Linux pages will know I have seen the light and am escaping from the security nightmare of Microsoft Windows and am shifting to Ubuntu Linux for all mobile activities and most serious work at home. An early priority was to investigate encryption under [Ubuntu] Linux. In the same way that OpenPGP use on Windows machines is still dominated by the original PGP, now provided in free and paid versions by PGP Corporation, Linux has GnuPG. The GNU Privacy Guard is fully OpenPGP compliant, supports most of the optional features and provides some extra features. GnuPG is the standard encryption and signing tool included in all significant GNU/Linux distributions and offers a superset of the usual PGP standard but with defaults compatible with the encryption levels available in PGP 8 which we are using. GnuPG is in fact freely available not only for GNU/Linux and nearly all other Unix systems but also for Microsoft Windows and some other operating systems. As a GNU program it can be used commercially or non-commercially without any costs.

The basic access is through the gpg program which operates in terminal mode. To show that terminal access is not that bad I have included some examples. The following encrypts and decrypts files on the desktop in a way compatible with PGP, ie with the .pgp extension - the default action gives a file with a .gpg extension added.

gpg --output ~/Desktop/homewine.htm.pgp --encrypt ~/Desktop/homewine.htm
gpg --output ~/Desktop/homewine.htm --decrypt ~/Desktop/homewine.htm.pgp

The above - for clarity - used the long format for the commands and, for example, the encryption can be done with just

gpg -o ~/Desktop/homewine.htm.pgp -e ~/Desktop/homewine.htm

or even

gpg -e ~/Desktop/homewine.htm which will encrypt into homewine.htm.gpg on the desktop
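One point the examples above gloss over is that gpg needs to know which key to encrypt to; if no recipient is specified it will prompt for one. A sketch giving the recipient explicitly - the user ID is a placeholder for whatever your own key carries:

gpg -r username@example.com -o ~/Desktop/homewine.htm.pgp -e ~/Desktop/homewine.htm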

Most Windows users feel that even simple command line operations are a retrograde step whilst forgetting they are still integral to Windows for any system work. Linux users tend to like command line operation in many cases and even converts from Windows like myself have to admit it often makes things quicker and more flexible. For those who want to avoid using a terminal a GUI interface to gpg has been written called Seahorse (which can be installed by Add/Remove on Ubuntu) which certainly makes the creation and management of keys much easier than using gpg directly. It also adds facilities to the text editor and file browser. One only has to right click on a file to get to an encryption option, and encryption/decryption in the text editor work fine as does the GUI programme to create and manage keys. Double clicking on a .gpg file brings up the screens to open it but there seems to be a problem in Seahorse 8.1 or Ubuntu Dapper Drake which prevents the same for .pgp files although they were equally acceptable elsewhere. After a bit of searching and playing about I realised that it worked when the .gpg extension was used, so it was a simple job to add the same program as an option for opening .pgp files, namely seahorse --decrypt, using the right click menu on a .pgp file -> Open with other application -> Use a Custom Command and filling in the box with seahorse --decrypt

In the same way as Outlook has options built into it by PGP, Evolution has built in encryption and signing for emails using keys created in terminal mode or managed by Seahorse. Full details of how to set it up and use it are in the Evolution help files. Regrettably, there was originally no support built into Thunderbird under Windows or Linux, although the Enigmail extension covered above now fills that gap.

Secure deletion of data - shred

The other feature which is required for looking after data securely is a way to erase files without traces. It is no good being able to encrypt a file if you cannot delete the original or working copies. Linux has a built in command shred which overwrites the selected data in multiple passes, making a read based on residual information at the edges of the magnetic tracks almost impossible, before the file is deleted. This is not foolproof for all file systems and programs, as temporary copies may be made and modern file systems do not always write data in the same place; however on an ext2 or ext3 system with the default settings in Ubuntu Linux it is acceptable. Do a man shred to find out more.
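For a one-off file the command can of course be used directly in a terminal - the filename here is just an example:

shred -n 25 -u -z -v ~/Desktop/filetoshred.doc

Here -n 25 overwrites 25 times, -z adds a final pass of zeros to hide the shredding, -u removes the file afterwards and -v shows progress.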

Scripting Example - Secure Delete

There is no GUI interface for shred so I wrote a simple script. This took a few days to get up the learning curve of programming in bash and of how the system is set up, which will pay off handsomely in the future. A good place to start is LinuxCommand.org: Learning the shell. The important piece of information is that the addition of a path to a ~/bin directory is set in Ubuntu Linux in .bashrc not .bash_profile as is described in some places. Also note that files starting with a . are hidden - use View -> Hidden Files in the file browser to find them. The lines I added were:

# Additions to the standard ~/.bashrc file to set up path to
# /bin directory in home folder
PATH=$PATH:~/bin
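The folder itself must exist and the new PATH only takes effect in new shells - a small sketch to create it and pick up the change immediately:

mkdir -p ~/bin
source ~/.bashrc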

I then had a folder in which to put script files which could be accessed from any directory. My first script to shred a file follows - if you want to follow it in detail remember that man any_command gives a summary of what it does and its options:

#!/bin/bash
#
# script to shred a file
# ~/bin/Shredfile
#
echo "This will Shred - overwrite many times - and"
echo "delete a file in a secure manner"
echo "The filename should include the path"
echo "from your home directory"
echo "eg /Desktop/filetoshred.doc or /Safe/filetoshred"
echo -n "Enter Filename to Shred ? "
read filename
# filename is relative to the home directory
temp=$HOME$filename
if [ -f "$temp" ]
then
echo "File $temp will be deleted"
else
echo "File $temp does not exist"
echo "Hit Enter to exit ? "
read
exit
fi
echo "About to Shred $temp !!!"
echo -n "This is irrevocable - y to continue or n to Abort ? "
read t1
if [ "$t1" = "y" ]
then
shred -n 50 -u -z -v "$temp"
else
echo -n "Aborted - hit any key to exit ? "
read
exit
fi
echo -n "File shredded - hit any key to exit ? "
read
exit

The reads at the end of each part are necessary to prevent the Terminal Window closing before you have seen what happens.

The script files must be given the correct permissions by

chmod 755 scriptname

The last step is to create a launcher on the desktop. Right click anywhere on the desktop -> Create Launcher, fill in a name, browse to the ~/bin directory and script name, tick run in terminal and add an icon if required - that is it. The Launcher can also be dragged onto the panel.

It all sounds very simple but it took me a while to get scripting together the first time despite having done some programming in my time.

TrueCrypt and VeraCrypt

I have used TrueCrypt on all my machines and despite its well documented and sudden withdrawal by the authors it is still well regarded and considered safe by many - see https://www.grc.com/misc/truecrypt/truecrypt.htm. There are many conspiracy theories about the sudden withdrawal, mostly based round the idea that the security services could not crack it. Fortunately it has now been forked and continues as open source, with enhanced security, as VeraCrypt. There is a transcript of a podcast by Steve Gibson which covers security testing and his views on changing to VeraCrypt at https://www.grc.com/sn/sn-582.htm and he now supports the shift. VeraCrypt is arguably now both the best and the most popular disk encryption software across all platforms and I have shifted on most of my machines. VeraCrypt can continue to use TrueCrypt vaults and also has a slightly improved format of its own which addresses one of the security concerns. Changing format is as simple as changing the vault password - see this article.

TrueCrypt and its replacement VeraCrypt create a Virtual Disk with the contents encrypted into a single file, onto a disk partition or onto removable media such as a USB stick. The encryption is all on the fly: you have a file, you mount it as a disk and from then on it is used just like a real disk, with everything decrypted and re-encrypted invisibly in real time. The virtual drive is unmounted automatically at close down, and one should have closed all the open documents using the virtual drive by that point just as when you shut down normally. The advantage is that you never have the files copied onto a real disk so there are no shadows or temporary files left behind and one does not have to do a secure delete.

TrueCrypt and its replacement VeraCrypt obviously install deep into the operating system in order to encrypt/decrypt invisibly on the fly. This has meant in the past that a build was specific to a Linux kernel and had to be recompiled/installed every time a kernel was updated. Fortunately it can now be downloaded as an installer in both 32 and 64 bit versions - make sure you get the correct version.

The VeraCrypt installers for Linux are now packed into a single compressed file, typically veracrypt-1.21-setup.tar.gz. Just download it, double click to open the archive and drag the appropriate installer, say veracrypt-1.21-setup-gui-x64, to the desktop, then double click it and click 'Run in Terminal' to run the installer script.
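If you prefer to do the whole thing in a terminal, the equivalent unpack-and-run is roughly as follows - a sketch assuming the file names above and a download in ~/Downloads:

cd ~/Downloads
tar xzf veracrypt-1.21-setup.tar.gz
./veracrypt-1.21-setup-gui-x64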

The Linux version of Vera/TrueCrypt has a GUI interface almost identical to that in Windows. It can be run from the standard menu, although with Cinnamon you may need to do a Cinnamon restart before it is visible. It can also be run by just typing veracrypt in a terminal. It opens virtual disks which are placed on the desktop. Making new volumes (encrypted containers) is now trivial - just use the wizard. This is now a very refined product under Linux.
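The same binary can also mount and dismount volumes directly from the command line; a hedged sketch - the vault name is illustrative and the options vary a little between versions, so check veracrypt --help:

veracrypt ~/secure.hc
veracrypt -d

The first prompts for the password and mounts the vault; the second dismounts all mounted VeraCrypt volumes.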

The only snag I have found is that one has to have administrative privileges to mount one's volumes. This means that one is asked for one's administrative password on occasions as well as the volume password. There is a way round this by providing additional 'rights' specific to just this activity to a user (or group) by additions to the sudoers file. There is information on the sudoers file and editing it at:

https://help.ubuntu.com/community/Sudoers

Because sudo is such a powerful program you must take care not to put anything incorrectly formatted in the file. To prevent any incorrect formatting getting into the file you should edit it using the command visudo run as root or by using sudo ( sudo visudo ). The default editor for visudo on Ubuntu is nano, a terminal editor which takes some getting used to for those coming from Windows, so it is fortunate we only have a single line to add!

You launch visudo in a terminal

sudo visudo

There are now two ways to proceed. If you have a lot of users then it is worth creating a veracrypt group, adding all the users that need VeraCrypt to that group, and then using the sudoers file to give group members the 'rights' to use it without a password - see the sketch after the link below and:

http://wiki.archlinux.org/index.php/Truecrypt
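Creating the group and adding a user looks roughly like this - USERNAME is a placeholder, the group name must match the %veracrypt sudoers line below, and the user needs to log out and in again before the new group takes effect:

sudo groupadd veracrypt
sudo usermod -aG veracrypt USERNAME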

If you only have one or two users then it is easier to give them individual rights by launching visudo in a terminal and appending one of the following lines: the first is for a single user (replace USERNAME with your username), the second for a group called veracrypt, and the last is brute force and gives everyone access:

USERNAME ALL = (root) NOPASSWD:/usr/bin/veracrypt
or
%veracrypt ALL = (root) NOPASSWD:/usr/bin/veracrypt
or
ALL ALL=NOPASSWD:/usr/bin/veracrypt

Type the line carefully and CHECK - there is no cut and paste into visudo. Make sure there is a return at the end.

Save by Ctrl-O and exit by Ctrl-X - if there are errors there will be a message in the terminal and a request asking what to do.

I have used both the simple single-user way and the group method.

Encrypting your home folder

The ability to encrypt your home folder has been built into Mint for a long time and it is an option during installation for the initial user. It is well worth investigating if you have a laptop, but there are a number of considerations. It becomes far more important to back up your home folder in the mounted (un-encrypted) form to a securely stored hard drive, as it is possible to get locked out in a number of less obvious ways, such as changing your login password in a terminal.

There is a passphrase generated for the encryption which can in theory be used to mount the folder, but the forums are full of problems with fewer solutions. You should generate and record it for each user with

ecryptfs-unwrap-passphrase

Now we find there is considerable confusion about what is being produced and what is being asked for in many of the ecryptfs options and utilities, as it will request your passphrase in order to give you your passphrase! I will try to explain. When you log in as a user you have a login password or passphrase. The folder is actually encrypted with a much longer, randomly generated, passphrase which is looked up when you log in with your login password; that long passphrase is what you are being given above and what is needed if something goes dreadfully wrong. These are [should be] kept in step if you change your login password using the GUI Users and Groups utility but not if you do it in a terminal. It is often unclear which password is required as both are often just referred to as the passphrase in the documentation.
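In concrete terms, assuming the standard eCryptfs layout where the wrapped copy lives in ~/.ecryptfs, the command and what it asks for are:

ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase

It prompts for your login passphrase and then prints the long randomly generated mount passphrase, which you should record somewhere safe.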

Encrypting the home folder of extra users.

The Add User option in Users and Groups has no option for encryption so this must be done manually - see the sketch at the end of this section. This is also the way to encrypt an existing home folder.

First back up your home folder, preferably to two places, one of which is an external drive - there is a real risk this can go wrong at some stage.

Note that file names over 143 characters are not allowed and may be lost or the process may fail. Also note you need spare disk space well over twice the size of the home folder you are encrypting as the process keeps its own backup.
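The usual tool for this is ecryptfs-migrate-home from the ecryptfs-utils package - a hedged sketch; the user being migrated must not be logged in while it runs, and must log in again before the next reboot to complete the migration:

sudo ecryptfs-migrate-home -u USERNAME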


Copyright © Peter & Pauline Curtis
Content revised: 8th July, 2020