Home Uniquely NZ Travel Howto Pauline Small Firms Search
Diary of System and Website Development
Part 30 (September 2018 - December 2018)

2 September 2018

The significant differences between recent Mint versions, based on Xenial (LTS version 16.04) and Bionic (LTS version 18.04), and Ubuntu.

Differences between Ubuntu and Mint

Missing utilities: You should install synaptic and inxi in Ubuntu; this has no effect on existing programs. Synaptic allows you to install programs more easily, as I suggest in places, and is needed specifically for a couple of actions.

Text Editor: xed and gedit. In general you can replace xed by gedit if you are using Ubuntu, or vice versa, on all my pages.

File managers: nemo and nautilus. Nemo has many useful abilities on the right-click menu to allow safe working with higher privileges. These were removed from nautilus, with the result that you need to use the terminal more, rather than following my suggestions of working with nemo and elevated privileges to open files, set permissions etc. from a right-click menu and so avoid the terminal.

Differences between Mint 18 and 19 and Ubuntu Xenial and Bionic

gksudo was removed from Bionic, and hence Mint 19, and it is not safe to start GUI programs as root with sudo - that includes nautilus & nemo and even gedit & xed. The reason is that any preferences etc. can be saved as root and may become inaccessible or even break the system.

Instead it is recommended by Mint and Ubuntu that you use the gvfs admin backend:

xed admin:///etc/fstab
and
nemo admin:///etc

to reach and edit /etc/fstab as root - you will be asked for your password when required. This is the way used in many examples on the web, but it has dangers as it gives no further warning that you have elevated privileges.

The admin:// mechanism does not work on Ubuntu versions lower than 17.04 or Mint versions lower than 19, so if you find an admin:// in my howtos use gksudo instead, i.e. replace xed admin:///etc/fstab by gksudo xed /etc/fstab, noting the different numbers of /. I do not use nemo admin:// as far as I know (other than in examples).
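The translation is mechanical, and a minimal sketch of it as a shell function may make the slash-counting clearer (illustrative only - gksudo must of course actually be installed on the older system):

```shell
# Turn an admin:// invocation back into its gksudo equivalent.
# admin:/// has three slashes because the path itself starts with /.
admin_to_gksudo() {
  editor="$1"                      # e.g. xed
  uri="$2"                         # e.g. admin:///etc/fstab
  path="${uri#admin://}"           # strips the scheme, leaving /etc/fstab
  echo "gksudo $editor $path"
}
```

For example, `admin_to_gksudo xed admin:///etc/fstab` prints `gksudo xed /etc/fstab`, the command to run instead.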

More about the use of pkexec and policy files. (Advanced)

I must mention that there is a better mechanism than admin://, and that is to use pkexec (PolicyKit) to allow elevated privileges for the specific actions where they are needed. This is my preferred solution:

pkexec xed /etc/fstab

This gives you all the facilities you would have had from the right click -> Open as Root in nemo and also retains all the warning banners, so it is safer than the admin:// mechanism, which I consider to be a bodge.

The catch is that pkexec needs a policy file for every program. One exists in Mint 19 for nemo but currently not for xed.

Here is the listing of the policy for the xed text editor (based on the one already existing for nemo), which I placed in /usr/share/polkit-1/actions/ as org.xed.root.policy .

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE policyconfig PUBLIC
 "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/PolicyKit/1/policyconfig.dtd">

<policyconfig>

  <vendor>Xed Project</vendor>
  <vendor_url>https://github.com/linuxmint/xed</vendor_url>

  <action id="org.xed.root">
    <description>Run Xed with elevated privileges</description>
    <message gettext-domain="xed">Text Editor</message>
    <icon_name>accessories-text-editor</icon_name>
    <defaults>
      <allow_any>no</allow_any>
      <allow_inactive>no</allow_inactive>
      <allow_active>auth_admin_keep</allow_active>
    </defaults>
    <annotate key="org.freedesktop.policykit.exec.path">/usr/bin/xed</annotate>
    <annotate key="org.freedesktop.policykit.exec.allow_gui">true</annotate>
  </action>

</policyconfig>

It can also be downloaded from here for xed and should be placed in /usr/share/polkit-1/actions/ after removing the .txt from the end - the .txt is there to prevent browsers interpreting the file as XML and incorrectly displaying it! It should be owned by root. I have it on all my Mint 19 systems.
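The install steps can be sketched as a small function (a sketch only: the source and destination paths are arguments here, and for the real polkit directory /usr/share/polkit-1/actions/ it would need to be run as root so the file ends up owned by root):

```shell
# Copy a downloaded .policy.txt file into a polkit actions directory,
# dropping the .txt that was added to stop browsers rendering the XML.
# install(1) sets the mode; run as root for the real directory.
install_policy() {
  src="$1"                           # e.g. ~/Downloads/org.xed.root.policy.txt
  dest_dir="$2"                      # normally /usr/share/polkit-1/actions
  base=$(basename "$src" .txt)       # org.xed.root.policy
  install -m 644 "$src" "$dest_dir/$base"
}
```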

If you are using an earlier version or a system which does not have a policy file for nemo, there is one here for nemo and also one for Leafpad here. Both can also be placed in /usr/share/polkit-1/actions/ after removing the .txt from the end.

Leafpad is a simple text editor used in Lubuntu - it gets used occasionally by me as it is sometimes useful to have two editors open at the same time.

I believe the pkexec method above will also work in Ubuntu for the nautilus file manager but not for Ubuntu text editors such as gedit and leafpad unless you write policy files for pkexec. You should be able to adapt the one above for other programs such as gedit and nautilus if policy files do not exist in Ubuntu.

GUI alternatives to The Terminal in Mint 19 and higher.

It is interesting that the commonest objection to Linux is that newcomers believe they have to use a terminal for many operations. The terminal is often the best, simplest and safest way when you are familiar with it, but you can do almost everything in a GUI with a few tricks. Firstly you will need to be comfortable with right-click menus as they will be used extensively to access operations from a GUI interface which would otherwise be done in a terminal.

Mostly we will be working from the file browser rather than a terminal, which is fine if one is working on one's own files in one's own home folder - but what do we do if we want to work on system files owned by root? In nemo this is very simple as there is a right-click menu item 'Open as Root'. From then on, every activity from within the file browser is done as if you were root, including opening the text editor for editing system files and setting the permissions, ownership etc. of a file via Properties. This is very powerful and also very dangerous if you forget that you are 'root', so nemo puts up a big visible banner to keep you aware, and if you open a file from the browser in xed by double-clicking it will have a red banner saying elevated privileges. This is much more sensible than the new xed admin:// mechanism, which does not put up the warning banner.

Using the file manager (browser) as Root

So how does this work in practice? In setting up Mint the way I want I have to back up then edit about 7 system files owned by root, so I open my file browser as root by navigating to the level above the first folder with files I need to change and open that folder as root using the right-click 'Open as Root'. If the file is a hidden system file starting with a dot then you also need to do a Ctrl h to display it. I now hold down the Ctrl key and drag the file I want to work on to an empty space - this makes a copy with (copy) added to its name. I now right-click on the original file and click Open, which will bring up a box offering to run in a terminal or display - we want Display, which will open a text file in the text editor (xed) as root, so we can now make the changes we want and save them. We then close the text editor and then the file browser, and that is it.

WARNING: Be careful how you delete files when running as root in a GUI, as they can go into root's Deleted Items folder, which you have no easy way of emptying without loading and using a special tool in a terminal - which defeats what I am trying to do here! Mint has two right-click delete options: one goes to the recycle bin, which you must not use; the other deletes irreversibly after a warning, which is best if you are running as root.

For those who do not heed warnings: you can probably empty the root trash by installing trash-cli, a command-line trash package, using the Synaptic Package Manager and typing

sudo trash-empty

If that does not work it's time for Google!

What can't you do in a GUI?

This is not an exhaustive list:

  1. Install PPAs.
  2. Anything to do with systemd, e.g. enable and disable services.
  3. Some grub-related tasks, e.g. update-grub.
  4. Get diagnostics for many low-level systems, e.g. inxi, lspci.
  5. Set up home folder encryption using ecryptfs.
  6. Set up partition encryption using LUKS.
  7. Carry out some backup procedures.
  8. Use Git.

 

12 August 2018

The Start of Encryption on our machines and Backup Media

Encrypting an existing users home folder.

It is possible to encrypt an existing user's home folder provided there is at least 2.5 times the folder's size available in /home - a lot of workspace is required and a backup is made.

You also need to do it from another user's account. If you do not already have one, an extra basic user with admin (sudo) privileges is required, and the user must be given a password otherwise sudo cannot be used.

You can create this basic user very easily and quickly using Users and Groups: Menu -> Users and Groups -> Add Account, set Type to Administrator, provide a username and full name... -> Create -> highlight the user and click Password to set a password, otherwise you cannot use sudo.

Log out and log in to your new basic user.

Now you can run this command to encrypt a user:

sudo ecryptfs-migrate-home -u user

You'll have to provide your user account's login password. After you do, your home folder will be encrypted and you should be presented with some important notes. In summary, the notes say:

  1. You must log in as the other user account immediately – before a reboot!
  2. A copy of your original home directory was made. You can restore the backup directory if you lose access to your files. It will be of the form user.8random8
  3. You should generate and record the recovery passphrase (aka Mount Passphrase).
  4. You should encrypt your swap partition, too.

The highlighting is mine and I reiterate: you must log out and log in to the user whose account you have just encrypted before doing anything else.

Once you are logged in you should also create and save, somewhere very safe, the recovery phrase (also described as a randomly generated mount passphrase). You can repeat this at any time whilst you are logged into the user with the encrypted account, like this:

user@lafite ~ $ ecryptfs-unwrap-passphrase
Passphrase:
randomrandomrandomrandomrandomra
user@lafite ~ $

Note the confusing request for a Passphrase - what is required is your login password/passphrase. This will not be the only case where you are asked for a passphrase which could be either your login passphrase or your Mount Passphrase! The Mount Passphrase is important - it is what actually unlocks the encryption. There is an intermediate stage when you log in to your account where your login password is used to temporarily regenerate the actual mount passphrase. This linkage needs to be updated if you change your login password, and for security reasons this is not done if you change your login password in a terminal using passwd user, which could be done remotely. If you get the two out of step the mount passphrase may be the only way to retrieve your data, hence its great importance. It is also required if the system is lost and you are using backups.

The documentation in various places states that the GUI Users and Groups utility updates the linkage between the Login and Mount passphrases, but I have found that the password change facility is greyed out in Users and Groups for users with encrypted home folders. In a single test I used just passwd from the actual user and that did seem to update both; everything kept working and allowed me to log in after a restart.

Mounting an encrypted home folder independently of login.

A command line utility ecryptfs-recover-private is provided to mount the encrypted data but it currently has several bugs when used with the latest Ubuntu or Mint.

  1. You have to specify the path rather than let the utility search.
  2. You have to manually link keyrings with a magic incantation which I do not understand at all, namely sudo keyctl link @u @s, after every reboot. man keyctl indicates that it links the User Specific Keyring (@u) to the Session Keyring (@s). See https://bugs.launchpad.net/ubuntu/+source/ecryptfs-utils/+bug/1718658 for the bug report.

The following is an example of using ecryptfs-recover-private and the mount passphrase to mount a home folder as read/write (--rw option), doing an ls to confirm, then unmounting and checking with another ls.

pcurtis@lafite:~$ sudo keyctl link @u @s
pcurtis@lafite:~$ sudo ecryptfs-recover-private --rw /home/.ecryptfs/pauline/.Private
INFO: Found [/home/.ecryptfs/pauline/.Private].
Try to recover this directory? [Y/n]: y
INFO: Found your wrapped-passphrase
Do you know your LOGIN passphrase? [Y/n] n
INFO: To recover this directory, you MUST have your original MOUNT passphrase.
INFO: When you first setup your encrypted private directory, you were told to record
INFO: your MOUNT passphrase.
INFO: It should be 32 characters long, consisting of [0-9] and [a-f].

Enter your MOUNT passphrase:
INFO: Success! Private data mounted at [/tmp/ecryptfs.8S9rTYKP].
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
Desktop Dropbox Pictures Templates
Documents Videos Downloads Music Public
pcurtis@lafite:~$ sudo umount /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$

The above deliberately took the long way rather than use the matching LOGIN passphrase as a demonstration.

I have not bothered yet with encrypting the swap partition as it is rarely used if you have plenty of memory and swappiness set low, as discussed earlier.

Once you are happy you can delete the backup folder to save space. Make sure you use the irreversible delete (right-click Delete) if you are using nemo as root - do not risk it ending up in a root trash, which is a pain to empty!

Feature or Bug - home folders remain encrypted after logout?

In the more recent versions of Ubuntu and Mint the home folders remain mounted after logout. This also occurs if you log in at a console or remotely over SSH. This is useful in many ways, and you are still fully protected if the machine is off when it is stolen. You have little protection in any case if the machine is turned on and just suspended. Some people however log out and suspend expecting full protection, which is not the case. In exchange it makes backing up and, in particular, restoring a home folder easier.

Backing up an encrypted folder.

A tar archive can be generated from a mounted home folder in exactly the same way as before, as the folder stays mounted when you change user to ensure the folder is static. If that were not the case you could log in at a console (Ctrl Alt F2) then switch back to the GUI with Ctrl Alt F7, or log in via SSH, to make sure it was mounted to allow a backup. Either way it is best to log out at the end.

Another, and arguably better, alternative is to mount the user's folder via ecryptfs-recover-private and back up using Method 3 from the mount point like this:

sudo ecryptfs-recover-private --rw /home/.ecryptfs/user1/.Private

cd /tmp/ecryptfs.8S9rTYKP && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

Restoring to an encrypted folder - Untested

Mounting via ecryptfs-recover-private --rw seems the most promising way, but it is not tested yet. The mount point corresponds to the user's home folder (see example above) so you have to use Method 3 to create and retrieve your archive in this situation, namely:

cd /home/user1 && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs
# or
cd /tmp/ecryptfs.8S9rTYKP && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz" -C /tmp/ecryptfs.randomst

These are all single lines if you cut and paste. The . (dot) means everything at that level goes into the archive.
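Before trusting an archive for a restore it is worth listing it back. A minimal sketch of backup-then-list with illustrative paths, using the same tar flags as above (without v):

```shell
# Create a gzipped tar of everything under $src (paths stored relative to .)
# and list the archive contents back as a sanity check.
backup_and_list() {
  src="$1"                 # e.g. /tmp/ecryptfs.8S9rTYKP (mounted home folder)
  archive="$2"             # e.g. /media/USB_DATA/mybackupuser1method3.tgz
  (cd "$src" && tar cpzf "$archive" . --exclude=.gvfs)
  tar tzf "$archive"       # list what actually went into the archive
}
```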

Solving Problems with Dropbox after encrypting home folders

The day after I encrypted the last home folder I got a message from Dropbox saying that they would only support ext4 folders under Linux from 3 months' time and that encryption would not be supported. They also noted the folders should be on the same drive as the operating system.

My solution has been to move the Dropbox folders to a new ext4 partition on the SSD. What I actually did was to make space on the hard drive for a swap partition and move the swap from the SSD to make space for the new partition. It is more sensible to have the swap on the hard drive as it is rarely used, and when it is used it tends to reduce the life of the SSD. Moving the swap partition needed several steps, and some had to be repeated for both operating systems to avoid errors in booting. The stages in summary were:

  1. Use gparted to make the space by shrinking the DATA partition by moving the end
  2. Format the free space to be a swap partition.
  3. Right click on the partition to turn it on by swapon
  4. Add it in /etc/fstab using blkid to identify the UUID so it will be auto-mounted
  5. Check you now have two swaps active by cat /proc/swaps
  6. Reboot and check again to ensure the auto-mount is correct
  7. Use gparted to turn off swap on the SSD partition - Rt Click -> swapoff
  8. Comment out the SSD swap partition in /etc/fstab to stop it auto-mounting
  9. Reboot and check only one active partition by cat /proc/swaps
  10. Reformat the ex swap partition to EXT4
  11. Set up a mount point in /etc/fstab of /media/DROP; set the label to DROP
  12. Reboot and check it is mounted and visible in nemo
  13. Get to a root browser in nemo and set the owner of /media/DROP from root to 1000, group to adm, and allow rw access to everyone.
  14. Create folders called user1, user2 etc in DROP for the dropbox folders to live in. It may be possible to share a folder but I did not want to risk it.
  15. Move the dropbox folders using Dropbox Preferences -> Sync tab -> Move: /media/DROP/user1
  16. Check it all works.
  17. Change folders in KeePass2, veracrypt, jdotxt and any others that use dropbox.
  18. Repeat from 15 for other users.
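Steps 4, 8 and 11 all involve /etc/fstab edits; an illustrative fragment is below (the UUIDs are placeholders for the values blkid reports, not real ones):

```
# step 4: auto-mount the new swap partition on the hard drive
UUID=<hdd-swap-uuid>    none          swap  sw        0  0
# step 8: the old SSD swap commented out so it no longer auto-mounts
# UUID=<ssd-swap-uuid>  none          swap  sw        0  0
# step 11: the reformatted ex-swap partition mounted at /media/DROP
UUID=<new-ext4-uuid>    /media/DROP   ext4  defaults  0  2
```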

Dropbox caused me a lot of time-wasting work but it did force me to move the swap partition to the correct place which could eventually be encrypted another way.

4 September 2018

Encryption using LUKS (dm-crypt LUKS)

I currently have an ext4 partition of 8 Gbytes mounted at /media/SAFE on the Defiant, which I intend to convert to a LUKS encrypted partition; I believe this will be acceptable for Dropbox.

First we need to know details of the partition using blkid

pcurtis@defiant:~$ blkid
...
/dev/sda3: LABEL="MINT183" UUID="749590d5-d896-46e0-a326-ac4f1cc71403" TYPE="ext4" PARTUUID="5b5913c2-7aeb-460d-89cf-c026db8c73e4"
/dev/sda4: UUID="99e95944-eb50-4f43-ad9a-0c37d26911da" TYPE="ext4" PARTUUID="1492d87f-3ad9-45d3-b05c-11d6379cbe74"
/dev/sda5: LABEL="SAFE" UUID="1b77be28-65f5-49ad-8264-3614b9b275b3" TYPE="ext4" PARTUUID="7ad6cb0d-db2c-4dca-ad8a-4978786c02bf"
...

The first thing to do before making any changes is to check for auto-mounting in /etc/fstab and remove it if required. It is also prudent to do a sudo update-grub after any changes made to partitioning. If you try to boot with a non-existent file system in fstab the machine hangs, as I have found to my cost. This is my /etc/fstab file:

# <file system> <mount point> <type> <options> <dump> <pass>
UUID=e07f0d65-8835-44e2-9fe5-6714f386ce8f / ext4 errors=remount-ro 0 1
# UUID=138d610c-1178-43f3-84d8-ce66c5f6e644 /home ext3 defaults 0 2
UUID=99e95944-eb50-4f43-ad9a-0c37d26911da /home ext4 defaults 0 2
UUID=2FBF44BB538624C0 /media/DATA ntfs defaults,umask=000,uid=pcurtis,gid=46 0 0
UUID=178f94dc-22c5-4978-b299-0dfdc85e9cba none swap sw 0 0

In this case there was nothing to do as I had not got round to auto-mounting in fstab.

One can over-write the partition with random data if you want to be totally sure all information is lost. There are doubts this is effective on an SSD due to the way the controller shuffles data around to reduce wear. To my mind you have to really need to hide something to go through these procedures.

sudo shred --verbose --random-source=/dev/urandom --iterations=3 /dev/sda5

Note: This is a long job even on a small partition, and I actually used only a single iteration, just to test the procedure.

Now we can create a cryptographic device mapper device in LUKS encryption mode:

sudo cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-random luksFormat /dev/sda5

We now have an encrypted partition but it has no file system, so we first open, or 'unlock', the device by

sudo cryptsetup open --type luks /dev/sda5 sda5e

and now create an ext4 file system in order to write encrypted data that will be accessible through the device-mapper name. The name sda5e is just the device-mapper name used here to format the partition.

sudo mkfs.ext4 /dev/mapper/sda5e

If we now look in gparted we find we have a partition with a filesystem described as [Encrypted] ext4, so we are well on the way.

It is a good idea to add a label to the file system within the LUKS partition so it is accessible with a meaningful name when it is mounted. This is a good time to do it as it needs to be mounted with a mapped address you know. eg:

sudo e2label /dev/mapper/sda5e VAULT

If we look in nemo we will find a new device described as a 9.4 GB Volume, and if we click on it we will be taken to a screen asking for the Pass Phrase so we can mount it. At this point you will see the label. Before being able to add files etc. we may need to set its permissions - I set owner root and group adm, with read and write, as users with admin rights are in group adm.

Important: There is one more important step and that is to create a backup header file for security - lose that vulnerable bit of the system and you are completely stuffed.

sudo cryptsetup -v luksHeaderBackup /dev/sda5 --header-backup-file LuksHeaderBackup.bin

and put it and perhaps a copy somewhere very safe.

Auto-mounting our LUKS partition

We now have two basic choices. I have tried both and both have advantages and disadvantages.

  1. Auto-mount at boot needs a keyfile created which is used in addition to the Pass Phrase to unlock the drive. This is arguably the best way if you have somewhere already encrypted to store it otherwise this has no security at all. So this is good for drives when you already have an encrypted root (you can't easily have an encrypted /boot) and possibly an encrypted /home might serve depending on timing.
  2. Auto-mount at login uses a utility you have to install (pam_mount) which works best for local logins which are what most users will do. You have to use the same Pass Phrase as the user login. LUKS has 8 slots for Pass Phrases so it will work with up to 8 users with different logins and if the login passpahrase changes you must change the matching one on the LUKS volume.

This is what to do in more detail for each option.

1. Auto-Mount a LUKS Partition at System Boot

First create a random keyfile

sudo dd if=/dev/urandom of=/etc/luks-keys/disk_secret_key bs=512 count=8

Note: The folder has to exist, otherwise dd complains - and use less obvious names!
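The keyfile creation can be wrapped into a small sketch; the directory is a parameter so it can be tried anywhere, but for real use it would be /etc/luks-keys and the commands run as root:

```shell
# Create a directory only its owner can enter, then an 8 x 512 byte random
# keyfile inside it, readable by the owner only.
make_keyfile() {
  dir="$1"                                         # e.g. /etc/luks-keys
  mkdir -p "$dir" && chmod 700 "$dir"
  dd if=/dev/urandom of="$dir/disk_secret_key" bs=512 count=8 2>/dev/null
  chmod 400 "$dir/disk_secret_key"
}
```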

Now we add the keyfile to LUKS

sudo cryptsetup luksAddKey /dev/sda5 /etc/luks-keys/disk_secret_key

You can see how many slots are in use by:

sudo cryptsetup luksDump /dev/sda5 | grep "Key Slot"

and you can see that we have used 2 of the 8 slots, one with the Pass Phrase and the second with the keyfile.

pcurtis@defiant:~$ sudo cryptsetup luksDump /dev/sda5 | grep "Key Slot"
Key Slot 0: ENABLED
Key Slot 1: ENABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED
pcurtis@defiant:~$
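As a guard before removing keys later, you can count the enabled slots from the dump; a small sketch assuming the luksDump output format shown above:

```shell
# Count enabled key slots from `cryptsetup luksDump` output fed on stdin,
# e.g.: sudo cryptsetup luksDump /dev/sda5 | enabled_slots
enabled_slots() {
  grep -c ': ENABLED'
}
```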

We now need to create a mapper for the LUKS device that can be referenced in fstab. This is all very easy as there is a built-in table for the mappings. Open /etc/crypttab by

xed admin:///etc/crypttab

and then add a line like this:

sda5_crypt /dev/sda5 /etc/luks-keys/disk_secret_key luks

or, better still, use the UUID of the device:

sda5_crypt /dev/disk/by-uuid/1b77be28-65f5-49ad-8264-3614b9b275b3 /etc/luks-keys/disk_secret_key luks

What we have actually done is to tell the system that /etc/luks-keys/disk_secret_key should be used instead of password entry to unlock the drive.

Note: /etc/crypttab did not exist on my system and I had to create it.

You can now mount in nemo without a password but we still need the last step of adding the device mapper to the file system table if we want to auto-mount at boot with all the other file systems.

xed admin:///etc/fstab

and add a line like this at the end of /etc/fstab:

/dev/mapper/sda5_crypt /media/SAFE ext4 defaults,noauto 0 0

to give

# <file system> <mount point> <type> <options> <dump> <pass>

UUID=e07f0d65-8835-44e2-9fe5-6714f386ce8f / ext4 errors=remount-ro 0 1
# UUID=138d610c-1178-43f3-84d8-ce66c5f6e644 /home ext3 defaults 0 2
UUID=99e95944-eb50-4f43-ad9a-0c37d26911da /home ext4 defaults 0 2
UUID=2FBF44BB538624C0 /media/DATA ntfs defaults,umask=000,uid=pcurtis,gid=46 0 0
UUID=178f94dc-22c5-4978-b299-0dfdc85e9cba none swap sw 0 0
/dev/mapper/sda5_crypt /media/SAFE ext4 defaults,noauto 0 0

And that's it, you should have your encrypted LUKS volume mounted at /media/SAFE.

You may need to set the permissions and group of /media/SAFE.

I have tried this and it all works, but I have no secure encrypted place for the keyfile.

2. Auto-Mount a LUKS Partition at User Login.

First we need to install the utility libpam-mount, a one-off:

sudo apt-get install libpam-mount

Now we need to add the login password(s) to the LUKS by

sudo cryptsetup luksAddKey /dev/sda5

This will ask for any existing passphrase, then let you enter a new one and its verification.

You can have a total of 8 passphrases and keyfiles, so this could cover 7 users plus your original Pass Phrase.

Now we edit the pam_mount configuration file at /etc/security/pam_mount.conf.xml

xed admin:///etc/security/pam_mount.conf.xml
and the documentation says to add <volume fstype="crypt" path="/dev/sda5" mountpoint="/media/VAULT" /> after the line <!-- Volume definitions -->

like this

<!-- Volume definitions -->
<volume fstype="crypt" path="/dev/sda5" mountpoint="/media/VAULT" />

<!-- pam_mount parameters: General tunables -->

and that is it, no need to even modify the file system table!

This is basically what I am doing currently with a small partition but I plan to either encrypt my whole DATA partition with LUKS or increase the size of VAULT.

Note 1: This is not the whole story. On Mint 18.1, and possibly other 18.x versions, you may need to add user="*" to the line, i.e. <volume fstype="crypt" path="/dev/sda5" mountpoint="/media/VAULT" user="*" />, to avoid a number of programs using pkexec failing to start as root without a password - issues have been raised but the reason is not clear. A good example is synaptic, which is started from the menu as synaptic-pkexec, and also gparted. I have started to do this on every machine.

Note 2: There is another problem with the way the device path is specified, as the device may change if other drives such as USB hard drives, sticks or camera cards are present at boot time. I finally discovered that one can use the disk UUID instead; although the way it is specified is not what I had expected, it seems to work. Find the UUID by use of blkid:

peter@defiant:~$ blkid
/dev/sda1: LABEL="EFI" UUID="06E4-9D00" TYPE="vfat" PARTUUID="333c558c-8f5e-4188-86ff-76d6a2097251"
....
....
/dev/sdb6: UUID="9b1a5fa8-8342-4174-8c6f-81ad6dadfdfd" TYPE="crypto_LUKS" PARTUUID="56e70531-06"
peter@defiant:~$

so then the final addition that I am using is like this:

<!-- Volume definitions -->

<volume fstype="crypt" path="/dev/disk/by-uuid/9b1a5fa8-8342-4174-8c6f-81ad6dadfdfd" mountpoint="/media/VAULT" user="*"/>

<!-- pam_mount parameters: General tunables -->

There is some further information on persistent mounting at Disk_Encryption_User_Guide How_will_I_access_the_encrypted_devices_after_installation which seems to justify my solution and persistent_block_device_naming has an alternative.

User Login Password Changes when using pam_mount

If you change a login password you also have to change the matching password slot in LUKS: first unmount the encrypted partition, then use a combination of

sudo cryptsetup luksAddKey /dev/sda5
sudo cryptsetup luksRemoveKey /dev/sda5
# or
sudo cryptsetup luksChangeKey /dev/sda5

otherwise you will be using the previous password.

Warning: Never remove every password or you will never be able to access the LUKS volume.

Changing label of LUKS filesystem

Firstly, this is about the label of the filesystem, which will only appear when the LUKS partition is mounted. You will not see it when unmounted, so it is secure.

You cannot change the label in gparted or nemo or any other GUI I have found. The normal way to set a label in a terminal is to use e2label /dev/sdxx LABEL, but here we need to find the mapped device name.

So I looked for the mount point on my pam_mount system in /dev expecting it to be something like /dev/mapper

pcurtis@lafite:~$ ls -l /dev | grep -i map
drwxr-xr-x 2 root root 80 Sep 6 07:37 mapper

That was not what I expected, so I had a look a level down:

pcurtis@lafite:~$ ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Sep 6 07:36 control
lrwxrwxrwx 1 root root 7 Sep 6 07:37 _dev_sda3 -> ../dm-0

Now we know that it is a link called /dev/mapper/_dev_sda3

I did not need to set a new label but I checked the existing label by:

pcurtis@lafite:~$ sudo e2label /dev/mapper/_dev_sda3
[sudo] password for pcurtis:
VAULT

I have set the label on another machine and it all works fine using

pcurtis@defiant:~$ ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Sep 4 17:19 control
lrwxrwxrwx 1 root root 7 Sep 6 12:08 _dev_sda5 -> ../dm-0
pcurtis@defiant:~$ sudo e2label /dev/mapper/_dev_sda5 VAULT
[sudo] password for pcurtis:
pcurtis@defiant:~$

Conclusions on LUKS Encryption

The encrypting of a partition using LUKS and the various ways of auto-mounting were much easier to understand and implement than I expected.

Link Dropbox to a folder in a LUKS encrypted partition

Having moved the Dropbox folder to an encrypted partition, one has a different and tedious path. Linking works well for programs using Dropbox, but for some reason the fancy sync icons do not show if you look at the folder through the link.

ln -s /media/DROPBOX/Dropbox_pcurtis/Dropbox /home/pcurtis/Dropbox

And now all the programs can see it as it was!
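If the link ever needs recreating (for example after restoring a backup), ln -s alone fails when the link already exists; a small sketch using -sfn, which replaces an existing link safely (paths are the ones from the example above):

```shell
# (Re)create a symlink, replacing any existing one; -n stops ln descending
# into an existing link to a directory. Prints where the link now points.
relink() {
  target="$1"              # e.g. /media/DROPBOX/Dropbox_pcurtis/Dropbox
  linkname="$2"            # e.g. /home/pcurtis/Dropbox
  ln -sfn "$target" "$linkname"
  readlink "$linkname"
}
```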

Encrypt USB Drives and Sticks

IMPORTANT NOTE: I have set up my system to mount 'removable' drives at /media/label, not /media/username/label, so you may need to make some changes below to reflect that - or do the same. I find the common mount much better on a multi-user machine, so I will include instructions here.

Change auto-mount point for USB drives back to /media (Better for Multiple Users)

Ubuntu (and therefore Mint) changed the mount point for USB drives from /media/USB_DRIVE_NAME to /media/USERNAME/USB_DRIVE_NAME in Ubuntu version 13.04. This seems logical as it makes it clear who mounted the drive and has permissions to modify it as one switches users, but it is intrusive when users share information. I have always continued to mount mine at /media/USB_DRIVE_NAME. One can change the behavior by using a udev feature in Ubuntu 13.04 and higher based distributions (needs udisks version 2.0.91 or higher).

Create and edit a new file /etc/udev/rules.d/99-udisks2.rules

xed admin:///etc/udev/rules.d/99-udisks2.rules

and cut and paste into the file

ENV{ID_FS_USAGE}=="filesystem", ENV{UDISKS_FILESYSTEM_SHARED}="1"

then activate the new udev rule by restarting or by

sudo udevadm control --reload

When the drives are unplugged and plugged back in they will now mount at /media/USB_DRIVE_NAME.

Encrypting a USB stick with a LUKS container.

This is a summary of the sequence for a stick which has a partition at /dev/sdb1.

First we create the LUKS container

sudo cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-random luksFormat /dev/sdb1

Now we open it, giving it the temporary device-mapper name sdb1e:

sudo cryptsetup open --type luks /dev/sdb1 sdb1e

Now we can format it to ext4 and add the label:

sudo mkfs.ext4 /dev/mapper/sdb1e -L USB4

If we failed to add the label, or want to change it, we need to find out what the mapper device the system uses is called when the stick is plugged in or mounted by nemo.

pcurtis@lafite:~$ ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Sep 8 18:16 control
lrwxrwxrwx 1 root root 7 Sep 8 23:00 _dev_nvme0n1p3 -> ../dm-0
lrwxrwxrwx 1 root root 7 Sep 8 23:00 _dev_sda3 -> ../dm-1
lrwxrwxrwx 1 root root 7 Sep 8 23:00 luks-f43d424d-61c8-467b-9307-c054ac0d1086 -> ../dm-2

Now we know the device is /dev/mapper/luks-f43d424d-61c8-467b-9307-c054ac0d1086, which is a link to ../dm-2 (i.e. /dev/dm-2), so we can add or change the label by:

sudo e2label /dev/mapper/luks-f43d424d-61c8-467b-9307-c054ac0d1086 USB4
# or
sudo e2label /dev/dm-2 USB4

Now we set the permissions, owner and group for the USB stick - these seem to be retained. They are set to the 'primary' owner (UID 1000) and group adm, to which all my users belong.

sudo chown 1000:adm -R /media/USB4 && sudo chmod 770 -R /media/USB4
and as a confirmation let's have a look at /media, where all the partitions/mount points are encrypted.

pcurtis@defiant:~$ ls -l /media
total 20
drwxrwxrwx 1 pcurtis plugdev 8192 Sep 9 04:38 DATA
drwxrwxr-x 8 pcurtis adm 4096 Sep 7 15:07 DROPBOX
drwxrwx--- 3 pcurtis adm 4096 Sep 8 18:13 USB4
drwxrwx--- 9 pcurtis adm 4096 Sep 8 11:16 VAULT

The end result is that one is asked for the passphrase when the stick is plugged in, or you can mount it in nemo if it was already in when the system was booted. You can also unmount it before unplugging.

NOTE: The auto-mounting on plug-in and use of nemo may be limited to later kernel versions, above 4.1.

Important: There is one more important step and that is to create a backup of the LUKS header for security - lose that vulnerable bit of the system and you are completely stuffed.

sudo cryptsetup -v luksHeaderBackup /dev/sda5 --header-backup-file LuksHeaderBackup.bin

Replace /dev/sda5 with the device holding your LUKS container (/dev/sdb1 in the stick example above) and put the file, and perhaps a copy, somewhere very safe - the matching luksHeaderRestore option can restore it if the header is ever damaged.

Encrypting information on existing backup drives

Little of my backup information is sensitive - I do not care about 200 Gbytes of boring pictures or Terabytes of videos. So I add a VeraCrypt (ex TrueCrypt) volume of between 8 and 256 Gbytes to each hard drive. That gives plenty of room for sensitive files at one end and partition backups at the other. Partition/disk encryption is not as well proven as VeraCrypt, which is a fork of TrueCrypt. One cannot easily auto-mount a VeraCrypt volume but that is a small price for reliability. A small volume can be hidden to look like a video file. Using a keyfile gives ultimate security over a stolen drive if one keeps the keyfile in an encrypted folder on one's machines.

Finding filenames which are too long for ecryptfs (used for home folder encryption)

ecryptfs puts a limit on the maximum length of a filename. This statement, based closely on one by Dustin Kirkland, one of the authors and the current maintainer of the eCryptfs userspace utilities, at https://unix.stackexchange.com/questions/32795 explains the problem:

Linux has a maximum filename length of 255 characters for most filesystems (including EXT4), and a maximum path of 4096 characters.

eCryptfs is a layered filesystem. It stacks on top of another filesystem such as EXT4, which is actually used to write data to the disk. eCryptfs always encrypts file contents and, by default, it also encrypts (obscures) filenames.

If filenames are not encrypted, then you can safely write filenames of up to 255 characters and encrypt their contents, as the filenames written to the lower filesystem will simply match. While an attacker would not be able to read the contents of index.html or budget.xls, they would know what file names exist. That may (or may not) leak sensitive information depending on your use case.

If filenames are encrypted, things get a little more complicated. eCryptfs prepends a bit of data on the front of the encrypted filename, such that it can identify encrypted filenames definitively. Also, the encryption itself involves "padding" the filename.

Empirically, it has been found that filenames longer than 143 characters require more than 255 characters once encrypted. So the eCryptfs upstream developers typically recommend you limit your filenames to ~140 characters.

I have found that a few of our files cannot be transferred into the encrypted file system because of this limit so it is good to identify and shorten them. You can do this in the shell: find gives a list of all files, recursing down into the subdirectories starting at dot (the current directory), then awk can work out the lengths. This clever command outputs the full original name, the length of the base name and the base name itself for every file close to or over the limit; pipe the output through sort if you want it ordered.

sudo find . | awk '{base=$0;sub(/.*\//,"",base);x=length(base);if(x>133)print $0,x,base}'

The awk gets the whole line in $0, then copies it to base. It then strips off everything up to the last slash and re-saves that as base. Then it gets the length of base and, if greater than 133, prints the files close to or over the limit. Thanks to Mark Setchell for this masterpiece of compact coding which goes through a complete home folder in too short a time to measure! The sudo is my addition to cope with a couple of cache folders whose permissions prohibit access.
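A quick way to sanity-check the command is to run it against a throwaway directory first; the sketch below (file names are made up) creates one 150-character name and one short one, and only the long one should be reported.

```shell
# Build a scratch directory with one over-long and one short file name,
# then run the same awk length filter over it (threshold 133 as above).
dir=$(mktemp -d)
long=$(printf 'a%.0s' $(seq 1 150))   # a 150-character file name
touch "$dir/$long" "$dir/short.txt"
found=$(find "$dir" | awk '{base=$0;sub(/.*\//,"",base);x=length(base);if(x>133)print $0,x,base}')
echo "$found"
rm -rf "$dir"
```

Run from your home directory with find . as in the original, the same pipeline reports every file at risk.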

6th September 2018

How to kill a program that has frozen the machine

I have just had Googleearth lock up my machine, or the GUI part anyway.

I could still log in via a console using Ctrl Alt F2 -> login

A search for 'kill program from terminal ubuntu' in Google got me to https://helpdeskgeek.com/linux-tips/forcefully-close-a-program-in-ubuntu/ which was very helpful.

I logged in on the console reached by Ctrl Alt F2

I got a list of running processes with ps -A but could not see anything which looked like a problem so I tried

ps -A | grep -i google

which showed me there was a googleearth-bin running, along with its PID

killall with the PID did not work (killall takes a process name, not a PID) but killall with the name did the job:

killall googleearth-bin

and I could go back to the GUI by Ctrl Alt F7

I then went back to the console to log out via exit.
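As a footnote, pgrep and pkill fold the ps | grep and killall steps into one. A minimal sketch - the function name is mine, and googleearth-bin is just the example from above:

```shell
# Kill every process whose full command line matches a pattern,
# without having to look up the PID first.
kill_by_name() {
    # -f matches against the full command line; -- ends option parsing
    pkill -f -- "$1"
}

# Usage would be:  kill_by_name googleearth-bin
```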

jdotxt and all other java programs stop working

See https://askubuntu.com/questions/695560/assistive-technology-not-found

xed admin:///etc/java-11-openjdk/accessibility.properties

Comment out the line assistive_technologies=org.GNOME.Accessibility.AtkWrapper

All sweetness and light.

How-to fix "Warning: No support for locale: en_UK.utf"

This is because locale-gen now uses an archive file to store all the locales, but many utilities still look for the individual locale files - see https://forums.linuxmint.com/viewtopic.php?t=111527. The warning isn't critical; however, users have reported that the fix below also fixes the sort order in some file browsers.

sudo locale-gen --purge --no-archive
sudo update-initramfs -u

31 October 2018

Unison with LUKS

Subsumed into following sections.

17 November 2018

Picasa - Timestamps and Sorting anomalies

We have started to find that sorting our albums on date has been producing anomalous results. The sort order seems to use a date and time which is generated and stored in the local database and can be different to the EXIF stored dates by an hour in the cases we have identified. This does not matter if it is uniform through the album as the sort order is still correct but when some pictures are different it causes problems especially for slide shows.

My best guess is that it is caused by changes in time zone and between summer and winter time, and the way that the database is updated.

To understand what is going on one needs to understand how Picasa works and how we are synchronising between machines and users.

 

Ways to display information about JPEG files including EXIF, XMP and IPTC information

Command Line

stat x.jpg # Basic information including Owner, Group, Timestamps and Permissions of any file.

identify -verbose x.jpg # Part of ImageMagick which needs to be installed

jhead -v x.jpg # jhead needs to be installed

exiftool x.jpg # exiftool needs to be installed.

See https://libre-software.net/edit-metadata-exiftool/ for exiftool

Picasa

Picasa also provides its own properties of which the important ones at present are

Camera date = EXIF DateTimeOriginal

Digitised Date = EXIF DateTimeDigitized

Modified Date = Seems to be the File Date in jhead and the Modified Date in stat

File Date - this is what is used for 'Sort on Date'; it seems to be stored in the database and can be out of step with the Camera/Digitised Date. Is this because of the timezone at the time it is entered/imported into the database? It seems that some changes do not cause an update of the database.

Tags = IPTC Keywords - I add these in Picasa

Caption = IPTC Caption - Can be added in Picasa

Pix

I use Pix to import all my pictures into the folder structure

Pix can show properties and edit/add many of them including tags and captions.

See here for other GUI options https://libre-software.net/edit-image-metadata-on-linux/

17 November 2018

Techniques and Tools for multiple users.

This is a new section which tries to explain how our usage of our machines has evolved with their increasing capabilities. Until recently our machines were all basically set up as single-user machines - that seemed to be how most Linux 'desktop' users were set up when we started, and most technical information is still for that situation. Each of us had a dedicated machine but with some ability for the other to use it whilst travelling by switching Thunderbird and Firefox profiles and common data areas. For many years we both had identical MSI Wind netbooks for travelling.

In that era we both used the same login name as the primary user (UID=1000) on our machines (2 MSI Winds and the Chillblast Defiant as well as desktops) but the desktop configuration was quite different.

We had separate partitions for root (/) and home (/home) and a Data partition mounted at /media/DATA, which was initially an ntfs partition as most of our machines came with Windoz and remained dual booted.

The next step involved doing away with dual booting on the next two machines, which were bought with operating systems (Chillblast Helios and our Lafite 2). The Data partition was changed to ext4 whilst still at /media/DATA. Both have 256 Gbyte SSDs and the Lafite also has a 2 Tbyte hard drive. The Defiant, which started with a 1 Tbyte hybrid drive, gained a 256 Gbyte SSD which has speeded it up considerably; it is relatively heavy but still the fastest machine and the only one with Optimus graphics, and is mostly used with an external monitor, keyboard and mouse instead of our aged remaining desktop machine.

At this point we started encrypting the home partitions using ecryptfs and reducing the size of the Data partition at /media/DATA to provide space to add a partition encrypted with LUKS and mounted at /media/VAULT for sensitive information including the local Dropbox folder. Thunderbird and Firefox profiles are also in encrypted areas (home folder or VAULT).

In parallel we created two new users for peter and pauline but kept the original user pcurtis (UID=1000) as a tool for synchronising between machines etc. All three 'family' users belong to each other's groups and also to group adm (GID=4). Shared areas such as DATA and VAULT have most of their files and folders periodically updated to UID=1000 and GID=adm(4) to aid synchronisation with unison, carried out from user pcurtis (UID 1000). Note timestamps can only be reset during synchronisation by the owner of a file.

We have found that it is very easy to use the machine with several users by switching user rather than logging out and in. It seems to work perfectly well, although sometimes after a suspend one finds messages in the other user's session. One can have a number of users logged in, and one can also log in using a console or SSH, both of which open up the VAULT. There are however anomalies using Unison - see below.

Use w in a terminal to see who is logged on and what they are doing. Use option -i to see more about remote logins (ssh etc).

peter@defiant:~$ w
11:30:48 up 2 days, 30 min, 2 users, load average: 0.26, 0.40, 0.62
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
peter tty7 :0 Sat11 2days 21:04 0.37s cinnamon-session --session cinnamon
pcurtis tty8 :1 Sun08 2days 3.27s 0.22s cinnamon-session --session cinnamon
peter@defiant:~$

Tip: The fastest way to switch after the initial logins is to use the console shortcuts Ctrl Alt F7, Ctrl Alt F8 etc - the numbers match the tty numbers above and switching is near instantaneous as you are just switching displays.

BUG: Logging out of or closing an SSH connection, or closing Unison when it is connected over an SSH connection, can cause an automounted VAULT to be unmounted. This seems to be a bug in pam_mount and ssh as it ought to keep a count and only unmount when the last user has logged out. I have not got to the bottom of this yet, but in the meantime I keep Unison open, even after synchronising over ssh is complete, and/or also keep an ssh login open.

The number of mounts is kept in /var/run/pam_mount/username, giving the possibility of checking if and when the mount is present with code such as:

peter@defiant:~$ [ -f /var/run/pam_mount/peter ] && echo "Found" || echo "Not found"
Found
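That test can be wrapped in a small helper so a sync script can refuse to run until the VAULT is present. A sketch; the path is the one pam_mount uses on my systems, and the vault_mounted name is my own invention:

```shell
# Return success only if pam_mount has an open mount count for the user.
vault_mounted() {
    [ -f "/var/run/pam_mount/$1" ]
}

if vault_mounted pcurtis; then
    echo "VAULT available - safe to sync"
else
    echo "VAULT not mounted - skipping sync"
fi
```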

We have also changed most of our backup USB drives to ext4, which allows much faster access via Nemo, and part or all of each drive has LUKS encryption. Those not encrypted have large VeraCrypt vaults for sensitive data backups from the VAULTs.

Changes required to Applets and Desklets under Cinnamon 4.0 to avoid potential Segfaults

Mint 19.1 which has Cinnamon 4.0 no longer supports the old network manager and loading applets or desklets which link to the old non-existent libraries causes a segfault even when wrapped in the try-catch already used in my applets to support both types of network manager and hence a number of different distributions.

To aid testing, a requirement and test has been put in place to check that problem xlets have been modified, by checking that they use multiversion and have a 4.0 folder. So some of my applets require multiversion to be defined, and possibly cinnamon-version used in metadata.json. This has led to a search for information, as the Cinnamon and xlet documentation is not always very up-to-date even when it can be found!

Cinnamon Extension Versioning

The only actual documentation I could find at the time about where versions should live was at http://developer.linuxmint.com/reference/git/cinnamon-tutorials/xlet-versioning.html. To quote:

To enable extension versioning,

Extensions have to add the line

{
"multiversion": true
}

in metadata.json. The different versions should be put in different subdirectories. For example, the version for 2.6 should be put in extension@uuid/2.6/. The contents in 2.6/ should be exactly what you will normally put in extension@uuid/, apart from metadata.json, which should always be put in the parent directory, ie. extension@uuid/.

You do not need to create a subdirectory for every single Cinnamon version out there. Cinnamon looks for the most recent subdirectory that is not newer than the running Cinnamon version. For example, if you are running 2.6 and there are 2.4, 2.8 directories, the 2.4 directory will be loaded, until you upgrade to 2.8 in which case the 2.8 directory will be used. Minor version numbers can also be used, eg. 2.6.4, and are sorted accordingly.

If no suitable directory is found, then the contents in extension@uuid/ will be loaded. Note that Cinnamon versions prior to 2.6 will not understand this directory magic, and will always try to load the contents in extension@uuid/. Hence it is suggested that extension maintainers put the version for 2.4 in extension@uuid, and create new directories if changes specific to newer Cinnamon versions are made. Don't make a new directory whenever a Cinnamon version is out since it is just a waste of space (and maintenance effort).

Extracted from http://developer.linuxmint.com/reference/git/cinnamon-tutorials/xlet-versioning.html

Cinnamon has another mechanism for version control handling which is also added to metadata.json in the form

{
"cinnamon-version": ["2.2","2.4","2.6","2.8","3.0","3.2","3.4","3.6","3.8","4.0"]
}

Only versions which match entries are loaded.

so a typical metadata.json file might look like this:

{
   "max-instances": "1",
   "uuid": "netusagemonitor@pdcurtis",
   "name": "Network Data Usage Monitor",
   "description": "A Comprehensive Data Usage Monitor with alerts and cumulative data functions",
   "version": "3.2.6",
   "multiversion": true,
   "cinnamon-version": ["2.2","2.4","2.6","2.8","3.0","3.2","3.4","3.6","3.8","4.0"]
}

I have noticed in https://github.com/linuxmint/Cinnamon/wiki/%5Bdevelopment%5D-extensions that metadata.json has:

requiredProperties: ['uuid', 'name', 'description', 'cinnamon-version'],
niceToHaveProperties: ['url'],

If this were the case it would mean every applet needs cinnamon-version, which would have to be updated every time the Cinnamon version changes - a lot of extra PRs!

I have now looked at all the applets which either use cinnamon-version (13) or multiversion (14); they mostly overlap and I have assumed most of them work, although I have put emphasis on those from authors I know about. As a first check I did searches with grep from my applet folder to see how many current applets have a line with 'cinnamon-version':

peter@defiant:~/cinnamon-spices-applets$ grep -Rw . -e 'cinnamon-version'

which showed there were only 13 of 148 applets which use 'cinnamon-version' in metadata.json, and quite a few already had "4.0" included. So it was clear that 'cinnamon-version' is not currently obligatory. There are also several which do not get to "3.8" and can no longer be loaded. The options are R for recursive and w for whole word - you can add n to also give the line number in the file. This is a very useful and powerful search without having to remember complex commands, and I found the number of applets using multiversion (14) the same way.
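The effect of those grep options is easy to demonstrate on a scratch tree; the applet names and metadata lines below are invented, and -l is added to list matching files rather than matching lines:

```shell
# Build two fake applet folders, only one of which declares
# cinnamon-version, and see which file the recursive word search finds.
dir=$(mktemp -d)
mkdir -p "$dir/appletA" "$dir/appletB"
echo '"cinnamon-version": ["3.8","4.0"]' > "$dir/appletA/metadata.json"
echo '"multiversion": true'              > "$dir/appletB/metadata.json"
hits=$(cd "$dir" && grep -Rlw . -e 'cinnamon-version')
echo "$hits"
rm -rf "$dir"
```

The -w option stops the search matching 'cinnamon-version' inside a longer word, which is why appletB's multiversion line is not reported.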

I also run tests:

Some conclusions are:

So I have:

Examples of linking applets and desklets to repositories

This is a list of my current linking commands for my convenience covering the 7 xlets I am contributing to.

ln -s ~/cinnamon-spices-applets/stopwatch@pdcurtis/files/stopwatch@pdcurtis ~/.local/share/cinnamon/applets/stopwatch@pdcurtis

ln -s ~/cinnamon-spices-applets/batterymonitor@pdcurtis/files/batterymonitor@pdcurtis ~/.local/share/cinnamon/applets/batterymonitor@pdcurtis

ln -s ~/cinnamon-spices-applets/netusagemonitor@pdcurtis/files/netusagemonitor@pdcurtis ~/.local/share/cinnamon/applets/netusagemonitor@pdcurtis

ln -s ~/cinnamon-spices-applets/bumblebee@pdcurtis/files/bumblebee@pdcurtis ~/.local/share/cinnamon/applets/bumblebee@pdcurtis

ln -s ~/cinnamon-spices-applets/vnstat@linuxmint.com/files/vnstat@linuxmint.com ~/.local/share/cinnamon/applets/vnstat@linuxmint.com

ln -s ~/cinnamon-spices-desklets/netusage@30yavash.com/files/netusage@30yavash.com ~/.local/share/cinnamon/desklets/netusage@30yavash.com

ln -s ~/cinnamon-spices-desklets/simple-system-monitor@ariel/files/simple-system-monitor@ariel ~/.local/share/cinnamon/desklets/simple-system-monitor@ariel

Note: the commands are each a single line

Git Tips which will eventually be integrated into git.htm

Undo last Commit.

If you want to re-work the last commit - assuming it has not been pushed:

git reset HEAD^


This will undo the commit and restore the index to the state it was in before that commit, leaving the working directory with the changes uncommitted, so one can fix whatever needs to be modified before committing again.

If you have pushed it or for more information see https://stackoverflow.com/questions/19859486/how-to-un-commit-last-un-pushed-git-commit-without-losing-the-changes
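The effect is easy to see in a throwaway repository; everything below (identity, messages, file names) is made up for the demonstration:

```shell
# Two commits, then 'git reset HEAD^': the second commit disappears but
# file.txt survives in the working directory as an uncommitted change.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first"
echo hello > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "second"
git reset -q HEAD^
status=$(git status --short)
echo "$status"
```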

Tidying up after Pull Requests have been accepted

My original philosophy was not to destroy information, i.e. branches, until absolutely necessary, but I have realised that can be confusing and unnecessary given the way the update cycle works with Github, and it is better to delete them as soon as possible, especially if you use more than one machine.

With Github you never merge your branch into your own master directly: it goes via your online repository (origin), the pull request is made to the main repository (upstream) and the merge is then fetched back from it. If you want to see what you have merged at a later stage you look at the commit in the main repository, or in your local repository which is in step with it. The problem with this is that it can take many days before your merge is accepted and you can fetch and merge the change - until then you need the branches. You will receive an email that the merge has been completed, with links to Github which you can follow to confirm the pull request is complete; it will often contain a statement that your remote branch is no longer required and a button to delete it.

Deleting both Local and Remote branches

After a development is complete one will need to tidy up: make sure that your changes have been merged back into your master, and then the local branch you used for the development can usually be deleted. If you are doing a collaborative development using Github your changes are usually incorporated via a pull request from a branch on your Github repository (which is a fork of the repository you are submitting the changes to). This means that you may need to not only delete your local branch but also the remote branch used to generate the pull request.

Deleting the remote branch is not as trivial as one would expect and the 'definitive' answer on Stack Overflow on how to delete a git branch both locally and remotely has been visited nearly 4 million times! The article is so popular because the syntax is even more obscure than the rest of git and has changed with version number. I will assume that everyone reading this has at least git version 1.7.0. I have already covered deleting a local branch, and forcing a delete if changes have not been merged, above but will repeat it here for completeness.

To delete a single local branch use:

git branch -d branch_name

Note: The -d option only deletes the branch if it has already been fully merged. You can also use -D, which deletes the branch "irrespective of its merged status."

Delete Multiple Local Branches

git branch | grep 'string' | xargs git branch -d

where string is common to all the branch names, mine all have a - (dash/minus sign) in them.

You may need to force the delete by:

git branch | grep 'string' | xargs git branch -D
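Before wiring the filter into xargs it is worth checking what the grep stage actually matches; a dry run on a made-up branch list (my real branch names all contain a dash):

```shell
# Feed a hypothetical branch list through the same filter; only the
# names containing a dash would be handed on to 'git branch -d'.
matched=$(printf '%s\n' master netusage-1.0.3 simple-fix | grep -- '-')
echo "$matched"
```

With a real repository the dry run is simply git branch | grep 'string' with the xargs stage left off.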

Delete Remote Branch

As of Git v1.7.0, you can delete a remote branch using

git push origin --delete <branch_name>

This is all you need to do if you are just using a single machine for development, but if you are using several you may need an additional stage on the other machines in order to locally remove stale branches that no longer exist in the remote. Run this on all the other machines:

git fetch --all --prune

to propagate changes.

Also, if you have created a Pull Request on Github you will find that after it has been merged there is a tempting button to delete the branch. I often use it, or sometimes use other mechanisms in Github to delete my remote branches. This is perfectly acceptable, but the local machines do not know and are left with stale branches which you need to remove at some point on all your local machines, as above, by:

git fetch --all --prune

You can also use:

git remote update --prune origin
git remote update --prune upstream

Reminder about difftool and meld

One can use difftool, which is set up to call meld, to check (and even correct) the unmerged set of changes by:

git difftool HEAD

And a good way of seeing what you have currently changed in a branch when everything is up-to-date is

git difftool master

Reminder about gitk

It is useful to be able to see all the branches by:

gitk --all &

Working from several machines

This section has been extended in Diary 31 - Planning for and working from several machines following further work on multiple machines

I have had to change machines and had an out of date (by 6 months) version of git, and I needed to modify a branch I had already pushed and started a PR from.

So first update my master branch from upstream by the usual

git checkout master
git fetch upstream
git merge upstream/master
git status

Now the important part is to get the remote branch and set it up so that it is tracked (ie can be automatically pushed to and pulled from). I looked at https://stackoverflow.com/questions/9537392/git-fetch-remote-branch. What seems to have worked for me is the simplest way, namely

git fetch origin
git branch -r
git checkout remotebranchname

as below

peter@lafite:~/cinnamon-spices-desklets$ git branch -r
origin/F0701
origin/HEAD -> origin/master
origin/googleCalendar-l10n-fix
origin/master
origin/netusage-1.0.3
origin/netusage-1.0.4
origin/netusage@30yavash.com-issue_279
origin/simple-system-monitor-1.0.0
upstream/F0701
upstream/googleCalendar-l10n-fix
upstream/master
peter@lafite:~/cinnamon-spices-desklets$ git checkout netusage-1.0.4
Branch 'netusage-1.0.4' set up to track remote branch 'netusage-1.0.4' from 'origin'.
Switched to a new branch 'netusage-1.0.4'
peter@lafite:~/cinnamon-spices-desklets$

Note: This recognises it is a remote branch and sets up the tracking. This was using a very recent git, version 2.17.1.

April 2019 - Replacing Dropbox with pCloud - starts Part 31

Before You Leave

I would be very pleased if visitors could spare a little time to give us some feedback - it is the only way we know who has visited the site, if it is useful and how we should develop its content and the techniques used. I would be delighted if you could send comments or just let me know you have visited by sending a quick Message.

Link to W3C HTML5 Validator Copyright © Peter & Pauline Curtis
Content revised: 6th July, 2020