Diary of System and Website Development
Part 32 (January 2020 ->)
I am increasingly using Opera as a browser because many sites no longer work with Firefox, which has a problem with Captcha validation.
Chrome extensions can be installed in Opera if the Opera "Install Chrome Extensions" extension is installed first. Its description reads: "Install Chrome Extensions allows you to install extensions from Google Chrome Web Store in your Opera browser. Note: You can install extensions only. Themes are not supported."
I used this to install "Zoom Redirector" which was not available directly at the time.
Zoom Redirector transparently redirects any meeting links to use Zoom's browser based web client.
Instead of having to search for the hidden "join from your browser" link, this addon will take you there automatically.
Installing old versions from the six .debs from the 18.04 (version 52) release, together with a transfer of profiles, seems to be OK.
Only the starred .debs were installed after a complete uninstall of the existing Thunderbird. The double-starred ones were installed on Defiant.
Useful for transfers, but Mint 19.3 needed the updated version at http://packages.linuxmint.com/pool/main/w/warpinator/warpinator_1.0.5+ulyana_all.deb
There were various comments that Wine did not work well under Mint, but I found that Wine worked under Mint 20 for me. I successfully installed an old Windows program under Wine using a LiveUSB with the following steps, starting in the terminal. Some may be redundant but at least it worked!
sudo dpkg --add-architecture i386
sudo apt update
sudo apt install wine wine32 wine-binfmt
wine --version # showed version 5.0-3 was installed
The install program was run, but there was no menu item at this point, so more was needed:
wine-binfmt seems to be important to allow file association, but it still had to be done manually the first time.
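When no menu item appears, one can be made by hand with a .desktop file. This is only a hedged sketch - the program name and the .exe path below are hypothetical examples, not the actual program I installed:

```shell
# Hypothetical example: a hand-made menu entry for a program installed under Wine.
# Adjust Name and the .exe path to match your own install.
mkdir -p "$HOME/.local/share/applications"
cat > "$HOME/.local/share/applications/wine-myprog.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=MyProg (Wine)
Exec=wine "C:\\Program Files\\MyProg\\myprog.exe"
Categories=Wine;
EOF
```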
The ttf-mscorefonts-installer package should allow easy installation of the Microsoft TrueType core fonts for the web, but using Synaptic seems to miss part of the EULA acceptance, which is in two parts. So:
Run sudo apt-get install ttf-mscorefonts-installer and accept the EULA; or, if you already have ttf-mscorefonts-installer installed and you didn't accept the EULA, uninstall ttf-mscorefonts-installer and reinstall it like this:
sudo dpkg -P ttf-mscorefonts-installer
sudo apt install ttf-mscorefonts-installer
Use the Tab and Enter keys to accept the EULA in the Microsoft TrueType core fonts window that pops up. The terminal will output a new message each time it finishes downloading a font. It takes a while.
Google Earth always used to be a nightmare to install, but I found no problems with an install using a .deb in the following way:
sudo gdebi google-earth-pro-stable_current_amd64.deb
It appeared in the menu and worked straight off.
The Leafpad editor is no longer available, so I have had to modify my Git installation, which specified it as the default editor called during commit rebasing (to avoid conflicts when using the same editor for changes in the files). The following section in git.htm needs changing:
Git is now the most widely used source code management tool, with over 40% of professional software developers using Git as their primary source control system. Before you worry, the whole installation and setup of Git is done in about 9 terminal commands which you can cut and paste from below. This assumes you have already set up an account on GitHub, which is easy and well explained on their site, and that there is a repository you want to use.
sudo apt-get install git gitk leafpad meld
git config --global user.name "your_username_on_github"
git config --global user.email "yourname@your_git_email"
git config --global core.editor leafpad
git config --global diff.tool meld
git config --global merge.tool meld
git config --global color.ui auto
git config --global push.default simple
git config --global credential.helper 'cache --timeout=7200'
git clone https://github.com/username/repositoryname
The apt-get install command not only installs Git but also its visualiser (gitk), a merging program (meld) and a simple editor (leafpad) which will only be used by Git.
The first two pieces of configuration are to provide a suitable name and email which is added to every commit you make. If you intend to use GitHub there are great advantages to having them the same and using your GitHub username rather than your full name.
The use of a simple text editor called leafpad avoids all sorts of problems in using the same editor for your editing of files and within Git. meld is a really good difference tool and is also set to be the default for hand crafting conflicts in a merge. The color.ui auto allows suitable terminal programs to display some information in colour. The push.default simple is to make sure that you only push your master to the remote repository by default - it should not be needed with the latest versions of Git but I started before that was the default so some of my early repositories seemed to have a different setting. I have put details of how to obtain the most recent version of Git in an Appendix.
The --global credential.helper 'cache --timeout=7200' means that Git will save your password in memory for some time after you have entered it the first time in a session - here I have set it to 2 hours.
in a couple of places
I have installed Sublime as an extra editor for assessment. It looks a good and very powerful 'coding' editor, but does it have a big learning curve? An alternative is to reinstall gedit as a second editor if it will co-exist with xed.
One command to remember is Ctrl Shift P, which is a command search. This is the way to get the top menu back!
This is based on Leafpad and looks to be the replacement although still under development
See https://ubuntuforums.org/showthread.php?t=1178974 In a terminal do:
sudo apt-get install apt-xapian-index
sudo update-apt-xapian-index -vf
It seemed a good idea to go back to trying Adobe Brackets, but the PPA I had used previously has not been updated for some time, so I made the mistake of using a Flatpak. I will not go into Flatpaks too much; they seem a good idea, but I had not realised what the overhead would be for the first major program. Installing Brackets had a 4 Gbyte overhead, more than many Linux installations, which results from Flatpaks installing almost all their dependencies as well as the program. True, it makes them independent, but at a considerable cost.
Worse still, I discovered that uninstalling still left a library overhead of 3.6 Gbytes. I eventually found an article which explained how to remove the bloat at https://www.linuxuprising.com/2019/02/how-to-remove-unused-flatpak-runtimes.html . It is simple; in a terminal do:
flatpak uninstall --unused
This command should list all unused Flatpak runtimes, and offer to uninstall them from your system.
peter@defiant:~$ flatpak uninstall --unused
ID Branch Op
1. [-] org.freedesktop.Platform.GL.default 19.08 r
2. [-] org.freedesktop.Platform.VAAPI.Intel 1.6 r
3. [-] org.freedesktop.Platform.VAAPI.Intel 19.08 r
4. [-] org.freedesktop.Platform.ffmpeg 1.6 r
5. [-] org.freedesktop.Platform.openh264 2.0 r
6. [-] org.freedesktop.Sdk 1.6 r
7. [-] org.freedesktop.Sdk.Locale 1.6 r
8. [-] org.freedesktop.Sdk 19.08 r
9. [-] org.freedesktop.Sdk.Locale 19.08 r
10. [-] org.gtk.Gtk3theme.Mint-Y-Darker 3.22 r
11. [-] org.gtk.Gtk3theme.Mint-Y 3.22 r
Uninstall complete.
I have written briefly about Brackets before, but this time I could not make the killer feature work, namely the ability to continuously update your work in a browser, giving a close to, or arguably better than, WYSIWYG experience.
The process is not very obvious, but the article at https://askubuntu.com/questions/4950/how-to-stop-using-built-in-home-directory-encryption gave some clues as to what to do. Most of the solutions seemed overly complex, but combined with my observations about doing fresh program installs whilst retaining the information in one's home folder, they led to a procedure which works.
Firstly, how does eCryptfs work?
This never got completed, which is a problem for the future if I ever need to do it again!
My output from dmesg shows a huge number of errors. This is a problem which can exist when using recent kernels with an old BIOS. Many old BIOSes do not support the full set of current ACPI calls. To quote the Mint 20 Release Notes:
Choosing the right version of Linux Mint
Each new version comes with a new kernel and a newer set of drivers. Most of the time, this means newer versions are compatible with a larger variety of hardware components, but sometimes it might also introduce regressions. If you are facing hardware issues with the latest version of Linux Mint and you are unable to solve them, you can always try an earlier release. If that one works better for you, you can stick to it, or you can use it to install Linux Mint and then upgrade to the newer release.
One option with ACPI problems is to try various kernel boot options: acpi=off , acpi=noirq , acpi=strict , pci=noacpi , but these tend to reduce the functionality of the BIOS and various things like the temperature sensors fail. They are limited to getting a system running enough to look for solutions.
Errors in my output from dmesg look like this
[180074.103311] ACPI Error: Aborting method \_SB.PCI0.LPCB.H_EC._Q50 due to previous error (AE_AML_OPERAND_TYPE) (20190816/psparse-529)
[180074.301840] ACPI Error: Needed type [Reference], found [Integer] 000000006398d7d6 (20190816/exresop-66)
[180074.301867] ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [Store] (20190816/dswexec-424)
[180074.301894] No Local Variables are initialized for Method [_Q50]
[180074.301900] No Arguments are initialized for method [_Q50]
[180074.301909] ACPI Error: Aborting method \_SB.PCI0.LPCB.H_EC._Q50 due to previous error (AE_AML_OPERAND_TYPE) (20190816/psparse-529)
[180074.503033] ACPI Error: Needed type [Reference], found [Integer] 000000001785f600 (20190816/exresop-66)
[180074.503044] ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [Store] (20190816/dswexec-424)
[180074.503055] No Local Variables are initialized for Method [_Q50]
[180074.503057] No Arguments are initialized for method [_Q50]
These are filling the log files. My standard check of log file sizes is:
sudo du -hsxc --time /var/log/* | sort -rh | head -16 && journalctl --disk-usage && date
mint@mint:~$ sudo du -hsxc --time /var/log/* | sort -rh | head -16 && journalctl --disk-usage && date
764M 2020-06-19 03:28 total
382M 2020-06-19 03:28 /var/log/kern.log
345M 2020-06-19 00:00 /var/log/syslog.1
38M 2020-06-19 03:28 /var/log/syslog
112K 2020-06-17 15:53 /var/log/dmesg
88K 2020-06-17 20:29 /var/log/apt
68K 2020-06-19 03:28 /var/log/auth.log
44K 2020-06-18 03:51 /var/log/Xorg.0.log
28K 2020-06-17 20:29 /var/log/dpkg.log
20K 2020-06-19 02:53 /var/log/cups
12K 2020-06-19 00:00 /var/log/lightdm
12K 2020-06-19 00:00 /var/log/boot.log.1
4.0K 2020-06-17 15:53 /var/log/wtmp
4.0K 2020-06-17 15:53 /var/log/ubuntu-system-adjustments-start.log
4.0K 2020-06-17 15:53 /var/log/ubuntu-system-adjustments-adjust-grub-title.log
4.0K 2020-06-17 15:53 /var/log/private
Archived and active journals take up 80.0M in the file system.
Fri Jun 19 03:28:27 BST 2020
This is actually a serious problem, as it not only fills the log files but also causes a very large number of extra write activities. These are not good for the life of an SSD, which has a limited number of write cycles, so it must be addressed.
This may not be an option as earlier kernels may not support all of the features required for the underlying Ubuntu system or Mint itself. In this case I would probably need to go back to a kernel which is no longer supported.
Logrotate will have to be sorted as per changes for TimeShift. A quick check shows the files seem unchanged so the description in System Housekeeping to reduce TimeShift storage requirements can be used directly. This does not completely solve the excessive number of write cycles to an SSD.
This may seem a bit dramatic, but the log files are useless anyway when full of junk. It is, however, not quite as simple as just disabling the syslog daemon syslogd. Syslog has been replaced by rsyslog on numerous distributions (Debian > 5, Ubuntu > 11.2, CentOS 6.x). rsyslog now uses socket activation under systemd, so whenever a log message comes in, rsyslog will be started on demand. This means that as fast as you stop it, it will be restarted! The socket unit is named syslog.socket, so if you want to stop the rsyslog service and the socket activation, you need to stop both units:
systemctl stop syslog.socket rsyslog.service
This is, of course only a temporary stop as it will be reactivated at boot so you also need to disable them at boot:
systemctl disable syslog.socket rsyslog.service
You can re-enable them at boot by
systemctl enable syslog.socket rsyslog.service
Add the --now flag to start now as well as enable for the next boot
You can check the state with
systemctl status syslog.socket rsyslog.service
You can also list all enabled units by
systemctl list-unit-files | grep enabled
Enabled means the system will run the service on the next boot. So if you enable a service, you still need to start it manually, or reboot, and it will start.
Stopping the syslog daemon does not completely stop logging as dmesg is still available to look at the kernel ring buffer. The 'kernel ring buffer' is a memory buffer created by the kernel at boot in which to store log data it generates during the bootloader phase. A ring buffer is a special kind of buffer that is always a constant size, removing the oldest messages when new messages come in. The main reason the kernel log is stored in a memory buffer is to allow the initial boot logs to be stored, until the system has bootstrapped itself to the point where the syslog daemon can take over. The contents of the buffer at that point seems to be saved in /var/log/dmesg but fortunately it continues to be updated.
dmesg offers a number of powerful tools to view and manipulate the ring buffer which, combined with piping to grep, allow one to still investigate most problems even without the full set of system log files. In practice I am not sure I have ever had to use the log files as the answers have always been available through dmesg, perhaps once when I had a series of kernel panics following segfaults.
We are still left with a problem, as dmesg is still flooded with ACPI error messages at a rate of 5 lines every 200 msecs, so it rapidly 'cycles'. There is however an option to pass the output from the kernel straight on to the terminal, accessed by --follow or -w. If that is piped to grep, the offending 5 lines can be removed by the less well known grep option -v .
grep -v -e string1 -e string2 -e string3 -e string4 -e etc
passes on everything except lines containing string1 etc.
In my case the magic incantation is:
dmesg -w | grep -v -e Q50 -e SB.PCI0 -e exec-424 -e sop-66
This makes it easy to watch as devices are plugged in etc
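As a minimal demonstration of how grep -v with multiple -e patterns filters the stream, here are a few sample lines standing in for the live dmesg output (the usb line is a made-up example of a normal event):

```shell
# Two of the three sample lines match the ACPI noise patterns and are dropped;
# only the ordinary usb event survives the filter.
printf '%s\n' \
  'ACPI Error: Aborting method \_SB.PCI0.LPCB.H_EC._Q50' \
  'usb 1-2: new high-speed USB device number 5 using xhci_hcd' \
  'No Local Variables are initialized for Method [_Q50]' \
  | grep -v -e Q50 -e SB.PCI0
```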
I have decided to disable logging through rsyslog at boot time and depend on use of dmesg in the knowledge that I can easily enable it again if I need full logs. This will save considerable wear on the SSD and reduce the requirement for large amounts of space in Timeshift images. There may however be unintended consequences I have not yet discovered in, for example, access to histories from various utilities.
This was put succinctly in the Flash update:
This started as some fixes to get round a bug in some browsers on mobile devices and turned into the start of a much more major enhancement to the site, in particular for touch screen devices. In essence, the existing responsive design works by reloading a modified version of the page when absolutely essential, for example when changing from portrait to landscape view on a mobile, or after a major reduction in window size. In some browsers this is relatively seamless, as the information is cached locally and the page is left at the same point; in others, such as Firefox, the page is returned to the top, which is disturbing, especially if it was a brief and unintentional orientation change. Browsers and web standards have developed over the last 4 years and all currently supported versions of browsers allow an alternative method called Media Queries, a feature of CSS3, to implement a responsive design by only changing the 'styles' when, for example, the screen width meets a certain criterion (a Media Query).
The changes are progressing and most of the pages accessible from the 'entry' pages (those accessible from the header bar) have been updated to the new format, other than the UNZ Travel pages and the early newsletters (~20). Some of the pages showed errors with the latest HTML5 validator or had not been converted to HTML5 at all and needed correcting. Travel pages require changes to every block of pictures as well as to the headers and footers to benefit from the Media Queries approach, so that will be a major undertaking. Approximately 160 pages are now updated and validated, of which ~45 are travel pages, out of a total of 500 pages with picture blocks - a long way to go. Progress updated as of 14 July 2020.
The counting of pages and searching for incompatibilities has been done by grep searches through all the web pages with commands like:
:~$ grep -iRl "<script>inhibitReload()</script>" "/media/DATA/My Web Site"
where other searches have been for "media.css" and "hpop("
Perhaps the most important tool has been the CopyQ clipboard manager - a desktop application which stores the content of the system clipboard whenever it changes and allows you to search the history and copy items back to the system clipboard or paste them directly into other applications. It has a sophisticated tab mechanism which enables me to store headers, footers and various other 'snippets' required for each of the major page types on separate tabs. CopyQ is available for all major operating systems and is in the Mint repositories. It is unusually well documented and offers a huge range of possibilities - I am only exploiting a few.
It is normally accessed from the toolbar tray, but I have also set it up so that it is opened by a system shortcut key of Ctrl Alt C (Preferences -> Shortcuts -> Global -> Show/Hide main window) and use the option to paste into the cursor position from CopyQ with a Return.
So I can select the complete existing header in an editor, open CopyQ with Ctrl Alt C, select the tab and snippet, press Enter, and move on to the next. The snippets, as I am calling the items on the CopyQ tabs, can be edited in situ with F2 (or an external editor), so it is easy to set the date in the footer block for the day or the bulk of the title in the header. Individual items on a tab or the clipboard history can be pinned in position, which also gives protection against accidental deletion. It is possible to configure a tree view of the tabs in use. It is already obvious that one does not want to lose the hard work in setting this up, and there is an Export command (Export on the File menu) which enables selective exporting of tabs.
Background: There are three main formats for picture blocks in use. In all cases scripting is used to expand to the complex code required by writing it on the fly as the page loads. This saves long strings with the same text for the title, alt statements etc. All three have the same basic 'function parameters' - an 'image location', a 'title string' and a 'position' parameter (which becomes multi-use in the case of lightbox displays) - and all expect images in triplicate in 4:3 or 3:4 aspect ratio. The small 'icon' image is always 160 x 120 or 120 x 160 pixels and its name ends in i. The middle size is always 400 x 300 or 300 x 400 pixels, ends in w, and is used for compact popup displays which are being phased out. The large images are used primarily for lightbox displays and the size is not baked in, but historically they were 600 x 450 and now 800 x 600 is the norm. Popups shrink these to 600 x 450 pixels.
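To make the naming convention concrete, here is a small illustrative sketch (the base name is a made-up example) of the triplet of files each picture-block call expects for a landscape image:

```shell
# For a landscape image with base name img_3231 the three files would be:
base="img_3231"
echo "${base}i.jpg  - 160 x 120 icon"
echo "${base}w.jpg  - 400 x 300 middle size for popups"
echo "${base}.jpg   - 800 x 600 (or historically 600 x 450) for lightbox displays"
```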
The first two formats were basically the same and used tables for layout, adapting the layout on the fly. Both depended on a reload to respond to window/screen size changes. Version two used a single script round the whole table and wrote the table details <table>, <tr> and <td> etc by function calls, as well as the responsivePictureblockSplit(n) call.
Original type of block using table
<td class="pictureblock"><script>hpop('2011/qe11/img_3231', ' © P Curtis 2011', 'center' )</script></td>
<td class="pictureblock"><script>vpop('2011/qe11/img_3243', ' © P Curtis 2011', 'center' )</script></td>
<td class="pictureblock"><script>hpop('2011/qe11/img_3244', ' © P Curtis 2011', 'center' )</script></td>
<td class="pictureblock"><script>vpop('2011/qe11/img_3248', ' © P Curtis 2011', 'center' )</script></td>
Single script block with table codes written in scripts
tdCpictureblock();hpop('2018/qv18ev/',' © P Curtis 2018', 'center' );tdE();
tdCpictureblock();vpop('2018/qv18ev/',' © P Curtis 2018', 'center' );tdE();
tdCpictureblock();hpop('2018/qv18ev/',' © P Curtis 2018', 'center' );tdE();
tdCpictureblock();vpop('2018/qv18ev/',' © P Curtis 2018', 'center' );tdE();
New solution without tables, depending on Media Queries to adapt the CSS dynamically without the need for reloading pages
<div class=pic><script>hpic('2019/auqe19/',' © P Curtis 2019', 'center' )</script></div>
<div class=pic><script>vpic('2019/auqe19/',' © P Curtis 2019', 'center' )</script></div>
<div class=pic><script>hpic('2019/auqe19/',' © P Curtis 2019', 'center' )</script></div>
<div class=pic><script>vpic('2019/auqe19/',' © P Curtis 2019', 'center' )</script></div>
The transformation of either type needs 5 repeated edits and the deletion of n-1 responsivePictureblockSplit elements. Often pages will have both sorts.
Dreamweaver is capable of handling large numbers of open pages and I have used up to thirty for repeated edits. A certain amount of checking and hand crafting is always needed.
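Much of the repeated editing could in principle be scripted. The following is only a sketch, assuming the old and new forms are exactly as in the examples above (hpop/vpop becoming hpic/vpic inside a <div class=pic>); real pages would still need checking and hand crafting:

```shell
# Convert old-style table picture block lines to the new div form with sed.
# The here-document supplies two sample lines of the old format.
sed -e 's|<td class="pictureblock"><script>hpop(|<div class=pic><script>hpic(|' \
    -e 's|<td class="pictureblock"><script>vpop(|<div class=pic><script>vpic(|' \
    -e 's|)</script></td>|)</script></div>|' <<'EOF'
<td class="pictureblock"><script>hpop('2011/qe11/img_3231', ' © P Curtis 2011', 'center' )</script></td>
<td class="pictureblock"><script>vpop('2011/qe11/img_3243', ' © P Curtis 2011', 'center' )</script></td>
EOF
```

Run against a real page the same expressions would be given to sed -i, but only after a backup, since the layouts vary more than this sketch allows for.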
So where are the priorities? Our world cruises are arguably the most important and comprise 33 pages between them; it is logical to complete the other two holidays in 2017 for another 11 parts. New Zealand 2016 is important with the Wanaka air show and has 18 parts, but 2017 is small at 6 parts and should be included. This would in total mean we completed the coverage of 3 years with 68 pages. So three multiple repeating edits of circa 25 pages each seems to be the way forward.
This is the other area where changes are needed. Once the window size has reduced to below the width of an Active Map, the map has to be replaced by an image which shrinks to fit.
The old code looked like this:
<div id="activeMap" >
<img src="2017/qv17xm_map.jpg" style="width:600px; height:450px;" usemap="#qv17xm_map1" alt="Map">
<area shape="rect" coords="165,10,515,58" href="qv17xm-p1.htm#soton" alt="Introduction and Embarkation at Southampton" title="Introduction and Embarkation at Southampton">
and the new code is:
<div id="shrinkableMap" >
<img src="2017/qv17xm_map.jpg" style="max-width:95%; height:auto;" alt="Map">
which is a considerable simplification.
I have made use of grep in many ways to locate pages which needed changes, track progress and generally tidy up.
Basic list of all pages containing HTML 4.01 Transitional in current directory not sub-directories
grep -il "HTML 4.01 Transitional" *.htm
Count of all pages containing HTML 4.01 Transitional in current directory
grep -il "HTML 4.01 Transitional" *.htm | grep -c htm
Note: grep -c is counting the file names piped to it which contain htm, i.e. all of them. In my case it is useful to instead count files with nz or qe.
Count of all .htm pages containing both HTML 4.01 Transitional and media.css (eg. new header in old HTML 4.01 page such as old diary page) in current directory
grep -il "HTML 4.01 Transitional" `grep -l media.css *.htm` | grep -c htm
Count of all .htm pages containing both HTML 4.01 Transitional but not media.css in current directory
grep -il "HTML 4.01 Transitional" `grep -L media.css *.htm` | grep -c htm
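A worked example of the nested-grep idiom on two throwaway files (created in a scratch directory purely for illustration; the file names and contents are made up):

```shell
# Set up a scratch directory with one converted and one unconverted page.
mkdir -p /tmp/grepdemo && cd /tmp/grepdemo
printf 'HTML 4.01 Transitional\nmedia.css\n' > diary-newheader.htm
printf 'HTML 4.01 Transitional\n' > diary-oldheader.htm
# HTML 4.01 pages that DO already reference media.css (counts 1):
grep -il "HTML 4.01 Transitional" `grep -l media.css *.htm` | grep -c htm
# HTML 4.01 pages that do NOT reference media.css (also counts 1):
grep -il "HTML 4.01 Transitional" `grep -L media.css *.htm` | grep -c htm
```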
At the end of all these various searches, and the pruning of old unlinked pages by copying them to an archive folder, it was possible to know that there were still 41 NZ travel pages but only 11 other travel pages without the new headers and footers. Although they were all 2003 or earlier, it seemed worth updating them to have the new headers and footers and a basic responsive response, although they remained HTML 4.01 standard.
Looking at the remaining NZ Touring pages with
peter@defiant:/media/DATA/My Web Site$ grep -il "HTML 4.01 Transitional" `grep -L media.css nz*.htm` | grep -c htm
we find that there were still 41 NZ touring pages at that time.
Update 14th August 2020 on conversion to Next Generation pages: The total site has 700 pages, of which ~550 were in HTML5 and were 'Responsive' but using ' Event Driven' rewriting of pages. They and some others have been fully updated (~575). In addition there are still ~145 HTML 4.01 pages in early parts of the Diary, Legacy technical articles and pages with pre digital era pictures ie before 2003. All Diary, early Travel and some early Technical pages (~80) have gained the improved functionality of the new Headers, Footers and use Media Queries whilst remaining in HTML 4.01. There are no plans to convert or add headers to any more of the ~65 very early legacy pages which have little or no useful content.
This section came about from the work of updating the Lafite laptop to Mint 20 from Mint 19.2 by doing a fresh install into a 'spare' dual-boot partition. This naturally involved first backing up all the users' home folders before the fresh install.
The procedures here for backing up an encrypted home folder and restoring it depend on a characteristic of the implementation of eCryptfs under Ubuntu/Mint which is regarded by many as a flaw: currently an encrypted home folder remains mounted after you log out and into another user. This makes it easy to make a tar backup of an unencrypted home folder which is not in use - the user has logged out, so the folder is 'stable' (no files changing and no activity) - by logging into another temporary user to access it. An alternative way (if the 'flaw' is removed) would be to log in as the user from a console (terminal) rather than via a GUI to mount the home folder, then do the backup from a different user. A quick way to access a console is via Ctrl Alt F1, which gives a standard user/password login which will mount the encrypted home folder. This has all been covered in greater depth earlier in the tar backup sections.
We now need to know how the basic home folder encryption works. If you examine /home you will find home folders for each user and an additional folder called .ecryptfs, within which are folders corresponding to each encrypted user, and within them folders called .ecryptfs and .Private . .Private has encrypted copies of every file, and they are decrypted on the fly using configuration information held in .ecryptfs . The actual user's home folder just contains symlinks to these folders. The presence of valid symlinks and the associated folders seems to be all that is needed for the 'on-the-fly' encryption to be 'recognised'. This type of encryption is often known as 'stacked encryption', as the encryption is carried out on top of an existing file system. In contrast, LUKS encryption is carried out on a whole partition or volume which is then formatted with a file system.
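The layout described above can be mocked up in /tmp to make the symlink structure clear. These are illustrative paths only - the real ones live under /home and /home/.ecryptfs and the user name is a placeholder:

```shell
# Build a mock of the eCryptfs directory/symlink arrangement for a user 'user1'.
root=/tmp/ecryptfs-demo
mkdir -p $root/home/.ecryptfs/user1/.ecryptfs   # wrapped passphrase and config
mkdir -p $root/home/.ecryptfs/user1/.Private    # the encrypted file store
mkdir -p $root/home/user1
ln -sfn $root/home/.ecryptfs/user1/.ecryptfs $root/home/user1/.ecryptfs
ln -sfn $root/home/.ecryptfs/user1/.Private  $root/home/user1/.Private
ls -l $root/home/user1   # shows the two symlinks a real encrypted home contains
```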
Whatever we do, the first action is to make a backup, and the backup is normally made from the unencrypted version. So all we need to do to make a backup tar archive is to log in to the user to mount the home folder, log out to keep the file system unchanging, and then log in from a different user and make the archive. The files will be decrypted as they are accessed, and we then have a complete decrypted and archived copy with all permissions, owners and symlinks preserved. The typical command to create the archive is:
sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" --exclude=/home/*/.gvfs /home/user1/
So to change to an unencrypted home folder we just need to replace the folder with a decrypted one. We can do this simply by renaming the user's home folder (and preferably also renaming the user's folder in .ecryptfs) and replacing the folder with one extracted from the tar archive. It is best to reboot and log back into the alternative user to make sure the decryption routines are inactive before the replacement by extraction from the tar archive. The matching command to extract the folder to its original position is:
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /
Following extraction the two symlinks can/should be removed. One can then log out of the temporary user, log back into the user and use the unencrypted home folder. After testing, all the backups can be deleted.
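The backup and restore pair above can be exercised safely as a round trip on a scratch folder in /tmp (no sudo needed for files you own); this sketch just confirms that file contents and symlinks survive the same tar flags:

```shell
# Create a pretend home folder with a file and a symlink.
mkdir -p /tmp/tardemo/home/user1
echo secret > /tmp/tardemo/home/user1/file.txt
ln -sf file.txt /tmp/tardemo/home/user1/link.txt
# Archive it (same flags as the real command, relative paths via -C).
tar cvpPzf /tmp/tardemo/mybackup1.tgz -C /tmp/tardemo home/user1
# Delete and restore, as in the real replacement procedure.
rm -rf /tmp/tardemo/home
tar xvpfz /tmp/tardemo/mybackup1.tgz -C /tmp/tardemo
cat /tmp/tardemo/home/user1/link.txt   # the symlink still resolves
```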
A similar procedure can be used to transfer an [encrypted] user to a different machine or partition. If such transfers are planned you MUST ensure the users have the same id number, which in practice means they are installed in the same order. Normal installations provide users starting with id 1000. Typing id in a terminal provides a useful summary of a user's id and groups. The following is a typical output of id :
uid=1001(peter) gid=1001(peter) groups=1001(peter),4(adm),24(cdrom),
Our machines all have three main users: a permanent administrative user, to avoid having to create temporary users for this sort of task and for synchronising/backing-up, and our own two users. The admin user is always the first to be installed, as user id 1000 seems to have some slight extra privileges used by unison for synchronising; then Peter and Pauline's users follow with ids 1001 and 1002. In a family, the user with id 1000 could be the surname and the personal users could use the Christian names. User id 1000 is not used for any sensitive information, email or browsing with passwords, so it does not need to be encrypted.
Up to now my approach has been to decrypt users' home folders before reinstalling as part of a dual boot system, moving between machines or transferring to a new machine, then re-encrypting if appropriate. This section first covers more background about how my machines are set up, then my attempts to avoid decrypting every folder when doing a fresh install, before looking at how best to proceed in the future.
All my systems have the home folder mounted in a separate partition, and several can be dual booted as they have two partitions containing root folders for different Linux systems. This enables me to do major updates, from say Mint 19.x to Mint 20, by a fresh install yet preserve the individual users' home folders, provided the users are installed into the new system in the same order, with only minor changes in configuration. There are also partitions for shared data mounted at /media/DATA and sensitive data at /media/VAULT. Currently all filesystems are EXT4.
The complexity is increased when encryption is used for an additional volume for sensitive data, and my laptops now all have a LUKS volume in a partition mounted at login at /media/VAULT. Each LUKS volume has 8 slots for different passwords which can mount it, so it can be mounted by several users with different passwords. When the new system is installed, the new users now have to not only be installed in the same order but also use the same passwords as before to enable the LUKS volume to be mounted.
It is also desirable to encrypt the home folders of selected users using eCryptfs, either during initial installation or at a later stage. This poses an additional set of problems when upgrading, which I avoided on the Defiant and the Helios by removing the encryption before reinstalling the new Mint 20 system. That worked, but has disadvantages, as the users' home folders need to be re-encrypted. That is not difficult but it needs a lot of workspace - up to 2.5 times the size of the user's home folder.
On the Lafite I removed the encryption on the primary user and fully expected I would be able to re-install using the same home partition and keep user 1000, but that did not work out. When I went through the install from the LiveUSB and got to providing information on the user, the button for using an encrypted home was checked and greyed out - the installer had looked at the existing setup and found some of the users had encrypted home folders, although the actual user I was adding was not encrypted.
This meant I had to back out and start again without mounting the existing partition, creating a new user instead. I then had to change the system to mount my old partition at /home at boot by adding it to /etc/fstab and rebooting, not something to inflict on a newbie. This left a few Mbytes of wasted space in / but not enough to be worth chasing. These were all things I had done before, but they add to the work and learning curve for someone new to Linux. I had already removed the encryption from user 1001 (peter) as well, so I could then add peter as a user in 'Users and Groups' and only a little tidying of the configurations was needed to have two users transferred to Mint 20 and one left accessible by booting into Mint 19.2, as originally intended, to allow time to fully configure the system and install all the programs without risk to the main user of the machine. Configuration includes such activities as reducing use of swap, setting up battery saving by timeouts for hard drives, automounting the VAULT using pam_mount and setting up Timeshift. These are all best completed before the main users are transferred.
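The fstab change amounts to a single extra line of this form (the UUID shown is the one for this machine's home partition as reported by blkid; substitute your own):

```
# mount the pre-existing home partition at /home on every boot
UUID=a0cade01-4161-45a2-8427-24baa8e3c88b  /home  ext4  defaults  0  2
```

After adding the line, a reboot (or sudo mount -a to test without rebooting) brings the old home folders back into place.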
This took a good part of a day elapsed time, although not all was spent actually working on the installation. At this point I decided to try to add the final user 1002 with an ecryptfs home folder. I found the password change was greyed out and I tried with and without the option of no password. Without a password it just failed; with a password it asked for my LUKS password for the /media/VAULT drive and let me in, but functionality was severely limited (no sudo, for example) and again the GUI did not allow me to set a password. So I forced a reset to the old password using sudo in a terminal:
sudo passwd user1002
which just prompts twice for the new password for user1002 without requesting the old password,
and after a reboot I could then use sudo etc.
It is essential that the password you set is the same as the old password, otherwise the unwrapping in ecryptfs will fail. NOTE: this is the only time sudo should be used with passwd when you have an encrypted home folder.
My overall conclusion is that it is simpler, safer and possibly quicker to unencrypt all the folders before the reinstallation, then re-encrypt as required. If you know how to modify fstab to automount folders and are short of space then the procedure above may be worth trying, but make the backups anyway. If all the home folders are encrypted the system may sort it all out (untested), but you must keep the installation order and passwords the same and you should still make the backups.
The normal GUI utility accessed via Menu -> Users and Groups does not work, as it has the change password section greyed out. I believe this is a bug, as that used to be the only recommended way when you had an encrypted home folder.
So recall that the encryption uses a wrapped key (passphrase) that you hopefully saved earlier. When you log in with an encrypted home folder your normal user login password is used to unwrap the key for subsequent on-the-fly decryption. The built-in utilities use the PAM module when you change a password, and under most circumstances PAM keeps the user login password and the unwrapping password synchronised. Any utility which checks the old password as well as allowing you to set the new password should allow PAM to keep the passwords in step. Those that do not check the old password and depend on being root or using sudo to force changes or resets of the user password DO NOT change the passphrase for encryption - doing so would be a security problem, as it would allow any user who gained root privileges to access the encrypted folder. So sudo passwd does not check the old password or synchronise the ecryptfs decryption, and you're in yogurt.
So when you change your user's password you must be logged in as that user and just use passwd without sudo.
That tells you which user it will change and asks for the user's current password before asking for the new password and a repeat to verify it. I have checked that this works and correctly changes the password a couple of times on one of my machines. https://askubuntu.com/questions/33730/will-changing-password-re-encrypt-my-home-directory has the best explanation I have found, and some thoughts on swimming in yogurt if you have messed things up and lost the linkage, which I have not tried.
None of my previous documents have gone into any depth about the details of block devices, the main ones we use being hard disks, solid state drives and USB sticks. I have looked for a good definition of a block device and have not found one. Basically the name comes from the fact that almost all practical bulk storage devices are read and written in blocks of data, typically 512 bytes or greater. It is obviously not practical to read or write random single bits or bytes from a spinning disk, and even solid state drives have similar constraints. So in practice 'block devices' means data storage devices. We have already discussed the need to divide up (partition) large storage devices so they can be used for different purposes, often with different filesystems. Here we are discussing how to examine the partitions in detail and how they are linked into (mounted into) the overall Linux file system when the system is booted.
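You can see the block size a filesystem actually uses with stat; the -f flag queries the filesystem holding the given path rather than the file itself (EXT4 normally answers 4096):

```shell
# Report the I/O block size of the filesystem holding the root directory.
# This is the unit reads and writes are rounded up to, which is why
# single bytes cannot be read from a drive in isolation.
stat -f -c 'block size: %s bytes' /
```

The same command pointed at /media/DATA or any other mount point shows the block size of that partition's filesystem.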
The method by which block devices are mounted at boot is basically very simple and is initially set up during installation. Each mount point is determined by a single line in the file /etc/fstab. Originally the drive and partition were specified by something like /dev/sda1 mapped to a place in the Linux directory structure, but that had problems if the drive partitions were changed, as the partition number could change, or even the drive letter if an additional drive was added. So now the UUID of the partition is used - a unique alphanumeric string which is part of the metadata in the filesystem, so a formatted drive can always be mounted at the same place even if partitions are moved or drives added and removed. That is where the utility blkid comes in, as it provides information on all the drives identified at boot time including their device designation and UUID. It does not need root permissions to run unless you want to also see drives mounted after boot.
There is a further utility which I recently discovered which is more powerful still and does not need root permissions: lsblk provides a simple listing of the drives and partitions in a terminal, whilst adding the -f parameter provides a lot more information including the UUIDs.
So looking at our simplest system, the Helios, with only an SSD with multiple partitions and one partition encrypted with LUKS: gparted, which is the easiest way to modify partitions, provides the UUID for the selected partition via Partitions -> Properties
or with Gnome disks which shows the UUID for the selected partition
These correspond to blkid in a terminal, which is definitive for the drives mounted at boot time and suggested in the /etc/fstab file,
and another useful display is my custom version of lsblk, which gives all the information needed to easily link a UUID to a partition and reflects the situation after hotplug USB drives have been plugged in. Use:
lsblk -o NAME,FSTYPE,LABEL,SIZE,FSUSE%,UUID,MOUNTPOINT
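To save typing that each time, the command can be given a short name by adding a line to ~/.bash_aliases (the alias name lsb is just my own choice):

```shell
# shorthand for the long custom lsblk invocation; put this line
# in ~/.bash_aliases so it is available in every new terminal
alias lsb='lsblk -o NAME,FSTYPE,LABEL,SIZE,FSUSE%,UUID,MOUNTPOINT'
alias lsb    # print the definition to confirm it has been registered
```

Open a new terminal (or run source ~/.bash_aliases) and lsb gives the full listing.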
# /etc/fstab: static file system information.
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda2 during installation
UUID=651de848-8c52-4d24-aa5f-b5f81ffaa8ce / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda1 during installation
UUID=1CE7-37D3 /boot/efi vfat umask=0077 0 1
# /home was on /dev/sda3 during installation
UUID=a0cade01-4161-45a2-8427-24baa8e3c88b /home ext4 defaults 0 2
# /media/DATA was on /dev/sda5 during installation
UUID=d8e192cf-0c5a-4bd6-a3a9-96589a68cde5 /media/DATA ext4 defaults 0 2
# swap was on /dev/sda4 during installation
UUID=3745b328-a576-482d-b05b-ac7f65720a64 none swap sw 0 0
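Putting the pieces together: the UUID in each fstab line comes straight out of the blkid output. A small sketch of the transformation, using a sample line in the format blkid prints (the sed expressions just pull out the quoted fields):

```shell
# A sample record in the format blkid prints for one partition
line='/dev/sda5: LABEL="DATA" UUID="d8e192cf-0c5a-4bd6-a3a9-96589a68cde5" TYPE="ext4"'

# extract the quoted UUID and TYPE fields
uuid=$(echo "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
fstype=$(echo "$line" | sed -n 's/.*TYPE="\([^"]*\)".*/\1/p')

# emit a matching fstab entry for a data partition
printf 'UUID=%s /media/DATA %s defaults 0 2\n' "$uuid" "$fstype"
```

The printed line matches the /media/DATA entry in the fstab above; in practice I copy and paste the UUID rather than scripting it, but the sketch shows exactly where each field comes from.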
I have recently realised that I take a lot for granted in the implementation of our systems and it is quite difficult to explain to someone else how to install one and the logic behind the 'partitioning' and encryption of various sections. So I have decided to go back to basics and explain a little about the underlying structure of Linux.
Firstly, Linux is quite different to Windows, which more people understand. In Windows individual pieces of hardware are given different drive designations: A: was in the old days usually a floppy disk, C: was the main hard drive, D: might be a hard drive for data and E: a DVD or CD drive. Each would have a file system and directory and you would have to specify which drive a file lived on. Linux is different - there is only one filesystem with a hierarchical directory structure. All files and directories appear under the root directory /, even if they are stored on different physical or virtual devices. Most of these directories exist in all Unix-like operating systems and are generally used in much the same way; however, there is a Filesystem Hierarchy Standard which defines the directory structure and directory contents in Linux distributions. Some of these directories only exist on a particular system if certain subsystems, such as the X Window System, are installed, and some distributions follow slightly different implementations, but the basics are identical whether it be a web server, supercomputer or Android phone. Wikipedia has a good and comprehensive description of the standard directory structure and contents.
That however tells us little about how the basic hardware (hard drives, solid state drives, external USB drives, USB sticks etc.) is linked (mounted, in the jargon) into the filesystem, or how and why parts of our system are encrypted. Nor does it tell us enough about users or about the various filesystems used. Before going into that in detail we also need to look at another major feature of Linux which is not present in Windows: ownership and permissions for files.
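As a quick taste of what ownership and permissions look like in practice (mktemp, chmod and stat used purely for illustration):

```shell
# Every file carries a mode (who may read/write/execute) and an owner.
f=$(mktemp)              # create a scratch file
chmod 640 "$f"           # owner read+write, group read, others nothing
stat -c '%a %U' "$f"     # print the octal mode and the owning user
rm "$f"                  # tidy up
```

The three octal digits cover owner, group and everyone else, which is the mechanism that keeps one user's home folder private from another.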
So how does a storage device get 'mapped' to the Linux file system? There are a number of stages. Large devices such as hard disks and solid state drives are usually partitioned into a number of sections before use. Each partition is treated by the operating system as a separate 'logical volume', which makes it function in a similar way to a separate physical device. A major advantage is that each partition has its own file system, which helps protect data from corruption. Another advantage is that each partition can use a different type of file system - a swap partition has a completely different format to a system or data partition. Different partitions can also be selectively encrypted when block encryption is used. Older operating systems only allowed you to partition a disk during a formatting or reformatting process, which meant you would have to reformat a hard drive (and erase all of your data) to change the partition scheme. Linux disk utilities now allow you to resize partitions and create new partitions without losing your data - that said, it is still a process with risks and important data should be backed up. Once the device is partitioned, each partition needs to be formatted with a file system, which provides a directory system and defines the way files and folders are organised on the disk, and hence becomes part of the overall Linux directory structure once mounted.
If we have several different disk drives we can choose which partitions to use to maximise reliability and performance. Normally the operating system (kernel/root) will be on a solid state drive (SSD) for speed; large amounts of data such as pictures and music can be on a slower hard disk drive, whilst the areas used for actually processing video and perhaps pictures are best on a high speed drive. Sensitive data may be best encrypted, so one's home folder and part of the data areas will be encrypted. Encryption may however slow down performance on a machine with a slow processor.
We have three machines in everyday use, all laptops, but only two are lightweight ultrabooks; the other and oldest machine still has the edge in performance, being an nVidia Optimus machine with a discrete video card and an 8-thread processor, but is heavy and has limited battery life. The other two are true ultrabooks - light, small, powerful and with many hours of use on battery power.
All three machines are multiuser so both of us can use any machine for redundancy and whilst travelling, and most of the storage is shared and periodically synchronised. The users' home folders are also periodically backed up and the machines are set up so backups can be restored to any of the three machines. In fact the machines all have three users. The first user installed, with id 1000, is used primarily as an administrative convenience for organising backups and synchronisation for the other users from a 'static' situation when they are logged out.
I have suffered a problem for a long time: when using unison to synchronise between machines, my LUKS encrypted common partition, which is mounted as VAULT, has been unmounted (closed) when unison is closed. The same goes for any remote login, except when the user accessed remotely and the user on the machine are the same. This is bad news if another user was using VAULT or tries to access it.
I finally managed to get sufficient information from the web to understand a little more about how the mechanism (pam_mount) works to mount an encrypted volume when a user logs in remotely or locally. It keeps a running total of the logins in separate files for each user in /var/run/pam_mount and decrements them when a user logs out. When a count falls to zero the volume is unmounted REGARDLESS of other mounts which are in use with counts of one or greater in their files. One can watch the count incrementing and decrementing as local and remote users log in and out. So one solution is to always keep a user logged in, either locally or remotely, to prevent the count falling to zero and the automatic unmounting taking place. This is possible, but a remote user could easily be logged out if the remote machine is shut down or other mistakes take place. A local login needs access to the machine and is still open to mistakes. One early thought was to log into the user locally in a terminal then move it out of sight and mind to a different workspace!
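The counting mechanism is simple enough to sketch. This is a toy model only - the real counts live in files under /var/run/pam_mount/<user> and are maintained by pam_mount itself; the file and function here are purely illustrative:

```shell
# Toy model of pam_mount's per-user login reference counting
countfile=$(mktemp)          # stands in for /var/run/pam_mount/<user>
echo 0 > "$countfile"

bump() {                     # mimic pmvarrun -o <delta>
  count=$(( $(cat "$countfile") + $1 ))
  echo "$count" > "$countfile"
  # pam_mount only unmounts the volume when the count reaches zero
  if [ "$count" -le 0 ]; then
    echo "count zero - VAULT would be unmounted"
  fi
}

bump 1     # user logs in locally: count 1, VAULT mounted
bump 1     # pmvarrun -u user -o 1 pins the count at 2
bump -1    # remote unison session ends: count back to 1
cat "$countfile"    # still 1, so VAULT stays mounted
```

This makes clear why incrementing the count by one before a unison run protects the mount: the remote logout only brings the count back down to one, never to zero.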
The solution I plan to adopt uses a low-level command which forms part of the pam_mount suite. It is called pmvarrun and can be used to increment or decrement the count. If used from the same user it does not even need root privileges. So before I use unison or a remote login to, say, the Defiant as user pcurtis, I do a remote login from wherever using ssh, then call pmvarrun to increment the count by 1 for user pcurtis and exit. The following is the entire terminal activity.
peter@helios:~$ ssh pcurtis@defiant
27 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable
Last login: Sat Oct 24 03:54:43 2020
pcurtis@defiant:~$ pmvarrun -u pcurtis -o 1
Connection to defiant closed.
The first time you do an ssh remote login you may be asked to confirm that the 'keys' can be stored. Note how the user and machine change in the prompt.
I can now use unison and remote logins as user pcurtis to machine defiant until defiant is halted or rebooted.
I would be very pleased if visitors could spare a little time to give us some feedback - it is the only way we know who has visited the site, whether it is useful and how we should develop its content and the techniques used. I would be delighted if you could send comments or just let me know you have visited by sending a quick Message.