Diary of System and Website Development
Part 32 (January 2020 ->)

March 2020

Opera Browser

I am increasingly using Opera as a browser because many sites no longer work with Firefox, which has a problem with Captcha validation.

Installing Chrome extensions in Opera

Chrome extensions can be installed in Opera once the Opera "Install Chrome Extensions" extension has itself been installed.

Install Chrome Extensions allows you to install extensions from the Google Chrome Web Store in your Opera browser. Note that you can install extensions only; themes are not supported.

I used this to install "Zoom Redirector" which was not available directly at the time.

Zoom Redirector transparently redirects any meeting links to use Zoom's browser based web client.

Instead of having to search for the hidden "join from your browser" link, this addon will take you there automatically.

Other Opera Extension I use

Test of Mint 20 beta on Helios using LiveUSB

Thunderbird

Installing the old version 52 from six 18.04 .debs and transferring the profiles seems to be OK:

thunderbird_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb **
thunderbird-gnome-support_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb **
thunderbird-locale-en_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb **
thunderbird-locale-en-gb_52.9.1+build3-0ubuntu0.18.04.1_all.deb *
thunderbird-locale-en-us_52.9.1+build3-0ubuntu0.18.04.1_all.deb
xul-ext-calendar-timezones_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb
xul-ext-gdata-provider_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb *
xul-ext-lightning_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb **

Only the starred .debs were installed after a complete uninstall of the existing Thunderbird; the double-starred ones were installed on the Defiant.
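For reference, a minimal sketch of installing the starred .debs with dpkg, assuming they have been downloaded to the current directory (apt then pulls in anything missing, and the package can optionally be held to stop it being upgraded):

sudo dpkg -i thunderbird_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb \
  thunderbird-gnome-support_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb \
  thunderbird-locale-en_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb \
  thunderbird-locale-en-gb_52.9.1+build3-0ubuntu0.18.04.1_all.deb \
  xul-ext-gdata-provider_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb \
  xul-ext-lightning_52.9.1+build3-0ubuntu0.18.04.1_amd64.deb
sudo apt-get install -f    # pull in any missing dependencies
# optionally: sudo apt-mark hold thunderbird    # stop later upgrades replacing version 52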

Warpinator

Useful for transfers, but Mint 19.3 needed the updated version from http://packages.linuxmint.com/pool/main/w/warpinator/warpinator_1.0.5+ulyana_all.deb
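A minimal sketch of pulling in that .deb, assuming wget and gdebi are installed (gdebi resolves the dependencies):

wget http://packages.linuxmint.com/pool/main/w/warpinator/warpinator_1.0.5+ulyana_all.deb
sudo gdebi warpinator_1.0.5+ulyana_all.deb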

Wine under Mint 20

There were various comments that Wine did not work well under Mint, but Wine under Mint 20 worked for me. I successfully installed an old Windows program under Wine on a LiveUSB using the following steps, starting in the terminal - some may be redundant but at least it worked!

sudo dpkg --add-architecture i386
sudo apt update
sudo apt install wine wine32 wine-binfmt
wine --version # showed version 5.0-3 was installed
wine Warpinator/OldWindows32bitProgInstall.exe


The install program ran but there was no menu item at this point, so:

  1. I located the installed program in .wine/drive_c/Program Files/
  2. Right clicked, used the 'Other program' option, put wine in the box and set it as the default for .exe, thus creating an association of wine with .exe files for the future.
  3. Ran the program - result: wine and the program were now in the Mint menu.

wine-binfmt seems to be important in allowing the association, but it still had to be done manually the first time.
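If the association does not seem to be registered, the binfmt registrations can be listed in a terminal; a quick check, assuming the binfmt-support tools are present (wine-binfmt depends on them):

update-binfmts --display | grep -i wine    # should show an entry registered for wine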

Wine Fonts ( ttf-mscorefonts-installer )

The ttf-mscorefonts-installer package should allow easy installation of the Microsoft TrueType core fonts for the web, but using Synaptic seems to miss part of the EULA acceptance, which is in two parts. So:

Run sudo apt-get install ttf-mscorefonts-installer and accept the EULA, or, if you already have ttf-mscorefonts-installer installed and didn't accept the EULA, uninstall ttf-mscorefonts-installer and reinstall it like this:

sudo dpkg -P ttf-mscorefonts-installer
sudo apt install ttf-mscorefonts-installer

Use the Tab and Enter keys to accept the EULA in the Microsoft TrueType core fonts window that pops up. The terminal will output a new message each time it finishes downloading a font. It takes a while.
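An alternative which should avoid the purge and reinstall cycle is to re-run the package configuration so the EULA dialogue is shown again - a sketch I have not tested here:

sudo dpkg-reconfigure ttf-mscorefonts-installer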

GoogleEarth Installation

Google Earth always used to be a nightmare to install, but I found no problems installing from a .deb in the following way:

wget https://dl.google.com/dl/earth/client/current/google-earth-pro-stable_current_amd64.deb
sudo gdebi google-earth-pro-stable_current_amd64.deb

It appeared in the menu and worked straight off.

Leafpad

The Leafpad editor is no longer available so I have had to modify my Git installation, which specified it as the default editor called during commit rebasing (to avoid conflicts when using the same editor for changes to the files). The following section in git.htm needs changing:

Git is now the most widely used source code management tool, with over 40% of professional software developers using Git as their primary source control system. Before you worry, the whole installation and setup of Git is done in about 9 terminal commands you can cut and paste from below. This assumes you have already set up an account on GitHub, which is easy and well explained on their site, and that there is a repository you want to use.

sudo apt-get install git gitk leafpad meld

git config --global user.name "your_username_on_github"
git config --global user.email "yourname@your_git_email"

git config --global core.editor leafpad
git config --global diff.tool meld
git config --global merge.tool meld
git config --global color.ui auto
git config --global push.default simple

git config --global credential.helper 'cache --timeout=7200'

git clone https://github.com/username/repositoryname
cd repositoryname

The apt-get install command not only installs Git but also its visualiser (gitk), a merging program (meld) and a simple editor (leafpad) which will only be used by Git.

The first two pieces of configuration are to provide a suitable name and email which is added to every commit you make. If you intend to use GitHub there are great advantages to having them the same and using your GitHub username rather than your full name.

The use of a simple text editor called leafpad avoids all sorts of problems in using the same editor for your editing of files and within Git. meld is a really good difference tool and is also set to be the default for hand crafting conflicts in a merge. The color.ui auto allows suitable terminal programs to display some information in colour. The push.default simple is to make sure that you only push your master to the remote repository by default - it should not be needed with the latest versions of Git but I started before that was the default so some of my early repositories seemed to have a different setting. I have put details of how to obtain the most recent version of Git in an Appendix.

The --global credential.helper 'cache --timeout=7200' means that Git will save your password in memory for some time after you have entered it the first time in a session - here I have set it to 2 hours.

This needs changing in a couple of places.
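The obvious replacement is to point core.editor at whichever simple editor takes over from leafpad - a sketch assuming Mousepad (or xed) is installed:

git config --global core.editor mousepad
git config --global core.editor    # prints the current setting as a check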

Sublime Text Editor

I have installed Sublime as an extra editor for assessment - it looks to be a good and very powerful 'coding' editor, but does it have a big learning curve? The alternative is to reinstall gedit as a second editor, if it will co-exist with xed.

One command to remember is Ctrl Shift P, which opens a command search. This is the way to get the top menu back!

Mousepad

This is based on Leafpad and looks to be the replacement, although it is still under development.
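Mousepad is in the standard Ubuntu/Mint repositories, so installation should just be:

sudo apt install mousepad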

See https://ubuntuforums.org/showthread.php?t=1178974 In a terminal do:

sudo apt-get install apt-xapian-index
sudo update-apt-xapian-index -vf

Install Chromium

xed admin:///etc/apt/preferences.d/saiarcot895-chromium-beta.pref
sudo apt remove --purge chromium-browser
sudo add-apt-repository ppa:saiarcot895/chromium-beta && sudo apt update
sudo apt install chromium-browser

Brackets HTML Editor and use of Flatpaks

It seemed a good idea to go back to trying Adobe Brackets, but the PPA I had used previously has not been updated for some time, so I made the mistake of using a Flatpak. I will not go into Flatpaks too much - they seem a good idea, but I had not realised what the overhead would be for the first major program. Installing Brackets had a 4 Gbyte overhead, more than many Linux installations, which results from installing almost all the dependencies as well as the program; true, it makes them independent, but at a considerable cost.

Worse still, I discovered that uninstalling still left a library overhead of 3.6 Gbytes. I eventually found an article which explained how to remove the bloat at https://www.linuxuprising.com/2019/02/how-to-remove-unused-flatpak-runtimes.html . It is simple; in a terminal do:

flatpak uninstall --unused

This command should list all unused Flatpak runtimes, and offer to uninstall them from your system.

peter@defiant:~$ flatpak uninstall --unused
ID Branch Op
1. [-] org.freedesktop.Platform.GL.default 19.08 r
2. [-] org.freedesktop.Platform.VAAPI.Intel 1.6 r
3. [-] org.freedesktop.Platform.VAAPI.Intel 19.08 r
4. [-] org.freedesktop.Platform.ffmpeg 1.6 r
5. [-] org.freedesktop.Platform.openh264 2.0 r
6. [-] org.freedesktop.Sdk 1.6 r
7. [-] org.freedesktop.Sdk.Locale 1.6 r
8. [-] org.freedesktop.Sdk 19.08 r
9. [-] org.freedesktop.Sdk.Locale 19.08 r
10. [-] org.gtk.Gtk3theme.Mint-Y-Darker 3.22 r
11. [-] org.gtk.Gtk3theme.Mint-Y 3.22 r
Uninstall complete.
peter@defiant:~$

I have written briefly about Brackets before, but this time I could not make the killer feature work, namely the ability to continuously update your work in a browser, giving an experience close to, or arguably better than, WYSIWYG.

Changing from an encrypted home folder.

The process is not very obvious, but the article at https://askubuntu.com/questions/4950/how-to-stop-using-built-in-home-directory-encryption gave some clues as to what to do. Most of the solutions seemed overly complex, but combined with my observations about doing fresh program installs whilst retaining the information in one's home folder it led to a procedure which works.

BIOS problems on Helios

My output from dmesg shows a huge number of errors. This is a problem which can exist when using recent kernels with an old BIOS; many old BIOSes do not support the full set of current ACPI calls. To quote the Mint 20 Release Notes:

Choosing the right version of Linux Mint

Each new version comes with a new kernel and a newer set of drivers. Most of the time, this means newer versions are compatible with a larger variety of hardware components, but sometimes it might also introduce regressions. If you are facing hardware issues with the latest version of Linux Mint and you are unable to solve them, you can always try an earlier release. If that one works better for you, you can stick to it, or you can use it to install Linux Mint and then upgrade to the newer release.

One option with ACPI problems is to try various kernel boot options such as acpi=off, acpi=noirq, acpi=strict or pci=noacpi, but these tend to reduce the functionality of the BIOS and various things like the temperature sensors fail. They are limited to getting a system running enough to look for solutions.
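A sketch of trying one of these persistently by editing the GRUB defaults and regenerating the configuration (a single boot can also be tested by pressing e at the GRUB menu and adding the option to the linux line):

xed admin:///etc/default/grub
# add the option, for example: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=noirq"
sudo update-grub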

Errors in my output from dmesg look like this

.....
[180074.103311] ACPI Error: Aborting method \_SB.PCI0.LPCB.H_EC._Q50 due to previous error (AE_AML_OPERAND_TYPE) (20190816/psparse-529)
[180074.301840] ACPI Error: Needed type [Reference], found [Integer] 000000006398d7d6 (20190816/exresop-66)
[180074.301867] ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [Store] (20190816/dswexec-424)
[180074.301894] No Local Variables are initialized for Method [_Q50]
[180074.301900] No Arguments are initialized for method [_Q50]
[180074.301909] ACPI Error: Aborting method \_SB.PCI0.LPCB.H_EC._Q50 due to previous error (AE_AML_OPERAND_TYPE) (20190816/psparse-529)
[180074.503033] ACPI Error: Needed type [Reference], found [Integer] 000000001785f600 (20190816/exresop-66)
[180074.503044] ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [Store] (20190816/dswexec-424)
[180074.503055] No Local Variables are initialized for Method [_Q50]
[180074.503057] No Arguments are initialized for method [_Q50]
.....

These are filling the log files; my standard check of the log files is:

sudo du -hsxc --time /var/log/* | sort -rh | head -16 && journalctl --disk-usage && date

shows:

mint@mint:~$ sudo du -hsxc --time /var/log/* | sort -rh | head -16 && journalctl --disk-usage && date
764M 2020-06-19 03:28 total
382M 2020-06-19 03:28 /var/log/kern.log
345M 2020-06-19 00:00 /var/log/syslog.1
38M 2020-06-19 03:28 /var/log/syslog
112K 2020-06-17 15:53 /var/log/dmesg
88K 2020-06-17 20:29 /var/log/apt
68K 2020-06-19 03:28 /var/log/auth.log
44K 2020-06-18 03:51 /var/log/Xorg.0.log
28K 2020-06-17 20:29 /var/log/dpkg.log
20K 2020-06-19 02:53 /var/log/cups
12K 2020-06-19 00:00 /var/log/lightdm
12K 2020-06-19 00:00 /var/log/boot.log.1
4.0K 2020-06-17 15:53 /var/log/wtmp
4.0K 2020-06-17 15:53 /var/log/ubuntu-system-adjustments-start.log
4.0K 2020-06-17 15:53 /var/log/ubuntu-system-adjustments-adjust-grub-title.log
4.0K 2020-06-17 15:53 /var/log/private
Archived and active journals take up 80.0M in the file system.
Fri Jun 19 03:28:27 BST 2020

This is actually a serious problem: not only does it fill the log files, it also causes a very large number of extra write operations, which are not good for the life of an SSD with its limited number of write cycles, so it must be addressed.

Method 1 - Revert to earlier kernel to avoid ACPI errors

This may not be an option as earlier kernels may not support all of the features required for the underlying Ubuntu system or Mint itself. In this case I would probably need to go back to a kernel which is no longer supported.

Method 2 - avoid ACPI flooding log files by adding limits in logrotate

Logrotate will have to be sorted out as per the changes for TimeShift. A quick check shows the files seem unchanged, so the description in System Housekeeping to reduce TimeShift storage requirements can be used directly. This does not, however, completely solve the excessive number of write cycles to an SSD.
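For reference, a minimal sketch of the sort of logrotate stanza involved - on Mint the stock file /etc/logrotate.d/rsyslog covers these logs; maxsize forces a rotation as soon as a file exceeds the limit and only a couple of compressed generations are kept:

/var/log/syslog
/var/log/kern.log
{
        rotate 2
        daily
        maxsize 100M
        missingok
        notifempty
        compress
        delaycompress
}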

Method 3 to avoid ACPI flooding log files - stop syslog daemon

This may seem a bit dramatic, but the log files are useless anyway when full of junk. It is however not quite as simple as just disabling the syslog daemon syslogd. Syslog has been replaced by rsyslog on many distributions (Debian > 5, Ubuntu > 11.2, CentOS 6.x), and rsyslog now uses socket activation under systemd, so whenever a log message comes in, rsyslog is started on demand. This means that as fast as you stop it, it will be restarted! The socket unit is named syslog.socket, so if you want to stop the rsyslog service and the socket activation you need to stop both units:

systemctl stop syslog.socket rsyslog.service

This is, of course only a temporary stop as it will be reactivated at boot so you also need to disable them at boot:

systemctl disable syslog.socket rsyslog.service

You can re-enable them at boot with:

systemctl enable syslog.socket rsyslog.service

Add the --now flag to start them now as well as enabling them for the next boot.

You can check the state with

systemctl status syslog.socket rsyslog.service

You can also list all enabled units with:

systemctl list-unit-files | grep enabled

Enabled means the system will run the service on the next boot. So if you enable a service, you still need to manually start it, or reboot and it will start.

dmesg to the rescue

Stopping the syslog daemon does not completely stop logging, as dmesg is still available to look at the kernel ring buffer. The 'kernel ring buffer' is a memory buffer created by the kernel at boot in which to store the log data it generates during the boot process. A ring buffer is a special kind of buffer that is always a constant size, removing the oldest messages when new messages come in. The main reason the kernel log is stored in a memory buffer is to allow the initial boot logs to be kept until the system has bootstrapped itself to the point where the syslog daemon can take over. The contents of the buffer at that point seem to be saved in /var/log/dmesg, but fortunately the buffer itself continues to be updated.

dmesg offers a number of powerful tools to view and manipulate the ring buffer which, combined with piping to grep, allow one to still investigate most problems even without the full set of system log files. In practice I am not sure I have ever had to use the log files as the answers have always been available through dmesg, perhaps once when I had a series of kernel panics following segfaults.

We are still left with a problem, as dmesg is still flooded with ACPI error messages at a rate of 5 lines every 200 msecs, so it rapidly 'cycles'. There is however a way to pass the input from the kernel straight on to the terminal, accessed by the --follow or -w option. If that is piped to grep, the offending 5 lines can be removed by the less well known grep option -v.

grep -v -e string1 -e string2 -e string3 -e string4 -e etc

passes on everything but lines containing string1 etc

In my case the magic incantation is:

dmesg -w | grep -v -e Q50 -e SB.PCI0 -e exec-424 -e sop-66

This makes it easy to watch as devices are plugged in etc

Concluding remarks on ACPI issue

I have decided to disable logging through rsyslog at boot time and depend on use of dmesg in the knowledge that I can easily enable it again if I need full logs. This will save considerable wear on the SSD and reduce the requirement for large amounts of space in Timeshift images. There may be unintended consequences I have not yet discovered in, for example, access to histories from various utilities.

July 14th 2020

Updates to web site to reflect new Media Queries Approach to Responsive design

This was put succinctly in the update I put on the News Flash page:

This started as some fixes to get round a bug in some browsers on mobile devices and turned into the start of a much more major enhancement to the site, in particular for touch screen devices. In essence, the existing responsive design works by reloading a modified version of the page when absolutely essential, for example when changing from portrait to landscape view on a mobile or on a major reduction in window size. In some browsers this is relatively seamless, as the information is cached locally and the page is left at the same point; in others, such as Firefox, the page is returned to the top, which is disturbing, especially if it was a brief and unintentional orientation change. Browsers and web standards have developed over the last 4 years and all currently supported versions of browsers allow an alternative method called Media Queries, a feature of CSS3, which provides an alternative way to implement a responsive design by only changing the 'styles' when, for example, the screen width meets a certain criterion (a Media Query).

The changes are progressing and most of the pages accessible from the 'entry' pages (those accessible from the header bar) have been updated to the new format, other than the UNZ Travel pages and the early newsletters (~20). Some of the pages showed errors with the latest HTML5 validator or had not been converted to HTML5 at all and needed correcting. Travel pages require changes to every block of pictures, as well as to the headers and footers, to benefit from the Media Queries approach, so that will be a major undertaking. Approximately 160 pages are now updated and validated, of which ~45 are travel pages, out of a total of 500 pages with picture blocks - a long way to go. Progress updated as of 14 July 2020.

The counting of pages and the searching for incompatibilities have been done with grep searches through all the web pages using commands like:

:~$ grep -iRl "<script>inhibitReload()</script>" "/media/DATA/My Web Site"

where other searches have been for "media.css" and "hpop("
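The equivalent commands were of the same form, for example:

grep -iRl "media.css" "/media/DATA/My Web Site"
grep -iRl "hpop(" "/media/DATA/My Web Site"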

CopyQ application

Perhaps the most important tool has been the CopyQ clipboard manager - a desktop application which stores the content of the system clipboard whenever it changes and allows you to search the history and copy items back to the system clipboard or paste them directly into other applications. It has a sophisticated tab mechanism which enables me to store headers, footers and various other 'snippets' required for each of the major page types on separate tabs. CopyQ is available for all major operating systems and is in the Mint repositories. It is unusually well documented and offers a huge range of possibilities - I am only exploiting a few.

It is normally accessed from the toolbar tray, but I have also set it up so that it is opened by a system shortcut key of Ctrl Alt C (Preferences -> Shortcuts -> Global -> Show/Hide main window) and use the option to paste into the cursor position from CopyQ with a Return.

So I can select the complete existing header in an editor, open CopyQ with Ctrl Alt C, select the tab and snippet, press Enter and move on to the next. The snippets, as I am calling the items on the CopyQ tabs, can be edited in situ with F2 (or an external editor), so it is easy to set the date in the footer block for the day or the bulk of the title in the header. Individual items on a tab or the clipboard history can be pinned in position, which also gives protection against accidental deletion. It is possible to configure a tree view of the tabs in use. It is already obvious that one does not want to lose the hard work in setting this up, and there is an Export command (Export on the File menu) which enables selective exporting of tabs.

Repeated edits to update picture blocks.

Background: There are three main formats for picture blocks in use. In all cases scripting is used to expand to the complex code required by writing it on the fly as the page loads. This saves long strings with the same text for titles, alt statements etc. All three have the same basic 'function parameters' - an 'image location', a 'title string' and a 'position' parameter (which becomes multi-use in the case of lightbox displays) - and all expect images in triplicate in 4:3 or 3:4 aspect ratio. The small 'icon' image is always 160 x 120 or 120 x 160 pixels and ends in i; the middle size is always 400 x 300 or 300 x 400 pixels, ends in w, and is used for compact popup displays which are being phased out. The large images are used primarily for lightbox displays and the size is not baked in, but historically they were 600 x 450 and now 800 x 600 is the norm. Popups shrink these to 600 x 450 pixels.

The first two formats were basically the same: they used tables for layout and adapted the layout on the fly. Both depended on a reload to respond to window/screen size changes. Version two used a single script round the whole table and wrote the table details <table>, <tr> and <td> etc by function calls, as well as the reponsivePictureblockSplit(n) call.

Original type of block using table

<table class="pictureblock">
  <tr>
    <td class="pictureblock"><script>hpop('2011/qe11/img_3231', ' &copy; P Curtis 2011', 'center' )</script></td>
    <td class="pictureblock"><script>vpop('2011/qe11/img_3243', ' &copy; P Curtis 2011', 'center' )</script></td>
    <script>reponsivePictureblockSplit(2)</script>
    <td class="pictureblock"><script>hpop('2011/qe11/img_3244', ' &copy; P Curtis 2011', 'center' )</script></td>
    <script>reponsivePictureblockSplit(3)</script>
    <td class="pictureblock"><script>vpop('2011/qe11/img_3248', ' &copy; P Curtis 2011', 'center' )</script></td>
  </tr>
</table>

Single script block with table codes written in scripts

<script>
 tableCpictureblock();tr();
   tdCpictureblock();hpop('2018/qv18ev/',' &copy; P Curtis 2018', 'center' );tdE();
   tdCpictureblock();vpop('2018/qv18ev/',' &copy; P Curtis 2018', 'center' );tdE();
   reponsivePictureblockSplit(2);
   tdCpictureblock();hpop('2018/qv18ev/',' &copy; P Curtis 2018', 'center' );tdE();
   reponsivePictureblockSplit(3);
   tdCpictureblock();vpop('2018/qv18ev/',' &copy; P Curtis 2018', 'center' );tdE();
 trE();tableE()
</script>

New Solution without tables, depending on Media Queries to adapt the CSS dynamically without the need to reload pages


<div class="picFrame">
  <div class=pic><script>hpic('2019/auqe19/',' &copy; P Curtis 2019', 'center' )</script></div>
  <div class=pic><script>vpic('2019/auqe19/',' &copy; P Curtis 2019', 'center' )</script></div>
  <div class=pic><script>hpic('2019/auqe19/',' &copy; P Curtis 2019', 'center' )</script></div>
  <div class=pic><script>vpic('2019/auqe19/',' &copy; P Curtis 2019', 'center' )</script></div>
</div>

The transformation of either type needs 5 repeated edits and deletion of n-1 responsivePictureblockSplit elements. Often pages will have both sorts.

Dreamweaver is capable of handling large numbers of open pages and I have used up to thirty for repeated edits. A certain amount of checking and hand crafting is always needed.

So where are the priorities? Our world cruises are arguably the most important and comprise 33 pages between them, and it is logical to complete the other two holidays in 2017 for another 11 parts. New Zealand 2016 is important with the Wanaka air show and has 18 parts, but 2017 is small at 6 parts and should be included. In total this would mean we had completed the coverage of 3 years with 68 pages. So three multiple repeating edits of circa 25 pages each seems to be the way forward.

Updating Active Maps

This is the other area where changes are needed. Once the window width has reduced to below the width of an Active Map, the map has to be replaced by an image which shrinks to fit.

The old pages used code like this:

<div class="center">

  <div id="activeMap" >
    
<img src="2017/qv17xm_map.jpg" style="width:600px; height:450px;" usemap="#qv17xm_map1" alt="Map">
    <map name="qv17xm_map1">
      <area shape="rect" coords="165,10,515,58" href="qv17xm-p1.htm#soton" alt="Introduction and Embarkation at Southampton" title="Introduction and Embarkation at Southampton">
    </map>
  </div>

  <div id="shrinkableMap" >
    <img src="2017/qv17xm_map.jpg" style="max-width:95%; height:auto;" alt="Map">
  </div>

 <script>ResponsiveShowHide(isFourCols(),"activeMap","shrinkableMap")</script>
</div>

and the new code is

<div id="activeMap" class="showLarge center">
  <img src="2019/qe19ne_map.jpg" alt="Map" width="600" height="400" usemap="#qe19ne_map">
  <map name="qe19ne_map">
    <area shape="rect" coords="6,320,197,398" href="qe19ne-p1.htm#soton" alt="Southampton, UK" title="Southampton, UK">
  </map>
</div>

<div id="shrinkableMap" class="showMedium center">
  <img src="2019/qe19ne_map.jpg" style="max-width:95%; height:auto;" alt="Map">
</div>

As can be seen, the changes amount to a considerable simplification.

Finding pages which need changes and assessing progress

I have made use of grep in many ways to locate pages which needed changes, track progress and generally tidy up.

Basic list of all pages containing HTML 4.01 Transitional in current directory not sub-directories

grep -il "HTML 4.01 Transitional" *.htm

Count of all pages containing HTML 4.01 Transitional in current directory

grep -il "HTML 4.01 Transitional" *.htm | grep -c htm

Note: grep -c is counting the file names piped to it which contain htm, i.e. all of them. In my case it is more useful to count files with nz or qe instead.
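For example, counting only the New Zealand and cruise pages in the list might look like this:

grep -il "HTML 4.01 Transitional" *.htm | grep -c -e nz -e qe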

Count of all .htm pages containing both HTML 4.01 Transitional and media.css (eg. new header in old HTML 4.01 page such as old diary page) in current directory

grep -il "HTML 4.01 Transitional" `grep -l media.css *.htm` | grep -c htm

Count of all .htm pages containing both HTML 4.01 Transitional but not media.css in current directory

grep -il "HTML 4.01 Transitional" `grep -L media.css *.htm` | grep -c htm

At the end of all these various searches, and after pruning old unlinked pages by copying them to an archive folder, it was possible to know that there were still 41 NZ travel pages but only 11 other travel pages without the new headers and footers. Although they were all from 2003 or earlier, it seemed worth updating them to have the new headers, footers and basic responsive behaviour although they remained at the HTML 4.01 standard.

Looking at the remaining NZ Touring pages by

peter@defiant:/media/DATA/My Web Site$ grep -il "HTML 4.01 Transitional" `grep -L media.css nz*.htm` | grep -c htm
41

We find that there were still 41 NZ touring pages at that time.

Update 14th August 2020 on conversion to Next Generation pages: The total site has 700 pages, of which ~550 were in HTML5 and were 'Responsive' but used 'Event Driven' rewriting of pages. They and some others have been fully updated (~575). In addition there are still ~145 HTML 4.01 pages in early parts of the Diary, legacy technical articles and pages with pre-digital era pictures, i.e. before 2003. All Diary, early Travel and some early Technical pages (~80) have gained the improved functionality of the new Headers and Footers and use Media Queries whilst remaining in HTML 4.01. There are no plans to convert or add headers to any more of the ~65 very early legacy pages which have little or no useful content.


October 2020

DroidCam

DroidCam turns your mobile device into a webcam for your PC. It can be used with chat programs like Skype, Zoom, Teams, or with live streaming programs like OBS.

Main Features:

Use with Linux (64 bit only)

The GNU/Linux client is a combination of a Video4Linux2 device driver and an executable app that transfers the stream from the phone to the driver. Sound support is provided via the ALSA Loopback device. The authors assume you are somewhat familiar with the system and how to use the Terminal, and only offer the following rather limited installation and usage instructions.

Install in Linux Mint

1. Ensure the following dependencies are installed

sudo apt-get install gcc make linux-headers-`uname -r`

In my case they were already installed

2. If droidcam is already installed, make sure it's not open.

3. Get the latest client and install:

cd /tmp/
wget https://files.dev47apps.net/linux/droidcam_latest.zip
echo "952d57a48f991921fc424ca29c8e3d09 droidcam_latest.zip" | md5sum -c --
#OK?
unzip droidcam_latest.zip -d droidcam && cd droidcam
sudo ./install

The install script will try to auto-sign the drivers if you have secure boot enabled. If the signing fails, you'll be prompted to manually take care of signing the driver by following the 'Secure Boot Module Signing' instructions for your distro (you can Google it).

If all goes well, you can ensure the video device is installed via

peter@helios:/$ lsmod | grep v4l2loopback_dc
v4l2loopback_dc 24576 0
videodev 225280 4 videobuf2_v4l2,v4l2loopback_dc,uvcvideo,videobuf2_common
peter@helios:/$

You should see v4l2loopback_dc in the output as above

4. Open up a V4L2 compatible program (VLC player, Skype, Cheese, etc) and you should see DroidCam listed as a video device (or it might be listed as /dev/video).

5. Sound support is also available. After the above installation succeeds, you can then run

sudo ./install-sound

in the same directory. This will load the Linux ALSA Loopback sound card which the Droidcam client will use for audio input.

6. Start the droidcam client via the Terminal, or create a launcher if you're using gnome. There is also a simpler cli client, droidcam-cli.

7. Check the connection post on how to connect. If all goes well, you should see the output in the chat application, and you're done!

Extra Notes

Video rotation: You can achieve portrait video by inverting the webcam. See the HD Mode section below.

Android USB connections: The client app will try to invoke adb automatically, provided adb is installed.
Debian-based Linux users can do: sudo apt-get install adb
Fedora/SUSE-based Linux users can do: sudo yum install android-tools

iOS USB Connections: The client will try to communicate with usbmuxd to detect and connect to your iOS device. Make sure usbmuxd is installed and running.

Sound support: In order to get the mic to show up in PulseAudio you can either run pacmd load-module module-alsa-source device=hw:Loopback,1,0 (you may need to adjust the last number), or by editing /etc/pulse/default.pa as described here. On some systems you need to do this after launching the droidcam client, and before you connect to the app.
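A quick way to confirm that the Loopback source has appeared (the exact names will vary) is to list the PulseAudio sources - a sketch:

pactl list short sources | grep -i loopback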

HD Mode – Change webcam resolution

Removal of Droidcam

Close any running programs to make sure droidcam is not in use and run sudo /opt/droidcam-uninstall

My tests and verdict on Droidcam

The video link works to VLC but breaks in Cheese. Sound has not been installed or tested so far. More to follow.

SSH login from Android - JuiceSSH

I recalled using an Android App a while ago to log into a laptop to unscramble a problem which prevented login, so I did a search and found the App I had used was JuiceSSH. This was incredibly easy to set up and I could log straight into pcurtis@defiant with just a password as I expected; I did an ls to check it was working and also ran pmvarrun. The connection was easy to save, with or without a password. I had to accept and save a host key as usual, but ignored the option in Settings to clear them. It looks ideal as a backup and perhaps for extracting files via the cloud.

Secure File Transfers to and from Android to Linux - AndFTP

Having cracked the problem of remote login from our Android machines using JuiceSSH, I wondered about file transfers, and searches showed that the AndFTP App I already use for uploading to our web site can also be used for transfers using scp to/from our machines; a few minutes later I found I could log in, list folders and make transfers. The only thing to note is that you swap between a local view and a remote view, and when both are set up you can make the transfers after selecting the files - very easy.

8th October 2020

Working with eCryptfs - Draft Section for merging

Note: Most of this has been merged into other pages in particular to A Linux Grab Bag.

Un-encrypting a home folder encrypted using eCryptfs

This section came about from the work in updating the Lafite laptop to Mint 20 from Mint 19.2 by doing a fresh install into a 'spare' dual-boot partition. This naturally involved first backing up all the users' home folders before the fresh install.

The procedures here for backing up an encrypted home folder and restoring it depend on a characteristic of the implementation of eCryptfs under Ubuntu/Mint which is regarded by many as a flaw.

Currently an encrypted home folder remains mounted after you log out and into another user. This makes it easy to make a tar backup of an encrypted home folder which has been used but is not currently in use, as the user has logged out and it is therefore 'stable' (no files changing and no activity), by logging into another temporary user to access it. An alternative way (if the 'flaw' is removed) would be to log in as the user from a console (terminal) rather than via a GUI to mount the home folder, then do the backup from a different user. A quick way to access a console is via Ctrl Alt F1, which gives a standard user/password login that will mount the encrypted home folder. This has all been covered in greater depth earlier in the tar backup sections.

We now need to know how the basic home folder encryption works. If you examine /home you will find home folders for each user and you will also see an additional folder called .ecryptfs, within which are folders corresponding to each encrypted user, and within them are folders called .ecryptfs and .Private. .Private has encrypted copies of every file and they are decrypted on the fly using configuration information held in .ecryptfs. The actual user's home folder just contains symlinks to these folders. The presence of valid symlinks and the associated folders seems to be all that is needed for the 'on-the-fly' encryption to be 'recognised'. This type of encryption is often known as 'stacked encryption' as the encryption is carried out on top of an existing file system. In contrast, LUKS encryption is carried out on a whole partition or volume which is then formatted with a file system.

Whatever we do, the first action is to make a backup, and the backup is normally made from the unencrypted version. So to make a backup tar archive all we need to do is log in to the user to mount the home folder, log out to keep the file system unchanging, and then log in from a different user and make the archive. The files will be decrypted as they are accessed and we then have a complete decrypted and archived copy with all permissions, owners and symlinks preserved. The typical command to create the archive is:

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" --exclude=/home/*/.gvfs /home/user1/

So to change to an un-encrypted home folder we just need to replace the home folder with a decrypted one. We can do this simply by renaming the user's home folder (and preferably renaming the user's folder in /home/.ecryptfs) and replacing the folder with one extracted from the tar archive. It is best to reboot and log back into the alternative user to make sure the decryption routines are inactive before the replacement by extraction from the tar archive. The matching command to extract the folder to its original position is:

sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /

Following extraction the two symlinks can/should be removed. One can then log out of the temporary user, back into the user, and use the unencrypted home folder. After testing, all the backups can be deleted.
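Pulled together for a hypothetical user1 (the names for the renamed copies are arbitrary), the renaming and tidying steps around the extraction look something like this:

# before extracting - move the encrypted versions aside
sudo mv /home/user1 /home/user1-ecryptfs-old
sudo mv /home/.ecryptfs/user1 /home/.ecryptfs/user1-old
# after extracting - remove the now redundant symlinks from the restored folder
sudo rm /home/user1/.ecryptfs /home/user1/.Private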

Moving a user with an encrypted home folder to a different machine

A similar procedure can be used to transfer an [encrypted] user to a different machine or partition. If such transfers are planned you MUST ensure the users have the same id numbers, which in practice means they are installed in the same order. Normal installations provide users with ids starting at 1000. id in a terminal provides a useful summary of information on the user and their groups. The following is typical output from id:

peter@defiant:~$ id
uid=1001(peter) gid=1001(peter) groups=1001(peter),4(adm),24(cdrom),
27(sudo),30(dip),46(plugdev), 114(lpadmin),124(scanner),134(sambashare),
1000(pcurtis),1002(pauline)
peter@defiant:~$

Screenshot

Our machines all have three main users: a permanent administrative user, to avoid having to create temporary users for this sort of task and for synchronising/backing-up, and our own two users. The admin user is always the first to be installed as user_id_1000 and seems to have some slight extra privileges used by unison for synchronising; then Peter and Pauline's users follow with ids 1001 and 1002. In a family the user with id 1000 could be the surname and the personal users could use the Christian names. User_id_1000 is not used for any sensitive information, email or browsing with passwords so it does not need to be encrypted.
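If a user ever has to be recreated on a new install, the id can be forced so the order matters less - a sketch with a hypothetical user name:

sudo adduser --uid 1001 peter
id peter    # confirm the uid and groups match the old system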

Reinstalling a user with an encrypted home folder in a dual boot system including those with other encrypted folders

Up to now my approach has been to decrypt users' home folders before reinstalling as part of a dual boot system, moving between machines or transferring to a new machine, then re-encrypting if appropriate. This section first covers more background about how my machines are set up, then my attempts to avoid decrypting every folder when doing a fresh install, before looking at how best to proceed in the future.

All my systems have the home folders mounted in a separate partition and several can be dual booted as they have two partitions containing root folders for different Linux systems. This enables me to do major updates from, say, Mint 19.x to Mint 20 by a fresh install yet preserve the individual users' home folders, provided they are installed into the new system in the same order, with only minor changes in configuration. There are also partitions for shared data mounted at /media/DATA and sensitive data at /media/VAULT. Currently all filesystems are EXT4.

The complexity is increased when encryption is used for an additional volume for sensitive data, and my laptops now all have a LUKS volume in a partition mounted at login at /media/VAULT. Each LUKS volume has 8 slots for different passwords which can mount it, so it can be mounted by up to seven users with different passwords. When the new system is installed the new users now have to not only be installed in the same order but also use the same passwords as before to enable the LUKS volume to be mounted.

It is also desirable to encrypt the home folders of selected users using eCryptfs, either during the initial installation or at a later stage. This poses an additional set of problems when upgrading, which I avoided on the Defiant and the Helios by removing the encryption before reinstalling the new Mint 20 system. That worked but has disadvantages, as the users' home folders need to be re-encrypted. That is not difficult but has the major problem that it needs a lot of workspace - up to 2.5 times the size of the user's home folder in wasted space.

On the Lafite I removed the encryption on the primary user and fully expected I would be able to re-install using the same home partition and keep user 1000, but that did not turn out to work. When I went through the install from the LiveUSB and got to providing information on the user, it had the button for using an encrypted home checked and greyed out, so it had looked at the existing set-up and found that some of the users had encrypted home folders, although the actual user I was adding was not encrypted.

This meant I had to back out and start again without mounting the existing partition, creating a new user. I then had to change the system to mount my old partition at /home at boot by adding it to /etc/fstab and rebooting, not something to inflict on a newbie. This left a few Mbytes of wasted space in / but not enough to be worth chasing. These were all things I had done before but it adds to the work and learning curve for someone new to Linux. I had already removed the encryption from user 1001 (peter) as well, so I could then add peter as a user in 'Users and Groups' and only a little tidying of the configurations was needed to have two users transferred to Mint 20 and one left accessible by booting into Mint 19.2, as originally intended, to allow time to fully configure the system and install all the programs without risk to the main user of the machine. Configuration includes such activities as reducing the use of swap, setting up battery saving by timeouts for hard drives, automounting the VAULT using PAM mount and setting up Timeshift. These are all best completed before the main users are transferred.

This took a good part of a day elapsed time, but not all of it was spent actually working on the installation. At this point I decided to try to add the final user 1002 with an eCryptfs home folder. I found the password change was greyed out and I tried with and without the option of no password. Without a password it just failed; with a password it asked for my LUKS password for the /media/VAULT drive and let me in, but without a password functionality was severely limited (no sudo for example) and again the GUI did not allow me to set a password. So I forced a reset to the old password using sudo in a terminal:

sudo passwd user1002

which just prompts twice for the new password for user_id-1002 without requesting the old password

and after a reboot I could then use sudo etc

It is essential that the password you set is the same as the old password, otherwise the unwrapping in eCryptfs will fail. NOTE: this is the only time sudo should be used with passwd when you have an encrypted home folder.

Conclusions

My overall conclusion is that it is simpler, safer and possibly quicker to unencrypt all the folders before the reinstallation and then re-encrypt as required. If you know how to modify fstab to automount folders and are short of space then the procedure above may be worth trying, but make the backups anyway. If all the home folders are encrypted the system may sort it all out (untested), but you must keep the installation order and passwords the same and you should still make the backups.

Changing the user password safely when you have an encrypted home folder

The normal GUI utility accessed via Menu -> Users and Groups does not work, as it has the change password section greyed out. I believe this is a bug, as that used to be the only recommended way when you had an encrypted home folder.

So recall that the encryption uses a wrapped key (passphrase) that you hopefully saved earlier. When you log in with an encrypted home folder your normal user login password is used to unwrap the key for subsequent on-the-fly decryption. The built-in utilities use the PAM module when you change a password, and under most circumstances PAM handles changing the user login password and updating the unwrapping password in synchronisation. Any utility which checks the old password as well as allowing you to set the new password should allow PAM to keep the passwords in step. Those that do not check the old password, and depend on being root or using sudo to force changes or resets of the user password, DO NOT change the passphrase for encryption - that would be a security problem, allowing any user who gained root privileges to access the encrypted folder. So use of sudo passwd does not check the old password, does not synchronise the eCryptfs decryption, and you're in yogurt.

So when you change your user password you must be logged in as that user and just use passwd without sudo.

passwd

That tells you which user it will change and asks for the user's current password before asking for the new password and a repeat to verify it. I have checked that this works and correctly changes the password, a couple of times on one of my machines. https://askubuntu.com/questions/33730/will-changing-password-re-encrypt-my-home-directory has the best explanation I have found, and some thoughts on swimming in yogurt if you have messed things up and lost the linkage, which I have not tried.

Block Devices (Disk and USB Drives) - blkid versus lsblk and /etc/fstab
Not sure if this has a home outside of the Diary

None of my previous documents have gone into any depth about the details of block devices, the main ones we use being hard disks, solid state drives and USB sticks. I have looked for a good definition of a block device and have not found one. Basically the name comes from the fact that almost all practical bulk storage devices are read and written in blocks of data, typically 512 bytes or greater. It is obviously not practical to read or write random single bits or bytes from a spinning disk, and even solid state drives have similar constraints. So in practice block devices means data storage devices. We have already discussed the need to divide up (partition) large storage devices so they can be used for different purposes, often with different filesystems. Here we are discussing how to examine the partitions in detail and how they are mounted into the overall Linux file system when the system is booted.

The method by which block devices are mounted at boot is basically very simple and is initially set up during installation. Each mount point is determined by a single line in the file /etc/fstab. Initially the drive and partition were specified by something like /dev/sda1 mapped to a place in the Linux directory structure, but that had problems if the drive partitions were changed, as the partition number could change, or even the drive if an additional drive was added. So now the UUID of the partition is used - a unique alphanumeric string which is part of the metadata in the filesystem - so a formatted drive can always be mounted at the same place even if partitions are moved or drives added and removed. That is where the utility blkid comes in, as it provides information on all the drives identified at boot time, including their device designation and UUID. It does not need root permissions to run unless you want to also see drives mounted after boot.

There is a further utility which I recently discovered which is more powerful still and does not need root permissions: lsblk provides a simple listing of the drives and partitions in a terminal, whilst adding the -f parameter provides a lot more information including the UUIDs.
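In their simplest forms:

lsblk -f    # tree of drives and partitions with filesystem type, label and UUID
blkid       # drives identified at boot; add sudo to also see drives mounted later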

So, looking at our simplest system, the Helios, with only an SSD with multiple partitions and one partition encrypted with LUKS: first with gparted, which is the easiest way to modify partitions and provides the UUID for the selected partition via Partitions -> Properties

Screenshot

or with Gnome disks which shows the UUID for the selected partition

Screenshot

These correspond to the output of blkid in a terminal, which is definitive for the drives mounted at boot time and referenced in the /etc/fstab file

Screenshot

Another useful display is my custom version of lsblk, which gives all the information needed to easily link a UUID to a partition and reflects the situation after hotplug USB drives have been plugged in. Use:

lsblk -o NAME,FSTYPE,LABEL,SIZE,FSUSE%,UUID,MOUNTPOINT

Screenshot

Now let's have a look at the system table which determines the mount points when the machine starts, by opening it as root in xed so it can be changed, using the following incantation in a terminal:

xed admin:///etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda2 during installation
UUID=651de848-8c52-4d24-aa5f-b5f81ffaa8ce / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda1 during installation
UUID=1CE7-37D3 /boot/efi vfat umask=0077 0 1
# /home was on /dev/sda3 during installation
UUID=a0cade01-4161-45a2-8427-24baa8e3c88b /home ext4 defaults 0 2
# /media/DATA was on /dev/sda5 during installation
UUID=d8e192cf-0c5a-4bd6-a3a9-96589a68cde5 /media/DATA ext4 defaults 0 2
# swap was on /dev/sda4 during installation
UUID=3745b328-a576-482d-b05b-ac7f65720a64 none swap sw 0 0

 



Copyright © Peter & Pauline Curtis
Content revised: 15th January, 2021