Diary of System and Website Development
Part 34 (January 2021 ->)

1 January 2021

Change from NTFS to ext4 partition for DATA on Defiant

This turned out to be even easier than expected, although the elapsed times for copying and synchronising were very long. The stages were:

  1. Log into primary user 1000 (pcurtis)
  2. Check that everything on /media/DATA was synchronised onto the LUKS_G drive, using terminal commands such as date && sudo du -h --max-depth 1 "/media/LUKS_G/My Video" | sort -rh and date && ls -Alh "/media/DATA/My Video"
    to check the requirements of the synchronisation. The date command is included because these checks are done regularly and the output is saved to give an audit trail.
  3. Edit /etc/fstab as root and comment out the line mounting /media/DATA as a NTFS partition
  4. Use gparted to format the partition as ext4 and then label as DATA - Two stages
  5. Find the UUID of the new partition using lsblk -f and/or properties in gparted
  6. Edit fstab to mount the ext4 drive at /media/DATA using parameters copied from /home and the UUID found above.
  7. Then and only then restart after which an empty /media/DATA should be present.
  8. Set user and group of /media/DATA to pcurtis (1000) and adm and permissions to 775 (see the sketch after this list)
  9. Copy all the folders required back from LUKS_G (takes hours)
  10. Check ownership, group and permissions and adjust if required including .Trash folders (Using standard pre-sync terminal commands)
  11. Run Unison from primary user pcurtis (1000) to confirm DATA is identical to LUKS_G.
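
A minimal sketch of the commands behind step 8, assuming the same uid/gid conventions (prime user 1000, group adm) used by the backup script later in this diary:

sudo chown -R 1000:adm /media/DATA
sudo chmod -R 775 /media/DATA
# per-user trash folders must belong to their own user, e.g. for uid 1001
sudo chown -R 1001:1001 /media/DATA/.Trash-1001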

Job done!

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda3 during installation
UUID=d0b396c2-ff7f-4087-9ddc-1fb56d4a679a / ext4 errors=remount-ro 0 1
# /home was on /dev/sda4 during installation
UUID=99e95944-eb50-4f43-ad9a-0c37d26911da /home ext4 defaults 0 2
# UUID=2FBF44BB538624C0 /media/DATA ntfs nls=utf8,uid=pcurtis,gid=adm,umask=0000 0 0
UUID=82d55e72-bf32-4b4f-9f84-f0a97ce446b7 /media/DATA ext4 defaults 0 2

# swap was on /dev/sdb3 during installation
UUID=178f94dc-22c5-4978-b299-0dfdc85e9cba none swap sw 0 0

The changed lines in the fstab above (the commented-out NTFS entry and the new ext4 DATA entry) lead to:

peter@defiant:~$ lsblk -o NAME,FSTYPE,UUID,SIZE,MOUNTPOINT
NAME   FSTYPE    UUID                                   SIZE MOUNTPOINT
sda                                                   232.9G 
├─sda1 vfat      06E4-9D00                              500M 
├─sda2 ext4      2b12a18d-5f49-49b0-9e1a-b088fd9d7cc1    41G 
├─sda3 ext4      d0b396c2-ff7f-4087-9ddc-1fb56d4a679a    41G /run/timeshift/back
├─sda4 ext4      99e95944-eb50-4f43-ad9a-0c37d26911da 141.6G /home
└─sda5 crypto_LU ae0c3ea4-28b2-42e7-9804-f69a31567659   8.8G 
sdb                                                   931.5G 
├─sdb1 ntfs      269CF16E9CF138BF                       350M 
├─sdb2 ntfs      8E9CF8789CF85BE1                       118G 
├─sdb3 swap      178f94dc-22c5-4978-b299-0dfdc85e9cba  15.6G [SWAP]
├─sdb4                                                    1K 
├─sdb5 ext4      82d55e72-bf32-4b4f-9f84-f0a97ce446b7 698.3G /media/DATA
└─sdb6 crypto_LU 9b1a5fa8-8342-4174-8c6f-81ad6dadfdfd  99.2G 
  └─_dev_sdb6
       ext4      0b72c485-6f2a-4c86-b9b7-c03f01225edb  99.2G /media/VAULT
peter@defiant:~$ 

6 January 2021

Unintended consequences of changes to DATA partition to ext4

The main consequence has been that the Unison synchronisation databases in .unison have been invalidated and get rebuilt the next time Unison is used. This is a slow activity, taking many hours for each synchronisation to a different machine or backup disk, as the whole ~600 Gbytes in use has to be read. At 60 MB/sec that is 10,000 seconds or nearly 3 hours, which is optimistic for a hard drive - allow double that.

This has led me to run two users simultaneously. I have always been cautious about using the Switch User option rather than logging out of one user before logging into another, but it worked well when rebuilding these databases on Gemini, as the rebuilds could be set running by the primary user before I switched to my own user name. If you plan to switch users a lot you really need more than the 4 Gbytes of memory on Gemini, as the swap space was almost fully used for the activity, as well as all the cache and buffer space. Once the databases are built, even the full synchronisations take only a couple of minutes.

This did mean that the monthly backup took far longer than expected to synchronise between machines and backup drives, as I added an extra synchronisation between drives for completeness. It also needed a lot more checking of changes, as I had done some sorting and rationalisation of the information stored on the DATA drive and moved it around, so folders and data were deleted in some places and added in others.

Would I have made the change if I had realised? Probably yes, as ext4 is a better, more robust file system than ntfs under Linux and I have found big transfers under ntfs seem to slow down to a crawl in nemo. It also makes for greater consistency between my machines.

8 January 2021

Draft for Github Issue now actioned

LUKS encrypted removable USB drives not mounting in expected manner. #343

Reproducibility:

The problem occurs most of the time on all four of my machines running Mint 20 Cinnamon 4.6.7 or Mint 20.1 Cinnamon 4.8.4 Beta. All have two or more users and it occurs for all users (encrypted or otherwise) and with all five of my backup drives tested. Further information on a sample machine is at the bottom of the posting.

Expected behaviour

Most of the time the actual behaviour after the passphrase is given:

Other experiments

Literature searches:

=================== End of Draft Uploaded ====================================

Investigation of gnome-keyring

peter@defiant:~$ ps -efl | grep gnome-keyring
1 S peter 1419 1 0 80 0 - 78492 - Jan09 ? 00:00:03 /usr/bin/gnome-keyring-daemon --daemonize --login
0 S peter 103621 103163 0 80 0 - 2259 pipe_w 05:43 pts/0 00:00:00 grep gnome-keyring

peter@defiant:~$ grep -r gnome_keyring /etc/pam.d
/etc/pam.d/lightdm:auth optional pam_gnome_keyring.so
/etc/pam.d/lightdm:session optional pam_gnome_keyring.so auto_start
/etc/pam.d/common-password:password optional pam_gnome_keyring.so
/etc/pam.d/cinnamon-screensaver:auth optional pam_gnome_keyring.so
/etc/pam.d/lightdm-greeter:auth optional pam_gnome_keyring.so
/etc/pam.d/lightdm-greeter:session optional pam_gnome_keyring.so auto_start
peter@defiant:~$

peter@defiant:~$ cat /var/log/auth.log | grep gnome-key
peter@defiant:~$

1 February 2021

Monthly Activity Schedule and Checklist [WIP]

Whilst finalising my writing of Grab Bag I decided that I should add a checklist of monthly activities - mainly backing up, but also various maintenance activities and checks. The idea was to test and validate the overall procedures at the start of December. It was intended to have sufficient background and supporting information to enable a normal user to carry out the backing up etc to the level required for the procedures in the Linux Grab Bag page to be used to rebuild a machine when disaster strikes. I found that not to be as easy as I expected, as there were a number of quirks (aka bugs) and potential problems that really needed to be addressed before a satisfactory and foolproof procedure and checklist could be finalised. This led to considerable background work, including developing some scripts for the routine activities, and this section took several additional months to develop before it could be incorporated. In the meantime it was developed in parts 33 and 34 of the Diary.

I went back to basics and looked at what counted as critical. I worked much of my life in the space game on satellites, where reliability was paramount, and one component in many of the reviews was called a FMECA (Failure Mode, Effects and Criticality Analysis). I used a similar approach to look at the overall system of back-ups and redundancy that was in place, starting with the importance of particular information, including its timeliness. This showed a couple of failure or loss points which would have very serious impacts. It also revealed a false sense of security: what seemed to be multiple redundancy in many areas was actually quite the opposite, and allowed a single mistake or system error to propagate near instantaneously to all copies.

I am going to look at an example you possibly have. Passwords and passphrases have become much more critical as hacks have increased, so they need to be more complex and difficult to remember and should not be repeated. Use of password managers has become common, and loss of the information in a password manager would certainly count as a critical failure; even with a monthly backup, a failure could result in loss of access to many things, especially if you routinely change passwords as recommended. A quick check showed we have over 250 passwords in our manager, of which I would guess 200 are current, and a month of changes could easily be over 10. You might think it is not that risky because they are present on several machines, but that is an illusion: they are linked through the cloud, which means a mistake on one machine could lose a key on all machines, and a serious fault could destroy your database on all of them. This makes the configuration and reliability of one's cloud systems another source of critical single point failures.

So by the end I decided that I should implement the following before coming back to complete a section on a backup schedule:

  1. Backup a number of critical pieces of data on a daily basis independent of the cloud
  2. Investigate the robustness of pCloud and monitor its performance on a daily basis
  3. Convince myself the procedures for the remainder of the critical information were robust.
  4. Understand and work round problems in mounting encrypted drives
  5. Write some scripts to make various backup activities easier and more repeatable

I believe I have now reached that point and all the above have been completed and documented in earlier sections of the diary and have been tested and/or are in use.

The following has been transferred from Diary part 33 where it was originally written in late November and has been modified extensively to remove duplication.

Overview

The purpose of this section is to identify all the various areas which need to be backed up on a regular basis and put together a plan and implementation. Most of this has already been covered in

The intention was to generate an extra and explicit monthly activity list which would add routine housekeeping activities to those needed to maintain the backups required to easily rebuild a system in the case of a disaster and to keep our various machines synchronised. The idea was to end up with a 'checklist' which could be run monthly, and the whole would become another section or appendix in Grab Bag.

One important extra area identified at an early stage was that there are fast-changing pieces of information which are at risk because they are synchronised in the cloud. An error on a single machine could rapidly propagate, and any sense of redundancy from being on multiple machines is illusory. The most important is the data file for Keepass, as that contains the password information for everything. Alongside it is the todo list, not so important but still varying on a daily basis.

Other cloud-based information includes the calendar, address lists and emails. Emails are left on the server so pose less of a problem, and the address books and calendars exist in two Google accounts, reducing the risks and making the timescales less urgent. The Thunderbird and Firefox profiles also contain the contacts and calendars as well as bookmarks etc and are in the home folder, so they are backed up once a month along with all the user configuration, Desktop etc.

So any plan for backup still has to take into account data changing on a daily basis but synchronised on several machines via the cloud, which makes it less robust. In addition I have had a number of problems with pCloud Sync.

My solution is to use a layered approach with a daily local backup on multiple machines within each month, before the results are transferred to external drives on a monthly basis. So every day the todo.txt and Keepass2 files are automatically copied to a backup folder with a date appended. They are then pruned so that only the most recent daily files are kept, then weekly and monthly ones. The backup folder for these small files lives in the user's home folder, so all the daily copies are archived monthly as part of the user's home folder.
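
A minimal sketch of the kind of daily copy-and-prune job described above. The file names, locations and the simple keep-the-last-seven pruning are illustrative only - the real script also keeps weekly and monthly copies:

#!/bin/sh
# Illustrative daily backup of the small fast-changing files (sketch, not the actual script)
SRC="$HOME/pCloudDrive/Shoebox"          # assumed location of the synced files
DEST="$HOME/daily-backup"
mkdir -p "$DEST"
cp "$SRC/todo.txt" "$DEST/todo_$(date +%Y%m%d).txt"
cp "$SRC/passwords.kdbx" "$DEST/passwords_$(date +%Y%m%d).kdbx"
# Simple pruning: keep only the seven most recent copies of each file
for stem in todo passwords; do
    ls -t "$DEST/${stem}"_* 2>/dev/null | tail -n +8 | xargs -r rm --
done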

In the end it took a couple of months of development before coming to this final action list, and all the underpinning work in developing techniques and writing scripts to make life easier and more predictable is covered in the previous Diary part 33.

All the additional software and scripts have been installed on every machine and extensively tested, but not yet written up in the various howto pages.

Monthly Actions List

The following is an action list of Housekeeping Activities that should routinely be run at least monthly.

  1. Check Timeshift is working and there is plenty of space - Timeshift runs in the background but once a month it should be checked and excess manual snapshots pruned. Automatic snapshots are automatically pruned to keep 3 days, 3 weeks and 3 months of snapshots. They do not need to be backed up away from the machine - a reinstall is easier.
  2. Check that pCloud is working and AndroidSync, Phonebox and Shoebox are being Synced - by checking the dates and/or contents of the daily log files present in those folders.
  3. Check for updates in Applets, Desklets and Themes - now made easy by adding the Spices Update Applet.
  4. Check for system updates in Update Manager - apply and reboot the machine, then log back into the normal user (this mounts the user's home folder if encrypted)
  5. Plug in the back-up USB drives, provide the password and preferably tick the 'Forget the Password Immediately' option rather than the default 'Remember the Password until you logout', and keep the drive mounted until you have completely finished. This is best done immediately after the reboot.
  6. Log out then log in to the prime user pcurtis (id 1000) ready to backup all the other users
  7. Adjust Ownership of DATA and VAULT and any back-up drives in use - Optimises sharing between users during routine use and essential for synchronisation using Unison between machines and backup drives. There is a script backup_users.sh which should be run as root by sudo sh backup_users.sh in terminal (also often a copy in ~/Desktop/Shoebox/Scripts/)
  8. Create Tar archives of home folders of users - this is done as part of the script backup_users.sh
  9. The prime user rarely changes but there is an additional script backup_pcurtis.sh run as root from any different user to backup the home folder of the prime user (id 1000) - run every few months.
  10. Synchronise DATA and VAULT between machines and backup drives using Unison from the admin user id 1000. They need to be synchronised between machines and transferred to the [3] backup drives at least once a month. In practice some synchronisation between machines is needed far more often. This is normally done from gemini, which has more profile files, after it has been backed up itself.
  11. Do not forget to stop and un-mount the backup drives before unplugging them - best done from the Removable Drives Applet. Keep one backup drive off site for security and replace one in the Grab Bag.
  12. Check for new versions of Mint, which come approximately every 6 months; download and add a LiveUSB to the Grab Bag if and when you choose to update.

The above list is actually more than a simple checklist: it has been arranged in a very specific order and specifies a number of reboot and login activities which avoid a number of 'features' (a polite term for issues and bugs) and make the monthly activities quicker and easier.

Notes:

Scripts used for maintenance and housekeeping

During development of this action list it became clear that use of a few scripts would make life much easier and more predictable. They are covered in detail in an earlier part of the diary and the action list above depends on their 'availability'. They fall into two classes:

Notes on Implementation [WIP]

There are a number of anomalies in how Mint works which mean that the best way to proceed when dealing with encrypted drives may not be what one initially thinks is the logical way. In particular there are two issues, taken into account above, that we have to consider.

  1. Our backup drives are encrypted with LUKS and a bug in gnome-keyring means that they are best mounted after a complete reboot before any changes of user or logouts are made. If not, the 'Forget the Password Immediately' option rather than the default 'Remember the Password until you logout' must be used.
  2. Remote logins and use of unison to synchronise from another machine to a different user will un-mount VAULT, which is encrypted as a security measure. This can lose data if a user has open files on VAULT.

Both these issues are understood well enough to work out a procedure to avoid problems. Firstly I am going to divide the machines into two classes:

  1. A 'central machine' which carries out the synchronisations of 'data' to and from all the other machines and the hard backup drives. Although it is often useful to keep a couple of machines in step using a central machine is much easier for a monthly and total synchronisation. This, in my case, is a desktop, less powerful than the other machines but adequate for the job.
  2. A series of other machines, all multi-user, and capable of sharing tasks and, if necessary able to provide a similar environment to a user who would normally use a different machine whilst on-the-road or in case of a failure.

Initially I had hoped the majority of the monthly backup could be controlled from the central machine without major interruptions to a user on one of the other machines. The issues raised above mean that is only possible for an experienced user prepared to use a few risky workarounds. I am therefore adopting the low-risk and easy-to-understand procedure above, which avoids rather than works round the problems. The penalty is minimal but does involve a reboot of the 'peripheral' machines before starting if every workaround is to be avoided, and the procedure above has been modified to reflect the change.

Latest Scripts used for maintenance and housekeeping

(Transfered from Diary part 33 for completeness)

During development of the action list it became clear that use of a few scripts would make life much easier and more predictable. They fall into two classes, one of which we have already covered:

It turns out that it is simplest for the user to combine the second pair of activities and backup_users.sh (run as root) currently does both.

The script below has been developed to the point that it completely automates the activities once one has logged into the prime user and has one or more of my backup drives mounted. It now detects the machine name to use in the backup file names, so it is machine independent and can easily be edited to add extra drives or change the list.

#!/bin/sh
echo "This script is intended to be run as root from the prime user id 1000 (pcurtis) on $(hostname)"
echo "It expects one of our 6 standard backup drives (D -> I) to have been mounted"
#
echo "Adjusting Ownership and Group of DATA contents"
chown -R 1000:adm /media/DATA
test -d /media/DATA/.Trash-1001 && chown -R 1001:1001 /media/DATA/.Trash-1001
test -d /media/DATA/.Trash-1002 && chown -R 1002:1002 /media/DATA/.Trash-1002
#
echo "Adjusting Ownership and Group of VAULT contents"
test -d /media/VAULT && chown -R 1000:adm /media/VAULT
test -d /media/VAULT/.Trash-1001 && chown -R 1001:1001 /media/VAULT/.Trash-1001
test -d /media/VAULT/.Trash-1002 && chown -R 1002:1002 /media/VAULT/.Trash-1002
#
echo "Adjusting Ownership and Group of any Backup Drives present"
# Now check for most common 2TB backup Drives
test -d /media/LUKS_D && chown -R 1000:adm /media/LUKS_D
test -d /media/SEXT_E && chown -R 1000:adm /media/SEXT_E
test -d /media/SEXT4_F && chown -R 1000:adm /media/SEXT4_F
test -d /media/LUKS_G && chown -R 1000:adm /media/LUKS_G
test -d /media/LUKS_H && chown -R 1000:adm /media/LUKS_H
test -d /media/LUKS_I && chown -R 1000:adm /media/LUKS_I
echo "All Adjustments Complete"
#
echo "Starting Archiving home folders for users peter and pauline to any Backup Drives present"
echo "Be patient, this can take 10 - 40 min"
echo "Note: Ignore any Messages about sockets being ignored - sockets should be ignored!"
#
test -d /media/LUKS_D && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_D/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_D && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_D/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/SEXT_E && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT_E/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/SEXT_E && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/SEXT_E/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/SEXT4_F && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT4_F/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/SEXT4_F && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/SEXT4_F/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/LUKS_G && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_G/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_G && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_G/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/LUKS_H && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_H/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_H && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_H/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/LUKS_I && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_I/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_I && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_I/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
echo "Archiving Finished"
echo "List of Archives now present on any backup drives follows, latest at top"
test -d /media/LUKS_D && ls -hst /media/LUKS_D | grep "backup_"
test -d /media/SEXT_E && ls -hst /media/SEXT_E | grep "backup_"
test -d /media/SEXT4_F && ls -hst /media/SEXT4_F | grep "backup_"
test -d /media/LUKS_G && ls -hst /media/LUKS_G | grep "backup_"
test -d /media/LUKS_H && ls -hst /media/LUKS_H | grep "backup_"
test -d /media/LUKS_I && ls -hst /media/LUKS_I | grep "backup_"
#
echo "Summary of Drive Space on Backup Drives"
df -h --output=size,avail,pcent,target | grep 'Avail\|LUKS\|SEXT'
echo "Delete redundant backup archives as required"
exit
#
# 20th January 2021

Notes:

  1. The script sets the ownership to that of the prime user and the group to adm, and then corrects the ownership of the trash folders to the corresponding user and group to enable Trash to work correctly
  2. It creates tar archives of the home folders of both of the normal users (peter and pauline)
  3. The script has my six most common backup drives hard-wired and will set the ownership etc on any and all that are mounted, then back up in turn to each
  4. It lists all the backup archives on the drives and the spare space available.

It is currently called backup_users.sh and is in the prime user's (id 1000) home folder; it needs to be made executable and run as root, ie by

sudo ./backup_users.sh

You should also back up the prime user (id 1000), pcurtis in our case, from a different user. The script can be modified to do this as below and is currently called backup_pcurtis.sh. The setting of permissions is not required so it is much shorter.

#!/bin/sh
echo "This script is intended to be run as root from any other user and backs up the prime user pcurtis (id 1000) on $(hostname)"
echo "It expects one of our 6 standard backup drives (D -> I) to have been mounted"
#
echo "Starting Archiving home folder for users the prime user pcurtis on $(hostname)"
echo "Be patient, this can take 5 - 20 min"
echo "Note: Ignore any Messages about sockets being ignored - sockets should be ignored!"
#
test -d /media/LUKS_D && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_D/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/SEXT_E && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT_E/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/SEXT4_F && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT4_F/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/LUKS_G && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_G/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/LUKS_H && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_H/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/LUKS_I && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_I/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
echo "Archiving Finished"
echo "List of Archives now present on any backup drives follows, latest at top"
test -d /media/LUKS_D && ls -hst /media/LUKS_D | grep "backup_"
test -d /media/SEXT_E && ls -hst /media/SEXT_E | grep "backup_"
test -d /media/SEXT4_F && ls -hst /media/SEXT4_F | grep "backup_"
test -d /media/LUKS_G && ls -hst /media/LUKS_G | grep "backup_"
test -d /media/LUKS_H && ls -hst /media/LUKS_H | grep "backup_"
test -d /media/LUKS_I && ls -hst /media/LUKS_I | grep "backup_"
#
echo "Summary of Drive Space on Backup Drives"
df -h --output=size,avail,pcent,target | grep 'Avail\|LUKS\|SEXT'
echo "Delete redundant backup archives as required"
exit
#
# 3 February 2021

 

5 February 2021

Using Adobe Connect with Linux

The Open University uses Adobe Connect for teaching and interviews. There is no client (App) for Linux but the specifications say that it ought to work with a HTML client in Chrome or Firefox. Unlike Zoom there is no free version although the use of the Mobile apps appears to give free access to an existing Adobe Connect System.

When we try to join a meeting in either Chrome or Firefox we just get a message to say that Flash needs to be installed. Flash has, of course, now been discontinued.

There are very few posts in the forums involving Linux but one at https://www.connectusers.com/forums/viewtopic.php?id=25740 gave the following suggestion:

Try adding ?html-view=true (i.e. https://xxxx.adobeconnect.com/room/?html-view=true) to the URL and check again. I had a participant on Linux who ran into the problem as well but was able to call up the room in Firefox when changing the URL.

If there are existing query strings I assume the ? is changed to an & to concatenate onto the end.
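
For example, with a hypothetical URL just to show the pattern:

https://ou.adobeconnect.com/room/?proto=true
becomes
https://ou.adobeconnect.com/room/?proto=true&html-view=true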

I have tried that on their own examples and it does not solve the problem, possibly as they involve a number of extra queries which invoke extra stages which lose the html-view=true option.

I have, however, had more success with the OU, where I was initially given a 'bare' url of the form https://ou.adobeconnect.com/rfk1jqcsiuo8/ and that worked when changed to https://ou.adobeconnect.com/rfk1jqcsiuo8/?html-view=true.

This gave me hope and I persevered. The way the OU accesses the pages is via a script which hides the url from sight before connecting, but I could get the url from a connection on the Android pad and then patch it to have the magic string on the end, and that worked even when there were other query strings, provided I changed the ? to an & and added it to the end. This was tried in both Chromium (which I had loaded as part of my tests) and Firefox.

There are still microphone and sound problems to sort out but it is looking more promising.

We used Firefox for a tutorial lasting 2 hours but a serious problem occurred, namely a huge memory leak in the Adobe Connect HTML5 client which meant we had to reload the tab 5 times during the 2 hours. This equated to a leak of circa 6 Gbytes per hour, possibly the largest I have ever seen.

To enable testing to continue I have increased the swap file size by 5 fold (see below), which can only be a temporary fix. There is nothing specific in the Adobe forums but such effects have occurred in the past.

7th February 2021

Mounting of LUKS volumes - Bug/Feature of pam_mount also affects Switching Users

I have suffered a problem for a long time: when using Unison to synchronise between machines, my LUKS-encrypted common partition, which is mounted as VAULT, is unmounted (closed) when Unison is closed. The same goes for any remote login, except when the user accessed remotely and the user logged in on the machine are the same. This is bad news if another user is using VAULT or tries to access it.

I have somewhat belatedly realised this also affects Switching between users

The following is what I previously wrote about the problem

I finally managed to get sufficient information from the web to understand a little more about how the mechanism (pam_mount) works to mount an encrypted volume when a user logs in remotely or locally. It keeps a running total of the logins in separate files for each user in /var/run/pam_mount and decrements them when a user logs out. When a count falls to zero the volume is unmounted REGARDLESS of other mounts which are in use with counts of one or greater in their files. One can watch the count incrementing and decrementing as local and remote users log in and out. So one solution is to always keep a user logged in, either locally or remotely, to prevent the count decrementing to zero and the automatic unmounting taking place. This is possible, but a remote user could easily be logged out if the remote machine is shut down or other mistakes take place. A local login needs access to the machine and is still open to mistakes. One early thought was to log into the user locally in a terminal then move it out of sight and mind to a different workspace!

The solution I plan to adopt uses a low-level command which forms part of the pam_mount suite. It is called pmvarrun and can be used to increment or decrement the count. If used from the same user it does not even need root privileges. So before I use Unison or a remote login to, say, helios as user pcurtis, I do a remote login from wherever using ssh, then call pmvarrun to increment the count by 1 for user pcurtis and exit. The following is the entire terminal activity required.

peter@helios:~$ ssh pcurtis@defiant
pcurtis@defiant's password:

27 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Last login: Sat Oct 24 03:54:43 2020
pcurtis@defiant:~$ pmvarrun -u pcurtis -o 1
2
pcurtis@defiant:~$ exit
logout
Connection to defiant closed.
peter@helios:~$

The first time you do an ssh remote login you may be asked to confirm that the 'keys' can be stored. Note how the user and machine change in the prompt.

I can now use unison and remote logins as user pcurtis to machine helios until helios is halted or rebooted. Problem solved!

I tried adding the same line to the end of the .bashrc file in the pcurtis home folder. The .bashrc is run when a user opens a terminal and is used for general and personal configuration such as aliases. That works but gets called every time a terminal is opened; I found a better place is .profile, which only gets called at login. The count still keeps increasing, but at a slower rate. You can check the count at any time by:

pmvarrun -u pcurtis -o 0
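
For reference, the .profile addition is just the single pmvarrun line; a slightly defensive sketch of it is:

# Added to ~/.profile of pcurtis: bump the pam_mount login count at each login
# so the count never falls back to zero and VAULT stays mounted
if command -v pmvarrun >/dev/null 2>&1; then
    pmvarrun -u pcurtis -o 1 >/dev/null
fi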

I got round the problem of having accidentally closed my VAULT by switching user: an ssh login to the user remounted VAULT and it was not unmounted when I exited. Fortunately I did not have any files open in VAULT at the time. I now have to think over the consequences, as it is useful to be able to switch users.

My current thinking is to do an ssh login to the user one intends to switch to and use pmvarrun before the switch as a bodge, but this is not a long-term solution. This bug seems to rule out use of Switch User if a pam-mounted folder is in use.

10th February 2021

Increasing Swap file size

We have been having memory leak problems with Adobe Connect, which was using approximately an additional 6 Gbytes/hour. As a temporary fix whilst investigating, I looked at the existing swap file size and found it was smaller than I would have expected on gemini. I had allowed the Mint installer a free hand and it had allocated a 2 Gbyte file, whilst I would normally have used a partition of at least twice the memory size.

There is a good tutorial on swap at https://itsfoss.com/create-swap-file-linux/. I have however used dd to create swap files as fallocate must be avoided on ext4 file systems for creating swap as it potentially creates sparse files - see https://askubuntu.com/questions/1017309/fallocate-vs-dd-for-swapfile/. dd can also be used to append to an existing file to increase the size.

I decided the safe way to proceed was to create a new swap file of 8 Gbytes called /swapfile8G, then use it to replace the existing file by renaming once everything was set up. Note that /etc/fstab only contains a filename and path rather than a UUID, as would be the case with a swap partition, so this is valid. The procedure was:

peter@gemini:~$ swapon --show
NAME TYPE SIZE USED PRIO
/swapfile file 2G 3M -2
peter@gemini:~$ sudo swapoff /swapfile
[sudo] password for peter:
peter@gemini:~$ swapon --show
peter@gemini:~$ sudo dd if=/dev/zero of=/swapfile8G count=8 bs=1G
8+0 records in
8+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 22.8708 s, 376 MB/s
peter@gemini:~$ sudo mkswap /swapfile8G
Setting up swapspace version 1, size = 8 GiB (8589930496 bytes)
no label, UUID=cbac2848-15d8-4821-bb8d-e972bbecf7e0
peter@gemini:~$ sudo chmod 0600 /swapfile8G
#
# Renamed /swapfile to /swapfile2G and /swapfile8G to /swapfile in nemo via Open as Root
#
peter@gemini:~$ sudo swapon /swapfile
peter@gemini:~$ swapon --show
NAME TYPE SIZE USED PRIO
/swapfile file 8G 0B -2
peter@gemini:~$

I have left the original swap so I can change back once this crisis is over!

The disadvantage of having a swap file is that it takes up space in the root partition and is then saved in Timeshift - a double whammy - leaving insufficient space to make room for a dual install for Mint major version upgrades.

So I finally reduced back to a 4G swap file on gemini and deleted all the other sizes once the initial crisis was over. (13 Feb 2021)
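
For completeness, a sketch of the steps to drop back to a 4 Gbyte file, assuming the old 2 Gbyte copy is still present as /swapfile2G:

sudo swapoff /swapfile
sudo dd if=/dev/zero of=/swapfile4G count=4 bs=1G
sudo chmod 0600 /swapfile4G
sudo mkswap /swapfile4G
sudo rm /swapfile /swapfile2G
sudo mv /swapfile4G /swapfile
sudo swapon /swapfile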

11 February 2021

Adobe Connect - Further testing and experiences

There is a version checker here https://helpx.adobe.com/adobe-connect/connect-downloads-updates.html which showed the OU was due to upgrade on 20th February to version 11.2 so was still using 11 when we were using it.

Microphone, Speaker and video setup

The Adobe Connect pre-meeting test checks your computer and network connections, and helps you troubleshoot connection problems before a meeting begins. You can access the pre-meeting test at https://onlineevents.adobeconnect.com/common/help/en/support/meeting_test.htm or, for an end-to-end test, replace onlineevents by your own identifier, eg: https://ou.adobeconnect.com/common/help/en/support/meeting_test.htm

These tests showed that the microphone and speakers had to be selected both in Sound Settings and in the AC set-up. The microphone level could only be changed in the Sound Settings and, unlike Zoom, there is no automatic adjustment. In the end we chose the microphone in our webcam as it had the best background level on Gemini, but used our Sony Bluetooth headset for output. The headset should have a mic but it does not seem to work (or we have never found how to enable it!).

12 February 2021

Prevo X 12 TWS Earpods from 7DayShop

These X 12 TWS earpods are similar to Apple AirPods in appearance and functionality and come with obscure instructions. Looking on the internet it seems that there are many versions of these with different charging arrangements. 7Dayshop sell two versions from Prevo.

The original AirPods Pro used the Apple H1 chip, a custom-made chip. The copycat versions of AirPods often use the Qualcomm TWS headphone chip QCC scheme which is also a noise reduction chip. The designation X 12 TWS is used with many implementations and the internal firmware can be upgraded so the functionality may differ.

The Gearbest site seems to have useful information and an introduction point for the similar i12 TWS is https://www.gearbest.com/blog/how-to/i12-tws-operation-instruction-how-to-use-the-i12-tws-earbuds-7725; another set of information is at https://www.thephonetalks.com/i12-tws-manual/

Firstly one must realise that the earbuds are touch sensitive rather than having a physical button; the sensitive area is marked. Secondly, what happens with the taps is dependent on the software, and Linux and our Android devices respond differently to some of the codes. The Android devices seem to be closest to what is in the various instructions in the box and those I have found on the internet. The earbud(s) it seems can be operated independently as well as together and have at least one microphone, but so far I have not got it to operate, although there is some indication one is expected under Android as the BT settings have switches for sound, microphone and phone. The pictures below show the main functions which my Android pad responds to for sound control.

  

In addition various sets of instructions indicate there is a 'Phone' mode where a single touch can answer, reject or end a call depending on which earbud is touched.

Longer touches also have various functions which can include activating a voice assistant (~3 secs) and turning on and off (~2 and 5 secs).

Pairing seems to be largely automatic. They pair with each other if removed from the charger together and with existing pairs when turned on. I found with Linux I had to frequently remove and recreate the pairing which is a feature of my other BT headsets and some devices.

The X 12 TWS is capable of reporting its battery level under Android.

Instructions for [i12] and x12 TWS:

Different versions of firmware may change the actions slightly between manufacturers even when using the same chips, and the actual actions depend on the software of, for example, the music player. Rhythmbox under Linux does not control well.

Warning multiple taps may change settings you do not want - 4, 5 or 6 taps depending on model may change the language!

Using X12 TWS for Video conferencing

I have not found a way to use the microphone properly under Linux. There are a number of features of the way BT profiles are historically implemented which need fudging to use the [low quality mono] microphone intended for phone calls in parallel with high quality audio output; this is done in phones and some desktop systems but not Linux. See, for example, https://askubuntu.com/questions/354383/headphones-microphone-is-not-working and https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/508522 for some background.

Practical Experience with X 12 TWS

The earpods are fairly easy to pair both under Linux and Android and automatically reconnect when extracted from the charging box under Android. In both cases they can easily be selected for listening to music, and the audio quality is good enough to make listening to classical music a pleasure; it may not be the quality of my Sony headphones but they are much less intrusive. Volume is easy to adjust once one has had some practice to determine the sensitivity and the touch sensitive area.

The concept is however designed round use for phones with easy switching from music to enable one to answer a call, hang up and have the music continue which it does well on my phone. It even reads out the number which is calling!

The X12 does not seem to have good control over Rhythmbox, and stop/start and track control is not as good as with Android, although volume control works fine.

17th February 2021

Boosting Microphone Levels

After much experimenting with the X12, the microphone on the Sony DB-BTN200 headphones and my other microphones, I started to look for ways to boost the microphone level. In the past I used alsamixer, which is installed by default, but it made no difference, so I installed pavucontrol which is the 'gold standard' control for PulseAudio as used by Mint. It enabled me to boost the level from my existing webcam and the Sony headphones enough to make them much more sensitive in general and also to adjust for specific programs, which is just what I required.
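
For reference, installing and opening it is just:

sudo apt-get install pavucontrol
pavucontrol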

20 February 2021

Changes to GitHub Authorisation

GitHub now requires that one uses a Personal Access Token (PAT) to access it through the command line (or with the API). This is in addition to the existing username/password combination which is still required to log into GitHub on the web. The PAT is 40 hex digits long so is very secure. I suspect that two factor authorisation will be required soon to go with the username/password - it is currently only recommended.

It appears you can have several PATs with different authorisation levels and, as a security precaution, GitHub automatically removes personal access tokens that haven't been used in a year. See https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token but note the example token has far less options than now available and required - see my example below.

Creating a token

  1. Log in as usual to GitHub on the web
  2. In the upper-right corner of any page, click your profile photo, then click Settings.
  3. In the left sidebar, click Developer settings.
  4. In the left sidebar, click Personal access tokens.
  5. Click Generate new token.
  6. Give your token a descriptive name.
  7. Select the scopes, or permissions, you'd like to grant this token. To use your token to access repositories from the command line, select repo and workflow
  8. Click Generate token.
  9. Click to copy the token to your clipboard. For security reasons, after you navigate off the page, you will not be able to see the token again.

Once you have a token, you enter it instead of your password when performing Git operations over HTTPS.

NOTE: Personal access tokens can only be used for HTTPS Git operations. If your repository uses an SSH remote URL, you will need to switch the remote from SSH to HTTPS.

The PAT can be cached in the same way as the password it replaces, and once created it can be edited on GitHub even whilst cached if you have missed a scope you need. I missed Workflow and had to add it, having been warned when I tried my first push.
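
A sketch of the related Git commands, with an illustrative repository URL rather than one of mine:

git remote set-url origin https://github.com/username/repository.git
git config --global credential.helper 'cache --timeout=3600'

The first switches an existing remote from SSH to HTTPS; the second caches the token in memory for an hour so it does not need to be retyped for every push.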

What I have currently for my PAT called Github Access is:

6 March 2021

Adding Extra Fonts to Linux so they are accessible from Web browsers, Libre Office and programs running under Wine.

This is a draft update of a section of common text found in several places including ubuntu.htm and ouopen.htm which needs updating to take account of the latest versions of Windows, Ubuntu and Mint and is now part of a separate and much more comprehensive page - Fonts in Linux and on Web Sites

Mint (and Ubuntu) contain many extra fonts which can be installed using the synaptic package manager including the Microsoft Core Fonts which are not installed as standard as they are not Open Source. These fonts are used widely on the web and for older documents and include well known fonts such as Arial and Times New Roman. They can be installed using the ttf-mscorefonts-installer package (use the command line as you need to accept a licence agreement).
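
For example, from a terminal:

sudo apt-get install ttf-mscorefonts-installer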

I also wanted to install some extra Fonts, namely the Nadianne True Type font I use for Invitations, Wine Labels etc., and the various Windings fonts which provide ticks and other symbols used by Pauline for marking Open University eTMA scripts. Nadianne is not a standard Windows font and originally I think came with a printer but the others are common to Windows and the defaults in Microsoft Office, hence the need to import them for marking of OU scripts.

This brings me to a major issue in editing shared files originally created in Microsoft Office. LibreOffice will do a good job of substituting fonts in the document with open source equivalents, but a change of font will change the size and spacing of the text, so the layout will be changed, which may be unacceptable when the documents have to be converted back and returned. The worst problems seem to occur with drawings and mixed drawings and text, and we have one example where equations used some drawings overlaid and the meaning was completely changed due to text slippage under brackets - that was obvious although the cause was not initially. Worse still, text boxes may no longer be large enough to contain the text and the ends of text strings can be lost, again changing the meaning completely in several cases we have seen. Combined with a double conversion from .docx to .doc and then to LibreOffice, which is used by many tutors including ourselves for marking, one is no longer sure what one is seeing! This is not a satisfactory situation - one can just imagine the effects in complex technical, commercial documents and agreements even if everybody is using Windows - thank you Bill for yet another setback to the progress of mankind. This means that one needs to add the common fonts used in Office, such as Calibri which is the default font (starting with Office 2007) for Word, Powerpoint, Excel and Outlook.

There should be no licence issues in using them on a dual booted machine with a valid copy of Windows/Office or for viewing or editing existing documents created in Office. If you have doubts or wish to use them in documents you create a licence can be purchased from Microsoft. You can find the required fonts in c:\Windows\Fonts. In Windows this is a 'virtual' folder and contains links which Linux does not understand so you need to copy/paste the fonts you need under windows to a new folder for future use in Linux.

Having obtained any extra fonts you need, they have to be installed in Linux. There is useful information at https://askubuntu.com/questions/3697/how-do-i-install-fonts but to summarise:

The fonts which are available to all users are stored in folders under /usr/share/fonts with truetype fonts in subfolder /usr/share/fonts/truetype in Ubuntu Linux so type in a terminal:

nemo admin:////usr/share/fonts/truetype

So I have created a new folder for my extra fonts which I call ttf-extra or ttf-all-extra by a right click -> create folder etc.

Drag the extra fonts into the ttf-extra folder from where they were stored

The folder and the files within it MUST have the permissions and owner set correctly to allow everyone to access it otherwise you may get some most peculiar errors in Firefox and some other programs. It should be OK if you use the procedure above but check just in case that they are the same as all the other folders and files.

If the fonts are only required by a single user then creating, if not already present, a folder .fonts in your home directory and copying the fonts into it may be a better solution. It has advantages as the fonts are retained through a system reinstall. That location is however deprecated and I have changed to ~/.local/share/fonts, which is now used by Mint and can hold individual fonts or folders of fonts.
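
A sketch of the per-user route, assuming the extra fonts are in a folder called ttf-extra (the cache refresh below applies equally):

mkdir -p ~/.local/share/fonts
cp -r ttf-extra ~/.local/share/fonts/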

Then alert Mint/Ubuntu that you added the fonts by typing the following in a terminal

sudo fc-cache -f -v

This rebuilds the font cache. You may also need to close and open programs needing the font or log out and back into a user. A reboot is the nuclear option. Recent experience shows that Mint seems to detect changes automatically.

You can check the fonts present by fc-list and the following gives an easy to read listing

fc-list -f '%{file}\n' | sort

Avoiding Duplicate Font files and use of Updated Font files.

The above procedure has ignored the issue of duplicated fonts. If you just pick up or install extra fonts without thought you will end up with duplicate fonts, this does not seem to crash anything but I can find nothing in the documentation covering how the font used is chosen.

I, and most people, start by installing and licensing the msttcorefonts font set using the ttf-mscorefonts-installer package to add a number of important but not open-source fonts for rendering web pages:

Andale Mono
Arial Black
Arial (Bold, Italic, Bold Italic)
Comic Sans MS (Bold)
Courier New (Bold, Italic, Bold Italic)
Georgia (Bold, Italic, Bold Italic)
Impact
Times New Roman (Bold, Italic, Bold Italic)
Trebuchet (Bold, Italic, Bold Italic)
Verdana (Bold, Italic, Bold Italic)
Webdings

The font files provided date back to 1998 and are missing modern hinting instructions and the full character sets, but render adequately for web use and are small. However each version of Windows (and MS Office) has improved versions of many of the same fonts, so just adding the font files indiscriminately from, say, C:\Windows\Fonts will end up with duplicates of Arial, Times New Roman, Verdana and many of the others.
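
A rough way to spot font faces provided by more than one file is to list family and style and look for repeats, for example:

fc-list -f '%{family}: %{style}\n' | sort | uniq -d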

Current Situation (March 2021)

My current solution is to have two sets of all additional fonts files I need over and above a basic install - the first has the extra font files needed when the msttcorefonts are installed and is a folder called ttf-extra and the second is self contained, is used without msttcorefonts and is called ttf-all-extra. My machines may use either approach depending on their requirements.

The set in ttf-extra comprises the extra fonts I need beyond those provided by msttcorefonts, such as Nadianne, the extra files for fonts such as Webdings, and those used as defaults in MS Office such as Calibri. The ttf-extra approach has been in use for many years - mine contains ~53 truetype font files - and has advantages on Linux-only machines as you explicitly accept the licence agreement. The font files in ttf-extra are more compact as they date from the Windows XP / Office 2003 days and should render faster, though less accurately, than more recent versions.

The self-contained set in ttf-all-extra is however more up to date. It starts with the msttcorefonts set, to which I added the extra font files from ttf-extra as above. I then updated this complete set of font files with the latest versions from Windows 10 Pro, removed any symbolic links and converted all file names to lower case. This gives me a folder with ~90 truetype font files. I add the ttf-all-extra folder (which is backed up in My Programs) to either /usr/share/fonts/truetype or, preferably, to ~/.local/share/fonts depending on whether I want to make it available to all users or just a single user. The msttcorefonts folder is not needed and, if present, should be removed from /usr/share/fonts/truetype to avoid duplication with the fonts in ttf-all-extra.

To Do: Add Logic behind choice of per user for fonts.

10th March 2021

Commands to tidy up files

Convert files in current folder to lower case

for i in $( ls | grep [A-Z] ); do mv -i $i `echo $i | tr 'A-Z' 'a-z'`; done

From https://linuxconfig.org/rename-all-files-from-uppercase-to-lowercase-characters
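
A variant of the same idea that also copes with spaces in file names (a sketch using a glob rather than parsing ls):

for f in *[A-Z]*; do mv -i "$f" "$(printf '%s' "$f" | tr 'A-Z' 'a-z')"; done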

Copying files, converting links to the linked files

Use cp with the -L, --dereference option to "always follow symbolic links in SOURCE"

cp -rL ttf-all-extras ttf--unlinked

From manual page

22 March 2021

More on substitution fonts

NOTE: This section is now part of a separate and much more comprehensive page - Fonts in Linux and on Web Sites

Amongst the many rabbit holes I have been down whilst looking at font issues I have found a lot of interesting information. Perhaps the biggest issue every system has is what to do about working with proprietary fonts used in 'documents' in the widest sense, including web sites. Even the most common fonts used are actually proprietary - examples being Times New Roman, Arial and Courier. These were the defaults in the early days of Windows and their use was licensed through Microsoft as part of Windows. This did not matter in the early days and nobody noticed when nearly all systems were Windows based, but the situation is very different now: Apple and Google are big players, mobile devices dominate the commercial market place and Open Source solutions need to be addressed. This is where font substitution comes in.


Carlito: Google's Carlito font (google-crosextrafonts-carlito) is a modern, sans-serif font, metric-compatible with Microsoft's Calibri font in regular, bold, italic, and bold italic. It has the same character coverage as Calibri. Carlito is the default Calibri replacement in the LibreOffice suite. It can be installed, complete with the configuration to make it a substitution for Calibri, from the package fonts-crosextra-carlito.

Caladea: Google's Caladea font (google-crosextrafonts-caladea) is a modern, friendly serif font, metric-compatible with Microsoft's Cambria font in regular, bold, italic, and bold italic. It has the same character coverage as Cambria. Caladea is the default Cambria replacement in the LibreOffice suite. It can be installed, complete with the configuration to make it a substitution for Cambria, from the package fonts-crosextra-caladea.
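
Both come straight from the repositories, for example:

sudo apt-get install fonts-crosextra-carlito fonts-crosextra-caladea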

 

Fixing problems with LibreOffice displaying inappropriate Bitmapped versions of fonts such as Calibri

Let's first have a look at the magnitude of the problem: the following piece of text shows what happens when the display reverts to a bitmap compared with the proper TrueType rendering using outlines. The problem does not show for all sizes of font, zoom and screen resolution; in my case it is obvious displaying Calibri at 6 and 11.5 point with no zoom in LibreOffice.

This is a problem mentioned on the Ubuntu and other forums. It occurs because certain font sizes and screen resolutions cause the rendering engine to revert to using the embedded bitmap version, which scales badly. It is not obvious why this occurs, and even less obvious why use of an embedded version is allowed when bitmapped fonts are rejected by default. Fortunately the problem is known and the way to reject embedded bitmaps is simple, only involving adding a few lines of configuration. The simplest way is to add the rejection of embedded bitmaps to the configuration which is already present to reject simple bitmapped fonts.

The file in question is /etc/fonts/conf.avail/70-no-bitmaps.conf (which is actually symlinked into /etc/fonts/conf.d). This symlinking is a simple way to allow the dozens of font configuration options to be enabled and disabled; they are automatically brought into /etc/fonts/fonts.conf, which should never be edited directly.

So /etc/fonts/conf.avail/70-no-bitmaps.conf has six lines added to become:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <its:rules xmlns:its="http://www.w3.org/2005/11/its" version="1.0">
    <its:translateRule translate="no" selector="/fontconfig/*[not(self::description)]"/>
  </its:rules>
  <description>Reject bitmap fonts</description>
<!-- Reject bitmap fonts -->
  <selectfont>
    <rejectfont>
      <pattern>
        <patelt name="scalable"><bool>false</bool></patelt>
      </pattern>
    </rejectfont>
  </selectfont>
<!-- Also reject embedded bitmap fonts -->
  <match target="font">
    <edit name="embeddedbitmap" mode="assign">
      <bool>false</bool>
    </edit>
  </match>

</fontconfig>

You may need to regenerate the fonts cache by

sudo dpkg-reconfigure fontconfig
and certainly need to reopen programs such as LibreOffice Writer to see the changes.
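
An alternative which I believe just rebuilds the cache files, without the reconfiguration step, is fc-cache:

fc-cache -f -v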

A diagnostic trick is to open an editor such as mousepad like this:

FC_DEBUG=1024 mousepad
which will list all the files that are being scanned for font configuration as it opens - there will be dozens. For some reason it does not work with xed! The following is the result when passed through grep:

peter@defiant:~$ FC_DEBUG=1024 mousepad | grep -i bitmaps
Loading config file from /etc/fonts/conf.d/70-no-bitmaps.conf
Loading config file from /etc/fonts/conf.d/70-no-bitmaps.conf done
Scanning config file from /etc/fonts/conf.avail/70-force-bitmaps.conf
Scanning config file from /etc/fonts/conf.avail/70-force-bitmaps.conf done
Scanning config file from /etc/fonts/conf.avail/70-yes-bitmaps.conf
Scanning config file from /etc/fonts/conf.avail/70-yes-bitmaps.conf done
peter@defiant:~$

The above solution applies to all users. Instead, changes can be picked up from ~/.config/fontconfig/conf.d/70-no-embedded.conf for a single user; my file is just:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<!-- Reject embedded bitmap fonts -->
  <match target="font">
    <edit name="embeddedbitmap" mode="assign">
      <bool>false</bool>
    </edit>
  </match>
</fontconfig>

This is a more consistent solution if you are adding fonts on a per-user basis.
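
If you go this per-user route the folder will probably not exist yet; a minimal sketch of creating it and refreshing the cache for the current user:

mkdir -p ~/.config/fontconfig/conf.d
# create 70-no-embedded.conf there with the content shown above, then
fc-cache -f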

Changing the default fallback substitution fonts in Linux

See http://eosrei.net/articles/2016/02/changing-default-fallback-subsitution-fonts-linux

LibreOffice Font substitution table

I have not used this but it is included for reference as it looks like a useful feature.

https://blog.documentfoundation.org/blog/2020/09/08/libreoffice-tt-replacing-microsoft-fonts/

27 March 2021

Google Fonts [WIP]

NOTE: This section is now part of a separate and much more comprehensive page - Fonts in Linux and on Web Sites

Google provide a huge number of fonts licensed under the Open Font License which one can use freely in one's products and projects - print or digital, commercial or otherwise. The only restriction seems to be that you cannot sell the fonts on their own. Ubuntu/Mint packages some of these fonts as Open Source replacements for the Microsoft "C" font series which provides most of the Windows and Office default fonts.

Any of these fonts can be very easily added to Mint for use in LibreOffice or more widely. First go to https://fonts.google.com where you will find ~1000 fonts to choose from! There are many easy-to-use selection tools and it seems sensible to start by sorting by popularity and reducing the selection by Categories to serif, sans serif, monospace etc.
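
As an illustration (the zip file name depends on the font you download - Lato is just an example), a downloaded font can be installed for a single user from the terminal:

mkdir -p ~/.local/share/fonts
unzip -o ~/Downloads/Lato.zip -d ~/.local/share/fonts/Lato
fc-cache -f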

20th December 2021

Using the vi editor in Busybox

I needed to edit a configuration file on the Raspberry Pi using access via SSH. The Pi uses Busybox to provide a very cut-down set of Linux utilities which mostly have reduced functionality - vi is no exception. vi was THE original Unix editor and almost every Linux system has vi or its advanced version vim. It has a steep learning curve for those used to recent graphical editors but, once you are experienced, it is very effective for making quick changes to files, especially on a headless system such as a server or embedded system where only 'terminal' access is available.

General Tutorials on vi:

The version of vi in Busybox is cut down in some facilities but easier to use than I remember in other ways. All the basic operation seems to remain the same except that many :set commands are missing.

Cut and paste via Ctrl C, V and X are not available in vi, but in Busybox vi the cursor movement keys seem to work [almost] as normal in both Command and Insert modes. Right-click cut and paste works internally and to other programs, which is different to what I recall of vim. Centre click and roll also seem to work if you have a mouse.

yank, put and delete are the internal equivalents of copy, paste and cut; they work in Command mode and are more powerful than a newcomer to vi might expect as they can be combined with various selectors for complete lines and multiples.
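
A few examples in Command mode (press Esc first) for anyone new to vi:

yy      yank (copy) the current line; 5yy yanks 5 lines
dd      delete (cut) the current line; dw deletes a word
p       put the yanked or deleted text after the cursor
/text   search forward for "text"; n repeats the search
:wq     write the file and quit; :q! quits without saving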

Use under Android: Escape is fundamental so you may need to use the Hacker's Keyboard. JuiceSSH seems to add some useful extras to the keyboard which may make its use possible. In most implementations, including vi, Ctrl+[ is equivalent to Esc. The internal yank and put may be better than Android's copy and paste. I have not yet tried a copy and paste from another program into vi running through JuiceSSH.

January 30th 2022

Useful Code Snippets for Node-RED

Selecting fields and stripping characters from strings using awk

This is just a very simple example of awk which uses its ability to split an input into a series of fields $1 $2 etc, print them to the output and use a substring function when printing to strip characters from the start and/or finish. This is ideal for selecting information from an Exec Node for further processing, storage or display in Node-RED and is often combined with grep to select particular lines of output.

Strip characters from start of string in specified field.

One can use substr() to do string slicing eg

$ echo "very undecided!" | awk '{print substr($2, 3); }'
decided!

Strip a character from the end of a string

awk '{if (NR>0) {print substr($2, 1, length($2)-1)}}'

NR is the Number of Records so we can check we have records present - useful if the input is piped from a previous test.

length($2) gives the length of the second field; deducting 1 from that strips off the last character.

Example:

$ echo "very undecided!" | awk '{if (NR > 0) {print substr($2, 1, length($2)-1)}}'
undecided

The two can be combined and it is wise to check if it also works in busybox awk if you are testing the embedded version on a different machine.

peter@gemini:~$ busybox echo very undecided! | busybox awk '{if (NR > 0) {print substr($2, 3, length($2)-3)}}'
decided

Using Javascript to select fields

We can use JavaScript split(/\s+/) to divide multiple lines into an array, splitting on any whitespace including line feeds, and then work on the array elements. It is often useful to remove leading and trailing spaces with trim() first, e.g.:

myArray = msg.payload.trim().split(/\s+/);
// Note array indices start at 0 so for the second field use myArray[1]
msg.payload = myArray[1];

The function substring(start, end) can be used to extract the part of the string between indices start and end. The end argument is optional; note the use of the length property, the length of the string:

let text = "Hello world!";
let result = text.substring(3, text.length - 1);
// result is "lo world"

Using Grep in a Function Node

Grep is very well known and is our most powerful tool for searching for PATTERNs in FILEs (or stdin). Here it is used with input piped from a terminal command to select only those lines (individual records) containing a string, before passing them to a Function Node. What is less well known is that one can do OR and AND equivalents. Grep can be used on files but here we are using it with input on stdin.

grep AND

There is no actual AND in grep but a simple way is to apply it multiple times

grep charstr1 | grep charstr2

you can also use the -E, --extended-regexp option which apparently interprets PATTERNS as extended regular expressions. I would not like to explain further!

grep -E 'charstr1.*charstr2'

Both versions work under Venus with Busybox but I have never used the second version

grep OR

Match lines containing any of a number of strings with

grep -e charstr1 -e charstr2 -e charstr3

or

grep 'chrstr1\|chrstr2\|chrstr3'

Both versions work under Venus with Busybox

Other Grep options I often use

-i Ignore case
-v Select non-matching lines
-w Match whole words only
-r recursive (on files)

These all work in busybox

Rounding to 2 decimal places for display in Dashboard with associated text

The reduction from multiple decimal places before output is a common requirement.

//Example from a Function Node in Node-RED
var numb = msg.payload;
numb = Math.round(numb * 100) / 100;
msg.payload = "User Disk " + numb + "% full";
return msg;

Context variables

Note: the Inject, Change, Switch and Trigger nodes support context variables directly via dropdowns.

Note 2: Use of persistent global context variables (i.e. saved to file) requires modifications to the settings.js file in Node-RED.

Set Persistent Global Context Variable example

var cpu_switch = true ;
global.set("cpu_switch", cpu_switch , "file");

Or one can use a Change Node

'Gate' on Global Context Variable in Function Node Example

// Set gate state from global context variable or initialize if undefined
var cpu_switch = global.get("cpu_switch" , "file") || false;
if(cpu_switch === true) { return msg }
return null

Note the format of global.get and global.set; the variable name and storage can be in single or double quotes.

30th January 2022

Measuring Signal Quality on the Raspberry Pi

The usual interface utility, ifconfig, does not give any information on signal strength, quality or noise (at least in the busybox version) so we have to go to /proc/net/wireless for the information.

$ cat /proc/net/wireless
Inter-| sta-|     Quality      |   Discarded packets        | Missed          | WE
face  | tus | link level noise | nwid crypt frag retry misc | beacon          | 22
wlan0: 0000   67.  -43.  -256    0    0     0    0     33941  0

So what do they mean? The important ones are under the Quality heading:

Signal Strength (quality level in /proc/net/wireless)

Basically, the higher the signal strength (level), the more reliable the connection and the higher the speeds that are possible. The signal strength is specified in dBm (decibels relative to one milliwatt). Note that decibels are a logarithmic scale.

Values between 0 and -100 are possible, with values closer to 0 being better. So -51 dBm is a better signal strength than -60 dBm. For the Raspberry Pi 3B+:

-30 dBm probably means the antenna is virtually in contact with the Wifi Router!
-50 dBm is considered an excellent signal strength.
-67 dBm is the minimum signal strength for reliable and relatively fast packet delivery.
-70 dBm is the absolute minimum signal strength for reliable packet delivery.
-90 dBm is very close to the basic noise.

On Corinna the Signal Strength is -50 dBm at 5m when tethered to my Samsung Galaxy M32 mobile phone through 4 wooden bulkheads.

Link Quality

A network can be received with a very good signal strength but not as good a link quality. In simple terms, it means how much of the data you send and receive will make it to the destination in good condition.

The quality indicator includes data like the Bit Error Rate, i.e. the number of received bits that have been altered due to noise, interference, distortion or bit synchronisation errors; others are the Signal-to-Noise and Distortion Ratio. It is measured as a percentage or on a scale of up to 70. The figure in /proc/net/wireless is on a scale of 70, but it is important to understand that the reading is somewhat subjective and depends on the hardware, the manufacturer's figures and the driver software. So, unlike signal strength, it is harder to say which values are still considered to be OK, but it may still be useful when looking for interference from, for example, a microwave cooker in a house, dimmers on lighting or motors on a boat, and when choosing positions for equipment.

Node-RED Code Snippets for Quality

Exec Node

cat /proc/net/wireless | grep wlan0

Function Node

myArray = msg.payload.trim().split(/\s+/);
// link_quality_percent = Math.round(myArray[2]*100/70);
// global.set("link_quality_percent", link_quality_percent , "file");
// msg.payload = link_quality_percent + " %";
link_level_db = myArray[3]*1;
global.set("link_level_db", link_level_db , "file");
msg.payload = link_level_db + " dBm";
return msg;

March to August 2022

Kernel Changes required for Venus OS 2.8x for Raspberry Pi 4 v1.4 and higher.

Note: Most of the following is no longer necessary as the latest beta versions of Venus OS 2.90 now support the Raspberry Pi 4 v1.4 and higher in both Normal and Large versions.

Version 2.90 is under rapid development and will obviously be the way to go. I am currently 'trialing' Venus OS 2.90~22 Large and have a clean system with Node-red installed but without any of my software transferred or tested.

This section has therefore been moved from the Venus Node-RED pages and placed here for reference.

Whilst I was carrying out a number of experiments on interfacing to the GPIO on my Raspberry Pi I began to feel slightly vulnerable as I only have the one, which is at the heart of the solar and power system on Corinna, so I started to look for a spare for experiments. Unfortunately the Raspberry Pi 3B+, which was recommended and which all the software was written for, is no longer available or is on months-long delivery. I found, however, that a couple of people were working on the software for the latest Pi 4B - there was support for the first couple of revisions but the kernel in use at the time did not support the latest versions 1.4 and 1.5. The software from Victron in Venus OS v2.80 uses kernel 4.19 for the Pi 4, which is now very out of date, although other versions have moved to 5.10 which seems to support most of the interfaces on the Pi 4 revs 1.4 and 1.5.

I therefore decided to try to buy a Raspberry Pi 4B, which are also like hen's teeth to find. I eventually found a 2 Gbyte RAM one in a starter kit with case, microSD, micro HDMI cable and a 3A mains power supply at Pi Hut, where I bought my last Pi 3B+. It was not a ridiculous price as I would have had to buy some of the items in any case.

The new versions of the Victron Venus OS 2.90~nn which support the Raspberry Pi 4 in all versions including Large are available on the candidate feed at http://updates.victronenergy.com/feeds/venus/candidate/images/raspberrypi4/ . These are still beta versions and the source will change when version 2.90 is officially released.

The previous complex procedures to update the kernel and other essential changes worked with trial versions of the Venus OS, normal and large, and the micro HDMI cable allowed me to connect to a standard monitor for initial tests. The two most important sources of information at the start were below, although the links to software cannot be relied on - versions come and go and the large version disappeared shortly after I collected mine. There are various suggestions and recipes on how to compile a suitable kernel and install it, and you will now find a number of my postings, suggestions and the assistance I was given.

There is also a very good introduction to compiling a kernel by the Raspberry Pi people themselves at

I have patched and built kernels in the distant past on Ubuntu to get round a hardware problem but I had no idea what was going to be involved on the Pi. The problem is that it is a highly integrated computer featuring a Broadcom system on a chip (SoC) with an integrated ARM-compatible central processing unit (CPU) and on-chip graphics processing unit (GPU). The ARM processor varies between models of the Pi, and it is not helped by the fact that the Venus OS is directed towards real-time embedded systems with little inbuilt software. The kernels try to avoid wholesale changes as different ARM-based processors are used by means of Device Trees with Overlays and various other software called during the initial boot sequence, which very loosely does part of the job of the old CMOS BIOSes and the newer UEFI systems. I do not understand the details sufficiently to risk explaining further and making a fool of myself. The problems are compounded by the fact that, in practice, one has to work on a different computer with a different processor so everything has to be cross compiled. https://community.arm.com/oss-platforms/w/docs/525/device-tree gives an indication of how it is all configured.

To cut a long story short, the Victron Community threads above give enough hints and recipes, combined with a couple of days' reading, to understand basically what I was trying to do, and they allowed me to cross compile a compatible kernel and insert it into the Venus OS 2.80 designed for the earlier Pi 4B versions 1.1 and 1.2. That was extended to include large versions with Node-RED. Essentially this updates the kernel to the 5.10.y series, which is the minimum level to support the latest processor and peripheral chips on the Pi 4B version 1.3 and higher motherboards. In most Linux distributions the kernels get minor version changes dozens of times during their life and major changes come every couple of years. Support for a very limited number of LTS (Long Term Support) kernels is 6 years. Some recent LTS kernels with their End of Life (EOL) are: 4.19 EOL Dec 2024, 5.4 EOL Dec 2025, 5.10 EOL Dec 2026, 5.15 EOL Oct 2023. So an update from 4.19 to 5.10 as a base for the Venus OS is very sensible, providing 4 more years of support, rather than going higher; one should however note that the Raspberry Pi's own kernels have jumped to 5.15, and we may yet find some new feature needs it, but currently basing on kernel 5.10.y is sensible.

Linux is hosted on Git and it is easy to select any previous version; all significant releases have a branch and major changes and bug fixes are back-ported during their supported life. I have covered the use of Git elsewhere; having the whole Git tree for Linux, with a history going back tens of years, is gigabytes of download, but it is only needed once as successive commits (patches) can be applied to it easily. The much quicker alternative is to start with a shallow depth of history and download frequently. Whatever one does, the cloning and the stages of cross compilation take several hours of elapsed time, so one hopes it will not be too long before Victron take on the task. There is no formal support for the Pi but Victron have been extremely supportive of the Open Source movement and of people who wish to push to the limits or just tinker.

This has not cut the story as short as I would have liked, so it is time to get to the nitty gritty and provide recipes for those with a significant (undefined) previous knowledge of Linux who are not frightened of using the console (terminal) and have a suitable Linux machine. They are very much based on the work of @Johnny Brusevold and @bathnm and, as I said before, the stages are going to take many hours and currently need to be repeated every time you wish to upgrade, rather than the short upgrade probably applied automatically. But it allows use of the Pi 4B so it is a very important bridge at present, and essential updates should be rare. There are two sources of a 5.10 kernel: the one provided by Raspberry Pi, and one forked by @bathnm which has a number of the Venus patches applied. I followed the suggestion of @Johnny Brusevold and initially used the one from Raspberry Pi with the minimum modification to get Bluetooth working with Victron software, as he had provided a comprehensive recipe.

This all assumes you are using a Linux based machine preferably running a Debian based distribution. This could be a Raspberry Pi but it might be a little slow.

The first time, we need to install a few utilities on the Linux machine you will be using; some may already be present but this will make sure they are all there:

sudo apt install git bc bison flex libssl-dev make libc6-dev libncurses5-dev crossbuild-essential-armhf

Now we clone the branch of the kernel used in the latest Venus images, namely rpi-5.10.y, from the main Raspberry Pi Linux repository:

git clone --depth=1 --branch rpi-5.10.y https://github.com/raspberrypi/linux

The --depth=1 parameter restricts the history to the most recent commit to save downloading a vast amount of information that is not required, at the expense of not always being able to merge the latest commits.
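
If the shallow clone needs bringing up to date later it can usually be refreshed without a full history download; a hedged sketch, run from the folder containing the linux clone:

git -C linux fetch --depth=1 origin rpi-5.10.y
git -C linux reset --hard origin/rpi-5.10.y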

There have been changes by Victron in the Bluetooth device handler, so we need to download the latest Bluetooth file smp.c from the Victron code and incorporate it.

The file from their 5.10.y source which works is located here: https://github.com/victronenergy/linux/blob/fb01a308bf550ea244bcf2b465a01a0f19c6dd63/net/bluetooth/smp.c - place it in your home folder then copy it into the kernel code with:

cp -r smp.c linux/net/bluetooth/

cd linux

Tell the system the type of kernel we are making for the relevant ARM processor, which is different to that of the earlier Pi:

KERNEL=kernel7l

Start Cross compiling

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcm2711_defconfig

The next step is very slow; expect several hours on most machines. The -j 4 parameter tells make to run four jobs in parallel (adjust to the number of cores you have).

make -j 4 ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage modules dtbs

Now we write the Venus image file to our microSD card. The card should contain a recent image for the Pi 4 versions 1.1 to 1.2 downloaded from https://updates.victronenergy.com/feeds/venus/release/images/raspberrypi4/ . I use BalenaEtcher on my Linux machines to flash the image; it is an AppImage which does not need installing and is available for Windows and Macs as well as Linux. It is very quick and, importantly, verifies the image.

You can now mount your new card - adjust /dev/sdb1 and /dev/sdb2 to match your SD card device, which may differ; check and check again to prevent overwriting your own files:

mkdir mnt
mkdir mnt/fat32
mkdir mnt/ext4
sudo mount /dev/sdb1 mnt/fat32
sudo mount /dev/sdb2 mnt/ext4
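
The 'check and check again' is easiest with lsblk (as used earlier in this diary) before running the mounts:

lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT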

More Cross Compiling

sudo env PATH=$PATH make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- INSTALL_MOD_PATH=mnt/ext4 modules_install

Now we copy the new kernel into place on our SD card

sudo rm mnt/ext4/boot/zImage
sudo cp arch/arm/boot/zImage mnt/ext4/boot/
sudo cp -r arch/arm/boot/dts/bcm2711-rpi-4-b.dtb mnt/fat32/
sudo rm -rf mnt/fat32/overlays/*
sudo cp arch/arm/boot/dts/overlays/{ads7846.dtbo,mcp2515-can0.dtbo,mcp2515-can1.dtbo,pitft28-capacitive.dtbo,pitft28-resistive.dtbo,rpi-display.dtbo,rpi-ft5406.dtbo} mnt/fat32/overlays/
sudo cp arch/arm/boot/dts/overlays/README mnt/fat32/overlays/

Get two more files from the Raspberry Pi site, used in part of the boot sequence, and copy them to the card:

wget https://github.com/raspberrypi/firmware/raw/master/boot/fixup4.dat
wget https://github.com/raspberrypi/firmware/raw/master/boot/start4.elf
sudo cp -r {fixup4.dat,start4.elf} mnt/fat32/

and unmount your microSD card.

sudo umount mnt/fat32
sudo umount mnt/ext4

That got me a normal (not large with Node-RED) working Venus OS with everything I need working: Bluetooth, Wifi, Ethernet, USB, GPIO ports, HDMI for a monitor etc.

Now the extra steps to change to a Venus OS large with Node-RED.

This stage is a little obscure because of the way the large versions of the Venus OS are created by installing a normal system and then updating it to a large version; no full images for large versions are produced by Victron. I have not fully explored how the update is done internally, but Victron supply .swu files for all the updates which contain compressed images that are installed in turn into one of two rootfs partitions. The update process also seems to end up with an ext4 file system which does not fill the partition and is read-only. I am not sure why the partition is not filled, but it potentially causes a problem if we make changes independently for the new kernel which no longer fit - only about 10% headroom is provided. What follows is again based on the notes of @Johnny Brusevold, which I am expanding as I write up.

So the first thing to do is to adjust the partitions on the target microSD which we have just generated, which has a standard Venus OS without Node-RED. Victron provide a script to expand the filesystem, which can be found on your machine at /opt/victronenergy/swupdate-scripts/resize2fs.sh and which will also make it writable. There is an explanation at https://www.victronenergy.com/live/ccgx:root_access

Or you can do what I did, which is to modify it on a separate Linux machine with gparted, which has a graphical interface so you can adjust the sizes of the partitions to be more suitable; that also expands the filesystems to fill the partitions. That is covered in great detail in another Appendix, Venus OS and OS Large Partitioning and File Systems.

All the following assumes you are still in the linux folder.

The next stage extracts the update image and then mounts it as a loop device. cpio -iv extracts files from an archive in verbose mode, then gzip -d does the next decompression, which gives us the image itself that we require.

cpio -iv < ./venus-swu-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-large-XX.swu
gzip -d venus-image-large-raspberrypi4.ext4.gz

That is then mounted along with the two partitions on our repartitioned Venus OS small microSD card.

sudo mount /dev/sdb1 mnt/fat32
sudo mount /dev/sdb2 mnt/ext4s
sudo mkdir /mnt/virtual
sudo mount -o loop venus-image-large-raspberrypi4.ext4 /mnt/virtual

The following basically copies everything in the update .swu file over the ext4 partition, replacing existing files (which we extended in size), apart from lib/modules which contains the new kernel that we want to keep, and likewise the compressed kernel zImage. The boot partition is not changed so I am not sure why we bothered to mount it!

sudo rm -rf /mnt/virtual/boot/zImage*
sudo rm -rf /mnt/virtual/lib/modules/*
sudo cp -r /mnt/virtual/* linux/mnt/ext4/

The following makes an image you can put onto other cards. You may need to adjust for different card sizes. You can skip to the unmounting if an image file is not required. I skipped this as I have my own procedures for making compressed images, covered in Simple cloning of the microSD card as a Backup, so it is untested by me.

sudo dd if=/dev/sdb of=venus-image-raspberrypi4v1.4-vX.XX_XX-Large.rootfs.img bs=1024 count=1627153

sudo pishrink.sh venus-image-raspberrypi4v1.4-vX.XX_XX-Large.rootfs.img

gzip -k9 venus-image-raspberrypi4v1.4-vX.XX_XX.rootfs.img

and unmount your microSD

sudo umount mnt/fat32
sudo umount mnt/ext4

Additional thoughts and inconsistencies:

I seem to have a mix of zimage and zImage, which both seem to be compressed versions of the kernel. They are used because in many cases it is faster to decompress than to read a slow memory, and compressed kernels are often used on embedded systems. But which is used?

The .swu files can update most recent previous versions so must contain [almost] everything. So is it possible to do it the other way round and do a standard modification on .swu files rather than have to do the slow cross compiling for every update?

Anomalies including between Pi 3 implementations and Pi 4

Some of the GPIO pins seem to have been set up on the Pi 3 and not on my Pi 4 images

The RAM on my 2 Gbyte Pi4 is limited to 1 Gbyte - possibly wrong fixup files or overlays - or can we use a parameter to set such as add "total_mem=1024" or similar to /boot/config.txt which is alleged to reduce memory on larger Pis.

Summary Status of use of Pi 4

I feel more secure now I know I could get a working Pi 4 system as a fallback if the existing Pi 3B+ fails but I am far from ready to consider changing whilst the existing system is working - "If it ain't bust don't fix it" is always a good principle as is "Keeping layers of contingency".


23 August 2022

Use of the Busybox implementation of Cron in the Victron Venus OS

I have been looking for a solution to running shutdown, reboot etc from Node-RED, as they need to be run as root. This is easy when sudo is available as it can be set up so certain programs and users can run sudo, and hence run as root, without passwords, but busybox does not support sudo or a similar mechanism. So I have been looking at ways to run a job under root every 30 seconds or so which checks a flag that shutdown, reboot etc is requested. I initially investigated cron under busybox.

Cron always seems a simple concept but there always seem to be subtle differences between versions and implementations, so I had to start with a search for the control file location used by busybox in the Victron Venus OS; I found it in /data/etc/crontab. There are several common locations and the format has slight differences depending on location.

# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
5 2 * * * root /opt/victronenergy/swupdate-scripts/check-updates.sh -auto -delay
* * * * * root /data/every-minute.sh

You can see an existing job for software updates and my addition to run the script every-minute.sh. There are two formats and this is the one where the user the job is run under is specified before the script. Once per minute is the fastest repeat rate for cron jobs. If one wants every 5 minutes the job specification changes to:

# m h dom mon dow user command
*/5 * * * * root /data/every-five-minutes.sh

and I have checked that it works. One thing which caught me out was that the script file must have execute permissions (set with chmod +x); the crontab file itself only needs read access. You can look at the message log to see what is happening - I could not find a cron-specific log:

root@raspberrypi4:~# cat /data/log/messages | tail -20
.....
Aug 24 00:40:01 raspberrypi4 cron.info CROND[19127]: (root) CMD (/data/every-minute.sh)
Aug 24 00:41:01 raspberrypi4 cron.info CROND[19405]: (root) CMD (/data/every-minute.sh)
Aug 24 00:42:01 raspberrypi4 cron.info crond[866]: (*system*) RELOAD (/etc/crontab)
Aug 24 00:42:01 raspberrypi4 cron.info CROND[19685]: (root) CMD (/data/every-minute.sh)
Aug 24 00:44:01 raspberrypi4 cron.info CROND[20224]: (root) CMD (/data/every-minute.sh)
Aug 24 00:46:01 raspberrypi4 cron.info CROND[20779]: (root) CMD (/data/every-minute.sh)
.....

I confirmed that you do not need to run crontab after changes, they are automatically detected - see above where I changed the crontab to run every 2 minutes to test everything, and the rate halved after the automatic reload.

So how do we use the script running as root? My current mechanism is to read a file containing a single-line 'command'. When a shutdown or restart is required, the file is created containing the appropriate command line by a Node-RED Exec Node. The location is checked in the script and, if the file exists, it is read, the command executed as root, and the file immediately deleted to avoid the command being run a second time. The file may be deleted or overwritten from Node-RED to cancel. There are always dangers with any mechanism which allows commands to be run as root without further checks or passwords - it could easily be modified to only allow a list of permitted commands. The current version is:

#!/bin/sh
# /data/every-minute.sh - Run as Cron job - Needs Execute permissions
#
FILE=/data/home/nodered/.node-red/scripts/cmdfile
# When testing
# echo "$(date) - every-minute.sh called" >> /data/crontab_log
#
if test -f "$FILE"; then
  cmd=$(cat "$FILE")
  echo "$(date) - Running $cmd" >> /data/crontab_log
  rm "$FILE"
  eval "$cmd" >> /data/crontab_log
fi
exit 0
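
On the Node-RED side the Exec node just writes the wanted command into that file; a hedged example of the command an Exec node might run to request a reboot (the path matches the FILE used in the script above):

echo "/sbin/reboot" > /data/home/nodered/.node-red/scripts/cmdfile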

This all worked and proved the concept, but cron could not run faster than once per minute and was filling the system log files, so I returned to my original idea of using a timer loop with a delay, which can run much faster and is limited only by the processor overhead. That turned out to be smaller than I expected: running every second gave no noticeable increase in overall processor load on the Pi 4B. The script is very similar to that for the cron job but wrapped in an endless while loop including a sleep of 1 second.

#!/bin/sh
# /data/every-second.sh (run by rc.local) - Needs Execute permissions
#
FILE=/data/home/nodered/.node-red/scripts/cmdfile
while true
do
if test -f "$FILE"; then
   cmd=$(cat "$FILE")
   echo "$(date) - Running $cmd" >> /data/root_command_log
   rm "$FILE"
   eval "$cmd"
fi
sleep 1
done
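
The loop has to be started at boot; as noted in the script header and in my later rc.local list, a minimal way is to launch it in the background from /data/rc.local:

# In /data/rc.local
/data/every-second.sh &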

27 September 2022

Investigation of use of sudo in Venus OS 2.91~1

Sudo was added to the final beta versions of the Venus OS at about v2.90~28. I continued to update from the candidate series so, now that 2.90 has been released, I am on 2.91~1.

The reason for adding sudo was to provide a mechanism for Node-RED to be able to make Exec calls that require root or adm privileges such as shutdown without running as root. Earlier versions ran as root but in 2.90 the system was changed so Node-RED ran as user nodered which broke my shutdown, reboot and GPIO code. I put in place various work-arounds but Victron also responded to my posts by adding sudo to their build of the Venus OS. The underlying Linux distribution is openembedded-core (currently Yocto Dunfell in v2.90). As the name implies it is optimised for embedded systems, normally headless with no keyboard or monitor, with minimal utilities and can be built with or without sudo. The Venus implementation is read-only but can be changed by running a script they provide.

The nodered user probably needs to be added to the sudo and adm groups, but even then a password is needed when sudo is used. There are mechanisms for allowing exceptions so sudo can be used by specified users or groups for certain activities without a password; this is set up in the /etc/sudoers file. Any errors in this file can render a system unusable so it is essential to only edit it using the special program visudo. I have made such changes in the past for encryption programs such as truecrypt and also for my trial Node-RED running on my Linux Mint based desktop to access shutdown.

If you want to add an existing user to multiple secondary groups in one command, use the usermod command followed by -a to specify adding to existing groups and the -G option followed by the names of the groups separated by , (commas):

usermod -a -G group1,group2 username

In this case we are root so can run usermod directly, and we want to add nodered to the sudo and adm groups:

root@raspberrypi4:~# usermod -a -G sudo,adm nodered
root@raspberrypi4:~# id nodered
uid=993(nodered) gid=993(nodered) groups=993(nodered),4(adm),20(dialout),27(sudo)
root@raspberrypi4:~#

Group adm is used for system monitoring tasks. Members of this group can read many log files in /var/log, and can use xconsole. id nodered shows the groups that nodered belongs to.

Now we need to modify the /etc/sudoers file. This should be done using visudo to ensure that the edit is correct, and all tests should be done that way. Ultimately it may be safe to do a search and replace in the file using, for example, sed, as the change is very simple and previously tested.

Replacing occurrences of one string with another in files in the current directory is a natural task for sed. This solution was written for GNU sed but seems to work with busybox sed.

sed -i -- 's/foo/bar/g' *

If we look at /etc/sudoers we find this section:

## User privilege specification
##
root ALL=(ALL) ALL

## Uncomment to allow members of group wheel to execute any command
# %wheel ALL=(ALL) ALL

## Same thing without a password
# %wheel ALL=(ALL) NOPASSWD: ALL

## Uncomment to allow members of group sudo to execute any command
# %sudo ALL=(ALL) ALL

so all we need to do is to uncomment the second %wheel line and change %wheel to %nodered. I tested sed on my Linux Mint box with a copy of the sudoers file:

peter@gemini:~$ busybox sed -i 's/# %wheel ALL=(ALL) NOPASSWD: ALL/%nodered ALL=(ALL) NOPASSWD: ALL/g' sudoers_test
The above is all one line; a more restricted version, limiting sudo to specific executables, is developed below.

What we have so far gets us back to a highly insecure situation, so we need to reduce the scope of sudo to a few programs. The original purpose was to be able to shutdown and reboot, so we can restrict it to shutdown and perhaps svc, although allowing use of svc could open up some big security holes again. The next problem is to find the paths to them so that NOPASSWD: ALL can be replaced by paths to a list of executables.

The which command will find the paths for us:

root@raspberrypi4:~# which svc
/usr/bin/svc
root@raspberrypi4:~# which shutdown
/sbin/shutdown
root@raspberrypi4:~#

The line we want in sudoers then becomes

%nodered ALL=(ALL) NOPASSWD: /sbin/shutdown,/usr/bin/svc

I used my test and diagnostic flows, so I could see the responses to the various Exec nodes in the debug sidebar before changing the operational flows.

The final step is to make the change persistent through updates by adding the sed code to my existing /data/rc.local file. You will recall that when the files /data/rcS.local or /data/rc.local exist, they will be called early (rcS) and late (rc) during startup; these scripts survive upgrades and can be used by customers to start their own custom software.

The sed code to add as a single line is:

sed -i 's/# %wheel ALL=(ALL) NOPASSWD: ALL/%nodered ALL=(ALL) NOPASSWD: \/sbin\/shutdown,\/usr\/bin\/svc/g' /etc/sudoers

The file will only be changed once, as the 'search' will not succeed on subsequent restarts, or a mechanism can be put in place so the code in /data/rc.local is only run once. Note that the / characters in the paths are escaped with \.
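
One possible 'only run once' mechanism (a sketch, not necessarily what I use) is to guard the sed with a test for the line it adds:

if ! grep -q '^%nodered ' /etc/sudoers ; then
  sed -i 's/# %wheel ALL=(ALL) NOPASSWD: ALL/%nodered ALL=(ALL) NOPASSWD: \/sbin\/shutdown,\/usr\/bin\/svc/g' /etc/sudoers
fi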

1 October 2022

Ruuvi Tags

The tags needed to be named to avoid duplication in Node-RED. In the Remote Console, highlight the temperature sensor in the "Device List" and press the right arrow. On the next screen go down to "Device" and press the right arrow again, which gets you to key details about the device. Then go down to Name, press space, and type in your custom name.

6 October 2022

Rewrite of "Controlling relays from a Raspberry Pi GPIO" [WIP]

This Appendix was originally written for Venus OS 2.80 where Node-RED is run as root. It was used for a season's boating and OS v2.80 is still in use on my operational system on Corinna in September 2022. Under Venus OS v2.90 Node-RED runs as a separate user called nodered rather than as root, so some changes were needed to the code; what follows is actually more compact and avoids scripts. The original Appendix can be found here.

The addition of relays is very important as some of the less smart Victron devices have inputs for remote control switches. It always surprises me that after 50 years or more of solid state electronics relays are still in common use; in fact their use surprised me over 50 years ago when I started designing equipment for research satellites!

There are several different ways to access GPIOs from programs, but sysfs is a simple one that is supported by the Linux kernel and makes the devices visible in the file system. Sysfs is a pseudo filesystem provided by the Linux kernel that makes information about various kernel subsystems, hardware devices and device drivers available in user space through virtual files. GPIO devices appear as part of sysfs so one can experiment from the command line without needing to write any code. For simple applications you can use it this way, or by putting the commands in shell scripts. Before continuing, I should mention that this interface is being deprecated in favour of a new GPIO character device API. The new API addresses a number of issues with the sysfs interface; however, it can't easily be used from the file system like sysfs, so the examples here will use sysfs, which is still going to be supported for some time. In any case, as far as I can tell, the Venus OS only uses sysfs for GPIO control. A basic introduction to controlling the GPIO using sysfs is described at https://raspberry-projects.com/pi/command-line/io-pins-command-line/io-pin-control-from-the-command-line

So what is meant by a pseudo filesystem? There are many devices which are accessible via a mechanism which looks like a folder structure - I found about 60 in /sys/class/. If we look at /sys/class/gpio we find a list like this, where a number of GPIOs are already in use:

root@raspberrypi4:~# ls /sys/class/gpio
export gpio16 gpio17 gpiochip0 gpiochip504 unexport
root@raspberrypi4:~#

This corresponds to a set of folders for the GPIOs which are in use, and we also find two other items which behave like files, export and unexport. Only a small number of the possible GPIO ports are in use; to make, say, GPIO23 available we need to send 23 to the 'pseudo file' export. We will then find an extra folder gpio23 as below:

root@raspberrypi4:~# echo 23 > /sys/class/gpio/export
root@raspberrypi4:~# ls /sys/class/gpio/gpio23
active_low device direction edge power subsystem uevent value
root@raspberrypi4:~# echo "out" > /sys/class/gpio/gpio23/direction
root@raspberrypi4:~# chmod 777 /sys/class/gpio/gpio23/*
root@raspberrypi4:~# echo 1 > /sys/class/gpio/gpio23/value
root@raspberrypi4:~# echo 0 > /sys/class/gpio/gpio23/value

I have then sent "out" to direction to make it an output, and 1 to value to turn it on and 0 to turn it off. All quite simple. The use of export and unexport gives a crude mechanism to avoid problems if several programs or users try to access the same GPIO. I have not tried all the various parameters but active_low is useful as some common relay boards are activated by a low input. The setting of permissions allows later versions of Node-RED, which run as user nodered, access.
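
For a relay board which is activated by a low input, the active_low attribute can be used so that writing 1 to value still means 'relay on'; continuing the gpio23 example above:

echo 1 > /sys/class/gpio/gpio23/active_low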

It is quite possible that your system already has some GPIOs in use. A fresh install of Venus OS 2.90 on a Raspberry Pi 3 or 4 does not seem to have any allocated. However some are used by Victron controllers, and if you use extensions such as that by Kevin Windrem they will have added some extras. https://github.com/kwindrem/RpiGpioSetup/blob/main/FileSets/gpio_list has a 'standard' list of current pins used in the Venus OS including a proposed extension to 6 (2) relay outputs, 5 inputs and an extra graceful shutdown pin.

Following is a summary:

# relays are active HIGH those which exist in my 2.80 large have a *

# Relay 1 Pin 40 / GPIO 21 *
# Relay 2 Pin 11 / GPIO 17
# Relay 3 Pin 13 / GPIO 27
# Relay 4 Pin 15 / GPIO 22
# Relay 5 Pin 16 / GPIO 23
# Relay 6 Pin 18 / GPIO 24
# Digital input 1 Pin 29 / GPIO 05 *
# Digital input 2 Pin 31 / GPIO 06 *
# Digital input 3 Pin 33 / GPIO 13 *
# Digital input 4 Pin 35 / GPIO 19 *
# Digital input 5 Pin 37 / GPIO 26 *
#### Graceful shutdown input
#### Note this input is NOT added to the available I/O used by Venus OS !!!!
# Pin 36 / GPIO 16

An essential check is via gpioinfo, a command-line tool which uses the new GPIO character device API. The output below is from Venus OS 2.90 on my Pi 4 - note that none of the GPIOs are used here apart from the one I am testing. The output has been 'filtered' to just show GPIOs.

root@raspberrypi4:~# gpioinfo 0 | grep GPIO
line 4: "GPIO_GCLK" unused input active-high
line 5: "GPIO5" unused input active-high
line 6: "GPIO6" unused input active-high
line 12: "GPIO12" unused input active-high
line 13: "GPIO13" unused input active-high
line 16: "GPIO16" unused input active-high
line 17: "GPIO17" unused input active-high
line 18: "GPIO18" unused input active-high
line 19: "GPIO19" unused input active-high
line 20: "GPIO20" unused input active-high
line 21: "GPIO21" unused input active-high
line 22: "GPIO22" unused input active-high
line 24: "GPIO23" "sysfs" output active-high [used]
line 23: "GPIO24" unused input active-high
line 25: "GPIO25" unused input active-high
line 26: "GPIO26" unused input active-high
line 27: "GPIO27" unused input active-high

There is some kernel documentation at https://www.kernel.org/doc/Documentation/gpio/sysfs.txt and https://embeddedbits.org/new-linux-kernel-gpio-user-space-interface/ has an introduction to the differences between the new API and the old deprecated sysfs mechanism. Current documentation (https://github.com/victronenergy/venus/wiki/bbb-gpio) implies that io access in the Venus OS is implemented via sysfs control: /sys/class/gpio.

There is a good interactive pinout guide for the Raspberry Pi, showing the pins which are used by convention for certain purposes and those available for general purposes, at https://pinout.xyz/# This shows that most GPIOs have multiple uses. Many of the optional uses are configured by Device Tree overlays. Device Trees are well beyond what I understand enough to cover here, but they make it possible to support many hardware configurations with a single kernel and without the need to explicitly load or blacklist kernel modules. On the Raspberry Pi, Device Tree usage is controlled from /boot/config.txt. By default the Raspberry Pi kernel boots with device tree enabled, but only a small set of GPIOs are configured for special purposes - e.g. use of pins for a serial port. gpioinfo seems to be the way to find out the current status.
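
For illustration only (I have not needed these on the Venus images), special pin functions and overlays are requested with lines like these in /boot/config.txt:

enable_uart=1
dtoverlay=mcp2515-can0,oscillator=16000000,interrupt=25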

Once you have chosen and set up the GPIO folders as above, it was trivial to control them from within Node-RED with echo commands in an Exec Node in version v2.8x. It is not quite so simple in v2.90 as nodered no longer runs as root, so you have to set permissions as well during the set up of the GPIO, which is why there is the seemingly mysterious chmod line in my example above as a reminder.

There is a catch to this in as much as one is making modifications in the root file system, which is overwritten by every firmware update, so we come to the second part - making the set up persistent through restarts and firmware version updates.

A very important feature of the Venus OS is the ability to run programs in the user area during the initialisation process, described at https://www.victronenergy.com/live/ccgx:root_access, which describes the hooks which are built in to allow a script to be run as root whilst residing on /data and hence surviving a version update. To quote:

If the files /data/rcS.local or /data/rc.local exist, they will be called
early (rcS) and late (rc) during startup. These scripts will survive upgrades
and can be used by customers to start their own custom software.

This is used by several of my updates for v2.90; the file must be executable. The GPIOs are now all set up in advance within /data/rc.local. I made sure all of the code could be run after every restart without problems. My rc.local file sets up a number of activities including adjusting various permissions, copying files which are overwritten during updates and implementing sudo for restricted use by Node-RED. The relevant content of /data/rc.local is:

#!/bin/sh
# Contents of /data/rc.local - needs execute permissions.
#
# Background: When the files /data/rcS.local or /data/rc.local exist,
# they will be called early (rcS) or late (rc) during startup. These scripts will
# survive upgrades and can be used by customers to start their own custom software.
#
# This sets up gpio access for an additional 4 relay outputs
# accessible by node-red in addition to those specified using gpio_list
# Node-red now runs without root access as user nodered
# requiring the setting of permissions to allow access to specific parts of sysfs
#
echo 22 > /sys/class/gpio/export
echo "out" > /sys/class/gpio/gpio22/direction
chmod 777 /sys/class/gpio/gpio22/*
echo 23 > /sys/class/gpio/export
echo "out" > /sys/class/gpio/gpio23/direction
chmod 777 /sys/class/gpio/gpio23/*
echo 24 > /sys/class/gpio/export
echo "out" > /sys/class/gpio/gpio24/direction
chmod 777 /sys/class/gpio/gpio24/*
echo 25 > /sys/class/gpio/export
echo "out" > /sys/class/gpio/gpio25/direction
chmod 777 /sys/class/gpio/gpio25/*
#
# Replace the file which provides support for Victron relays
# as it is over-written during OS updates.
cp /data/gpio_list /etc/venus/gpio_list
chmod 744 /etc/venus/gpio_list
#
# Log when rc.local actioned
echo "$(date) rc.local actioned" >> /data/rc_local_log
chmod 666 /data/rc_local_log
#
exit 0

The GPIO section of rc.local creates the required entry for each GPIO and sets the permissions so it can be switched by Node-RED with a single command in an Exec Node, e.g. echo 1 > /sys/class/gpio/gpio22/value will switch a relay on.

Warning - check you do not have an existing rc.local file; if you do, it may be possible to add the above section or turn it into a script and call it from rc.local. The versions of Kevin Windrem's SetupHelper I have looked at only use rcS.local.

As I said above the set up of GPIO ports to control Relays from Node-RED is only part of what my rc.local does and I may cover other uses including sudo with Node-RED in a future article.

Basically the same code has been controlling some of my less clever Victron devices via relays on my narrowboat for 6 months and finally logged up 3300 hours under Venus OS 2.80 without a reboot prior to my updating to version 2.90.

Relays in Victron Devices

I have also set up two relays accessible by the Victron relay node by changing /etc/venus/gpio_list to contain:

# In /data/gpio_list (then copied by re.local to /etc/venus/gpio_list)
#- in digital_input_1
#- in digital_input_2
#- in digital_input_3
#- in digital_input_4
16 out relay_1
17 out relay_2
#- out mkx_rst
#- out vebus_standby

Note: Having changed the file the relays still need to be switched to manual control in the Remote Console

Both ways work fine with Venus OS v2.90~24 Large and higher. The first needs no reloads after an update whilst the second is limited in the number of relays and needs a file to be reloaded, which can be automated in the rc.local script as you can see above.

I should also mention that there is a comprehensive SetupHelper by kwindrem with associated RpiGpioSetup package which should allow Node-RED to access some additional relays but I had not explored this option when writing this section as I already had everything I needed.

17 October 2022

Plans for updating Pi 3B+ from Venus OS 2.80 to 2.92

This will have to be a fresh download and flash of a MicroSD card as there were changes in the basic disk structure at Venus OS beta v2.90~12. The OS is now read only and Node-RED is no longer running as root but as user nodered.

I have done the basic 2.90 fresh install several times on my Raspberry Pi 4B rev 1.5, but there will be a few differences when doing this on the 'operational' Pi 3B+ on Corinna, which is connected to 4 Victron devices via VE.direct USB cables, with two more controlled via relays.

Log of progress

Downloaded release version from http://updates.victronenergy.com/feeds/venus/release/images/raspberrypi2/

Flashed and verified using BalenaEtcher (run as Appimage)

Flows exported from the Pi 4B dev system which has all the changes needed for 2.90 implemented

Uploaded to pCloud the latest versions of:

so I could access from Pad, phone or laptop on Corinna.

Managed to easily change the MicroSD by releasing the Pi 3B+ in its standard case from its mounting plate without removing any wires - it is held down by rubber bands for that reason.

Connected by Bluetooth in VictronConnect on the Pad and used settings (the wheel in the top right corner) to scan and set up a Wifi network to my phone hotspot.

Access Remote Console using link and/or Host back on home page of VictronConnect

Did preliminary setting up in Remote Console on Pad

Changed to Laptop and checked

Set up Filezilla (SFTP, Host, Normal Login, root, password)

Note that I have set my system up so many activities are run within /data/rc.local (as root) during startup including to:

  1. Set up GPIOs
  2. Fix permissions to allow nodered to read the boot log to determine the current OS version running - chmod 666 /data/log/boot
  3. Run timer loop in /data/every-second.sh
  4. Replace /etc/venus/gpio_list from /data/gpio_list
  5. Create folder /data/home/nodered/.node-red/log and set permissions to 777 and owner and group to nodered (a sketch of this follows the list)
  6. Add code for use of sudo without a password for user nodered (uses sed)
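
A minimal sketch of item 5 as it might appear in /data/rc.local (paths as listed above):

mkdir -p /data/home/nodered/.node-red/log
chmod 777 /data/home/nodered/.node-red/log
chown nodered:nodered /data/home/nodered/.node-red/log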

So the extra actions required are:

Do two reboots from the Remote Console to enable rc.local to complete the set up of sudo

Now set up node-RED

After which firmware updates should not need additional activities over the usual. But note that after any update the Superuser password needs to be re-entered using the Remote Console, and the Raspberry Pi also needs to be rebooted from the Remote Console after the password entry before sudo is fully activated.

Handling old Kodak PhotoCD files

Eventually found the pcdtojpeg program at https://pcdtojpeg.sourceforge.io/Installing.html which has instructions on how to compile it for Ubuntu:

Other Operating Systems
pcdtojpeg can be compiled for just about every operating system that has the following:
1. A C++ compiler that conforms to C99 (e.g., GCC). Specifically, stdint.h must be implemented.
2. The IJG's libjpeg library, written by Thomas Lane. This is standard on virtually all *nix and Linux distributions.

In addition, by default pcdtojpeg uses the pthreads library for multithreading. However, it is not a requirement. If your system either doesn't support pthreads, or you don't want multithreading, you can compile pcdtojpeg without pthreads support by defining the mNoPThreads symbol.

The process that you will need to follow will vary depending on your operating system, but will be similar to the following example, which is for Ubuntu Linux:
Note: Unlike almost all other Linux distributions, Ubuntu does not by default install GCC. If you haven't yet done so, you need to use the Synaptic Package Manager (or apt get) to install the "build-essential" package prior to trying to compile pcdtojpeg.
1. Download pcdtojpeg.
2. Unzip the distribution file. Usually, you can do that by double clicking on the zip file you downloaded.
3. Open a terminal in the "src" directory of the unzipped distribution file.
4. Compile the source code by using the following command:
g++ main.cpp pcdDecode.cpp -ljpeg -lpthread -o pcdtojpeg
5. Copy the resulting pcdtojpeg executable file to any convenient location.
6. In order to compile without the pthread library, use the following command:
g++ main.cpp pcdDecode.cpp -DmNoPThreads -ljpeg -o pcdtojpeg
Once you have done this, pcdtojpeg is ready to go. Using pcdtojpeg is described on the Usage page.

It did not compile, even though build-essential was present, and complained about a missing jpeglib.h.

From post https://askubuntu.com/questions/842849/configure-error-jpeglib-h-not-found

That said, jpeglib.h is provided by libjpeg-turbo8-dev. So install it using:

sudo apt-get install libjpeg-turbo8-dev

The comments on that post report unmet dependency problems with libjpeg-turbo8-dev on some releases, and a later answer notes that the package has since been renamed, suggesting instead:

sudo apt-get install libjpeg62-turbo-dev

I installed libjpeg-turbo8-dev and the compile then worked.

Synaptic History

Commit Log for Sun Nov 6 15:06:32 2022
Installed the following packages:
libjpeg9 (1:9d-1)

Installed the following packages:
libjpeg-turbo8-dev (2.0.3-0ubuntu1.20.04.3)
libjpeg8-dev (8c-2ubuntu8)

Run by

peter@gemini:~$ ./pcdtojpeg img0054.pcd

with the .pcd files in the same folder.
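
With a folder full of scans, a simple loop converts them all using the same invocation:

for f in *.pcd; do ./pcdtojpeg "$f"; done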

The results were spectacular, with files of 3072 x 2048 clearly resolving lines 2 pixels wide from a Canon 35mm SLR camera with 200 ASA print film - .jpg file sizes of ~4.5 - 5.5 Mbyte, roughly 10% less than the .pcd files.

22 November 2022

Patching Node-Red-Contrib-Victron from latest master

Dirk-Jan Faber's instructions were:

What I usually do when testing is replacing `src/*` files with the new ones (in the `/usr/lib/node_modules/victronenergy/node-red-contrib-victron/` directory), without using npm for it.

So first go to GitHub - https://github.com/victronenergy/node-red-contrib-victron - and select on the left master or the branch or tag you want to test; then on the right there is a big green button marked Code which has a dropdown menu including Download ZIP. You can then extract that somewhere so you can access the src folder.

Before going further one should stop the node-red-venus service using a terminal opened with SSH by

svc -d /service/node-red-venus

I use Filezilla for the transfer; I first rename the existing src folder to srcbak then copy the src folder and subfolders from the zip extract. I usually give it read/write access for everyone but that is probably not required.
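
For those without Filezilla, a hedged shell equivalent on the device itself (the name of the extracted folder will depend on the branch you downloaded):

cd /usr/lib/node_modules/victronenergy/node-red-contrib-victron/
mv src srcbak
cp -r ~/node-red-contrib-victron-master/src .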

I then start up node-red again by

svc -u /service/node-red-venus
I then reboot a couple of times for luck from the remote console

This seems to work fine for testing but you need to wait for the full upgrade which will have the updated documentation etc.


Before you leave

I would be very pleased if visitors could spare a little time to give us some feedback - it is the only way we know who has visited the site, if it is useful and how we should develop its content and the techniques used. I would be delighted if you could send comments or just let me know you have visited by sending a quick Message.

Copyright © Peter & Pauline Curtis
Content revised: 30th October, 2022