Diary of a Homepage
Part 28 (November 2016 ->)

25 November 2016

DroidEdit - a Code Editor for Android

I have been looking once more for a good text/code editor for my Android machines now that I have a Samsung Galaxy Tab S2 8", which has the screen size and resolution to make it possible to write some of the content for our web site without a laptop.

The advantages and features I have now discovered are:

Overall it seems to be a very practical option for writing and coding - very similar to gedit or xed in usability, provided it is used on a machine with a big screen and adequate power if one wants syntax highlighting on big files.

There is another option, anWriter, which has code completion, seems faster and previews .htm files, but it does not yet have word wrap (a show stopper for me) or shortcut keys - these are only planned for the paid version.

 

26 November 2016

Video Editing Revisited (Kdenlive Versions 15.12.3 and 16.08.2)

I have started producing some videos from the Sony camera taken earlier this year and from the new Panasonic Lumix TZ80 camera I bought to replace the existing still camera and complement the video cameras. This has meant that I have had to relearn all about Kdenlive, which has changed significantly (for the better) since I last used it a couple of years ago. The last time I did any serious work with videos was in 2014, when I did a serious archiving of all my existing recordings, which were on VHS tapes (mostly commercial recordings), Video8 tapes and MiniDV tapes, as well as the more recent digital recordings from the Sony video camera bought in September 2013, which have all been recorded in H264 720p MP4 format although it is capable of 1080p.

All the earlier recordings were converted to .mp4 files, the analog ones as a single big file and the digital MiniDV ones as individual clips within a folder. The MiniDV files were also concatenated into a single file for easy playing prior to editing.

Folder and filename structures in use for source files:

Analog Video8 recordings from Sony Cameras.

Mostly a single file per tape, captured under Windows and saved as .mpg (mpgv), 720x576, frame rate 25, with MPEG audio (mpga) for compatibility. These are stored in named folders under My Video/Video8 Capture/ eg My Video/Video8 Capture/1995 New Zealand/NZ95-1.mpg. Total 390 Gbytes. These suffer from edge artifacts which need to be trimmed off.

These have been transcoded, de-interlaced and cropped of artifacts by Handbrake to .m4v files (H264 MPEG-4, AAC audio mp4a), typical resolution 704x578 for a display of 690x552 after trimming. They are typically <50% of the size and clean in appearance, and are stored in My Video/Video8 Videos/, typically My Video/Video8 Videos/NZ95-1.m4v, suitable for viewing or as an input to an editor. 117 files, total 185 Gbytes.
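For the record, much the same conversion can be scripted with HandBrake's command line tool. This is only a sketch - I actually did the work in the GUI, the crop values here are purely illustrative, and the audio encoder name (av_aac) varies between HandBrake versions:

HandBrakeCLI -i "NZ95-1.mpg" -o "NZ95-1.m4v" -e x264 -q 20 --decomb --crop 12:12:8:8 -E av_aac -B 160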

JVC and Sony MiniDV from 2000 to 2012

MiniDV tapes were converted by Handbrake and are in the form My Video/DVnnn/DVnnn.yyyy.mm.dd_hh-mm-ss.m4v. There are 105 MiniDV tapes labelled and in folders DV001 to DV105, for a total of 151 Gbytes.

Sony Digital HDR-CX280 from 2013 ->

More recent digital video from the Sony camera and SDHC cards is in My Video/yyyy/mm/dd/MAHnnnnn.MP4 where nnnnn goes from 00001 to 03007. Most of the folders also contain Sony 160x120 icon files with the .THM extension.

Other less common sources of Digital Video 2008 - 2015

Canon A720 IS camera videos (640x480) are also in My Video/yyyy/mm/dd/ and the files are MVI_nnnn.AVI, type MJPEG, 640x480, frame rate 30, with mono PCM sound.

Vivitar underwater camera videos are in My Video/yyyy/mm/dd/ and the files are CLIPnnnn.AVI, type MPEG-4 (XVID), 640x480, frame rate 30, with mono MS ADPCM sound.

New Video Sources from 2016 ->

Panasonic Lumix TZ80 Camera - bought partially for its video capability with EVF (eye level Electronic ViewFinder). Integrated into My Video/yyyy/mm/dd/P0nnnnn.MP4 where nnnnn is part of the same series as the .JPG picture files.

Canon Camera SX610 - 720i HD and 1080i Full HD, both at 30 frames/sec. None integrated so far, and probably of little use due to the different format, but destined for My Video/yyyy/mm/dd/MVInnnn.MP4 where nnnn is part of the picture series.

Samsung Phones - none integrated so far but destined for My Video/yyyy/mm/dd/

The following is a part of the structure:

Kdenlive

Kdenlive has become even better as a video editor since I last used it at version 0.9.x. Version 15 seemed very stable after I had converted my existing files, whilst 16.08 currently seems a bit more prone to crashes, although the automatic backups seem to have avoided any significant loss of work. Let's look at what one needs to be able to do to edit a video and see how Kdenlive does:

Minimum Video Editing Requirements

Desirable additions:

 

The panel above comes from my earlier work in Ubuntu and was largely written before my shift from video cameras using MiniDV tapes to cameras using solid state storage, nor did it take into account my earlier Video8 analog tapes. Again, DVDs are almost a thing of the past: few laptops have a DVD reader and DVD players are no longer an essential in every home entertainment system. In exchange the importance of the internet and mobile devices has increased, as has the push towards higher quality 'HD' and 'Full HD' video and onwards to 4k video. So although Kdenlive could cover the first point and the last, they are no longer prime requirements; in fact the first step with MiniDV tape input is best split out and done in stages by specialised programs which allow the input scenes to be converted into a more efficient format more suitable for editing and outputting. Note: the latest versions of Kdenlive acknowledge that, and the DVD abilities are no longer integrated.

The rest of the list can easily be done by Kdenlive; much is intuitive but there are many shortcuts or tricks to make the process very efficient. There are also some facilities which would be very useful even if they do not all currently work fully, including a mechanism to split a file into scenes based on content as well as time codes, which would be extremely useful for the Video8 tapes where the scenes currently have to be split manually for editing.

Video Project and Rendering Profiles

Video8 Videos

This is probably the most demanding test of the video processing chain culminating in Kdenlive.

They were initially captured via an analog input using Pinnacle Studio DVplus software running under Windows Vista in circa 2007. This was before I converted completely to Linux, and Studio DVplus/Studio 10 were good editors at the time, when I was initially working on my MiniDV videos. The format used was similar to that of MiniDV tapes as the intention in both cases was to create DVDs, which use a subset of MPEG-2 at 720x576 with a pixel aspect ratio of 16:15 to give an aspect ratio of 4:3 at a display rate of 25 frames per second, interlaced. The audio is MP2 (MPEG Audio Layer II).

These files were later (2014) transcoded using Handbrake to H264, reducing them to circa 1.8 Gbytes per hour. At the same time the size was cropped to 590x552 which, with a pixel aspect ratio of 16:15, retains the aspect ratio of 4:3 to a high degree of accuracy. The video was de-interlaced in Handbrake so it is progressive, and the conversion to H264 also led to a conversion of the PCM audio to AAC at 160 kb/sec stereo. The end result of this is that the Project Settings should use an SD/DVD profile at 25 frames/sec, or one could create a special non-standard profile for the 590x552 size.

The following screen grab shows the input clip properties and the project settings I have chosen to match.

and the settings for rendering are:

The rendering rate on my Defiant is circa 24 mins per hour of input video with the above settings, and the resulting video occupies 1030 Mbytes/hr, which is considerably lower than the original 13 Gbytes per hour or the intermediate 1.8 Gbytes/hr, and a very good match to the original Video8 video in quality. I did not experiment much with the video quality setting but 18 is probably on the low side. Using 23 gives about 470 Mbytes/hr and may not impact the quality greatly if you want to minimise the sizes. NOTE: I have noticed one needs to change Auto to Force Progressive to force a progressive output.

Sony Camera in MP4 format

The Sony information states this is 1280x720 pixels at 25 frames/sec progressive quality, H.264 6M with AAC audio 384K (~3 Gbytes/hr). I have run a number of experiments on the settings in the rendering profile and a quality setting of 18 comes close to the incoming video bitrate of 6000 kb/sec, giving approximately 5500 kb/sec. The incoming stream has an audio bitrate of 128 kb/sec rather than the 384 quoted by Sony, so the output bitrate has been selected to be the same. The measurements were made by using the internal properties screen for the incoming stream, and for the output by using the same screen on a rendered video added back into the project bin. I found that the audio bitrate did not seem to be set correctly so I edited the ab= value to be 128k and also set up profiles with ab=160k and 192k. I did various tests of rendering speed and the medium setting on encoder speed seemed to give reasonable speeds and a slight improvement in video bandwidth. I have it set to 8 threads but the setting seems to be ignored, as changing to one thread makes no difference in the cores in use as displayed in my system monitoring.
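The same figures can be cross-checked outside Kdenlive with ffprobe from the ffmpeg package - a sketch, with rendered.mp4 standing in for whichever file you want to inspect:

ffprobe -v error -show_entries stream=codec_name,bit_rate -of default=noprint_wrappers=1 rendered.mp4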

Panasonic TZ80

Rendering with quality set to 18 and 128 kb audio using the medium speed preset gives a video bandwidth of 14050 kb/s and 6.5 Gbytes/hr of data. Medium speed renders at 165% of real time and the faster preset at 85%. Faster at quality 18 gives an almost identical video bandwidth of 13935 kb/sec and a fractionally smaller file size. Quality 19 gives a video bandwidth of 12000 kb/sec. Quality 18 here is closely equivalent to quality 18 for the 720p from the Sony camera, as (1920x1080)/(1280x720) = 2.25, so it is probably a sensible standard to use, but with the faster rendering preset as there is extra quality in hand. Quality 16 is required to fully match the Full HD input video stream at 20000 kb/sec if one has really fast action.

Background on presets etc in libav and impact on Profile settings

Rate Control and Quality

Constant Rate Factor is simple to use: it targets a quality level and tries to maintain it over the encoding. The -crf value range is from 0 to 53 (or 63 in 10-bit mode) and maps to the same range as the quantizer; ideally it would provide the same perceptual quality as constant quantizer rate-control but in less space, by discarding information human eyes would not perceive. As a rule of thumb every increase by 5 halves the bitrate, 0 being lossless encoding. I have experimented in the range of 16 to 22 to match the video inputs and 18 seems a good compromise. Note the Kdenlive slider settings only give 15, 23, 30, 38 and 45, so the use of the 'More Options' box is essential and it is best to create a new profile with better options.
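For reference, the parameter string in one of my custom profiles ends up along the following lines - treat this as a sketch rather than an exact copy, as the set of keys accepted depends on the Kdenlive/MLT version:

f=mp4 vcodec=libx264 crf=18 preset=medium acodec=aac ab=128k threads=8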

Presets

The encoder presets are ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo. There is the option of additional tunings which are film, animation, grain, stillimage, psnr, ssim, fastdecode, zerolatency. Only a subset is available through Kdenlive.

Each preset is supposed to be about twice as slow as the previous but this does not seem to be the case when used through Kdenlive.
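One way to check the doubling claim outside Kdenlive is to time the same encode with ffmpeg directly - a sketch, assuming ffmpeg with libx264 is installed and using a placeholder input file name:

time ffmpeg -i input.mp4 -c:v libx264 -preset medium -crf 18 -c:a copy out-medium.mp4
time ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -c:a copy out-slow.mp4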

Mixed Inputs

The Profile needs to be that of the most common input and the rendering should be the same. The new Canon SX610 can be used for video although that is not the intention. It records at 720i and 1080i in H264 (MP4) but at 30 frames/sec. I have done some experiments and Kdenlive seems to be quite happy to work with mixed inputs and render at H264 25i. I have checked that the output is transcoded to 25 frames/sec and runs at the correct rate! I have not checked the audio with a quality input.

Audio Rates

I have generally rendered at the audio bitrate of the input stream but will increase that to 192 kb/s or 384 kb/s if I add a high quality audio track, to match typical CD and MP3 sources.

Kdenlive - Some Basic Editing.

You can now reliably add multiple selected clips into the Project Bin, 200 now seems to be no problem.

You can drag (multiple) clips from the Bin to the timeline, again several hundred.

You can make and edit titles which then live in the Project Bin and can be saved for further (re)use in a folder - I use titles as the folder name and keep them in the project folder.

There are three tools for working in the timeline:

Remember to return to the standard Select tool after using the other tools.

There are now (since 15?) three modes which allow for three point editing, ie inserting into the timeline and making space or cuts to do so.

You can use the mouse wheel in the timeline ruler, the timeline and the zoom bar at bottom right to control the display.

There are many useful shortcut keys to make accurate movements down to a frame at a time. See https://userbase.kde.org/Kdenlive/Manual/Useful_Information/Shortcuts

Snap

Before proceeding further I should mention snap which is crucial to the effective use of kdenlive - it should be on but can be toggled in the little toolbar at the very bottom right.

When this feature is on, dragging, for example, the beginning of one clip near to the end of another will result in the end of the first clip snapping into place to be perfectly aligned with the beginning of the second clip. As you move the two ends near to each other, as soon as they get within a certain small distance, they snap together so there is no space and no overlap. Note that this occurs even if the clips are on different tracks in the timeline. This occurs with almost every type of drag or positioning operation on clips, cursors, guides, effects, transitions etc., and is how everything ends up accurately lined up when using the mouse.

Moving through your project

The timeline cursor shows your current position in the project. The positions of the cursors on the timeline ruler and Project Monitor are always in sync. The position can be moved in the following ways:

Selecting a Zone and use of Zones

A zone can be selected in a clip or in the project and is required for certain activities such as setting up a region for preview video. The zone is displayed at the top of the timeline under the timecodes and in the project monitor. As far as I can tell the zone can only be selected in the project monitor using the Set Zone In (I) and Set Zone Out (O) buttons or in the timeline by shortcut keys I and O.

Also see https://userbase.kde.org/Kdenlive/Manual/Monitors#Creating_Zones_in_Project_Monitor

Adjusting Volume - Intro to use of Keyframes

There is a good tutorial from kdenlive at https://kdenlive.org/project/editing-audio-volume-with-keyframes/

 

Complete list of Kdenlive Keyboard Shortcuts

You can also see the full list and set up new shortcuts via Settings -> Configure Shortcuts. I have (re)set up two extra shortcuts:

Stabilising the Video

Select a Clip in the Project Bin -> Clip Jobs -> Stabilise creates a 'clip' with .mlt added which you can drag to the timeline. There are a huge number of parameters and there is information in the Kdenlive Stabilisation Help.

I have only used the defaults and it does seem to help a lot on flying videos despite the camera Image Stabilisation already present. The examples in the above reference are almost unbelievably effective.

Now one needs to see if it is working. The best way is to set up a split screen of some sort by putting the stabilised and unstabilised videos below each other and using the Composite and Transform between them. In the simplest case you can just use the opacity setting to give a shadow of the old or you can add Position and Zoom Effects to each and scale to 50% and shift one sideways so you have a side by side comparison. You may need to use preview rendering to get a smooth view.

This has unfortunately shown that the stabilisation has had some unfortunate results in places on my flying aircraft videos so needs to be used with extreme care.

Kdenlive Effects - Motion Freeze

This is very useful if you have some video where parts are impossible to stabilise but you want to retain the audio track or just to turn it into a still [under a title].

This effect causes the video to freeze. If you add the effect and leave both check boxes unchecked, the clip will be frozen for its entire length. To change this, check either the Freeze Before or Freeze After setting and move the Freeze At slider to the time where you want the freeze to start or end. If Freeze Before is selected, the video will be frozen at the start and then start moving when it hits the Freeze At time. If Freeze After is selected, the video will be moving at the start and then freeze when it hits the Freeze At time. The audio in the video plays for the entire length, i.e. the Freeze effect does not alter the audio.

Preview Rendering

This is a new and very useful feature in Kdenlive 16.08. Again it is not implemented in an intuitive way but it is essential if one wants to look smoothly at complex effects in high resolution videos.

Automatic Scene Split

This only partially works which is a pity as it would be very useful for my early analog videos. It does seem to do something but the most I could make work was the analysis which seemed to generate split positions in frame number and possibly a confidence figure.

Copy Paste between Kdenlive Projects

This is an important addition in version 15 of Kdenlive but it is implemented in a complex way which is difficult to find and understand. I feel I have to refer to the paper in the Kdenlive Toolbox on the Library, which makes it reasonably clear. In summary:

Kdenlive Folder Defaults and Suggestions for Configuration.

The latest versions of Kdenlive are quite different in many ways to the previous versions I used, which were the 0.9.x versions, and many of the help files and tutorials do not seem to have caught up, which has meant that to some extent I have had to experiment to find out the latest file usage and default folder locations. Kdenlive version 16.12 now locates its cache files in the Project Folder, in a folder with a random 12 digit name.

The subfolders are audiothumbs, videothumbs and preview. Preview is a new feature.

These new caches can be cleared from within Kdenlive by Project -> Settings -> Cache Data and those still required will be regenerated if you re-open the project.

I have also found .cache/kdenlive/proxy/ which may contain the proxy files, but I suspect they are now stored in a sub-folder within the project folder. I can not confirm that until I test it by using a proxy, but Project -> Settings -> Cache Data has an option to clear it wherever it is! The old cache could always be deleted and would be recreated and re-populated if required.

There is now a config file which is .config/kdenliverc.

If you find that your settings are in a complete mess (such as being locked into full screen mode, which happened to me), it is possible to delete this file and you should be back to a basic new system, but you will have to reconfigure everything including shortcuts, default folders etc.

So my Project Folder for the latest versions of Kdenlive (16.12 or higher) is called Kdenlive and lives on my DATA partition (/media/DATA/Kdenlive), which contains sub-folders, namely:

A folder I have created called Library to hold the kdenlive library - see below for setting default locations.

Folders for each Project with each containing:

Project files (.kdenlive) - these have multiple copies as I keep saving with increasing version numbers for security

A folder I have chosen to create called titles containing all title definitions (.kdenlivetitle) used in the project.

A folder I have chosen to create called pictures containing all custom pictures used in the project such as extracted frames and all other pictures not in My Pictures folder structure.

A folder I have chosen to create called rendered to contain rendered video. I also use a higher level folder called Rendered Video for more general rendered video.

The kdenlive project now also generates folders in the project folder - it always seems to generate one called .backup, and I also have some called proxy and one called selections (though they may be from an earlier version), as well as the randomly numbered folder containing the cache data.

The kdenlive project may also generate files, for example files with a .mlt extension with information on how to stabilise a video clip and, in this particular example, they are in the same folder as the video clip being stabilised.

I have deleted the folders called thumbs, which no longer seem to be used as they are replaced by folders in .cache/kdenlive in one's home folder.

One needs to set up to use these default folder locations by Settings -> Configure Kdenlive -> Environment -> Default Folders where I have

The following shows a typical part of my folder structure where MRF is a project folder and Warbirds, Mediteranean_Wonders_2014 and Lisbon are further project folders. Library is the folder containing library items and is the only folder other than project folders directly beneath Kdenlive.

How robust is the structure above to changes in location, moves between machines and synchronisation between machines?

The folder Kdenlive can live anywhere as it contains all project specific files, the cache is in a defined location in the home folder, and all the 'input' files are defined to be in absolute locations in /media/DATA/My Videos and /media/DATA/My Pictures.

If the input files are missing kdenlive does a search and/or asks for their new locations - I have checked that this works for video files. It may also be possible to create links (not tested).

My belief is that it should be possible to synchronise /media/DATA/Kdenlive, and /media/DATA/My Videos and /media/DATA/My Pictures between machines and everything should continue to work although it may take a while to recreate the thumbnails and other cached files. The project and kdenlive settings would also ideally need to be the same.

4 July 2017

Mint Console Use

One can switch to a console with CTRL+ALT+F1, log in, and type commands as usual to get out of a lockup of some form or to kill a problem program using killall, ie killall cinnamon-screensaver. Use CTRL+ALT+F7 or CTRL+ALT+F8 to get back to your session.

14th August 2017

Find text string in all files in current folder and subfolders.

This is very useful to search for use of programs etc when using Git

grep -rl "searchstring" .
where -r searches recursively and -l lists only the names of the files containing a match.

Note the . (dot) at the end which could also be any /path
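A couple of variations I find useful - these are standard GNU grep options: -n adds line numbers to the matches (drop the -l) and --include restricts the search to file names matching a pattern.

grep -rn "searchstring" .
grep -rl --include="*.html" "searchstring" .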

More interesting options at https://askubuntu.com/questions/55325/how-to-use-grep-command-to-find-text-including-subdirectories

2nd April 2018

Using Office 365 email under Linux

The Open University has started to provide access to Microsoft Office 365 to students as well as tutors. This has a big advantage even to a Linux user in that it provides an academic email address, ie one ending in .ac.uk, which is a standard check made before firms provide academic discounts.

The way it works is that one first goes to the Office 365 site office365.com where there should be a sign in link at the top right. Entering an email address of the form ouusername@ou.ac.uk takes one to an OU specific login page, and then one is into the online Office 365 which has a list of Apps you can access, so -> Outlook and you have webmail access to your emails. Note that the forwarding option does not seem to work.

POP3 and SMTP access to Office 365 Email Accounts

Webmail access is useful but it is much better to be able to access the account from Thunderbird like all one's other accounts under Linux, or to use an Android phone or tablet. The setup information is found in settings (the wheel at top right) -> Settings -> Mail (under your app settings) -> Accounts -> POP and IMAP, which gives one all the information you need:

POP Setting

Server name: outlook.office365.com
Port: 995
Encryption method: TLS

IMAP Setting

Server name: outlook.office365.com
Port: 993
Encryption method: TLS

SMTP setting

Server name: smtp.office365.com
Port: 587
Encryption method: STARTTLS

You only need this information, and although there are some other tick boxes I left everything alone. It may well be that you never need to sign into Office 365, but I suggest you do as it may be required to initiate everything.

We now have the information to set up Thunderbird as usual when adding a new POP3 or IMAP account, which I will not go into in detail, although I have put some screen shots below to show what has been set or changed from the defaults.
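In outline, the account settings I ended up with in Thunderbird were along these lines - a sketch from memory, and the authentication method is the item most worth double-checking:

Incoming: IMAP, outlook.office365.com, port 993, SSL/TLS, normal password
Outgoing: SMTP, smtp.office365.com, port 587, STARTTLS, normal password
Username: the full address, ie ouusername@ou.ac.uk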

8 May 2018

Use of systemd (Part of checks to see if applets work on other distributions)

The applets need to access various system functions such as suspend, and some of the applets need programs running in the background as daemons, ie the vnstat daemon. The following is a very brief introduction to systemd, sufficient to act as a reminder for me, and is based on https://wiki.archlinux.org/index.php/Systemd

Basic systemctl usage

The main command used to introspect and control systemd is systemctl. Some of its uses are examining the system state and managing the system and services. See systemctl(1) for more details.

Tip: You can use all of the following systemctl commands with the -H user@host switch to control a systemd instance on a remote machine. This will use SSH to connect to the remote systemd instance.

Analyzing the system state

Show system status using:

$ systemctl status

List running units:

$ systemctl

or:

$ systemctl list-units

List failed units:

$ systemctl --failed

The available unit files can be seen in /usr/lib/systemd/system/ and /etc/systemd/system/ (the latter takes precedence). List installed unit files with:

$ systemctl list-unit-files

Using units

Units can be, for example, services (.service), mount points (.mount), devices (.device) or sockets (.socket).

When using systemctl, you generally have to specify the complete name of the unit file, including its suffix, for example sshd.socket. There are however a few short forms when specifying the unit in the following systemctl commands:

If you do not specify the suffix, systemctl will assume .service. For example, netctl and netctl.service are equivalent.
Mount points will automatically be translated into the appropriate .mount unit. For example, specifying /home is equivalent to home.mount.
Similar to mount points, devices are automatically translated into the appropriate .device unit, therefore specifying /dev/sda2 is equivalent to dev-sda2.device.

See systemd.unit(5) for details.
Note: Some unit names contain an @ sign (e.g. name@string.service): this means that they are instances of a template unit, whose actual file name does not contain the string part (e.g. name@.service). string is called the instance identifier, and is similar to an argument that is passed to the template unit when called with the systemctl command: in the unit file it will substitute the %i specifier.

To be more accurate, before trying to instantiate the name@.suffix template unit, systemd will actually look for a unit with the exact name@string.suffix file name, although by convention such a "clash" happens rarely, i.e. most unit files containing an @ sign are meant to be templates. Also, if a template unit is called without an instance identifier, it will just fail, since the %i specifier cannot be substituted.

Tip: Most of the following commands also work if multiple units are specified, see systemctl(1) for more information.
Tip: The --now switch can be used in conjunction with enable, disable, and mask to respectively start, stop, or mask immediately the unit rather than after the next boot.

Start a unit immediately:

# systemctl start unit

Stop a unit immediately:

# systemctl stop unit

Restart a unit:

# systemctl restart unit

Ask a unit to reload its configuration:

# systemctl reload unit

Show the status of a unit, including whether it is running or not:

$ systemctl status unit

Check whether a unit is already enabled or not:

$ systemctl is-enabled unit

Enable a unit to be started on bootup:

# systemctl enable unit

Enable a unit to be started on bootup and Start immediately:

# systemctl enable --now unit

Disable a unit to not start during bootup:

# systemctl disable unit

Mask a unit to make it impossible to start it:

# systemctl mask unit

Unmask a unit:

# systemctl unmask unit

Show the manual page associated with a unit (this has to be supported by the unit file):

$ systemctl help unit

Reload systemd, scanning for new or changed units:

# systemctl daemon-reload

Power management

polkit is necessary for power management as an unprivileged user. If you are in a local systemd-logind user session and no other session is active, the following commands will work without root privileges. If not (for example, because another user is logged into a tty), systemd will automatically ask you for the root password.

Shut down and reboot the system:

$ systemctl reboot

Shut down and power-off the system:

$ systemctl poweroff

Suspend the system:

$ systemctl suspend

Put the system into hibernation:

$ systemctl hibernate

Put the system into hybrid-sleep state (or suspend-to-both):

$ systemctl hybrid-sleep

I found the above information when looking at how to set up vnstat on systems which did not do so automatically like Mint.

Setting up vnstat on systems such as Arch Linux (WORK IN PROGRESS and untested)

vnStat is a lightweight (command line) network traffic monitor. It monitors selectable interfaces and stores network traffic logs in a database for later analysis.

Install the vnstat package

Distribution dependent

Configuration

Start/Enable the vnstat.service daemon. For systemd systems the above indicates that one does:

# systemctl enable --now vnstat.service

Pick a preferred network interface and edit the Interface variable in /etc/vnstat.conf accordingly. To list all interfaces available to vnstat, use vnstat --iflist.
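The line in question in /etc/vnstat.conf looks like the following excerpt - the default interface name will vary with the distribution:

# default interface
Interface "eth0"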

To start monitoring a particular interface you must initialize a database first. Each interface needs its own database. The command to initialize one for the eth0 interface is:

# vnstat -u -i eth0

Usage

Query the network traffic:

# vnstat -q

Viewing live network traffic usage:

# vnstat -l

To find more options, use:

# vnstat --help

=================================================

or use these instructions based on https://www.howtoforge.com/tutorial/vnstat-network-monitoring-ubuntu/

Monitoring network traffic or bandwidth usage is an important task in an organisational structure or even for developers. It is sometimes required to monitor traffic on various systems which share the internet bandwidth, and there might be situations where network statistics are required for decision making in the networking areas, or where the logged information on network traffic is used for analysis tasks.

vnStat and vnStati are command line utilities which are very useful tools that help a user to monitor, log and view network statistics over various time periods. vnStat provides summaries of various network interfaces, be they wired like "eth0" or wireless like "wlan0". It allows the user to view hourly, daily and monthly statistics in the form of a detailed table or a command line statistical view. To store the results in a graphical format, we can use vnStati to obtain and provide a visual display of statistics in the form of graphs and store them as images for later use.

This deals with the procedure to install and use vnStat and vnStati. It also details the options and usage methods required to view and store the type of information you want. vnStat does most of the logging and updating, whereas vnStati is used to provide a graphical display of the statistics.

Installing vnStat and vnStati

System dependent

vnStat setup and running

Once the installation is complete, vnStat has to be set up or configured as it does not start on its own. vnStat has to be told explicitly which interfaces have to be monitored. We then start the vnStat daemon called "vnstatd", which starts vnStat and monitors for as long as it is not stopped explicitly.

The first thing to do here is tell vnStat the network interfaces to monitor. Here we look at a wired interface "eth0" and a wireless interface "wlan0". Type the following commands in the terminal.

vnstat -u -i eth0

The above command activates monitoring of that interface. The first time you run this command on any interface you will get an error saying 'Unable to read database "/var/lib/vnstat/eth0"'. Ignore this as you will also see it says it has set up a suitable database for the future!

Similarly we also set up the wireless network interface using the command:

vnstat -u -i wlan0

To view all the network interfaces available in your system, use the command:

vnstat --iflist

Once you know all the interfaces that you want to be monitored, use the command above with that interface name to monitor the traffic on it.

Once the above steps are complete, we can now start the vnStat daemon. To do this, we use the following command:
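On a systemd based system this is presumably the command already covered above:

# systemctl start vnstat.service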

END OF WORK IN PROGRESS

7 August 2018

Adding an SSD to the Defiant

Timeshift can also be used for 'cloning' as you can choose which partition you restore to. For example I have recently added an SSD to the Defiant and I just created the partition scheme on the SSD, took a fresh snapshot and restored it to the appropriate partition on the SSD. It will not be available until Grub is alerted by a sudo update-grub, after which it will be in the list of operating systems available at the next boot. Assuming you have a separate /home it will continue to use the existing one, and you will probably want to also move the home folder - see the previous section on Moving a Home Folder to a dedicated Partition or different partition (Expert level) for the full story.

Warning about deleting system partitions after cloning.

When you come to tidy up your partitions after cloning you MUST do a sudo update-grub after any partition deletions and before any reboot. If Grub cannot find a partition it expects, it hangs and you will not be able to boot your system at all; you will drop into a grub rescue prompt.

I made this mistake and used the procedure in https://askubuntu.com/questions/493826/grub-rescue-problem-after-deleting-ubuntu-partition by Amr Ayman and David Foester, which I reproduce below:

grub rescue > ls
(hd0) (hd0,msdos5) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1) (hd1) (hd1,msdos1)
grub rescue > ls (hd0,msdos1) # try to recognize which partition is this
grub rescue > ls (hd0,msdos2) # let's assume this is the linux partition
grub rescue > set root=(hd0,msdos2)
grub rescue > set prefix=(hd0,msdos2)/boot/grub # or wherever grub is installed
grub rescue > insmod normal # if this produced an error, reset root and prefix to something else ..
grub rescue > normal

For a permanent fix run the following after you successfully boot:

sudo update-grub
sudo grub-install /dev/sdX

where /dev/sdX is your boot drive.

It was not a pleasant activity and involved far too much trial and error, so make sure you do update-grub.

12th August 2018

New Overall Backup Philosophy for Mint

My thoughts on backing up have evolved considerably over time and now take much more into account the use of several machines and the sharing between them and within them, giving redundancy as well as security of the data. They now look much more at the ways the backups are used: they are not just a way of restoring a situation after a disaster or loss but also about cloning and sharing between users, machines and multiple operating systems. They continue to evolve to take into account the use of data encryption.

So firstly let's look at the broad areas that need to be backed up:

  1. The linux operating system(s), mounted at root. This area contains all the shared built-in and installed applications but none of the configuration information for the applications or the desktop manager, which is specific to users. Mint has a built in utility called TimeShift which is fundamental to how potential regressions are handled - this does everything required for this area's backups and can be used for cloning. TimeShift will be covered in detail in a separate section.
  2. The users home folders which are folders mounted within /home and contain the configuration information for each of the applications as well as the desktop manager which is specific to users such as themes, panels, applets and menus. It also contains all the Data belonging to a specific user including the Desktop, the standard folders such as Documents, Video, Music and Photos etc. It will probably also contain the email 'profiles' if an SSD is in use. This is the most challenging area with the widest range of requirements so is the one covered in the greatest depth here.
  3. Shared Data. The above covers the minimum areas but I have an additional DATA area which is available to all operating systems and users and is periodically synchronised between machines as well as being backed up. This is kept independent and has a separate mount point. In the case of machines dual booted with Windows it uses a file system format compatible with Windows and Linux such as NTFS. The requirement for easy and frequent synchronisation means Unison is the logical tool for DATA between machines, with associated synchronisation to a large USB hard drive for backup. Unison is covered in detail elsewhere in this page.

Email and Browsers. I am also going to mention email specifically as it has particular issues: it needs to be collected on every machine as well as pads and phones, and some track kept of replies regardless of source. All incoming email is retained on the servers for months if not years, and all outgoing email is copied either to a separate account accessible from all machines or, where that is not possible automatically such as on Android, a copy is sent back to the sender's inbox. Thunderbird has a self contained 'profile' where all the local configuration and the filing system for emails is retained, and that profile, along with the matching one for the Firefox browser, needs to be backed up; how depends on where they are held. The obvious places are the DATA area, allowing sharing between operating systems and users, or each user's home folder, which offers more speed if an SSD is used and better security if encryption is implemented.

Physical Implications of Backup Philosophy - Partitioning

I am not going to go into this in great depth as it has already been covered in other places but my philosophy is:

  1. The folder containing all the users' home folders should be a separate partition mounted as /home. This separates the various functions and makes backup, sharing and cloning easier.
  2. There are advantages in having two partitions for linux systems so new versions can be run for a while before committing to them. A separate partition for /home is required if different systems are going to share it.
  3. When one has an SSD the best speed will result from having the linux systems and the home folder using the SSD, especially if the home folders are going to be encrypted.
  4. Shared DATA should be in a separate partition mounted at /media/DATA. If one is sharing with a Windows system it should be formatted as ntfs, which also reduces problems with permissions and ownership with multiple users. DATA can be on a separate slower but larger hard drive.
  5. If you have an SSD, swapping should be minimised and the swap partition should be on a hard drive if one is available, to maximise SSD life.

The Three Parts to Backing Up

System Backup - TimeShift - Scheduled Backups and more.

TimeShift is now fundamental to the update manager philosophy of Mint and makes backing up the linux system very easy. To quote: "The star of the show in Linux Mint 19 is Timeshift. Thanks to Timeshift you can go back in time and restore your computer to the last functional system snapshot. If anything breaks, you can go back to the previous snapshot and it's as if the problem never happened. This greatly simplifies the maintenance of your computer, since you no longer need to worry about potential regressions. In the eventuality of a critical regression, you can restore a snapshot (thus canceling the effects of the regression) and you still have the ability to apply updates selectively (as you did in previous releases)." The best information I have found about TimeShift and how to use it is by the author.

TimeShift is similar to applications like rsnapshot, BackInTime and TimeVault but with different goals. It is designed to protect only system files and settings. User files such as documents, pictures and music are excluded. This ensures that your files remain unchanged when you restore your system to an earlier date. Snapshots are taken using rsync and hard-links. Common files are shared between snapshots, which saves disk space. Each snapshot is a full system backup that can be browsed with a file manager. TimeShift is efficient in its use of storage but it still has to store the original and all the additions/updates over time. The first snapshot seems to occupy slightly more disk space than the root filesystem, and six months of additions added approximately another 35% in my case. I run with a root partition / and separate partitions for /home and DATA. Using Timeshift means that one needs to allocate an extra two fold storage over what one expects the root file system to grow to.

In the case of the Defiant the root partition has grown to about 11 Gbytes and 5 months of Timeshift added another 4 Gbytes, so the partition with the /timeshift folder needs to have at least 22 Gbytes spare if one intends to keep a reasonable span of scheduled snapshots over a long time period. After three weeks of testing Mint 19 my TimeShift folder has reached 21 Gbytes for an 8.9 Gbyte system!

These space requirements for TimeShift obviously have a big impact on the partition sizes when one sets up a system. My Defiant was set up to allow several systems to be employed with multiple booting. I initially had the timeshift folder on the /home partition, which had plenty of space, but that does not work with a multiboot system sharing the /home folder. Fortunately two of my partitions for Linux systems are plenty big enough for use of TimeShift, and the third, which is 30 Gbytes, is acceptable if one is prepared to prune the snapshots occasionally.

Cloning your System using TimeShift

Timeshift can also be used for 'cloning' as you can choose which partition you restore to - the full procedure, including the important warning about running sudo update-grub after any partition deletions and the grub rescue commands to recover if you forget, is in the entry for 7 August 2018 above.

Users - Home Folder Archiving using Tar.

Tar is a very powerful command line archiving tool, around which many of the GUI tools are based, and it should work on most Linux distributions. In many circumstances it is best to access it directly to back up your system. The resulting files can also be accessed (or created) by the Archive Manager, reached by right clicking on a .tgz, .tar.gz or .tar.bz2 file. Tar is an ideal way to back up many parts of one's system, in particular one's home folder.

The backup process is slow (15 mins plus) and the file over a gigabyte for even the simplest system. After it is complete the file should be moved to a safe location, preferably a DVD or external device.

You can access parts of the archive using the GUI Archive Manager by right clicking on the .tgz file - again slow on such a large archive.

Tar, in the simple way we will be using it, takes a folder and compresses all its contents into a single 'archive' file. With the correct options this can be what I call an 'exact' copy, where all the subsidiary information such as timestamps, owner, group and permissions is stored without change. Soft (symbolic) links and hard links can also be retained. Normally one does not want to follow a link out of the folder and put all of the target into the archive, so one needs to take care.

We want to back up each user's home folder so it can be easily replaced on the existing machine or on a replacement machine. The ultimate test is: can one back up the user's home folder, delete it (renaming is safer) and restore it exactly, so the user cannot tell in any way? The home folder is, of course, continually changing when the user is logged in, so backing up and restoring should really be done when the user is not logged in, ie from a different user, a LiveUSB or from a console.

Firstly we must consider what is, arguably, the most fundamental decision about backing up: the way we specify the location being saved when we create the tar archive and when we extract it - in other words the paths must usually restore the folder to the same place. If we store absolute locations we must extract accordingly; if they are relative we must extract the same way. So we will always have to consider pairs of commands depending on what we chose.

Method 1 has absolute paths and shows home when we open the archive with just a single user folder below it. This is what I have always used for my backups and the folder is always restored to /home on extraction.

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /

Method 2 shows the users folder at the top level when we open the archive. This is suitable for extracting to a different partition or place but here the extraction is back to the correct folder.

cd /home && sudo tar cvpzf "/media/USB_DATA/mybackup2.tgz" user1 --exclude=user1/.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackup2.tgz" -C /home

Method 3 shows the folders within the users folder at the top level when we open the archive. This is also suitable for extracting to a different partition or place and has been added to allow backing up and restoring encrypted home folders where the encrypted folder may be mounted to a different place at the time

cd /home/user1 && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz" -C /home/user1

These are all single lines if you cut and paste.

Archive creation options: The options used when creating the archive are: create archive, verbose mode (you can leave this out after the first time), retain permissions, -P do not strip the leading slash, gzip the archive, and file output. Then follows the name of the file to be created, mybackup.tgz, which in this example is on an external USB drive called 'USB_DATA' - the backup name should include the date for easy reference. Next is the directory to back up. Next are the objects which need to be excluded - the most important of these is your backup file if it is in your /home area (not needed in this case) or it would be recursive! It also excludes the folder (.gvfs) which is used dynamically by a file mounting system and is locked, which stops tar from completing. The problems with files which are in use can be avoided by creating another user and doing the backup from that user - overall that is a cleaner way to work.

Archive restoration uses the options: extract, verbose, retain permissions, from file and gzip. This will take a while. The -C / ensures that the directory is Changed to a specified location: in case 1 this is root, so the files are restored to their original locations. In case 2 you can choose, but it is normally /home. Case 3 is useful if you mount an encrypted home folder independently of login using ecryptfs-recover-private --rw, which mounts to /tmp/user.8random8
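Before restoring it is reassuring to list the contents of an archive without extracting anything - the t option lists rather than extracts:

tar tvzf "/media/USB_DATA/mybackup1.tgz" | less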

Deleting Files: If the old system is still present, note that tar only overwrites files; it does not delete files from the old version which are no longer needed. I normally restore from a different user and rename the user's home folder before running tar as above; when I have finished I delete the renamed folder. This needs root/sudo and the easy way is to right click on a folder in Nemo and 'open as root' - make sure you use a right click delete to avoid going into a root deleted items folder.

Higher compression: If you want to use a higher compression method the option -j can be used in place of the -z option and .tar.bz2 should be used in place of .tgz for the backup file extension. This will use bzip2 to do the compressing (the j option). This method will take longer but gives a smaller file - I have never bothered so far.

Deleting Archive files: If you want to delete the archive file then you will usually find it is owned by root, so make sure you delete it in a terminal - if you use a root browser then it will go into a root Deleted Items folder which you can not easily empty, so it takes up disk space for evermore. If this happens then read http://www.ubuntugeek.com/empty-ubuntu-gnome-trash-from-the-command-line.html and/or load the trash-cli command line trash package using the Synaptic Package Manager and type

sudo trash-empty

Alternative to multiple methods: The tar manual contains information on an option to strip a given number of leading components from file names before extraction, namely --strip-components=number

To quote:

For example, suppose you have archived whole `/usr' hierarchy to a tar archive named `usr.tar'. Among other files, this archive contains `usr/include/stdlib.h', which you wish to extract to the current working directory. To do so, you type:

$ tar -xf usr.tar --strip=2 usr/include/stdlib.h

The option --strip=2 instructs tar to strip the two leading components (`usr/' and `include/') off the file name.

If you add the --verbose (-v) option to the invocation above, you will note that the verbose listing still contains the full file name, with the two removed components still in place. This can be inconvenient, so tar provides a special option for altering this behavior:

--show-transformed-names

This should allow an archive saved with the full path information to be extracted with the /home/user information stripped off. I have only done partial testing but invocations of the following sort seem to work:

Method 4 This should also enable one to restore to an encrypted home folder where the encrypted folder may be mounted to a different place at the time by ecryptfs-recover-private --rw

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz" --strip=2 --show-transformed-names -C /home/user1

Archiving a home folder and restoring

Everything has really been covered above so this is really just a slight expansion of the above for a specific case.

This uses Method 1 where all the paths are absolute so the folder you are running from is not an issue. This is the method I have always used for my backups so it is well proven. The folder is always restored to /home on extraction so you need to remove or preferably rename the users folder before restoring it. If a backup already exists delete it or use a different name. Both creation and retrieval must be done from a temporary user.

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /

Cloning between machines and operating systems using a backup archive (Advanced)

It is possible that you want to clone a machine, for example when you buy a new one. It is usually easy if you have the home folder on a separate partition, the user you are cloning was the first user installed, and you make the new username the same as the old - I have done that many times. There is however a catch which you need to watch for, and that is that user names are a two stage process. If I set up a system with the user peter when I install, that is actually just an 'alias' to the numeric user id 1000 in the linux operating system. I then set up a second user pauline who will correspond to 1001. If I have a disaster and reinstall, and this time start with pauline, she is then 1000 and peter is 1001. I then fetch my carefully backed up folders and restore them, and all the owners etc are now incorrect, as they use the underlying numeric value - apart, of course, from where the name is used in hard coded scripts etc.

You can check all the relevant information for the machine you are cloning from in a terminal by use of id:

pcurtis@mymachine:~$ id
uid=1000(pcurtis) gid=1000(pcurtis) groups=1000(pcurtis),4(adm),6(disk),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),29(audio),30(dip),44(video),46(plugdev),104(fuse),108(avahi-autoipd),109(avahi),110(netdev),112(lpadmin),120(admin),121(saned),122(sambashare),124(powerdev),128(mediatomb)

So when you install on a new machine you should always use the same username and password as on the original machine, and then create an extra user with admin (sudo) rights for convenience for the next stage. Change to your temporary user, rename the first user's folder (you need to be root) and replace it from the archived folder from the original machine. Now log in to the user again and that should be it. At this point you can delete the temporary user. If you have multiple users to clone the user names must obviously be the same and, more importantly, the numeric id must be the same, as that is what is actually used by the kernel; the username is really only a convenient alias. This means that the users you may clone must always be installed in the same order on both machines or operating systems so they have the same numeric UID.
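On Debian/Ubuntu based systems it is also possible to force the numeric id when creating a user, which gives a way out if the installation order differs - a sketch, with the username and uid purely illustrative:

sudo adduser --uid 1001 pauline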

So we first make a backup archive in the usual way and take it to the other machine or switch to the other operating system and restore as usual. It is prudent to backup the system you are going to overwrite just in case.

So first check the id on both machines for the user by use of

id user

If and only if the ids are the same

On the first machine and from a temporary user:

sudo tar cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/ --exclude=/home/*/.gvfs

On the second machine, or after switching operating system, and from a temporary user:

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /

Moving a Mint Home Folder to a different Partition (Expert level)

I have recently added an SSD to the Defiant computer and this section covers part of that move: shifting the Linux system and the home folder /home from the existing hard drive to the faster SSD. It uses the same techniques as backing up the users' home folders and is equally appropriate for moving a home folder to a separate partition.

The problem of exactly copying a folder is not as simple as it seems - see https://stackoverflow.com/questions/19434921/how-to-duplicate-a-folder-exactly. You not only need to preserve the contents of the files and folders but also the owner, group, permissions and timestamps, and you need to handle symbolic links and hard links. I initially used a complex procedure using cpio but am no longer convinced it covers every case, especially if you use Wine, where .wine is full of links and hard coded scripts. The stackoverflow thread has several sensible options for 'exact' copies. I also have a well proven way of creating and restoring exact backups of home folders using tar, which has the advantage that we would create a backup before proceeding in any case!
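For completeness, the option from that thread I would be most tempted by is rsync in archive mode with hard links, ACLs and extended attributes preserved - a sketch only, as I have not used it for a full move myself, and /mnt/newhome is just an assumed mount point:

sudo rsync -aHAX /home/user1/ /mnt/newhome/user1/

Note the trailing slashes: to rsync they mean 'copy the contents of the first folder into the second' rather than creating a nested folder.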

When we back up normally we use tar to create a compressed archive which can be restored exactly, and to the same place - even during cloning we are still restoring the user home folders to be under /home. If you are moving to a separate partition you want to extract to a different place, one which will eventually become /home once the new mount point is set up in the file system mount point list in /etc/fstab. It is convenient to always use the same backup procedure, so you need to get at least one user in place in the new home folder. I am not sure I trust any of the copy methods totally for my real users, but I do believe it is possible to move a basic user (created only for the transfers) which you can use for the initial login after changing the location of /home, and which can then be used to extract all the real users from their backup archives. The savvy reader will also realise you can use Method 2 above to move them directly to the temporary mount point.

You can create this basic user very easily and quickly using Users and Groups by Menu -> Users and Groups -> Add Account, Type Administrator ... -> Add -> Click Password to set a password, otherwise you can not use sudo.
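The same basic user can be created from a terminal if you prefer; a minimal sketch with an assumed username (adduser prompts for the password as it runs, which covers the requirement above):

sudo adduser basicuser
sudo usermod -aG sudo basicuser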

An 'archive' copy using cp is good enough in the case of a very basic user which has recently been created and little used; such a home folder may only be a few tens of kbytes in size:

sudo cp -ar /home/basicuser /media/whereever

The -a option makes an archive copy preserving ownership, permissions, timestamps and links (and already implies recursion, so the explicit -r to copy sub folders and contents does no harm).

So the procedure to move /home to a different partition, in outline, is to:

  1. Create and format the new partition (gparted is convenient).
  2. Create the basic user as above and copy its home folder to the new partition with cp -ar.
  3. Change the mount point for /home in /etc/fstab to point to the new partition - see the example below.
  4. Reboot, log in to the basic user and check the new /home is in use.
  5. Extract the real users from their backup archives and log in to each to check - a sketch follows.
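A minimal sketch of the copy and restore steps, assuming the new partition is /dev/sda4 and is temporarily mounted at /mnt/newhome before it takes over as /home - the device name, mount point and archive name are only illustrations:

sudo mkdir -p /mnt/newhome && sudo mount /dev/sda4 /mnt/newhome
sudo cp -ar /home/basicuser /mnt/newhome
sudo mkdir /mnt/newhome/user1
cd /mnt/newhome/user1 && sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz"

The tar line assumes a Method 3 (relative path) archive; a Method 1 archive stores absolute paths so it would restore to the current /home rather than the new partition.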

Example of changing file system table to auto-mount different partition as /home

I will give an example of the output of blkid and of the contents of /etc/fstab after moving my home folder to the SSD drive, highlighting the changes in red. Note this is under Mint 19, as is the invocation of the text editor.

pcurtis@defiant:~$ blkid
/dev/sda1: LABEL="EFIRESERVED" UUID="06E4-9D00" TYPE="vfat" PARTUUID="333c558c-8f5e-4188-86ff-76d6a2097251"
/dev/sda2: LABEL="MINT19" UUID="e07f0d65-8835-44e2-9fe5-6714f386ce8f" TYPE="ext4" PARTUUID="4dfa4f6b-6403-44fe-9d06-7960537e25a7"
/dev/sda3: LABEL="MINT183" UUID="749590d5-d896-46e0-a326-ac4f1cc71403" TYPE="ext4" PARTUUID="5b5913c2-7aeb-460d-89cf-c026db8c73e4"
/dev/sda4: UUID="99e95944-eb50-4f43-ad9a-0c37d26911da" TYPE="ext4" PARTUUID="1492d87f-3ad9-45d3-b05c-11d6379cbe74"
/dev/sdb1: LABEL="System Reserved" UUID="269CF16E9CF138BF" TYPE="ntfs" PARTUUID="56e70531-01"
/dev/sdb2: LABEL="WINDOWS" UUID="8E9CF8789CF85BE1" TYPE="ntfs" PARTUUID="56e70531-02"
/dev/sdb3: UUID="178f94dc-22c5-4978-b299-0dfdc85e9cba" TYPE="swap" PARTUUID="56e70531-03"
/dev/sdb5: LABEL="DATA" UUID="2FBF44BB538624C0" TYPE="ntfs" PARTUUID="56e70531-05"
/dev/sdb6: UUID="138d610c-1178-43f3-84d8-ce66c5f6e644" SEC_TYPE="ext2" TYPE="ext3" PARTUUID="56e70531-06"
/dev/sdb7: UUID="b05656a0-1013-40f5-9342-a9b92a5d958d" TYPE="ext4" PARTUUID="56e70531-07"
/dev/sda5: UUID="47821fa1-118b-4a0f-a757-977b0034b1c7" TYPE="swap" PARTUUID="2c053dd4-47e0-4846-a0d8-663843f11a06"
pcurtis@defiant:~$ xed admin:///etc/fstab

and the contents of /etc/fstab after modification

# <file system> <mount point> <type> <options> <dump> <pass>

UUID=e07f0d65-8835-44e2-9fe5-6714f386ce8f / ext4 errors=remount-ro 0 1
# UUID=138d610c-1178-43f3-84d8-ce66c5f6e644 /home ext3 defaults 0 2
UUID=99e95944-eb50-4f43-ad9a-0c37d26911da /home ext4 defaults 0 2
UUID=2FBF44BB538624C0 /media/DATA ntfs defaults,umask=000,uid=pcurtis,gid=46 0 0
UUID=178f94dc-22c5-4978-b299-0dfdc85e9cba none swap sw 0 0

In summary: there are many advantages in having one's home directory on a separate partition, but overall this change is not a procedure to be carried out unless you are prepared to experiment a little. It is much better to get it right and create one when installing the system.

Cloning into a different username - Not Recommended but somebody will want to try

I have also tried to clone into a different username but do not recommend it. It is possible to change the folder name and set up all the permissions, and everything other than Wine should be OK on a basic system. The .desktop files for Wine contain the user hard coded so these will certainly need to be edited, and all the configuration for the Wine programs will be lost. You will also have to change any of your scripts which have the user name 'hard coded'. I have done it once but the results were far from satisfactory. If you want to try, you should do the following before you log in for the first time after replacing the home folder from the archive. Change the folder name to match the new user name; the following commands then set the owner and group to the new user and give standard permissions for all the files other than .dmrc, which is a special case.

sudo chown -R eachusername:eachusername /home/eachusername
sudo chmod -R 755 /home/eachusername
sudo chmod 644 /home/eachusername/.dmrc

This needs to be done before the username is logged into for the first time, otherwise many desktop settings will be lost and the following warning message appears.

Users $Home/.dmrc file is being ignored. This prevents the default session and language from being saved. File should be owned by user and have 644 permissions.
Users $Home directory must be owned by user and not writable by others.

If this happens it is best to start again, remembering that the archive extraction does not delete files, so you need to get rid of the folder first!
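If you do go down this route, the hard coded names in .desktop files and scripts can at least be hunted down mechanically. A sketch only, with assumed old and new usernames, and do check what grep finds before letting sed loose:

grep -rl olduser /home/newuser/.local/share/applications
sudo sed -i 's|/home/olduser|/home/newuser|g' /home/newuser/.local/share/applications/*.desktop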

==================================================================

12th August 2018

Encrypting an existing user's home folder.

It is possible to encrypt an existing user's home folder provided there is at least 2.5 times the folder's size available in /home - a lot of workspace is required and a backup is made as part of the process.
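It is worth checking the figures before starting; the folder's size and the free space on the /home partition can be compared with:

du -sh /home/user1
df -h /home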

You also need to do it from another user's account. If you do not already have one, an extra basic user with admin (sudo) privileges is required, and the user must be given a password otherwise sudo can not be used.

You can create this basic user very easily and quickly using Users and Groups by Menu -> Users and Groups -> Add Account, set Type to Administrator, provide a username and Full name ... -> Create -> Highlight the user and click Password to set a password, otherwise you can not use sudo.

Log out and log in to your new basic user.

Now you can run this command to encrypt a user:

sudo ecryptfs-migrate-home -u user

You'll have to provide your user account's login password. After you do, your home folder will be encrypted and you will be presented with some important notes. In summary, the notes say:

  1. You must log in as the other user account immediately – before a reboot!
  2. A copy of your original home directory was made. You can restore the backup directory if you lose access to your files. It will be of the form user.8random8
  3. You should generate and record the recovery passphrase (aka Mount Passphrase).
  4. You should encrypt your swap partition, too.

The highlighting is mine and I reiterate: you must log out and log in to the user whose account you have just encrypted before doing anything else.

Once you are logged in you should also create, and save somewhere very safe, the recovery phrase (also described as a randomly generated mount passphrase). You can repeat this at any time whilst logged into the user with the encrypted account like this:

user@lafite ~ $ ecryptfs-unwrap-passphrase
Passphrase:
randomrandomrandomrandomrandomra
user@lafite ~ $

Note the confusing request for a Passphrase - what is required here is your login password/passphrase. This will not be the only case where you are asked for a passphrase which could be either your login passphrase or your mount passphrase! The mount passphrase is important - it is what actually unlocks the encryption. There is an intermediate stage when you log into your account where your login password is used to temporarily regenerate the actual mount passphrase. This linkage needs to be updated if you change your login password, and for security reasons this is not done if you change your login password in a terminal using passwd user, which could be done remotely. If you get the two out of step the mount passphrase may be the only way to retrieve your data, hence its great importance. It is also required if the system is lost and you are restoring from backups.

The documentation in various places states that the GUI Users and Groups utility updates the linkage between the login and mount passphrases, but I have found that the password change facility is greyed out in Users and Groups for users with encrypted home folders. In a single test I used plain passwd from within the actual user's account and that did seem to update both; everything kept working and allowed me to log in after a restart.
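For the record, the ecryptfs-utils package also includes a utility to re-wrap the mount passphrase with a new login passphrase if the two do get out of step - I have not needed it myself, so treat this as a pointer rather than a tested recipe. It is run from the user's own account and asks for the old and new (login) wrapping passphrases:

ecryptfs-rewrap-passphrase ~/.ecryptfs/wrapped-passphrase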

Mounting an encrypted home folder independently of login.

A command line utility ecryptfs-recover-private is provided to mount the encrypted data but it currently has several bugs when used with the latest Ubuntu or Mint.

  1. You have to specify the path rather than let the utility search.
  2. You have to manually link keyrings with a magic incantation, namely sudo keyctl link @u @s, after every reboot - which I do not understand at all. A man keyctl indicates that it links the User Specific Keyring (@u) to the Session Keyring (@s). See https://bugs.launchpad.net/ubuntu/+source/ecryptfs-utils/+bug/1718658 for the bug report.

The following is an example of using ecryptfs-recover-private and the mount passphrase to mount a home folder as read/write (the --rw option), doing an ls to confirm, then unmounting and checking with another ls.

pcurtis@lafite:~$ sudo keyctl link @u @s
pcurtis@lafite:~$ sudo ecryptfs-recover-private --rw /home/.ecryptfs/pauline/.Private
INFO: Found [/home/.ecryptfs/pauline/.Private].
Try to recover this directory? [Y/n]: y
INFO: Found your wrapped-passphrase
Do you know your LOGIN passphrase? [Y/n] n
INFO: To recover this directory, you MUST have your original MOUNT passphrase.
INFO: When you first setup your encrypted private directory, you were told to record
INFO: your MOUNT passphrase.
INFO: It should be 32 characters long, consisting of [0-9] and [a-f].

Enter your MOUNT passphrase:
INFO: Success! Private data mounted at [/tmp/ecryptfs.8S9rTYKP].
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
Desktop Dropbox Pictures Templates
Documents Videos Downloads Music Public
pcurtis@lafite:~$ sudo umount /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$

The above deliberately took the long way rather than use the matching LOGIN passphrase as a demonstration.

I have not yet bothered with encrypting the swap partition as it is rarely used if you have plenty of memory and swappiness set low, as discussed earlier.
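For the record, ecryptfs-utils provides a helper which is said to set up an encrypted swap for you - again noted here as a pointer, not something I have tested:

sudo ecryptfs-setup-swap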

Once you are happy you can delete the backup folder to save space. If you use nemo as root, make sure you really delete it (right click -> Delete) - do not risk it ending up in a root trash, which is a pain to empty!

Feature or Bug - home folders remain mounted after logout?

In the more recent versions of Ubuntu and Mint the home folders remain mounted after logout. This also occurs if you log in at a console or remotely over SSH. This is useful in many ways, and you are still fully protected if the machine is off when it is stolen; you have little protection in any case if it is turned on and merely suspended. Some people however log out or suspend expecting full protection, which is not the case. In exchange it makes backing up and, in particular, restoring a home folder easier.
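You can check at any time whether a particular home folder is actually mounted (decrypted); an encrypted home that is not mounted shows only a README and an Access-Your-Private-Data link rather than the usual folders:

mount | grep ecryptfs
ls /home/user1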

Backing up an encrypted folder.

A tar archive can be generated from a mounted home folder in exactly the same way as before, as the folder stays mounted (decrypted) when you change user, which ensures the folder is static. If that were not the case you could log in at a console (by Ctrl Alt F2) and then switch back to the GUI with Ctrl Alt F7, or log in via SSH, to make sure the folder was mounted to allow a backup. Either way it is best to log out at the end.

Another, and arguably better, alternative is to mount the user's folder via ecryptfs-recover-private and back up using Method 3 from the mount point like this:

sudo ecryptfs-recover-private --rw /home/.ecryptfs/user1/.Private

cd /tmp/ecryptfs.8S9rTYKP && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

Restoring to an encrypted folder - Untested

Mounting via ecryptfs-recover-private --rw seems the most promising way but is not tested yet. The mount point corresponds to the user's home folder (/home/user1, or /tmp/ecryptfs.8S9rTYKP in the example above) so you have to use Method 3 to create and retrieve your archive in this situation, namely:

cd /home/user1 && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs
# or
cd /tmp/ecryptfs.8S9rTYKP && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz" -C /tmp/ecryptfs.randomst

These are all single lines if you cut and paste. The . (dot) means everything at that level goes into the archive.

Solving Problems with Dropbox after encrypting home folders

The day after I encrypted the last home folder I got a message from Dropbox saying that in 3 months' time they would only support EXT4 folders under Linux, and that encryption would not be supported. They also noted the folders should be on the same drive as the operating system.

My solution has been to move the Dropbox folders to a new EXT4 partition on the SSD. What I actually did was to make space on the hard drive for a swap partition and move the swap off the SSD to make space for the new partition. It is more sensible to have the swap on the hard drive as it is rarely used, and when it is used it tends to reduce the life of the SSD. Moving the swap partition needed several steps and some had to be repeated for both operating systems to avoid errors in booting. The stages in summary were:

  1. Use gparted to make the space by shrinking the DATA partition by moving the end
  2. Format the free space to be a swap partition.
  3. Right click on the partition to turn it on by swapon
  4. Add it in /etc/fstab using blkid to identify the UUID so it will be auto-mounted (see the example lines after this list)
  5. Check you now have two swaps active by cat /proc/swaps
  6. Reboot and check again to ensure the auto-mount is correct
  7. Use gparted to turn off swap on the SSD partition - Rt Click -> swapoff
  8. Comment out the SSD swap partition in /etc/fstab to stop it auto-mounting
  9. Reboot and check only one active partition by cat /proc/swaps
  10. Reformat the ex swap partition to EXT4
  11. Set up a mount point in /etc/fstab of /media/DROP; set the label to DROP (see the example lines after this list)
  12. Reboot and check it is mounted and visible in nemo
  13. Get to a root browser in nemo and set the owner of /media/DROP from root to 1000, group to adm and allow rw access to everyone.
  14. Create folders called user1, user2 etc in DROP for the dropbox folders to live in. It may be possible to share a folder but I did not want to risk it.
  15. Move the dropbox folders using dropbox preferences -> Sync tab -> Move: /media/DROP/user1
  16. Check it all works.
  17. Change folders in KeePass2, veracrypt, jdotxt and any others that use dropbox.
  18. Repeat from 15 for other users.
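For reference, the two lines added to /etc/fstab in steps 4 and 11 look like the following - the UUIDs here are placeholders, use blkid to find your own as in the earlier example:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none swap sw 0 0
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /media/DROP ext4 defaults 0 2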

Dropbox caused me a lot of time-wasting work but it did force me to move the swap partition to the correct place.


Before You Leave

I would be very pleased if visitors could spare a little time to give us some feedback - it is the only way we know who has visited the site, whether it is useful and how we should develop its content and the techniques used. I would be delighted if you could send comments or just let me know you have visited by sending a quick Message.

Copyright © Peter and Pauline Curtis
Content revised: 12th August, 2018