The Use of Solid State Drives in Linux
This page covers the use of Solid State Disks (SSDs) under Linux, with specific details of setting up under Ubuntu and Linux Mint. SSDs offer great performance advantages and have become increasingly affordable, and many new machines are using them. They are, however, quite different in operation from hard disks, and a number of detailed changes need to be made to get the best from them and to maintain their speed of operation.
This page was initially extracted from my page on our Helios Ultrabook, which employs a Samsung Evo 250 GByte SSD, although I will try to make it more general and also cover adding an SSD to an existing machine which will be dual booted with Windows.
It currently starts with a checklist which doubles as an index and then explores each item in more detail. An introduction to how an SSD works and its differences from a conventional hard drive is planned for the future.
Only four of the above are likely to need addressing in a dedicated Linux system using Ubuntu or Mint with a single SSD and provided by Chillblast - they are in bold. The first two are factors in the initial partitioning of the SSD, and the other two are less urgent and can be carried out during the setting-up procedure.
Firstly, the SATA controller mode needs to be set to AHCI mode in the BIOS. This whole area is full of AS (Alphabet Soup). The BIOS is the Basic Input Output System, which lives in Read Only Memory (ROM) built onto the motherboard and allows the computer to access enough peripherals to start up and load the operating system from disk. SATA stands for Serial ATA (Advanced Technology Attachment), and the Advanced Host Controller Interface (AHCI) specifies the operation of Serial ATA (SATA) host bus adapters. This specification defines how data is exchanged between the host system memory and attached storage devices. AHCI gives software developers and hardware designers a standard method for detecting, configuring, and programming SATA/AHCI adapters. AHCI is separate from the SATA standards, but it exposes SATA's more advanced capabilities (such as hot swapping, native command queuing and TRIM) so host systems can use them. These advanced capabilities need to be enabled in the BIOS to fully support an SSD. You will sometimes see AHCI confused with ACPI (the Advanced Configuration and Power Interface), which is a quite separate specification for power management; the option you are looking for in the BIOS will be called AHCI. I checked back on a 2007 BIOS on my MSI Wind and the AHCI option was already available.
BIOS entry is via F2 during the boot sequence on many modern machines, including my Defiant and Helios. The BIOS has to be navigated with the cursor keys, and although there are several different basic BIOSes they are always customised by the computer manufacturers, so you just have to explore and see what you have. It was already set correctly on the Helios as delivered, as it had been on the Defiant, but it is something that is essential to check. If you have an old Windows system, changing the SATA mode may cause Windows to stop booting, because the AHCI driver is not loaded by default - Windows checks the mode during installation. As I have not had to change this myself I can only direct you to https://support.microsoft.com/en-us/kb/922976 for how to load the driver before you change the mode.
If you are setting up a new machine and partitioning the drive from scratch then the alignment is very likely to be correct, as the most common partitioning programs, including gparted which I recommend, align partitions correctly by default, as does the partitioning built in to most installers, including those for Ubuntu and Mint. Even so it is important to understand, and if it is an existing machine preloaded with Windows that you are reconfiguring, one needs to check and adjust if necessary.
Partition alignment is essential for optimal performance and longevity, as SSDs are based on flash memory and thus differ significantly from hard drives. While reading remains possible in a random-access fashion on pages of typically 4 KiB, erasure is only possible for blocks which are much larger, typically 512 KiB, so it is necessary to align the absolute start of every partition to a multiple of the erase block size. The defaults in the installers in Mint/Ubuntu do this to a 1 MiB boundary (but may be fooled if the drive is already formatted incorrectly), and this may also happen with gparted.
Firstly we need to use a utility called fdisk to see what we have. It needs to be run as root, as it is capable of changing as well as just listing information. The following table is from my Defiant, where the partitioning was done using gparted, and assumes that the SSD is /dev/sda. The start values need to be exactly divisible by 2048 (1 MiB = 2048 × 512 bytes) for the normal sector size of 512 bytes.
$ sudo fdisk -lu /dev/sda
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F884763F-6905-493E-8DB7-E1E84B242AE3
Device Start End Sectors Size Type
/dev/sda1 4096 1130495 1126400 550M EFI System
/dev/sda2 1130496 66666495 65536000 31.3G Linux filesystem
/dev/sda3 115818496 246890495 131072000 62.5G Linux filesystem
/dev/sda4 467916800 488396799 20480000 9.8G Linux swap
/dev/sda5 246890496 467916799 221026304 105.4G Linux filesystem
/dev/sda6 66666496 115818495 49152000 23.4G Linux filesystem
Partition table entries are not in disk order.
The partition start sectors are all exactly divisible by 2048 as required. If that had not been the case then gparted could have been used to re-align them - see http://lifehacker.com/5837769/ which is written with Windows in mind but explains the strange-seeming logic of using a double move. Moving or resizing a big partition is however SSLLoooww and can take hours, so check first.
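The divisibility check can also be scripted rather than done by eye. This is a minimal sketch (the is_aligned helper is my own, not a standard tool) using start sectors from the table above:

```shell
# A partition is 1 MiB-aligned when its start sector is divisible by
# 2048 (2048 sectors x 512 bytes = 1 MiB).
is_aligned() {
    if [ $(( $1 % 2048 )) -eq 0 ]; then
        echo "start sector $1: aligned"
    else
        echo "start sector $1: NOT aligned"
    fi
}

is_aligned 4096      # /dev/sda1 from the table above
is_aligned 1130496   # /dev/sda2
```

The live start sectors can be read from /sys/block/sda/sdaN/start if you prefer to loop over them automatically.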
Only a limited number of file systems currently support the TRIM operation. In practice this means one should format /, /home and /media/data as ext4 on a Linux system. The ntfs-3g package that provides NTFS support in Mint does not support TRIM, although versions with support are rumoured to be under development. Dual-booted systems sharing an NTFS partition currently have to depend on Windows for TRIM support. I am not dual booting on this machine, and if I add an SSD to my Defiant I will put or keep my shared partition on a hard drive.
TRIM, for those of you who aren't aware, is a vital part of the overall garbage collection without which SSD performance will slow down over a few weeks, or in some cases days. SSDs are not like a magnetic disk where data can simply be overwritten: a block must first be completely erased before the required bits can be written. While reading remains possible in a random-access fashion on pages of typically 4 KiB, erasure is only possible for blocks which are much larger, typically 512 KiB.
TRIM is the mechanism which the operating system uses to inform the SSD which blocks of data are no longer in use and can be wiped by the internal garbage collection in the SSD, ready for writing. If empty blocks do not exist then there has to be collection of data, clearing of space and rewriting, which can amplify the number of reads and writes considerably, especially when the drive gets full. SSDs always perform their own internal garbage collection, but the TRIM command makes it much more efficient by allowing the drive to avoid 'write amplification', not re-writing data that has actually been cleared for deletion. Together, TRIM and the subsequent more efficient garbage collection maintain overall drive performance while minimizing wear-and-tear.
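As a rough illustration of the write amplification described above: rewriting a single 4 KiB page inside a full 512 KiB erase block forces the drive to erase and re-program the whole block. The figures are the typical page and erase-block sizes quoted earlier, not measured values:

```shell
# Worst-case write amplification: one 4 KiB page re-written means a
# whole 512 KiB erase block must be erased and re-programmed.
page_kib=4
block_kib=512
factor=$(( block_kib / page_kib ))
echo "worst-case amplification: ${factor}x"
```

In practice drives rarely hit this worst case, but the arithmetic shows why keeping erased blocks available via TRIM matters.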
The original version of the TRIM command is a non-queued command and consequently incurs a major execution penalty if used carelessly, e.g. if sent after each filesystem delete command. The non-queued nature of the command requires the driver to first wait for all outstanding commands (in particular writes) to finish, issue the TRIM command, then resume normal commands. TRIM can take a lot of time to complete, depending on the firmware in the SSD, and may even trigger an immediate garbage collection cycle before it is required. This penalty can be minimised by periodically doing a batched trim, rather than trimming on every file deletion, and scheduling such batch jobs for times when system utilisation is minimal.
A single partition can be TRIMed in the terminal by:
sudo fstrim -v /
The -v is verbose and the / specifies the root partition.
There is a better approach, which is to call fstrim-all, which trims all mounted file systems which support it and where the SSD has not been blacklisted in some way. This is usually put in a cron job so it is run every week or so. For example Mint has a file /etc/cron.weekly/fstrim containing:
# call fstrim-all to trim all mounted file systems which support it
# This only runs on Intel and Samsung SSDs by default, as some SSDs with faulty
# firmware may encounter data loss problems when running fstrim under high I/O
# load (e. g. https://launchpad.net/bugs/1259829). You can append the
# --no-model-check option here to disable the vendor check and run fstrim on
# all SSD drives.
Note: The file has to be executable to be run.
This is the file to modify if you need to inhibit the automatic TRIM, or move it to /etc/cron.daily if you want it to run more frequently.
Mint 19 is set up to do an automatic trim every week, which seems to be rather too long an interval; if one inspects the result of an occasional manual trim with sudo fstrim -av, you may well see quite large numbers.
Earlier versions of Ubuntu/Mint used cron to schedule the trims, but systemd timers are used in Mint 19. When a service is controlled by a timer there are two linked 'units', which are files with the same name: the service has an extension of .service and the timer an extension of .timer. Only the timer has to be enabled in systemd for the service to be run as prescribed by the timer. In the case of fstrim both are part of the Linux system and are found in /usr/lib/systemd/system/; they should not be edited directly, as systemctl provides built-in mechanisms for editing and modifying unit files if you need to make adjustments.
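For illustration, here is a hypothetical pair of units (myjob.service and myjob.timer - the names are invented for this sketch) showing how the service and timer link together:

```ini
# /etc/systemd/system/myjob.service
[Unit]
Description=Example batch job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myjob

# /etc/systemd/system/myjob.timer
[Unit]
Description=Run myjob once a week

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

It would be started with sudo systemctl enable --now myjob.timer; the service itself is never enabled directly, exactly as with fstrim.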
This section is loosely based on https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units , https://wiki.archlinux.org/index.php/Systemd/Timers#Timer_units and the manual page systemd.timer(5).
The correct way to change a system service or timer is to add a drop-in (override) file. The edit command, by default, will open a drop-in file for the unit in question, in this case fstrim.timer:
sudo systemctl edit fstrim.timer
This will be a blank file that can be used to override or add directives to the unit definition. A directory will be created within the /etc/systemd/system directory which contains the name of the unit with .d appended. For instance, for the fstrim.timer, a directory called fstrim.timer.d will be created.
Within this directory, a drop-in file will be created called override.conf. When the unit is loaded, systemd will, in memory, merge the override with the full unit file. The override's directives will take precedence over those found in the original unit file.
The override.conf file opened by sudo systemctl edit fstrim.timer appears in the terminal in a rather unfriendly editor called nano. Fortunately we only have to paste a few lines into an empty file, so this should be sufficient.
To remove any additions you have made, delete the unit's .d configuration directory and drop-in file from /etc/systemd/system. For instance, to remove our override, we could type:
sudo rm -r /etc/systemd/system/fstrim.timer.d
After deleting the file or directory, you should reload the systemd process so that it no longer attempts to reference these files and reverts back to using the system copies. You can do this by typing:
sudo systemctl daemon-reload
So if we look inside the current fstrim.timer using systemctl cat fstrim.timer we find:
pcurtis@defiant:~$ systemctl cat fstrim.timer
Description=Discard unused blocks once a week
We now edit it so we can add a suitable override with
sudo systemctl edit fstrim.timer
and paste in the following to change the title and timing from weekly to daily:
[Unit]
Description=Discard unused blocks once a day

[Timer]
OnCalendar=
OnCalendar=daily
Tip: Timer settings are cumulative, so in the override one needs to reset the existing timers with an initial blank OnCalendar= line, otherwise your OnCalendar setting gets added to the existing one.
In theory we do not need to reload the systemd process with sudo systemctl daemon-reload if we use edit, but I did it anyway, and when I repeated the cat I got:
pcurtis@defiant:~$ systemctl cat fstrim.timer
Description=Discard unused blocks once a week
Description=Discard unused blocks once a day
When I checked the status of the fstrim service after about 24 hours I got:
pcurtis@defiant:~$ systemctl status fstrim.service
Loaded: loaded (/lib/systemd/system/fstrim.service; static; vendor preset: enabled)
Active: inactive (dead) since Sat 2018-08-25 03:08:38 BST; 11h ago
Main PID: 31821 (code=exited, status=0/SUCCESS)
Aug 25 03:08:37 defiant systemd: Starting Discard unused blocks...
Aug 25 03:08:38 defiant fstrim: /home: 965.9 MiB (1012805632 bytes) trimmed
Aug 25 03:08:38 defiant fstrim: /: 1.7 GiB (1760002048 bytes) trimmed
Aug 25 03:08:38 defiant systemd: Started Discard unused blocks.
I have gone through this at some length as it is a useful technique with wider applicability and I had to spend a lot of time before I understood how systemd timers worked and how to correctly change one.
I initially had to inhibit automatic TRIM because I had a conflict between the Samsung 850 Evo SSD and queued TRIM, which was not blacklisted in the kernel I was using. I commented out the fstrim-all call in /etc/cron.weekly/fstrim, as that is easier than moving the file. Making it non-executable should also work, but renaming it will not.
If you want to check if queued TRIM has been disabled in the kernel then do:
dmesg | grep -i trim
Overprovisioning, namely reserving some areas of the disk unformatted, is also desirable for similar reasons to TRIM, namely maintaining speed and decreasing disk writes, as far less garbage collection is required. A certain amount of overprovisioning is already reserved by manufacturers (~7%), but there is significant benefit if it is increased by another 10 to 20%. When I looked at my requirements I decided I could afford to leave space for an extra system root partition of 24 GBytes to do comparative tests like I did on the Defiant.
When tests are complete this will be removed, providing 10% overprovisioning, which should be adequate to maintain speed and decrease write amplification between the TRIM operations carried out on a weekly basis. Note: one should TRIM the partition before returning it to empty unformatted space.
The changes described here are desirable for all disk drives, and I have already implemented them on all my systems, but they are even more important when it comes to SSDs. A primary way to reduce disk access is to reduce the use of swap space, the area on disk which forms part of the virtual memory of your machine - a combination of the accessible physical memory (RAM) and the swap space. Swap space temporarily holds memory pages that are inactive. It is used when your system decides that it needs physical memory for active processes and there is insufficient unused physical memory available. If the system happens to need more memory resources or space, inactive pages in physical memory are moved to the swap space, thereby freeing up that physical memory for other uses. This is rarely required, as most machines have plenty of real memory available; if swapping is required the system tries to optimise it by making moves in advance of their becoming essential. Note that the access time for swap is much slower than RAM, even with an SSD, so do not consider it a complete replacement for physical memory. Swap space can be a dedicated swap partition (normally recommended), a swap file, or a combination of swap partitions and swap files. The swap space is also used when Hibernating the machine.
It is normally suggested that the swap partition size is the same as the physical memory; it needs to be if you ever intend to Hibernate (suspend to disk, which copies the entire memory to disk before shutting down completely). It is easy to see how much swap space is being used with the System Monitor program or one of the system monitoring applets. With machines with plenty of memory, like my Defiant and Helios which have 8 GBytes, you will rarely see even a few percent in use if the system is set up correctly - which brings us to swappiness.
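Swap usage can also be checked from a terminal; this sketch uses the standard free utility and the kernel's /proc/swaps file:

```shell
# Show memory and swap usage in human-readable units
free -h
# List the active swap areas (partition or file), if any
cat /proc/swaps
```

If the swap line in free shows only a few MiB used after days of uptime, your swappiness setting is doing its job.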
There is a parameter called swappiness which controls the tendency of the kernel to move processes out of physical memory and onto the swap disk. See Performance tuning with ''swappiness''. As even SSDs are much slower than RAM, this can lead to slower response times for the system and applications if processes are too aggressively moved out of memory, and it also causes wear on solid state disks.
Reducing the default value of swappiness will improve overall performance for a typical Ubuntu desktop installation. There is a consensus that a value of swappiness=10 is recommended for a desktop and 60 for a server with a hard disk. I have been using a swappiness of 10 on my two MSI U100 Wind computers for many years - they used to have 1 GByte of RAM and swap was frequently used. In the case of the Defiant I have 8 GBytes of memory and swap is much less likely to be used. The consensus view is that the optimum value for swappiness is 1 or even 0 in these circumstances. I have set 1 at present on both the Defiant and the Helios with its SSD.
To check the swappiness value:
cat /proc/sys/vm/swappiness
For a temporary change (lost on reboot) with a swappiness value of 1:
sudo sysctl vm.swappiness=1
To make a change permanent you must edit a configuration file:
sudo gedit /etc/sysctl.conf
Search for vm.swappiness and change its value as desired. If vm.swappiness does not exist, add it to the end of the file like so:
vm.swappiness=1
Save the file and reboot.
There is another parameter which also has an influence on perceived speed, as it affects the inode/dentry cache - a layer above the block cache which caches directory entries and other filesystem-related things that cost even more to look up than just block device contents. This is even more obscure (especially the name, vfs_cache_pressure), but there are some tests at https://freeswitch.org/confluence/display/FREESWITCH/SSD+Tuning+for+Linux which indicate that the default (100) can be halved to give an improvement in perceived performance. This is done by adding another line to /etc/sysctl.conf, so for the Defiant I have:
vm.vfs_cache_pressure=50
I have not implemented it on the Helios as the SSD already gives a huge improvement and extra tuning is unlikely to give any perceivable enhancement.
Hibernation (suspend to disk) should be inhibited as it causes a huge number of write actions, which is very bad for an SSD. If you are dual booting, make sure Windows also has hibernation inhibited - in any case it is catastrophic if both hibernate to the same disk. Ubuntu has inhibited hibernation, but Mint overrules this and I prefer to change it back. An easy way is to, in a terminal, do:
sudo mv -v /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla /
Note this is a single line and is best copied and pasted into the terminal.
It moves the settings file that enables hibernation to the main directory / (root), rendering it ineffective. The new location is safe storage, from which you can retrieve it again should you ever wish to restore hibernation. Thanks to https://sites.google.com/site/easylinuxtipsproject/mint-cinnamon-first for this idea. Note: I have not checked and have no views on any of the other information on that page.
One needs to reboot before this is active. After the reboot, Hibernation should no longer be one of the options when you close the computer. Applets which try to hibernate will demand root access.
File fragmentation in Linux can be completely ignored if there is 20% or more free space on a partition, and defragmentation is never done automatically. Defragmentation must be avoided because the many write actions it causes will wear an SSD rapidly. Also make sure a dual-booted system does not wear your SSD by defragmentation. It is best to always have 20% spare space in any partition, SSD (or hard disk), even if you have overprovisioned. Recent versions of Windows inhibit any automatic defragmentation if they detect that a drive is an SSD.
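A quick way to check you are keeping that 20% headroom on the root partition - a sketch using GNU df's --output option:

```shell
# Report how full the root partition is; staying below ~80% used
# avoids fragmentation problems and gives the SSD's garbage
# collection room to work.
used=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
echo "root partition is ${used}% used"
if [ "$used" -gt 80 ]; then
    echo "WARNING: less than 20% free space"
fi
```

The same check can be pointed at /home or a data partition by changing the path given to df.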
There are other changes to the file access which can reduce the number of 'writes' such as noatime. This seems to be an area full of misinformation and papers often end up with making suggestions which are already defaults or have undesirable consequences so I will add some thoughts.
Current versions of the Linux kernel support four atime-related mount options, which can be specified in /etc/fstab: strictatime, relatime, noatime and lazytime.
During discussions of the defaults back in 2009, Linux kernel developer Ingo Molnár called atime "perhaps the most stupid Unix design idea of all times," adding: "Think about this a bit: 'For every file that is read from the disk, let's do a ... write to the disk! And, for every file that is already cached and which we read from the cache ... do a write to the disk!'" He further emphasized the performance impact thus:
atime updates are by far the biggest I/O performance deficiency that Linux has today. Getting rid of atime updates would give us more everyday Linux performance than all the pagecache speedups of the past 10 years, combined.
What nobody seems to realise is that since version 2.6.30 (2009), the Linux kernel has defaulted to relatime, resulting in atime values not being updated on most file reads. The behaviour behind the relatime mount option offers sufficient performance for most purposes and does not break any applications.
Initially, relatime only updated atime if atime < mtime or atime < ctime; that was subsequently modified to update atimes that were 24 hours old or older.
noatime will still be better, but only by a fraction of the 30-40% difference between atime and noatime, as most of the gain comes from the relatime default already in place. noatime is probably still worthwhile if you have an SSD, but not a priority. I will add a copy of my /etc/fstab if I change to noatime and have fully tested it with Dropbox and Unison, although they should not break. Until then assume it is not worth doing!
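For illustration only, a hypothetical /etc/fstab entry with noatime added - the UUIDs are placeholders, and as noted above I have not yet adopted this myself:

```text
# <file system>                            <mount point>  <type>  <options>                  <dump> <pass>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /              ext4    noatime,errors=remount-ro  0      1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /home          ext4    noatime                    0      2
```

The only change from a stock install is adding noatime to the options column; everything else stays as the installer wrote it.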
lazytime may be worth trying at some point, but I have read it may need an update to the initramfs using dracut to work on the root partition - I will wait until it becomes well tested and becomes the default. If you insist (and this is Not Tested), be prepared to install dracut and run:
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
The scheduler for disk access is normally optimised for hard drives rather than SSDs and may need changing. The default for Ubuntu/Mint is 'deadline', which is a good option; it is said that 'noop' may be better if only SSDs are in use. See here for further details. (No action currently planned other than a check that the scheduler is unchanged in Mint 17.3.) You can check what you have by:
$ cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
and the one in square brackets is the one in use.
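If you want to pull the active scheduler out of that output in a script, the bracketed word can be extracted with sed. A small sketch (the active_scheduler helper is my own, not a standard tool):

```shell
# The kernel marks the scheduler in use with square brackets,
# e.g. "noop [deadline] cfq"; extract the bracketed word.
active_scheduler() {
    sed -n 's/.*\[\([^]]*\)\].*/\1/p'
}

echo "noop [deadline] cfq" | active_scheduler   # prints: deadline
```

In live use you would feed it the real file, e.g. active_scheduler < /sys/block/sda/queue/scheduler. To switch temporarily you can write to the same file (echo noop | sudo tee /sys/block/sda/queue/scheduler); the change reverts at reboot.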
I would be very pleased if visitors could spare a little time to give me some feedback - it is the only way I know who has visited, whether it is useful and how I should develop its content and the techniques used. I would be delighted if you could send comments or just let me know you have visited by sending a quick message to me.