GPG on Yubikey for git

OpenPGP and GNU Privacy Guard

One of the advantages of electronic files over paper hard copies is you can encrypt electronic files so that they are only accessible by authorized people. If they fall into the wrong hands, it doesn’t matter. Only you and the intended recipient can access the contents of the files.

The OpenPGP standard describes a system of encryption called public-key encryption. The GNU Privacy Guard implementation of that standard resulted in gpg, a command-line tool for encrypting and decrypting in accordance with the standard.

The standard outlines a public-key encryption scheme. Although it is called “public-key”, there are two keys involved. Each person has a public key and a private key. Private keys, as the name suggests, are never revealed nor transmitted to anyone else. Public keys can be safely shared. In fact, public keys must be shared for the scheme to work.

When a file is encrypted, the recipient’s public key is used in the encoding process, and the sender can also sign the file with their private key. The file can then be delivered to the recipient. They use their private key to decrypt the file, and the sender’s public key to verify the signature.
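As a minimal, self-contained sketch of that round trip, the commands below create a throwaway key pair in a temporary keyring, encrypt a file to that identity, and decrypt it again. The demo@example.com identity, the file names, and the empty passphrase are all made up for the demo:

```shell
# Use a temporary keyring so the demo doesn't touch your real ~/.gnupg
export GNUPGHOME="$(mktemp -d)"

# Generate a throwaway key pair (empty passphrase, never expires)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Demo User <demo@example.com>' default default never

# Encrypt a file to the recipient's public key
echo 'secret message' > note.txt
gpg --batch --yes --trust-model always \
    --recipient demo@example.com --output note.txt.gpg --encrypt note.txt

# Decrypt it with the matching private key
gpg --batch --quiet --decrypt note.txt.gpg
```

In practice the sender and the recipient have separate keyrings; using one keyring for both sides just keeps the sketch self-contained.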

Public and private keys are generated as a matched pair and tied to a particular identity. Even if you don’t transmit sensitive material to other people, you may use them on your own computer to add an extra layer of protection to private documents.


The encryption uses world-class algorithms and cryptographic functions. Without the appropriate public and private keys, you simply can’t get into encrypted files. And, should you lose your keys, that goes for you too. Generating new keys won’t help. To decrypt your files you need the keys that were used in the encryption process.

Needless to say, backing up your keys is of paramount importance, as is knowing how to restore them. Here’s how to accomplish these tasks.

The .gnupg Directory

Your keys are stored in a directory called “.gnupg” in your home directory. This directory will also store the public keys of anyone that has sent encrypted files to you. When you import their public keys, they are added to an indexed database file in that directory.

Nothing in this directory is stored in plain text, of course. When you generate your GPG keys you’re prompted for a passphrase. Hopefully, you’ve remembered what that passphrase is. You’re going to need it. The entries in the “.gnupg” directory cannot be decrypted without it.

If we use the tree utility to look at the directory, we’ll see this structure of subdirectories and files. You’ll find tree in your distribution’s repositories if you don’t already have it on your computer.

tree .gnupg

The directory structure of the .gnupg directory

The contents of the directory tree are:

  • openpgp-revocs.d: This subdirectory contains your revocation certificate. You’ll need this if your private key ever becomes common knowledge or otherwise compromised. Your revocation certificate is used in the process of retiring your old keys and adopting new keys.
  • private-keys-v1.d: This subdirectory stores your private keys.
  • pubring.kbx: An encrypted file. It contains public keys, including yours, and some metadata about them.
  • pubring.kbx~: This is a backup copy of “pubring.kbx.” It is updated just before changes are made to “pubring.kbx.”
  • trustdb.gpg: This holds the trust relationships you have established for your own keys and for any accepted public keys belonging to other people.

You should be making regular, frequent backups of your home directory anyway, including the hidden files and folders. That will back up the “.gnupg” directory as a matter of course.

But you may think that your GPG keys are important enough to warrant a periodic backup of their own, or perhaps you want to copy your keys from your desktop to your laptop so that you have them on both machines. You’re you on both machines, after all.

Determining Which Keys to Back Up

We can ask gpg to tell us which keys are in your GPG system. We’ll use the --list-secret-keys option and the --keyid-format LONG option. gpg --list-secret-keys --keyid-format LONG

Listing the GPG key details to the terminal window

We’re told that GPG is looking inside the “/home/dave/.gnupg/pubring.kbx” file.

None of what appears on screen is your actual secret key.

  • The “sec” (secret) line shows the number of bits in the encryption (4096 in this example), the key ID, the date the key was created, and “[SC].” The “S” means the key can be used for digital signatures and the “C” means it can be used for certification.
  • The next line is the key fingerprint.
  • The “uid” line holds the ID of the key’s owner.
  • The “ssb” line shows the secret subkey, when it was created, and “E.” The “E” indicates it can be used for encryption.

If you have created multiple key pairs for use with different identities, they’ll be listed too. There’s only one key pair to back up for this user. The backup will include any public keys belonging to other people that the owner of this key has collected and decided to trust.

Backing Up

We can either ask gpg to back up all keys for all identities, or to back up the keys associated with a single identity. We’ll back up the public keys, the private keys, and the trust database file.

To back up the public keys, use the --export option. We’re also going to use the --export-options backup option. This ensures all GPG-specific metadata is included to allow the files to be imported correctly on another computer.

We’ll specify an output file with the --output option. If we didn’t do that, the output would be sent to the terminal window. gpg --export --export-options backup --output public.gpg

Exporting the public GPG keys

If you only wanted to back up the keys for a single identity, add the email address associated with the keys to the command line. If you can’t remember which email address it is, use the --list-secret-keys option, as described above. gpg --export --export-options backup --output public.gpg [email protected]

Exporting the public GPG keys for a single identity

To back up our private keys, we need to use the --export-secret-keys option instead of the --export option. Make sure you save this to a different file. gpg --export-secret-keys --export-options backup --output private.gpg

Exporting the private GPG keys

Because this is your private key, you’ll need to authenticate with GPG before you can proceed.

Note that you’re not being asked for your password. What you need to enter is the passphrase you supplied when you first created your GPG keys. Good password managers let you hold information like that as secure notes. They’re a good place to store it.

Providing the GPG passphrase to export the private keys

If the passphrase is accepted, the export takes place.

To back up your trust relationships, we need to export the settings from your “trustdb.gpg” file. We’re sending the output to a file called “trust.gpg.” This is a text file, and it can be viewed using cat.

gpg --export-ownertrust > trust.gpg
cat trust.gpg

Exporting the GPG trust relationships

Here are the three files we’ve created. ls -hl *.gpg

The three files created by the exporting commands

We’ll move these over to another computer, and restore them. This will establish our identity on that machine, and allow us to use our existing GPG keys.

If you’re not moving the keys to another computer and you’re just backing them up because you want to be doubly sure they’re safe, copy them to some other media and store them safely. Even if they fall into the wrong hands, your public key is public anyway, so there’s no harm there. And without your passphrase, your private key cannot be restored. But still, keep your backups safe and private.
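One way to keep such a backup private is to bundle the three export files and encrypt the bundle symmetrically with a passphrase. A sketch, using stand-in files and an inline passphrase so it runs unattended; interactively you would omit --batch and --passphrase and let gpg prompt:

```shell
# Stand-in files for the three exports (contents made up for the demo)
echo demo > public.gpg
echo demo > private.gpg
echo demo > trust.gpg

# Bundle them into one archive
tar -czf gpg-backup.tar.gz public.gpg private.gpg trust.gpg

# Symmetric (passphrase-only) encryption of the bundle
gpg --batch --yes --symmetric --cipher-algo AES256 \
    --pinentry-mode loopback --passphrase 'demo passphrase' gpg-backup.tar.gz

# The encrypted bundle is what goes onto the backup media
ls gpg-backup.tar.gz.gpg
```

The encrypted copy only protects against casual snooping of the media; anyone who learns the bundle passphrase still needs your key passphrase to use the private key inside.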

We’ve copied the files to a Manjaro 21 computer. ls *.gpg

The exported files transferred to a Manjaro computer

By default, Manjaro 21 uses the Z shell, zsh, which is why it looks different. But this doesn’t matter; it won’t affect anything. What we’re doing is governed by the gpg program, not the shell.

To import our keys, we need to use the --import option. gpg --import public.gpg

Importing the public GPG keys

The details of the key are displayed as it is imported. The “trustdb.gpg” file is also created for us. Importing the private key is just as easy; we use the --import option again. gpg --import private.gpg

Importing the private GPG keys

We’re prompted to enter the passphrase.

Entering the passphrase to import the private GPG keys

Type it into the “Passphrase” field, hit the “Tab” key, and hit “Enter.”

Confirmation of the imported private GPG keys

The details of the imported keys are displayed. In our case, we only have one key.

To import our trust database, type: gpg --import-ownertrust trust.gpg

Importing the GPG trust relationships

We can check everything has been imported properly by using the --list-secret-keys option once more. gpg --list-secret-keys --keyid-format LONG

Verifying the import has worked

This gives us exactly the same output we saw on our Ubuntu computer earlier.

Protect Your Privacy

Make sure your GPG keys are safe by backing them up. If you have a computer disaster or just upgrade to a newer model, make sure you know how to transfer your keys to the new machine.

Why Use a YubiKey?

A YubiKey is a hardware-based authentication device that can securely store secret keys. Once a private key is written to your YubiKey, it cannot be recovered. Keeping secrets off your computer is more secure than storing them on your computer’s hard drive—another application could read your SSH keys from the ~/.ssh folder.

Various YubiKeys from Yubico

Each type of YubiKey supports a variety of different “interfaces,” each with different use cases. Many people associate a YubiKey with generating long one-time passwords (OTP) that look like this:


However, generating one-time passwords is just a small slice of what you can do with a YubiKey. In this post, I’ll be talking about the OpenPGP interface and how you can use it for authentication.

If you don’t own a YubiKey, you can still follow along and skip the YubiKey parts.

What Is OpenPGP?

OpenPGP is a specification (RFC-4880), which describes a protocol for using public-key cryptography for encryption, signing, and key exchange, based on Phil Zimmermann’s original Pretty Good Privacy (PGP) work.

There is often confusion between PGP and GNU Privacy Guard (GnuPG or GPG), probably because of the inverted acronym. Sometimes these terms are used interchangeably, but GPG is an implementation of the OpenPGP specification (and arguably the most popular one).

You may have seen “Verified” badges on GitHub commits that use OpenPGP to confirm an author’s identity.

GitHub Verified Badge

Set Up and Configure a GPG Key

First, you need to generate a GPG key. You could do this directly on a YubiKey. However, you can NOT back up the keys once they are on the device. So instead, I’ll generate a GPG key on my computer, and once I have everything working, I’ll permanently move it to my YubiKey.

Start by generating a new key using gpg. If you already have a key, you can skip this first step:

gpg --full-generate-key
Please select what kind of key you want:
   (1) RSA and RSA
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
   (9) ECC (sign and encrypt) *default*
  (10) ECC (sign only)
  (14) Existing key from card
Your selection? 1 
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (3072) 4096 
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 2y 
Key expires at Sat Jun  3 15:08:09 2023 EDT
Is this correct? (y/N) y 

GnuPG needs to construct a user ID to identify your key.

Real name: Brian Demers 
Email address: [email protected] 
Comment: bdemers test key 
You selected this USER-ID:
    "Brian Demers (bdemers test key) <[email protected]>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o 
Create an RSA key.
Set the key size to 4096.
Expire in 2 years, you can extend the expiration later.
Enter your name.
Enter your email address.
Enter an optional comment.
You will be prompted for a secret passphrase.
Press o to save and exit.

Now you have a key! You can view your secret keys at any time by running:

gpg --list-secret-keys
sec   rsa4096 2021-06-03 [SC] [expires: 2023-06-03]
uid           [ultimate] Brian Demers (bdemers test key) <[email protected]>
ssb   rsa4096 2021-06-03 [E] [expires: 2023-06-03]
Make a note of the Key ID; you will need it for a few different steps below.
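If you want the Key ID in a shell variable for the later commands, you can cut it out of the listing with awk. A sketch, run against a sample sec line rather than a live keyring (the sample text below is made up):

```shell
# A sample line from `gpg --list-secret-keys --keyid-format LONG` output
sample='sec   rsa4096/B2EAA49E11DE8CBD 2021-06-03 [SC] [expires: 2023-06-03]'

# Split on '/' or runs of spaces; the long Key ID is then the third field
KEYID=$(printf '%s\n' "$sample" | awk -F'[/ ]+' '/^sec/{print $3}')
echo "$KEYID"
```

Against a live keyring, pipe the real gpg --list-secret-keys --keyid-format LONG output into the same awk filter.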

Add an authentication subkey for use with SSH—more on that below.

gpg --quick-add-key {your-key-id} rsa4096 auth 2y

If you list the secret keys again, you can see the new key and capability:

gpg --list-secret-keys
sec   rsa4096 2021-06-03 [SC] [expires: 2023-06-03] 
uid           [ultimate] Brian Demers (bdemers test key) <[email protected]>
ssb   rsa4096 2021-06-03 [E] [expires: 2023-06-03] 
ssb   rsa4096 2021-06-03 [A] [expires: 2023-06-03] 
The primary key has the capabilities of signing [S] and certification [C].
The encryption [E] subkey.
The new authentication [A] subkey.

Now that you have your newly minted keys, back them up!

Back Up Your GPG Keys

Backups of your GPG keys should be stored offline. You are going through the process of securely storing your keys on a YubiKey; don’t leave your backup hanging around on disk.

Pick a backup strategy that works for you, anything from storing the keys on a USB stick in a lock box, to a printed paper key, or you could go all out.

Run the following commands to export the keys and trust store.

gpg --armor --export > public-keys.asc 
gpg --armor --export-secret-keys > private-keys.asc 
gpg --export-ownertrust > ownertrust.asc 

# Create a revocation certificate, in case you ever lose your key
gpg --armor --gen-revoke {your-key-id} > revocation.asc 
# Select 1 for "Key has been compromised"
Export all public keys.
Export all private keys.
Export the trust store.
Create a revocation certificate as well. Take a look at the GnuPG docs to learn more about key revocation.
The --armor argument outputs the key in an ASCII-armored text format.

If you ever need to restore your keys from this backup, you can run:

# restore public keys
gpg --import public-keys.asc
# restore private keys
gpg --import private-keys.asc
# restore trust store
gpg --import-ownertrust ownertrust.asc

Enable Your GPG Key for SSH

There are a few moving parts needed to expose your new GPG key in a way that your SSH client will use them. Initially, this part confused me the most and left me jumping between blog posts and various Stack Overflow questions (many of which were out of date).

Working backward from the SSH client:

  • The SSH client reads the SSH_AUTH_SOCK environment variable; it contains the location of a Unix socket managed by an agent.
  • A gpg-agent running in the background controls this socket and allows your GPG key to be used for authentication.

gpg-agent can replace the need for ssh-agent.

Enable SSH support using standard sockets by updating the ~/.gnupg/gpg-agent.conf file:

echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
echo "use-standard-socket" >> ~/.gnupg/gpg-agent.conf

Next, you will need to find the “keygrip” for the authentication key; this is different from the key id, run:

gpg --list-secret-keys --with-keygrip
sec   rsa4096 2021-06-03 [SC] [expires: 2023-06-03]
      Keygrip = 78BCD171C2DD44E5D6054F0EC98B8C5D2A37D076
uid           [ultimate] Brian Demers (bdemers test key) <[email protected]>
ssb   rsa4096 2021-06-03 [E] [expires: 2023-06-03]
      Keygrip = 48B8049057AE142926CADB23A816DFF57DC85098
ssb   rsa4096 2021-06-03 [A] [expires: 2023-06-03]
      Keygrip = 28E05AC1DCFCB0C23EFD89A86C627B0959758813 
Don’t confuse the Key ID with the “keygrip”
The “keygrip” for the authentication [A] key.

Update ~/.gnupg/sshcontrol with the authentication “keygrip”; this allows the gpg-agent to use this key with SSH.

echo {keygrip} >> ~/.gnupg/sshcontrol
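If you would rather script the lookup than copy the keygrip by hand, you can grab the Keygrip line that follows the authentication [A] subkey entry. A sketch, demonstrated on sample output (the sample lines and keygrip value are made up):

```shell
# Sample `gpg --list-secret-keys --with-keygrip` output for the [A] subkey
sample='ssb   rsa4096 2021-06-03 [A] [expires: 2023-06-03]
      Keygrip = 28E05AC1DCFCB0C23EFD89A86C627B0959758813'

# On the [A] line, read the following line and print its third field
GRIP=$(printf '%s\n' "$sample" | awk '/\[A\]/{getline; print $3}')
echo "$GRIP"
```

With a live keyring you would pipe gpg --list-secret-keys --with-keygrip into the same awk program before appending the result to ~/.gnupg/sshcontrol.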

Configure your shell environment to use gpg-agent:

# configure SSH to use GPG
echo 'export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)' >> ~/.zshrc

# start gpg-agent, if it isn't started already
echo 'gpgconf --launch gpg-agent' >> ~/.zshrc
# the docs say to use: gpg-connect-agent /bye

# Set an environment variable to tell GPG the current terminal.
echo 'export GPG_TTY=$(tty)' >> ~/.zshrc
The gpg-agent is started automatically the first time it is used. However, to make sure it is running and available for SSH, it needs to be run when your shell starts.

Open a new terminal session and run ssh-add -L; if everything is working correctly, your public key in SSH format will be output to your console:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC66/kO8H70GENVLxdD6ZBaRKzj5iDmhUpjFw1WzQmFe+O/dW8FpIXtuZX7QxtV+fqCaK6zbMPfKcUTfogRPdUtzzy/1Ik5WOAfJRF/woL6rMpId0klLalAJ4etOq2X3izBY8RhdiBGPOBPWl9bVTMcvrxIJqcO61FUC0vfwcXX/0GnQ+CnnA2c3CoeZAJbVFWSjo3imii26DdcfL3S0++6yN1y8EFr6BXh7S50Wog/c3CjgyM9t8Hiew/6XpB4deHWEPKkjn/TquRrg1xoFlCkz8w4NJ+jjkhhn8zZ0pcL9fk6VlkzkGiA1ADaEYj+ji0yKvenjrMiiM2FxHEcnTyXsAJkw/3iSxkQ2CpnWjg+BMZnV0inCH9KGvgQcZ3NF6hLuCi1wWP9TA1pVIcLVsDXJrwAnKYyrngWF1O2eI60x2I6ySQUJd1bExYWt2M50V5SynqKWUiYcRecLrO3/wPKzdUsYSNgCcwRSE4pXabAzTsre/WOp7MPQZ9tqWp1tPjyg+wn5UeQ21j0Fm3pZ4EWhBDQmPjm6y9tLv0kzoR8gmqa1KfSqwWyCl3FrNkT1wixxjQL1DVhVy3Kqoy5HA/z30hhkd5BSaqqouykirS/fmFE+k5pwZ/TVwf7BlC1AFNH0AzlCqoWt8s7wFsMUKsVkhZmYaHU52EIvn5rwPcUQQ== (none)
If you don’t see any output, try restarting the agent with the following command: gpg-connect-agent reloadagent /bye

Test Your GPG Keys with GitHub

Now that I have GPG configured on my computer, the next thing is to make sure everything is working correctly.

Log in to GitHub and go to the “SSH and GPG keys” page in Settings. Copy the output from ssh-add -L and add a new SSH key.

On the same page, add your GPG key; copy the value from gpg --armor --export {your-key-id}.

On macOS, you can pipe the output directly to your clipboard using pbcopy, for example, ssh-add -L | pbcopy.

Once you have your key configured, you can open an SSH connection to GitHub:

ssh [email protected]

The session will close immediately but will print a message:

Hi bdemers! You've successfully authenticated, but GitHub does not provide shell access.
Connection to closed.

Woot! Everything is working!

Use a Graphical Pin Entry Program

If you would rather use a graphical application to enter your passphrase, you can install an alternative “pinentry” program. For example, on macOS:

# install a GUI pin entry program
brew install pinentry-mac

# configure gpg-agent to use this pinentry application
echo "pinentry-program $(which pinentry-mac)" >>  ~/.gnupg/gpg-agent.conf

# Restart gpg-agent
gpg-connect-agent reloadagent /bye

Sign Git Commits

Signing your commits is the only way to prove you are the author. Without a signature, someone could easily impersonate you by setting the name and email on a commit to match your information.

Configuring Git to sign commits and tags automatically takes a few global properties; you want that “Verified” label on GitHub, don’t you 😉:

git config --global commit.gpgsign true
git config --global tag.gpgSign true
git config --global user.signingkey {your-key-id}

Your next commit will be signed, and you can double-check this by running git log --show-signature:

commit 85e0174d961f44666d8ffc7000e81df22eea13c6
gpg: Signature made Tue Jun  8 12:19:14 2021 EDT
gpg:                using RSA key 4C40E4AD3A157D172ECB27C9B2EAA49E11DE8CBD
gpg: Good signature from "Brian Demers (bdemers test key) <[email protected]>" [ultimate]
Author: Brian Demers <[email protected]>
Date:   Tue Jun 8 12:19:13 2021 -0400

    Testing commit signing

Setting Up a YubiKey

You didn’t need a YubiKey to complete any of the above GPG setup. Without one, though, I don’t think I’d go through setting up GPG + SSH authentication. Using standard SSH keys will offer the same level of security with less complexity. As I mentioned above, the goal was to move keys off my computer, and into the secure storage of the YubiKey.

One of the first things I do when I get a new YubiKey is to disable the keyboard functions. Unfortunately, I found myself accidentally touching the device, only to have it spew out a long set of characters; this is an excellent feature if you use it, but if you don’t, it can easily be disabled.

Open up the YubiKey Manager Application, select the Interfaces tab, and disable “OTP,” “PIV,” and “OATH” interfaces, and press the Save Interfaces button; the result will look something like this:

Enabled YubiKey Interfaces

Open up a terminal and run gpg --card-status, to display information about your device.

GPG refers to devices as “smartcards”, so any time you see the term “card” it refers to your YubiKey.
Reader ...........: Yubico YubiKey OTP FIDO CCID
Application ID ...: D2760001240103040006162602010000
Application type .: OpenPGP
Version ..........: 3.4
Manufacturer .....: Yubico
Serial number ....: 16260201
Name of cardholder: [not set]
Language prefs ...: [not set]
Salutation .......:
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 0 3
Signature counter : 0
KDF setting ......: off
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

If you see an “Operation not supported by device” error message, make sure you have a recent version of GPG installed and try again. I’m using version 2.2.27 in this post.

To configure the device with your settings, run:

gpg --card-edit

This command will open an interactive session; type admin to enable setting properties on the device.

Run the following commands to update the card.

gpg/card> admin 
Admin commands are allowed

gpg/card> passwd 
gpg: OpenPGP card no. D2760001240103040006162602010000 detected

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? 1 
PIN changed.

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? 3 
PIN changed.

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? q

gpg/card> name 
Cardholder's surname: Demers
Cardholder's given name: Brian

gpg/card> lang 
Language preferences: en

gpg/card> login 
Login data (account name): bdemers

gpg/card> url 
URL to retrieve public key:

gpg/card> quit 
The admin command enables additional commands.
Enter the passwd command to enter the password/PIN sub-menu.
The default PIN is 123456.
The default Admin PIN is 12345678.
Set your name; you are prompted for your surname, then your given name.
The two-letter shortcode for your primary language.
Your preferred login name.
The URL of where your public key is stored; GitHub serves them at https://github.com/<username>.gpg.
Exit the program.

If you run gpg --card-status again, you will see the updated information stored on your card:

Name of cardholder: Brian Demers
Language prefs ...: en
URL of public key :
Login data .......: bdemers

Move Your GPG Keys to a YubiKey

Make sure you back up your keys before moving them; this is your last chance!

Each key needs to be moved individually: the signature, encryption, and authentication keys. Edit the key by running:

gpg --edit-key {your-key-id}

Follow along with the prompts:

gpg> keytocard 
Really move the primary key? (y/N) y
Please select where to store the key:
   (1) Signature key
   (3) Authentication key
Your selection? 1


gpg> key 1 

sec  rsa4096/B2EAA49E11DE8CBD
     created: 2021-06-03  expires: 2023-06-03  usage: SC
     trust: ultimate      validity: ultimate
ssb* rsa4096/E45F9D38B846EC9E 
     created: 2021-06-03  expires: 2023-06-03  usage: E
ssb  rsa4096/D81BDB63BB563819
     created: 2021-06-03  expires: 2023-06-03  usage: A
[ultimate] (1). Brian Demers (bdemers test key) <[email protected]>

gpg> keytocard 
Please select where to store the key:
   (2) Encryption key
Your selection? 2

gpg> key 1 

gpg> key 2 


gpg> keytocard 
Please select where to store the key:
   (3) Authentication key
Your selection? 3 


gpg> q 
Save changes? (y/N) y
Move the primary key to the smartcard.
Switch to key 1, the encryption key.
The selected key is marked with a *. If you do not see a selected key that means the primary key 0 has been selected.
Run keytocard again.
Deselect key 1.
Repeat the process for key 2 the authentication key.
You know the drill: keytocard.
All done! Exit and save changes.
After moving your keys to a smartcard like a YubiKey, running the gpg --list-secret-keys command will show a greater-than symbol > next to the sec and ssb listings:
sec>  rsa4096 2021-06-03 [SC] [expires: 2023-06-03]
      Card serial no. = 0006 16260201
uid           [ultimate] Brian Demers (bdemers test key) <[email protected]>
ssb>  rsa4096 2021-06-03 [E] [expires: 2023-06-03]
ssb>  rsa4096 2021-06-03 [A] [expires: 2023-06-03]

The smartcard does NOT store your public key. Run the fetch subcommand to make sure GPG can fetch your key from the GitHub URL specified above:

gpg --card-edit
gpg/card> fetch
gpg: requesting key from ''
gpg: key B2EAA49E11DE8CBD: duplicated subkeys detected - merged
gpg: key B2EAA49E11DE8CBD: public key "Brian Demers (bdemers test key) <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1

Use Your GPG Key on Multiple Computers

One of the great things about storing your GPG keys on a YubiKey is that you can easily bring the keys to a different device. Since the keys are stored on the smartcard, you simply need to “link” the device’s keys:

gpg --card-edit
gpg/card> fetch
gpg: requesting key from ''
gpg: key B2EAA49E11DE8CBD: duplicated subkeys detected - merged
gpg: key B2EAA49E11DE8CBD: public key "Brian Demers (bdemers test key) <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1

gpg/card> quit

Finally, you can confirm the keys have been linked by running gpg --list-secret-keys and looking to see if the sec entry is marked with a >.

sec>  rsa4096 2021-06-03 [SC] [expires: 2023-06-03]
      Card serial no. = 0006 16260201
uid           [ultimate] Brian Demers (bdemers test key) <[email protected]>
ssb>  rsa4096 2021-06-03 [E] [expires: 2023-06-03]
ssb>  rsa4096 2021-06-03 [A] [expires: 2023-06-03]

The last thing to do is update the trust store on the new computer:

gpg --edit-key {your-key-id}
gpg> trust 
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5 
Do you really want to set this key to ultimate trust? (y/N) y

gpg> q 
Run the trust subcommand.
Select 5, ultimate trust; ONLY do this for your own key.
Finished! Press q to exit.

Your smartcard is now set up on multiple computers!

Changing the trust level of an imported GPG key

It took me quite a while to reach the solution, which is:

gpg --edit-key 'Pang'

which fires up GPG and shows a prompt.

gpg (GnuPG) 1.4.11; Copyright (C) 2010 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  2048R/2F67056A  created: 2013-07-13  expires: never       usage: SC
                     trust: never         validity: unknown
sub  2048R/          created: 2013-07-13  expires: never       usage: E
[ unknown] (1). Pang Yan Han
gpg >

At this point, I entered:

trust

which shows:

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision?

Since this is my own key, I entered:

5

which trusts it ultimately.


Using piv-agent on macOS

piv-agent requires Homebrew in order to install dependencies, so install that first.

Copy the piv-agent binary into your $PATH, and the launchd .plist files to the correct location:

sudo cp piv-agent /usr/local/bin/
cp deploy/launchd/com.github.smlx.piv-agent.plist ~/Library/LaunchAgents/

From what I can tell .plist files only support absolute file paths, even for user agents. So edit ~/Library/LaunchAgents/com.github.smlx.piv-agent.plist and update the path to $HOME/.gnupg/S.gpg-agent.

If you plan to use gpg, install it via brew install gnupg. If not, you still need a pinentry, so brew install pinentry.

If ~/.gnupg doesn’t already exist, create it.

mkdir ~/.gnupg
chmod 700 ~/.gnupg

Then enable the service:

launchctl bootstrap gui/$UID ~/Library/LaunchAgents/com.github.smlx.piv-agent.plist
launchctl enable gui/$UID/com.github.smlx.piv-agent

A socket should appear in ~/.gnupg/S.gpg-agent.

Disable ssh-agent to avoid SSH_AUTH_SOCK environment variable conflict.

launchctl disable gui/$UID/com.openssh.ssh-agent

Set launchd user path to include /usr/local/bin/ for pinentry.

sudo launchctl config user path $PATH

Reboot and log back in.

Socket activation

piv-agent relies on socket activation, and is currently tested with systemd on Linux, and launchd on macOS. It doesn’t listen to any sockets directly, and instead requires the init system to pass file descriptors to the piv-agent process after it is running. This requirement makes it possible to exit the process when not in use.

ssh-agent and gpg-agent functionality are enabled by default in the systemd and launchd configuration files.

On Linux, the index of each socket listed in piv-agent.socket is indicated by the arguments to --agent-types.

Understanding systemd at startup on Linux


Before you can observe the startup sequence, you need to do a couple of things to make the boot and startup sequences open and visible. Normally, most distributions use a startup animation or splash screen to hide the detailed messages that would otherwise be displayed during a Linux host’s startup and shutdown. This is called the Plymouth boot screen on Red Hat-based distros. Those hidden messages can provide a great deal of information about startup and shutdown to a sysadmin looking for information to troubleshoot a bug or to just learn about the startup sequence. You can change this using the GRUB (Grand Unified Boot Loader) configuration.

The main GRUB configuration file is /boot/grub2/grub.cfg, but, because this file can be overwritten when the kernel version is updated, you do not want to change it. Instead, modify the /etc/default/grub file, which is used to modify the default settings of grub.cfg.

Start by looking at the current, unmodified version of the /etc/default/grub file:

[root@testvm1 ~]# cd /etc/default ; cat grub
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_testvm1-swap rd.lvm.
testvm1/usr rhgb quiet"
[root@testvm1 default]#

Chapter 6 of the GRUB documentation contains a list of all the possible entries in the /etc/default/grub file, but I focus on the following:

  • I change GRUB_TIMEOUT, the number of seconds for the GRUB menu countdown, from five to 10 to give a bit more time to respond to the GRUB menu before the countdown hits zero.
  • I delete the last two parameters on GRUB_CMDLINE_LINUX, which lists the command-line parameters that are passed to the kernel at boot time. One of these parameters, rhgb, stands for Red Hat Graphical Boot, and it displays the little Fedora icon animation during the kernel initialization instead of showing boot-time messages. The other, the quiet parameter, prevents displaying the startup messages that document the progress of the startup and any errors that occur. I delete both rhgb and quiet because sysadmins need to see these messages. If something goes wrong during boot, the messages displayed on the screen can point to the cause of the problem.
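The two edits described above can be scripted with sed. This sketch runs on a local copy of the file so nothing on the system is touched; the sample file contents are abbreviated and made up for the demo:

```shell
# Work on a local copy so the real /etc/default/grub is untouched
cat > grub.demo <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_testvm1-swap rhgb quiet"
EOF

# Bump the menu countdown to 10 seconds and drop the rhgb and quiet parameters
sed -i -e 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=10/' \
       -e 's/ rhgb quiet//' grub.demo

cat grub.demo
```

To apply the same edits for real, run the sed commands against /etc/default/grub as root, after backing the file up, and then regenerate grub.cfg with grub2-mkconfig.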

After you make these changes, your GRUB file will look like:

[root@testvm1 default]# cat grub
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_testvm1-swap rd.lvm.lv=fedora_testvm1/usr"
[root@testvm1 default]#

The grub2-mkconfig program generates the grub.cfg configuration file using the contents of the /etc/default/grub file to modify some of the default GRUB settings. The grub2-mkconfig program sends its output to STDOUT. It has a -o option that allows you to specify a file to send the datastream to, but it is just as easy to use redirection. Run the following command to update the /boot/grub2/grub.cfg configuration file:

[root@testvm1 grub2]# grub2-mkconfig > /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.18.9-200.fc28.x86_64
Found initrd image: /boot/initramfs-4.18.9-200.fc28.x86_64.img
Found linux image: /boot/vmlinuz-4.17.14-202.fc28.x86_64
Found initrd image: /boot/initramfs-4.17.14-202.fc28.x86_64.img
Found linux image: /boot/vmlinuz-4.16.3-301.fc28.x86_64
Found initrd image: /boot/initramfs-4.16.3-301.fc28.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-7f12524278bd40e9b10a085bc82dc504
Found initrd image: /boot/initramfs-0-rescue-7f12524278bd40e9b10a085bc82dc504.img
[root@testvm1 grub2]#

Reboot your test system to view the startup messages that would otherwise be hidden behind the Plymouth boot animation. But what if you need to view the startup messages and have not disabled the Plymouth boot animation? Or you have, but the messages stream by too fast to read? (Which they do.)

There are a couple of options, and both involve log files and systemd journals—which are your friends. You can use the less command to view the contents of the /var/log/messages file. This file contains boot and startup messages as well as messages generated by the operating system during normal operation. You can also use the journalctl command without any options to view the systemd journal, which contains essentially the same information:

[root@testvm1 grub2]# journalctl
-- Logs begin at Sat 2020-01-11 21:48:08 EST, end at Fri 2020-04-03 08:54:30 EDT. --
Jan 11 21:48:08 kernel: Linux version 5.3.7-301.fc31.x86_64 ([email protected]) (gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC)) #1 SMP Mon Oct >
Jan 11 21:48:08 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.3.7-301.fc31.x86_64 root=/dev/mapper/VG01-root ro resume=/dev/mapper/VG01-swap rd>
Jan 11 21:48:08 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 11 21:48:08 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 11 21:48:08 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 11 21:48:08 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 11 21:48:08 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 11 21:48:08 kernel: BIOS-provided physical RAM map:
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000dffeffff] usable
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x00000000dfff0000-0x00000000dfffffff] ACPI data
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000041fffffff] usable
Jan 11 21:48:08 kernel: NX (Execute Disable) protection: active
Jan 11 21:48:08 kernel: SMBIOS 2.5 present.
Jan 11 21:48:08 kernel: DMI: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
Jan 11 21:48:08 kernel: Hypervisor detected: KVM
Jan 11 21:48:08 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 11 21:48:08 kernel: kvm-clock: cpu 0, msr 30ae01001, primary cpu clock
Jan 11 21:48:08 kernel: kvm-clock: using sched offset of 8250734066 cycles
Jan 11 21:48:08 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 11 21:48:08 kernel: tsc: Detected 2807.992 MHz processor
Jan 11 21:48:08 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 11 21:48:08 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable

I truncated this datastream because it can be hundreds of thousands or even millions of lines long. (The journal listing on my primary workstation is 1,188,482 lines long.) Be sure to try this on your test system. If it has been running for some time—even if it has been rebooted many times—huge amounts of data will be displayed. Explore this journal data because it contains a lot of information that can be very useful when doing problem determination. Knowing what this data looks like for a normal boot and startup can help you locate problems when they occur.
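Because a full journal dump is so large, ordinary text tools are handy for slicing a saved copy. A small sketch, using a hypothetical excerpt modeled on the kernel messages above:

```shell
# A made-up journal excerpt saved to a file for offline analysis.
cat > /tmp/journal-excerpt.txt <<'EOF'
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 11 21:48:08 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 11 21:48:08 kernel: NX (Execute Disable) protection: active
EOF

# Count the BIOS memory-map entries in the excerpt.
grep -c 'BIOS-e820' /tmp/journal-excerpt.txt
```

On a live system, journalctl -b restricts the output to the current boot, which is usually the first filter worth applying before reaching for grep.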

I will discuss systemd journals, the journalctl command, and how to sort through all of that data to find what you want in more detail in a future article in this series.

After GRUB loads the kernel into memory, the kernel must first extract itself from its compressed image before it can perform any useful work. Once the kernel has extracted itself and started running, it loads systemd and turns control over to it.

This is the end of the boot process. At this point, the Linux kernel and systemd are running but unable to perform any productive tasks for the end user, because nothing else is running: there is no shell to provide a command line, no background processes to manage the network or other communication links, and nothing that enables the computer to perform any productive function.

Systemd can now load the functional units required to bring the system up to a selected target run state.


A systemd target represents a Linux system’s current or desired run state. Much like SystemV start scripts, targets define the services that must be present for the system to run and be active in that state. Figure 1 shows the possible run-state targets of a Linux system using systemd. As seen in the first article of this series and in the systemd bootup man page (man bootup), there are other intermediate targets that are required to enable various necessary services. These can include swap.target, timers.target, local-fs.target, and more. Some targets (like basic.target) are used as checkpoints to ensure that all the required services are up and running before moving on to the next-higher-level target.

Unless otherwise changed at boot time in the GRUB menu, systemd always starts the default.target. The default.target file is a symbolic link to the true target file. For a desktop workstation, this is typically going to be the graphical.target, which is equivalent to runlevel 5 in SystemV. For a server, the default is more likely to be the multi-user.target, which is like runlevel 3 in SystemV. The emergency.target file is similar to single-user mode. Targets and services are systemd units.

The following table, which I included in the previous article in this series, compares the systemd targets with the old SystemV startup runlevels. The systemd target aliases are provided by systemd for backward compatibility. The target aliases allow scripts—and sysadmins—to use SystemV commands like init 3 to change runlevels. Of course, the SystemV commands are forwarded to systemd for interpretation and execution.

systemd target      SystemV runlevel   target aliases     Description
default.target                                            This target is always aliased with a symbolic link to either multi-user.target or graphical.target. systemd always uses the default.target to start the system. The default.target should never be aliased to halt.target, poweroff.target, or reboot.target.
graphical.target    5                  runlevel5.target   multi-user.target with a GUI
                    4                  runlevel4.target   Unused. Runlevel 4 was identical to runlevel 3 in the SystemV world. This target could be created and customized to start local services without changing the default multi-user.target.
multi-user.target   3                  runlevel3.target   All services running, but command-line interface (CLI) only
                    2                  runlevel2.target   Multi-user, without NFS, but all other non-GUI services running
rescue.target       1                  runlevel1.target   A basic system, including mounting the filesystems with only the most basic services running and a rescue shell on the main console
emergency.target    S                                     Single-user mode: no services are running and filesystems are not mounted. This is the most basic level of operation, with only an emergency shell running on the main console for the user to interact with the system.
halt.target                                               Halts the system without powering it down
reboot.target       6                  runlevel6.target   Reboots the system
poweroff.target     0                  runlevel0.target   Halts the system and turns the power off

Fig. 1: Comparison of SystemV runlevels with systemd targets and target aliases.

Each target has a set of dependencies described in its configuration file. systemd starts the required dependencies, which are the services required to run the Linux host at a specific level of functionality. When all of the dependencies listed in the target configuration files are loaded and running, the system is running at that target level. If you want, you can review the systemd startup sequence and runtime targets in the first article in this series, Learning to love systemd.
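Those dependencies are visible directly in a target's unit file. The sketch below greps the dependency-related keys out of a sample [Unit] section (hypothetical content for illustration; the real files live under /lib/systemd/system/):

```shell
# A made-up target unit file standing in for a real one.
cat > /tmp/sample.target <<'EOF'
[Unit]
Description=Graphical Interface
Requires=multi-user.target
Wants=display-manager.service
After=multi-user.target rescue.service
EOF

# Requires=, Wants=, and After= express the target's dependencies and ordering.
grep -E '^(Requires|Wants|After)=' /tmp/sample.target
```

On a live system, systemctl list-dependencies graphical.target shows the full recursive dependency tree rather than just the keys in one file.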

Exploring the current target

Many Linux distributions default to installing a GUI desktop interface so that the installed systems can be used as workstations. I always install from a Fedora Live boot USB drive with an Xfce or LXDE desktop. Even when I’m installing a server or other infrastructure type of host (such as the ones I use for routers and firewalls), I use one of these installations that installs a GUI desktop.

I could install a server without a desktop (and that would be typical for data centers), but that does not meet my needs. It is not that I need the GUI desktop itself, but the LXDE installation includes many of the other tools I use that are not in a default server installation. This means less work for me after the initial installation.

But just because I have a GUI desktop does not mean it makes sense to use it. I have a 16-port KVM switch that I can use to access the consoles of most of my Linux systems, but the vast majority of my interaction with them is via a remote SSH connection from my primary workstation. This way is more secure and uses fewer system resources than running a GUI desktop.

To begin, check the default target to verify that it is the graphical.target:

[root@testvm1 ~]# systemctl get-default
graphical.target
[root@testvm1 ~]#

Now verify the currently running target. It should be the same as the default target. You can still use the old method, which displays the old SystemV runlevels. Note that the previous runlevel is on the left; it is N (which means None), indicating that the runlevel has not changed since the host was booted. The number 5 indicates the current target, as defined in the old SystemV terminology:

[root@testvm1 ~]# runlevel
N 5
[root@testvm1 ~]#

Note that the runlevel man page indicates that runlevels are obsolete and provides a conversion table.

You can also use the systemd method. There is no one-line answer here, but it does provide the answer in systemd terms:

[root@testvm1 ~]# systemctl list-units --type target
UNIT                   LOAD   ACTIVE SUB    DESCRIPTION
basic.target           loaded active active Basic System
cryptsetup.target      loaded active active Local Encrypted Volumes
getty.target           loaded active active Login Prompts
graphical.target       loaded active active Graphical Interface
local-fs-pre.target    loaded active active Local File Systems (Pre)
local-fs.target        loaded active active Local File Systems
multi-user.target      loaded active active Multi-User System
network-online.target  loaded active active Network is Online
network.target         loaded active active Network
nfs-client.target      loaded active active NFS client services
nss-user-lookup.target loaded active active User and Group Name Lookups
paths.target           loaded active active Paths
remote-fs-pre.target   loaded active active Remote File Systems (Pre)
remote-fs.target       loaded active active Remote File Systems
rpc_pipefs.target      loaded active active rpc_pipefs.target
slices.target          loaded active active Slices
sockets.target         loaded active active Sockets
sound.target           loaded active active Sound Card
swap.target            loaded active active Swap
sysinit.target         loaded active active System Initialization
timers.target          loaded active active Timers

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

21 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

This shows all of the currently loaded and active targets. You can also see the multi-user.target and the graphical.target. The multi-user.target is required before the graphical.target can be loaded. In this example, the graphical.target is active.

Switching to a different target

Making the switch to the multi-user.target is easy:

[root@testvm1 ~]# systemctl isolate multi-user.target

The display should now change from the GUI desktop or login screen to a virtual console. Log in and list the currently active systemd units to verify that the graphical.target is no longer running:

[root@testvm1 ~]# systemctl list-units --type target

Be sure to use the runlevel command to verify that it shows both previous and current “runlevels”:

[root@testvm1 ~]# runlevel
5 3

Changing the default target

Now, change the default target to the multi-user.target so that the system will always boot into the multi-user.target for a console command-line interface rather than a GUI desktop interface. As the root user on your test host, change to the directory where the systemd configuration is maintained and do a quick listing:

[root@testvm1 ~]# cd /etc/systemd/system/ ; ll
drwxr-xr-x. 2 root root 4096 Apr 25  2018  basic.target.wants
lrwxrwxrwx. 1 root root   36 Aug 13 16:23  default.target -> /lib/systemd/system/graphical.target
lrwxrwxrwx. 1 root root   39 Apr 25  2018  display-manager.service -> /usr/lib/systemd/system/lightdm.service
drwxr-xr-x. 2 root root 4096 Apr 25  2018  getty.target.wants
drwxr-xr-x. 2 root root 4096 Aug 18 10:16  graphical.target.wants
drwxr-xr-x. 2 root root 4096 Apr 25  2018  local-fs.target.wants
drwxr-xr-x. 2 root root 4096 Oct 30 16:54  multi-user.target.wants
[root@testvm1 system]#

I shortened this listing to highlight a few important things that will help explain how systemd manages the boot process. You should be able to see the entire list of directories and links on your virtual machine.

The default.target entry is a symbolic link (symlink, soft link) to a file in the directory /lib/systemd/system/. List that directory to see what else is there:

[root@testvm1 system]# ll /lib/systemd/system/ | less

You should see files, directories, and more links in this listing, but look specifically for multi-user.target and graphical.target. Now display the contents of default.target, which is a link to /lib/systemd/system/graphical.target:

[root@testvm1 system]# cat default.target
#  SPDX-License-Identifier: LGPL-2.1+
#  This file is part of systemd.
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
Wants=display-manager.service
Conflicts=rescue.service rescue.target
After=multi-user.target rescue.service rescue.target display-manager.service
AllowIsolate=yes
[root@testvm1 system]#

This link to the /lib/systemd/system/graphical.target file describes all of the prerequisites and requirements of the graphical user interface. I will explore at least some of these options in the next article in this series.

To enable the host to boot to multi-user mode, you need to delete the existing link and create a new one that points to the correct target. Make the PWD /etc/systemd/system, if it is not already:

[root@testvm1 system]# rm -f default.target
[root@testvm1 system]# ln -s /lib/systemd/system/multi-user.target default.target

List the link to verify that it links to the correct file:

[root@testvm1 system]# ll default.target
lrwxrwxrwx 1 root root 37 Nov 28 16:08 default.target -> /lib/systemd/system/multi-user.target
[root@testvm1 system]#

If your link does not look exactly like this, delete it and try again. List the content of the link:

[root@testvm1 system]# cat default.target
#  SPDX-License-Identifier: LGPL-2.1+
#  This file is part of systemd.
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=Multi-User System
Documentation=man:systemd.special(7)
Requires=basic.target
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes
[root@testvm1 system]#

The default.target, which is really a link to the multi-user.target at this point, now has different requirements in the [Unit] section. It does not require the graphical display manager.

Reboot. Your virtual machine should boot to the console login for virtual console 1, which is identified on the display as tty1. Now that you know how to change the default target, change it back to the graphical.target using a command designed for the purpose.

First, check the current default target:

[root@testvm1 ~]# systemctl get-default
multi-user.target
[root@testvm1 ~]# systemctl set-default graphical.target
Removed /etc/systemd/system/default.target.
Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/graphical.target.
[root@testvm1 ~]#

Enter the following command to go directly to the graphical.target and the display manager login page without having to reboot:

[root@testvm1 system]# systemctl isolate default.target

I do not know why the term “isolate” was chosen for this sub-command by systemd’s developers. My research indicates that it may refer to running the specified target but “isolating” and terminating all other targets that are not required to support the target. However, the effect is to switch targets from one run target to another—in this case, from the multi-user target to the graphical target. The command above is equivalent to the old init 5 command in SystemV start scripts and the init program.

Log into the GUI desktop, and verify that it is working as it should.
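If you want to replay the symlink mechanics used throughout this section without touching a live system, you can rehearse them in a scratch directory. This sketch uses made-up paths under /tmp and is purely illustrative:

```shell
# Build a fake systemd layout in /tmp.
mkdir -p /tmp/systemd-demo/lib
touch /tmp/systemd-demo/lib/graphical.target /tmp/systemd-demo/lib/multi-user.target
cd /tmp/systemd-demo

# As installed: default.target points at the graphical target.
ln -s lib/graphical.target default.target

# The change: remove the old link and point default.target at multi-user.
rm -f default.target
ln -s lib/multi-user.target default.target
readlink default.target
```

This is exactly what systemctl set-default does for you, plus removing any stale link first, which is why the purpose-built command is the safer choice on a real host.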

Summing up

This article explored the Linux systemd startup sequence and started to explore two important systemd tools, systemctl and journalctl. It also explained how to switch from one target to another and to change the default target.

The next article in this series will create a new systemd unit and configure it to run during startup. It will also look at some of the configuration options that help determine where in the sequence a particular unit will start, for example, after networking is up and running.

How to compile a SELinux policy package


SELinux has gained a bit of traction lately. As a follow-up to some SELinux-inspired articles in the community, I present a tutorial on how to build a policy package yourself.

As long as you put all your files in the intended places, you probably will not notice SELinux running at all on a default CentOS installation. Things will start to get tricky once you try to do things in a way that is considered non-standard in SELinux’ default policy.

The Scenario

The scenario described here is not made up – I came across this problem while working on Efficient Rails DevOps.

I usually host Rails applications in the /var/www directory on my servers, each in a dedicated folder (/var/www/myapp, /var/www/myotherapp and so on). Not only the apps’ codebases and precompiled assets lie there, but also logs (for the application server and the webserver’s vhost), PIDs of the application server, temporary files and (quite important) the sockets nginx uses to forward requests to the application server.

Each application has its own vhost, which looks roughly like this (given a sample application named myapp):

upstream myapp {
  server unix:/var/www/myapp/shared/sockets/unicorn.sock fail_timeout=0;
}

server {
  listen 80;
  root /var/www/myapp/application/public;
  access_log /var/log/nginx/access.myapp.log;

  location / {
    try_files $uri @app;
  }

  location @app {
    proxy_pass http://myapp;
  }
}
Now imagine everything for this application is prepared (the database is created and migrated, assets are precompiled and everything is configured correctly). SELinux prevents this application from being run properly because our application server’s socket (/var/www/myapp/shared/sockets/unicorn.sock) cannot be read or written.

To make things interesting, examining nginx’ error.log just presents you with permission-denied errors, with no hint of SELinux. SELinux runs as a kernel security module, so most tools are not aware of SELinux denials (they could report them, but this functionality is not implemented in most tools).

How to know that this is a SELinux issue

Usually SELinux problems show themselves as file not found or permission denied errors, even though the files/directories in question are present and are assigned the proper mode.

It is absolutely normal not to suspect an SELinux problem until you have triple-checked the owner, group, and permissions of every file that could possibly be involved. This can lead to serious doubt about your general Linux knowledge.

To quickly find out if you are experiencing a SELinux issue, temporarily set SELinux’ mode from enforcing to permissive with the command setenforce 0. If everything suddenly works, you can be sure that there is a problem with your current SELinux policy.

By default, SELinux incidents are logged by the auditd daemon. While in permissive mode, you can take a live look at auditd’s log (tail -f /var/log/audit/audit.log) while executing the commands in question to get an overview of what actions would be denied in enforcing mode.

Building a policy module

It is possible to build a policy module to allow certain actions which are not permitted by default.

First, it is a good idea to clear the audit log so that it contains only incidents related to our problem:

> /var/log/audit/audit.log

While still in permissive mode, run all actions in question again – in my case this was starting, stopping and restarting the nginx webserver, running my deploy script and requesting the website with a browser. This will add quite a bunch of lines to the audit log.

If you do not find the errors in your log, you might need to change a few settings.

Making sure your application shows up in the audit logs

Set SELinux in permissive mode

~# setenforce 0

Disable dontaudit rules

To temporarily disable dontaudit rules, allowing all denials to be logged, enter the following command as root:

~# semodule -DB

Restart service

To generate audit log entries, restart the service in question:

~# systemctl restart myapp.service

Find deny message

Search the audit log for AVC, USER_AVC, and SELINUX_ERR messages:

~# ausearch -m AVC,USER_AVC,SELINUX_ERR -ts today
type=AVC msg=audit(1600117445.764:3149): avc:  denied  { create } for  pid=3857 comm="clamd" name="clamd.ctl" scontext=system_u:system_r:clamd_t:s0 tcontext=system_u:object_r:initrc_var_run_t:s0 tclass=sock_file permissive=1
type=AVC msg=audit(1600117445.764:3149): avc:  denied  { add_name } for  pid=3857 comm="clamd" name="clamd.ctl" scontext=system_u:system_r:clamd_t:s0 tcontext=system_u:object_r:initrc_var_run_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1600117445.764:3149): avc:  denied  { write } for  pid=3857 comm="clamd" name="clamav" dev="tmpfs" ino=15823 scontext=system_u:system_r:clamd_t:s0 tcontext=system_u:object_r:initrc_var_run_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1600117445.764:3149): avc:  denied  { search } for  pid=3857 comm="clamd" name="clamav" dev="tmpfs" ino=15823 scontext=system_u:system_r:clamd_t:s0 tcontext=system_u:object_r:initrc_var_run_t:s0 tclass=dir permissive=1
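If ausearch is not available, the interesting fields can be pulled out of raw AVC lines with standard text tools. A sketch using two of the denials above (shortened), with awk printing the denied permission and the target class:

```shell
# Two raw AVC lines, shortened from the audit.log output above.
cat > /tmp/avc-sample.log <<'EOF'
type=AVC msg=audit(1600117445.764:3149): avc:  denied  { create } for  pid=3857 comm="clamd" tclass=sock_file permissive=1
type=AVC msg=audit(1600117445.764:3149): avc:  denied  { add_name } for  pid=3857 comm="clamd" tclass=dir permissive=1
EOF

# Split on { } to isolate the permission, then extract the tclass= value.
awk -F'[{}]' '/denied/ {
  perm = $2; gsub(/ /, "", perm)
  if (match($0, /tclass=[a-z_]+/))
    print perm, substr($0, RSTART + 7, RLENGTH - 7)
}' /tmp/avc-sample.log
```

A quick summary like this helps you decide whether the denials belong together in one policy module before handing the log to audit2allow.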

You can then use this log to build the policy module. To do that, you will need some SELinux-specific commands which can be installed with yum install policycoreutils-python.

Now dump the audit log through the audit2allow command to see what SELinux rules need to be changed in order to allow the actions which were forbidden according to our log:

ausearch -m AVC,USER_AVC,SELINUX_ERR -ts recent | audit2allow -m myapp

This would generate the following output:

module myapp 1.0;

require {
  type httpd_t;
  type httpd_sys_content_t;
  type initrc_t;
  class sock_file write;
  class unix_stream_socket connectto;
}

#============= httpd_t ==============
allow httpd_t httpd_sys_content_t:sock_file write;
allow httpd_t initrc_t:unix_stream_socket connectto;

Now we could rerun this command with a slightly different flag (-M instead of -m):

ausearch -m AVC -ts recent | audit2allow -M myapp

This would give us a myapp.pp file in our current working directory which we could integrate in our SELinux policy right away.

However, I recommend a different approach:

We take the previous command’s output and save it as a type enforcement file:

ausearch -m AVC -ts recent | audit2allow -m myapp > myapp.te

This has two major benefits:

  • Before compiling a policy package, we should always check that our type enforcement file does not allow too much and see whether it could be tweaked in another way.
  • Type enforcement files are human-readable (compiled policy packages are not), so we can keep them for later reference (maybe in our Ansible playbook).

In order to build a policy package from a type enforcement file, we first have to convert it into a policy module. This is done with the checkmodule command:

checkmodule -M -m -o myapp.mod myapp.te

This command will take our myapp.te file and create a myapp.mod policy module in our current working directory.

We can now take this policy module and compile it:

semodule_package -o myapp.pp -m myapp.mod

This command will result in a policy package called myapp.pp in our working directory.

This generated policy package can now be loaded with the semodule command:

semodule -i myapp.pp

When the policy package is loaded, our webserver will no longer have problems connecting to our application server’s socket and the Rails application will be served properly. Should other SELinux denials occur after loading the new policy package, it’s rinse and repeat.

Re-enable dontaudit rules

If you changed these settings earlier, you should now restore the system to its original configuration. If you did not, you can skip this step.

semodule -B

Set SELinux in enforcing mode

~# setenforce 1

Check that the module was installed successfully

# semodule -l | grep myapp

A word on efficiency

When you are tweaking your policy package, it can be quite tedious repeating these steps over and over. When tweaking the type enforcement file, the following steps are necessary to load the new module:

  • Remove the policy package (semodule -r myapp)
  • Delete all generated files (rm -f myapp.mod myapp.pp)
  • Tweak the type enforcement file
  • Build the policy module (checkmodule -M -m -o myapp.mod myapp.te)
  • Build the policy package (semodule_package -o myapp.pp -m myapp.mod)
  • Load the policy package (semodule -i myapp.pp)

You can save a great amount of time if you wrap these commands in a small bash script:


#!/bin/bash
semodule -r myapp
rm -f myapp.mod myapp.pp
checkmodule -M -m -o myapp.mod myapp.te
semodule_package -o myapp.pp -m myapp.mod
semodule -i myapp.pp

When provisioning your server, think about live-compiling the policy package instead of using a precompiled one. While you save some time by using a precompiled myapp.pp file, you may risk using an outdated one (which may not be compiled from the myapp.te file in your repository).

If you are using Ansible to provision your servers, the tasks of a role for compiling and loading a policy package may look like this (given a files directory containing the myapp.te file):

- name: Install tools
  yum: pkg=policycoreutils-python

- name: Remove SELinux policy package
  command: semodule -r myapp
  failed_when: false

- name: Copy SELinux type enforcement file
  copy: src=myapp.te dest=/tmp/myapp.te

- name: Compile SELinux module file
  command: checkmodule -M -m -o /tmp/myapp.mod /tmp/myapp.te

- name: Build SELinux policy package
  command: semodule_package -o /tmp/myapp.pp -m /tmp/myapp.mod

- name: Load SELinux policy package
  command: semodule -i /tmp/myapp.pp

- name: Remove temporary files
  file: path=/tmp/{{ item }} state=absent
  with_items:
    - myapp.te
    - myapp.mod
    - myapp.pp


When used correctly, SELinux adds a lot to your server’s security. When you run into problems with certain commands being denied, you should first make sure that you truly understand what causes the error.

Chances are very high that SELinux complains for a reason. Often you can avoid the problem altogether by rethinking where you put which files.

When you are absolutely sure that you need to build a new policy package, do yourself a favor and research thoroughly what each added rule does – it is only too easy to create security holes which would defeat SELinux’ purpose.


How to Install Kubernetes Cluster on Ubuntu 22.04 with ZFS

Are you looking for an easy guide on how to install a Kubernetes cluster on Ubuntu 22.04 (Jammy Jellyfish)?

This page will show you, step by step, how to install a Kubernetes cluster on Ubuntu 22.04 using the kubeadm command.

Kubernetes, also known as K8s, is a free and open-source container orchestration tool. With its help, we can achieve automated deployment, scaling, and management of containerized applications.

A Kubernetes cluster consists of worker nodes, on which the application workload is deployed, and a set of master nodes, which are used to manage the worker nodes and pods in the cluster.

In this guide, we are using one master node and two worker nodes. The following are the system requirements for each node:

  • Minimal install Ubuntu 22.04
  • Minimum 2GB RAM or more
  • Minimum 2 CPU cores / or 2 vCPU
  • 20 GB free disk space on /var or more
  • Sudo user with admin rights
  • Internet connectivity on each node

Lab Setup

  • Master Node: k8smaster
  • First Worker Node: k8sworker1
  • Second Worker Node: k8sworker2

Without any delay, let’s jump into the installation steps of the Kubernetes cluster.

Step 1) Set hostname and add entries in the hosts file

Log in to the master node and set its hostname using the hostnamectl command:

sudo hostnamectl set-hostname "k8smaster"

On the worker nodes, run

sudo hostnamectl set-hostname "k8sworker1"   # on the first worker node
sudo hostnamectl set-hostname "k8sworker2"   # on the second worker node

Add the following entries to the /etc/hosts file on each node, replacing the placeholder addresses with your nodes’ actual IPs:

<master-ip>    k8smaster
<worker1-ip>   k8sworker1
<worker2-ip>   k8sworker2

Step 2) Disable swap & add kernel settings

Execute the swapoff and sed commands below to disable swap. Make sure to run them on all the nodes.

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
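The sed expression comments out every fstab line containing " swap ". You can dry-run it against a sample fstab (made-up content) before trusting it with the real file:

```shell
# Scratch fstab standing in for /etc/fstab (hypothetical entries).
cat > /tmp/fstab.test <<'EOF'
/dev/mapper/vg-root /    ext4 defaults 0 1
/swap.img           none swap sw       0 0
EOF

# Same expression as above, pointed at the scratch copy.
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.test
cat /tmp/fstab.test
```

Commenting out the fstab entry makes the swapoff change persistent across reboots, which kubelet requires.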

Load the following kernel modules on all the nodes,

sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

Set the following kernel parameters for Kubernetes by running the tee command below:

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Reload the above changes, run

sudo sysctl --system

Step 3) Install the containerd runtime

In this guide, we are using the containerd runtime for our Kubernetes cluster. To install containerd, first install its dependencies:

sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

Enable the Docker repository:

sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Now, run the following apt commands to install containerd:

sudo apt update
sudo apt install -y containerd.io

Configure containerd so that it uses systemd as the cgroup driver and ZFS as the snapshotter:

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo sed -i 's/snapshotter \= "overlayfs"/snapshotter \= "zfs"/g' /etc/containerd/config.toml

You will now need to create a zpool to use as the snapshotter for containerd. If you create this in the default path everything should work with the config created above, but you might need to set the path for the zfs snapshotter if you want a different path.

sudo zfs create -o mountpoint=/var/lib/containerd/io.containerd.snapshotter.v1.zfs <your zfs pool>/containerd

Restart and enable the containerd service:

sudo systemctl restart containerd
sudo systemctl enable containerd

Step 4) Add apt repository for Kubernetes

Execute the following commands to add the apt repository for Kubernetes:

sudo curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/google.gpg
sudo apt-add-repository "deb https://apt.kubernetes.io/ kubernetes-xenial main"

Note: At the time of writing this guide, Xenial is the latest Kubernetes repository, but when the repository becomes available for Ubuntu 22.04 (Jammy Jellyfish), you will need to replace the word 'xenial' with 'jammy' in the 'apt-add-repository' command.

Step 5) Install Kubernetes components: kubectl, kubeadm & kubelet

Install the Kubernetes components kubectl, kubelet, and the kubeadm utility on all the nodes by running the following set of commands:

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Step 6) Initialize Kubernetes cluster with Kubeadm command

Now we are all set to initialize the Kubernetes cluster. Run the following kubeadm command from the master node only.

sudo kubeadm init

The output of the above command should end with something like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join --token vt4ua6.23wer232423134 \
        --discovery-token-ca-cert-hash sha256:3a2c36feedd14cff3ae835abcdefgesadf235adca0369534e938ccb307ba5

The output above confirms that the control plane has been initialized successfully. It also includes the commands for interacting with the cluster and the command worker nodes use to join it.

So, to start interacting with the cluster, run the following commands from the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now, run the following kubectl commands to view the cluster and node status:

kubectl cluster-info
kubectl get nodes


user@server:~ $ kubectl cluster-info
Kubernetes control plane is running at
CoreDNS is running at

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
user@server:~ $ kubectl get nodes
NAME         STATUS   ROLES           AGE    VERSION
k8smaster   Ready    control-plane   153m   v1.26.1

If you only want to have one node, you can run the following to allow scheduling on the master:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-

Join both worker nodes to the cluster. The command is already in the output above; just copy and paste it on the worker nodes:

sudo kubeadm join --token vt4ua6.23wer232423134 \
   --discovery-token-ca-cert-hash sha256:3a2c36feedd14cff3ae835abcdefgesadf235adca0369534e938ccb307ba5

Output from both the worker nodes,

Check the node status from the master node using the kubectl command:

kubectl get nodes

As we can see, the node status is 'NotReady'. To make the nodes active, we must install a CNI (Container Network Interface) network add-on plugin such as Calico, Flannel, or Weave-net.

Step 7) Install Calico Pod Network Add-on

Run the following curl and kubectl commands to install the Calico network plugin from the master node:

curl -O
kubectl apply -f calico.yaml

The output of the above commands will look like below:


Verify the status of the pods in the kube-system namespace:

kubectl get pods -n kube-system



Perfect. Check the node status as well:

kubectl get nodes

Great, the above confirms that the nodes are active. Now we can say that our Kubernetes cluster is functional.

Step 8) Test Kubernetes Installation

To test the Kubernetes installation, let's deploy an nginx-based application and try to access it.

kubectl create deployment nginx-app --image=nginx --replicas=2

Check the status of the nginx-app deployment:

kubectl get deployment nginx-app
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app   2/2     2            2           68s

Expose the deployment as a NodePort service:

kubectl expose deployment nginx-app --type=NodePort --port=80
service/nginx-app exposed

Run the following commands to view the service status:

kubectl get svc nginx-app
kubectl describe svc nginx-app

The output of the above commands:


Use the following command to access the nginx-based application:

curl http://<worker-node-ip-address>:31246



Great, the above output confirms that the nginx-based application is accessible.
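If you don't remember the assigned NodePort, it can be pulled out of the kubectl get svc output. A minimal sketch; the sample service line below is illustrative, not real cluster output:

```shell
# Sketch: extract the NodePort from a "kubectl get svc" line.
# The sample line is illustrative, not real cluster output.
svc_line='nginx-app   NodePort   10.96.120.5   <none>   80:31246/TCP   2m'
nodeport=$(printf '%s\n' "$svc_line" | awk '{ split($5, p, "[:/]"); print p[2] }')
echo "$nodeport"   # -> 31246
```

On a live cluster the same field can be read with kubectl's -o jsonpath support instead of parsing the table output.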

That's all from this guide; I hope you have found it useful. Most of this post comes from an existing guide, with modifications to work with ZFS.

Resolving Oracle Cloud “Out of Capacity” issue and getting free VPS with 4 ARM cores / 24GB of memory (using OCI CLI)

A very neat and useful configuration was recently announced on the Oracle Cloud Infrastructure (OCI) blog as part of the Always Free tier. Unfortunately, as of July 2021, it is still very complicated to launch an instance due to the "Out of Capacity" error. Here we work around that issue, since Oracle adds capacity from time to time.

Each tenancy gets the first 3,000 OCPU hours and 18,000 GB hours per month for free to create Ampere A1 Compute instances using the VM.Standard.A1.Flex shape (equivalent to 4 OCPUs and 24 GB of memory).

We start with the Oracle Cloud Infrastructure (OCI) CLI installation.

The installer script automatically installs the CLI and its dependencies, Python and virtualenv.

On a Mac computer you can also install the OCI CLI with Brew:

brew install oci-cli jq

Generating API key

After logging in to the OCI Console, click the profile icon and then "User Settings".

Go to Resources -> API keys and click the "Add API Key" button.

Add API Key

Make sure the "Generate API Key Pair" radio button is selected, click "Download Private Key" and then "Add".

Download Private Key

Copy the contents from the textarea and save them to a file named "config". I put it together with the *.pem file in the newly created directory $HOME/.oci.
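For reference, the saved config file uses an INI layout roughly like the following (every value below is a placeholder, not real data):

```ini
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaaa***123
fingerprint=11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00
tenancy=ocid1.tenancy.oc1..aaaaaaaa***123
region=us-ashburn-1
key_file=~/.oci/oracleidentitycloudservice***.pem
```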

That’s all about the API key generation part.

Setting up CLI

Specify config location


If you haven't added the OCI CLI binary to your PATH, run

alias oci="$HOME/bin/oci"

(or whatever path it was installed to).

Set permissions for the private key

oci setup repair-file-permissions --file $HOME/.oci/oracleidentitycloudservice***.pem

Test the authentication (the user value should be taken from the textarea shown when generating the API key):

oci iam user get --user-id ocid1.user.oc1..aaaaaaaaa***123

Output should be similar to:

  "data": {
    "capabilities": {
      "can-use-api-keys": true,
      "can-use-auth-tokens": true,
      "can-use-console-password": false,
      "can-use-customer-secret-keys": true,
      "can-use-db-credentials": true,
      "can-use-o-auth2-client-credentials": true,
      "can-use-smtp-credentials": true
    "compartment-id": "ocid1.tenancy.oc1..aaaaaaaa***123",
    "db-user-name": null,
    "defined-tags": {
      "Oracle-Tags": {
        "CreatedBy": "scim-service",
        "CreatedOn": "2021-08-31T21:03:23.374Z"
    "description": "[email protected]",
    "email": null,
    "email-verified": true,
    "external-identifier": "123456789qwertyuiopas",
    "freeform-tags": {},
    "id": "ocid1.user.oc1..aaaaaaaaa***123",
    "identity-provider-id": "ocid1.saml2idp.oc1..aaaaaaaae***123",
    "inactive-status": null,
    "is-mfa-activated": false,
    "last-successful-login-time": null,
    "lifecycle-state": "ACTIVE",
    "name": "oracleidentitycloudservice/[email protected]",
    "previous-successful-login-time": null,
    "time-created": "2021-08-31T21:03:23.403000+00:00"
  "etag": "121345678abcdefghijklmnop"

Acquiring launch instance params

We need to know which Availability Domain is always free. Click Oracle Cloud menu -> Compute -> Instances


Click “Create Instance” and notice which one has “Always Free Eligible” label in Placement Section. In our case it’s AD-2.

Almost every command needs the compartment-id param to be set. Save it in the COMPARTMENT variable (replace it with your "tenancy" value from the config file), then save the following under ~/bin/launch-instance:

#!/bin/bash -x
# Adjust these variables for your own tenancy and system.
OCI_CLI="oci"
JQ="jq"
COMPARTMENT="ocid1.tenancy.oc1..aaaaaaaa***123"  # your "tenancy" value
SHAPE="VM.Standard.A1.Flex"
DISPLAY_NAME="my-a1-instance"
HC="Your-Hostclass"
SSH_PUB_KEY_FILE="$HOME/.ssh/id_rsa.pub"
FLAG_FILE="$HOME/.oci/deployed.flag"
AUTH_PARAMS=""

if [[ -f "$FLAG_FILE" ]]; then
  echo "Already deployed!"
  exit 0
fi

# Setup the oci profile using the following command:
#   oci session authenticate --region us-ashburn-1
mkdir -p $HOME/.oci/hosts/
#AUTH_PARAMS="--profile $PROFILE --auth security_token"

AD=$($OCI_CLI iam availability-domain $AUTH_PARAMS list --compartment-id $COMPARTMENT | $JQ -r ".data | .[0].name")
if [[ $? != 0 ]]; then
   echo "Could not determine AD. You might need to reauthenticate:"
   echo "oci session authenticate --region us-ashburn-1 $AUTH_PARAMS"
   exit 1
fi

SUBNET=$($OCI_CLI network subnet $AUTH_PARAMS list --compartment-id $COMPARTMENT | $JQ -r ".data | .[0].id")
if [[ $? != 0 ]]; then
   echo "Could not determine Subnet"
   exit 1
fi

IMAGE=$($OCI_CLI compute image $AUTH_PARAMS list --compartment-id=$COMPARTMENT --shape=$SHAPE | $JQ -r '[ .data[] | select(."operating-system" == "Oracle Linux") | select(."operating-system-version"|startswith("8"))] | .[0].id')
if [[ $? != 0 ]]; then
   echo "Could not determine Image"
   exit 1
fi

# export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-bundle.crt
OCI_INFO=$($OCI_CLI $AUTH_PARAMS compute instance launch --shape $SHAPE \
   --availability-domain $AD \
   --compartment-id $COMPARTMENT \
   --image-id $IMAGE \
   --display-name $DISPLAY_NAME \
   --metadata "{ \"hostclass\": \"$HC\" }" \
   --subnet-id $SUBNET --shape-config "{ \"memoryInGBs\": 24.0, \"ocpus\": 4.0 }" \
   --ssh-authorized-keys-file $SSH_PUB_KEY_FILE)
if [[ $? != 0 ]]; then
   echo "Failed to deploy"
   exit 1
fi

INSTANCE_ID=$(echo "$OCI_INFO" | $JQ -r '.data.id')
if [[ -z $INSTANCE_ID ]]; then
   echo "Failed to read instance info"
   exit 1
fi

INSTANCE_IP=""
while [[ -z "$INSTANCE_IP" ]]; do
  echo "Waiting 10s for the ip to be available"
  sleep 10s
  INSTANCE_IP=$($OCI_CLI $AUTH_PARAMS compute instance list-vnics --instance-id $INSTANCE_ID | $JQ -r '.data[]."public-ip" // empty')
done

echo "Updating the SSH config to include $DISPLAY_NAME"
# Reconstructed heredoc: the original block was garbled during extraction.
cat >> ~/.ssh/config.d/custom <<EOF
Host $DISPLAY_NAME
    HostName $INSTANCE_IP
    User opc
EOF

touch $FLAG_FILE

You can now set up a crontab entry to run this script, e.g. every minute. Make sure the cron user is able to access the private key. Some of the variables in the script might need to be updated to match your system; we won't cover this part.
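For example, a crontab entry could look like the following (the log path is just an assumption):

```
* * * * * $HOME/bin/launch-instance >> $HOME/launch-instance.log 2>&1
```

Add the entry with crontab -e.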


  "data": {
    "agent-config": {
      "are-all-plugins-disabled": false,
      "is-management-disabled": false,
      "is-monitoring-disabled": false,
      "plugins-config": null
    "availability-config": {
      "is-live-migration-preferred": null,
      "recovery-action": "RESTORE_INSTANCE"
    "availability-domain": "RCFH:US-ASHBURN-AD-1",
    "capacity-reservation-id": null,
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaa***123",
    "dedicated-vm-host-id": null,
    "defined-tags": {},
    "display-name": "user-2023-01-30-22601",
    "extended-metadata": {},
    "fault-domain": "FAULT-DOMAIN-3",
    "freeform-tags": {},
    "id": "ocid1.instance.oc1.iad.adsfasfasdfasdfasdfasdf12323dfsag234",
    "image-id": "ocid1.image.oc1.iad.aaaaaaaa**123",
    "instance-options": {
      "are-legacy-imds-endpoints-disabled": false
    "ipxe-script": null,
    "launch-mode": "PARAVIRTUALIZED",
    "launch-options": {
      "boot-volume-type": "PARAVIRTUALIZED",
      "firmware": "UEFI_64",
      "is-consistent-volume-naming-enabled": true,
      "is-pv-encryption-in-transit-enabled": false,
      "network-type": "PARAVIRTUALIZED",
      "remote-data-volume-type": "PARAVIRTUALIZED"
    "lifecycle-state": "PROVISIONING",
    "metadata": {
      "hostclass": "Your-Hostclass",
      "ssh_authorized_keys": "ssh-rsa AAAAB3123432412343241234324123432412343241234324123432412343241234324123432412343241234324123432412343241234324123432412343241234324123432412343241234324/71ctthb1Ek= your-ssh-key"
    "platform-config": null,
    "preemptible-instance-config": null,
    "region": "iad",
    "shape": "VM.Standard.A1.Flex",
    "shape-config": {
      "baseline-ocpu-utilization": null,
      "gpu-description": null,
      "gpus": 0,
      "local-disk-description": null,
      "local-disks": 0,
      "local-disks-total-size-in-gbs": null,
      "max-vnic-attachments": 2,
      "memory-in-gbs": 6.0,
      "networking-bandwidth-in-gbps": 1.0,
      "ocpus": 1.0,
      "processor-description": "3.0 GHz Ampere® Altra™"
    "source-details": {
      "boot-volume-size-in-gbs": null,
      "boot-volume-vpus-per-gb": null,
      "image-id": "ocid1.image.oc1.iad.aaaaaaaas***123",
      "kms-key-id": null,
      "source-type": "image"
    "system-tags": {},
    "time-created": "2023-01-30T19:13:44.584000+00:00",
    "time-maintenance-reboot-due": null
  "etag": "123456789123456789123456789123456789123456789",
  "opc-work-request-id": "ocid1.coreservicesworkrequest.oc1.iad.abcd***123"

I believe it's pretty safe to leave the cron job running and check the cloud console once every few days, because once you succeed, you usually won't be able to create more instances than allowed. Instead, you start getting something like

    "code": "LimitExceeded",
    "message": "The following service limits were exceeded: standard-a1-memory-count, standard-a1-core-count. Request a service limit increase from the service limits page in the console. "

or (again)

    "code": "InternalError",
    "message": "Out of host capacity."

At least that's how it worked for me. Just in case, the script writes a flag file when it successfully deploys an instance; if the file is in place, the script will not run again.

To verify the instance you can run the following:

oci compute instance list --compartment-id $COMPARTMENT

You could also add something that checks its output periodically so you know when the cron job needs to be disabled, but that is not related to our issue here.

Assigning public IP address

We are not doing this during the launch due to the default limitation (2 ephemeral addresses per compartment). Here is how you can achieve it: once you succeed in creating an instance, open the OCI Console and go to Instance Details -> Resources -> Attached VNICs, selecting the VNIC's name.


Then Resources -> IPv4 Addresses -> … -> Edit

IPv4 Addresses

Choose ephemeral and click “Update”

Edit IP address


That's how you will log in once the instance is created (note the opc default username):

ssh -i ~/.ssh/id_rsa opc@<instance-public-ip>

If you didn't assign a public IP, you can still copy the internal FQDN or private IP (10.x.x.x) from the instance details page and connect from another instance in the same VCN, e.g.:

ssh -i ~/.ssh/id_rsa opc@<private-ip>

Thanks for reading!

Python 3.10 on OL8

Step 1 – Install Required Dependencies

The latest version of Python is not included in the Oracle Linux 8 default repository, so you will need to compile it from source.

To compile Python from source, you will need to install some dependencies on your system. You can install all of them by running the following command:

dnf install curl gcc openssl-devel bzip2-devel libffi-devel zlib-devel sqlite-devel wget make -y

Once all the dependencies are installed, you can proceed to the next step.

Step 2 – Install Python 3.10.8 on Oracle Linux 8

Next, visit the Python official download page and download the latest version of Python using the following command:


Once the download is completed, extract the downloaded file using the following command:

tar xzf Python-3.10.8.tgz

Next, change the directory to the extracted directory and configure Python using the following command:

cd Python-3.10.8
sudo ./configure --enable-optimizations --with-system-ffi --with-computed-gotos

Next, start the build process using the following command:

sudo make -j $(nproc)

Finally, install Python 3.10 by running the following command:

sudo make altinstall

After the successful installation, verify the Python installation using the following command:

python3.10 --version

You will get the following output:

Python 3.10.8

Step 3 – Create a Virtual Environment in Python

Python provides a venv module that helps developers create virtual environments and deploy applications easily in an isolated environment.

To create a virtual environment named python-env, run the following command:

python3.10 -m venv python-env

Next, activate the virtual environment using the following command:

source python-env/bin/activate

You will get the following shell:

(python-env) [root@oraclelinux8 ~]#

Now, you can use the PIP package manager to install any package and dependencies inside your virtual environment.

For example, run the following command to install apache-airflow:

pip3.10 install apache-airflow

If you want to remove this package, run the command below:

pip3.10 uninstall apache-airflow

To exit from the Python virtual environment, run the following command:

deactivate

In this guide, we explained how to install Python 3.10.8 on Oracle Linux 8. You can now set up Python in your development environment and start developing your first application using the Python programming language.

How to check if AES-NI is enabled for OpenSSL on Linux

Intel Advanced Encryption Standard New Instructions (AES-NI) is a special instruction set for x86 processors, which is designed to accelerate the execution of AES algorithms. AES-based symmetric encryption is widely used in a variety of security applications and protocol implementations (e.g., IPSec, SSL/TLS, HTTPS, SSH). OpenSSL crypto library supports AES-based ciphers as well.

To support available hardware extensions, OpenSSL provides so-called EVP crypto APIs (e.g., EVP_Decrypt/EVP_Encrypt) which can automatically leverage hardware acceleration like AES-NI (if available) and fall back to software implementation (if not available), via a single interface. If you want to check whether currently installed OpenSSL supports AES-NI hardware acceleration, you can test using OpenSSL’s EVP APIs.

Check if AES-NI is Available on CPU Processors

Before proceeding, first verify that current CPUs have the AES instruction set. For this you can inspect CPU flags as follows.

$ grep -m1 -o aes /proc/cpuinfo

If the output shows aes, the AES-NI engine is available on the current CPUs.
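The same check can drive a script; a minimal sketch using grep's exit status:

```shell
# Branch on whether the aes flag is present in /proc/cpuinfo.
if grep -q aes /proc/cpuinfo 2>/dev/null; then
  echo "AES-NI available"
else
  echo "AES-NI not available"
fi
```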

Check if AES-NI is Enabled for OpenSSL

To check whether OpenSSL can leverage AES instruction sets, you can use OpenSSL’s EVP APIs. When EVP APIs are called, they can automatically detect the presence of AES-NI and accelerate AES encryption computations using AES instruction sets. Thus you can compare AES performance with or without EVP functions. If AES-NI is available for OpenSSL, you will see significant performance boost when EVP functions are used.

Let’s use OpenSSL’s built-in speed test.

To measure AES algorithm speed without AES-NI acceleration:

$ openssl speed -elapsed aes-128-cbc

To measure AES algorithm speed with AES-NI acceleration (via EVP APIs):

$ openssl speed -elapsed -evp aes-128-cbc

The above two commands report encryption rates for different block sizes. Typically, AES throughput with AES-NI acceleration is around five times higher than without it, which confirms that AES-NI is enabled for OpenSSL. If OpenSSL cannot leverage AES-NI for any reason, the two outputs will show roughly the same performance.
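To put a number on the comparison, divide the two throughput figures for the same block size. A small helper sketch; the rates passed in below are made-up values in KB/s, not measured output:

```shell
# Compute the EVP (AES-NI) vs software speedup from two throughput figures.
speedup() {
  # $1 = accelerated rate, $2 = software rate (same units, e.g. KB/s)
  awk -v a="$1" -v b="$2" 'BEGIN { printf "%.1fx\n", a / b }'
}
speedup 1200000 240000   # -> 5.0x
```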

Install Remote Desktop Server on Oracle Linux

Xrdp is an open-source implementation of the Microsoft Remote Desktop Protocol (RDP) that allows you to graphically control a remote system. With RDP, you can log in to the remote machine and create a real desktop session the same as if you had logged in to a local machine.

This tutorial explains how to install and configure Xrdp server on Oracle Linux 8.

Installing Desktop Environment

Generally, Linux servers don't have a desktop environment installed. If the machine you want to connect to doesn't have a GUI, the first step is to install it. Otherwise, skip this step.

Gnome is the default desktop environment in Oracle Linux 8. To install Gnome on your remote machine, run the following command

sudo dnf groupinstall "Server with GUI"

Depending on your system, downloading and installing the Gnome packages and dependencies may take some time.

Installing Xrdp

Xrdp is available in the EPEL software repository. If EPEL is not enabled on your system, enable it by typing:

sudo dnf install epel-release

Install the Xrdp package:

sudo dnf install xrdp 

When the installation process is complete, start the Xrdp service and enable it at boot:

sudo systemctl enable xrdp --now

You can verify that Xrdp is running by typing:

sudo systemctl status xrdp

The output will look something like this:

● xrdp.service - xrdp daemon
   Loaded: loaded (/usr/lib/systemd/system/xrdp.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-02-02 18:30:43 UTC; 11s ago

Configuring Xrdp

The configuration files are located in the /etc/xrdp directory. For basic Xrdp connections, you do not need to make any changes to the configuration files. Xrdp uses the default X Window desktop, which in this case, is Gnome.

The main configuration file is named xrdp.ini. This file is divided into sections and allows you to set global configuration settings such as security and listening addresses, and to create different xrdp login sessions.
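As an illustration only (the values below are common defaults, not taken from this guide), the top of xrdp.ini looks roughly like:

```ini
[Globals]
; listening port and TLS negotiation
port=3389
security_layer=negotiate
```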

Whenever you make any changes to the configuration file you need to restart the Xrdp service:

sudo systemctl restart xrdp

Xrdp uses a startup script to launch the X session. If you want to use another X Window desktop, edit that file.

Configuring Firewall

By default, Xrdp listens on port 3389 on all interfaces. If you run a firewall on your Oracle Linux machine (which you should always do), you’ll need to add a rule to allow traffic on the Xrdp port.

Typically you would want to allow access to the Xrdp server only from a specific IP address or IP range. For example, to allow connections only from a given range, enter the following commands:

sudo firewall-cmd --new-zone=xrdp --permanent
sudo firewall-cmd --zone=xrdp --add-port=3389/tcp --permanent
sudo firewall-cmd --zone=xrdp --add-source= --permanent
sudo firewall-cmd --reload

To allow traffic to port 3389 from anywhere use the commands below. Allowing access from anywhere is highly discouraged for security reasons.

sudo firewall-cmd --add-port=3389/tcp --permanent
sudo firewall-cmd --reload

For increased security, you may consider setting up Xrdp to listen only on localhost and creating an SSH tunnel that securely forwards traffic from your local machine on port 3389 to the server on the same port.
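A sketch of the tunnel invocation: the helper below only prints the ssh command to run, and the user/host value is an assumption, not from this guide:

```shell
# Build the ssh command that forwards local port 3389 to the Xrdp host.
xrdp_tunnel_cmd() {
  printf 'ssh -N -L 3389:127.0.0.1:3389 %s\n' "$1"
}
xrdp_tunnel_cmd opc@203.0.113.10
```

With the tunnel up, point your RDP client at localhost:3389.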

Another secure option is to install OpenVPN and connect to the Xrdp server through the private network.

Connecting to the Xrdp Server

Now that the Xrdp server is configured, it is time to open your local Xrdp client and connect to the remote Oracle Linux 8 system.

Windows users can use the default RDP client. Type “remote” in the Windows search bar and click on “Remote Desktop Connection”. This will open up the RDP client. In the “Computer” field, type the remote server IP address and click “Connect”.

Chroot into an Ubuntu on zfs system

Mount everything correctly:

zpool export -a
zpool import -N -R /mnt rpool
zpool import -N -R /mnt bpool
zfs load-key -a
# Add "UUID" at the end, if appropriate; use zfs list to see your values:
zfs mount rpool/ROOT/ubuntu
zfs mount bpool/BOOT/ubuntu
zfs mount -a

If needed, you can chroot into your installed environment:

for i in proc sys dev run tmp; do mount -o bind /$i /mnt/$i; done
chroot /mnt /bin/bash --login
mount -a

Do whatever you need to do to fix your system.

When done, cleanup:

mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    xargs -i{} umount -lf {}
zpool export -a

Resize a disk in Linux

The following example shows the volumes on a Nitro-based instance:

[system-user ~]$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0   30G  0 disk
├─sda1    8:1    0  9.9G  0 part /
├─sda14   8:14   0    4M  0 part
└─sda15   8:15   0  106M  0 part /boot/efi
  • The root volume, /dev/sda, has a partition, /dev/sda1. While the size of the root volume reflects the new size, 30 GB, the size of the partition reflects the original size, 10 GB, and must be extended before you can extend the file system.

To extend the partition on the root volume, use the following growpart command. Notice that there is a space between the device name and the partition number.

[system-user ~]$ sudo growpart /dev/sda 1

You can verify that the partition reflects the increased volume size by using the lsblk command again.

[system-user ~]$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0   30G  0 disk
├─sda1    8:1    0 29.9G  0 part /
├─sda14   8:14   0    4M  0 part
└─sda15   8:15   0  106M  0 part /boot/efi

Extending the File System

Use a file system-specific command to resize each file system to the new volume capacity. For a file system other than the examples shown here, refer to the documentation for the file system for instructions.

Example: Extend an ext2, ext3, or ext4 file system

Use the df -h command to verify the size of the file system for each volume. In this example, /dev/sda1 reflects the original size of the volume, 10 GB.

[system-user ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       9.6G  3.9G  5.7G  41% /

Use the resize2fs command to extend the file system on each volume.

[system-user ~]$ sudo resize2fs /dev/sda1

You can verify that each file system reflects the increased volume size by using the df -h command again.

[system-user ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G  3.8G   26G  14% /

Copyright © 2018. All rights reserved.