The Little-Known SSH ForceCommand

There may be times when you want to restrict which commands a user can run when they log in over an SSH connection. Instead of executing the user's shell, you can execute a custom script that limits the user to a specific set of commands. This is known as ForceCommand.

There are two ways to use this: the ForceCommand directive in the SSH server config, or the command= option in a user's authorized_keys file. Today, I'll describe a scenario where you don't have permission to modify the SSH server config (/etc/ssh/sshd_config) but still want to enforce specific commands for certain users (identified by their SSH key).
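
For completeness, the server-wide variant looks something like this in /etc/ssh/sshd_config (a minimal sketch; the Match criteria and script path are illustrative):

Match User shane
	ForceCommand /home/shane/bin/wrapper.sh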

First, create a script somewhere that you have write permissions. We’ll reference this later in our config. Here’s a quick example to get you started that only allows you to get a process list (ps -ef) and print system statistics (vmstat).

#!/bin/sh
# script: /home/shane/bin/wrapper.sh 

case "$SSH_ORIGINAL_COMMAND" in
	"ps")
		ps -ef
		;;
	"vmstat")
		vmstat 1 100
		;;
	*)
		echo "Only these commands are available to you:"
		echo "ps, vmstat, cupsys stop, cupsys start"
		exit 1
		;;
esac

Be sure to make the script executable.

chmod +x /home/shane/bin/wrapper.sh

Now, we just need to edit our ~/.ssh/authorized_keys file to reference this script. Note, you could create multiple scripts and assign them to different users' SSH keys to allow different commands.

command="/home/shane/bin/wrapper.sh",no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAAC3NzaC....snip...

Here’s how it would look in practice.

[Screenshot: user restricted to running only "ps" or "vmstat"]
[Screenshot: user running "vmstat"]
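
For example, with the wrapper wired up in authorized_keys, a session might look like this (host name and output are illustrative):

$ ssh shane@server uptime
Only these commands are available to you:
ps, vmstat
$ ssh shane@server vmstat
procs -----------memory---------- ...snip...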

Slow iSCSI performance on ZFS Volumes (zvol)

TL;DR: In my testing, ZVOL-backed iSCSI LUNs performed poorly. Backing the LUN with a plain file on a ZFS dataset was roughly three times faster.

I’ve been reorganizing my lab a bit to consolidate some storage and wanted to experiment with iSCSI. I thought, “wow, what a great use case for ZFS ZVOLs…”.

If you recall, ZFS has the ability to create block devices called ZVOLs. When you do this, you get a new device presented on the machine under /dev/zvol/<poolname>/ that you can use as you would any other disk. As part of my consolidation effort, I decided to use one and present it over iSCSI to my workstation. To my surprise, the performance was dismal, maxing out at around 30MB/s when writing to it over iSCSI.

Here are the steps I took to create the ZVOL and present over iSCSI. Note, I’m using FreeBSD as my storage server.

# zfs create -V 500gb zroot/luns/backup
# cat > /etc/ctl.conf <<EOF
portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
}
target iqn.2020-01.life.shaner:target0 {
	portal-group pg0
	lun 0 {
		path /dev/zvol/zroot/luns/backup
		size 500G
	}
}
EOF
# sysrc ctld_enable=YES
# service ctld start
# ctladm lunlist
(7:1:0/0): <FREEBSD CTLDISK 0001> Fixed Direct Access SPC-5 SCSI device

With this in place, we can move to the (Linux) client machine (the initiator, in iSCSI parlance), log in to the iSCSI target, and then format the new disk.

# iscsiadm --mode discovery -t sendtargets --portal 192.168.1.10
192.168.1.10:3260,-1 iqn.2020-01.life.shaner:target0

# iscsiadm --mode node --targetname iqn.2020-01.life.shaner:target0 --portal 192.168.1.10 --login
Logging in to [iface: default, target: iqn.2020-01.life.shaner:target0, portal: 192.168.1.10,3260]
Login to [iface: default, target: iqn.2020-01.life.shaner:target0, portal: 192.168.1.10,3260] successful.

# dmesg |tail
[117514.525034] sd 9:0:0:0: Attached scsi generic sg5 type 0
[117514.525245] sd 9:0:0:0: Power-on or device reset occurred
[117514.527424] sd 9:0:0:0: [sdg] 1048576000 512-byte logical blocks: (537 GB/500 GiB)
[117514.527428] sd 9:0:0:0: [sdg] 131072-byte physical blocks
[117514.527706] sd 9:0:0:0: [sdg] Write Protect is off
[117514.527709] sd 9:0:0:0: [sdg] Mode Sense: 7f 00 10 08
[117514.528159] sd 9:0:0:0: [sdg] Write cache: enabled, read cache: enabled, supports DPO and FUA
[117514.528750] sd 9:0:0:0: [sdg] Optimal transfer size 8388608 bytes
[117514.675486] sd 9:0:0:0: [sdg] Attached SCSI disk

Let’s format the disk and then mount it. Note, this drive will eventually be attached to a Windows machine, so we’re formatting it with NTFS.

# mkfs.ntfs -Q /dev/sdg
# mount -t ntfs3 /dev/sdg /mnt
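
To put a rough number on write throughput rather than eyeballing a file copy, something like this works (path and sizes are illustrative; conv=fsync forces a flush so the result isn't skewed by the client's page cache):

# dd if=/dev/zero of=/mnt/testfile bs=1M count=2048 conv=fsync status=progress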

At this point I proceeded to copy data onto the drive, where writes maxed out at 35MB/s. Abysmal. So, I decided to switch from the ZVOL to a plain file on disk and use that instead.

# zfs destroy zroot/luns/backup
# zfs create -o mountpoint=/luns/backup zroot/luns/backup
# cd /luns/backup
# truncate -s 500G disk.img
# sed -i '' 's|/dev/zvol/zroot/luns/backup|/luns/backup/disk.img|' /etc/ctl.conf
# service ctld restart
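
After the substitution, the lun block in /etc/ctl.conf points at the file instead of the ZVOL:

	lun 0 {
		path /luns/backup/disk.img
		size 500G
	}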

After setting it up this way I was maxing out my 1Gb connection with write speeds of over 100MB/s, roughly a 3x improvement.

Lesson learned.

Finding Idle Cloud Desktops (Linux)

Suppose you’re hosting remote Linux desktops in your cloud environment and want to discover which ones could be shut down to save valuable resources like money, RAM, or CPU.

Most Linux remote desktop protocols still utilize Xorg (as opposed to Wayland) for their display server. Prime examples would be TigerVNC, TightVNC, or X2Go. Because of this, the utility xprintidle is still useful for determining how long an X session has been idle, as its name suggests. With it, we can automate the discovery process with a simple script, querying each desktop to see when it was last used. This assumes you have permission on each host to run commands as the user actually running the X server (or have access to their .Xauthority file).
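
To check a single host by hand, the underlying command is just this (host, user, and display are illustrative, and xprintidle is assumed to already be on the host; it reports idle time in milliseconds):

$ ssh root@host1 sudo -u shaner DISPLAY=:1 xprintidle
1113840000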

Depending on your infrastructure you might choose to run something like the below script via SSH, Ansible, Salt Stack, Puppet, or something else.

This is a rough example and assumes the username is the same on all hosts. You’ll likely have different usernames on each host, so you’d need to adjust the script to work out the user and corresponding display number per host.

#!/usr/bin/env bash
# A contrived example of checking for idle X sessions on remote systems.

HOSTS="host1 host2 host3"
USER=shaner  # the user running the X session
DISPLAY=:1  # typical/default display for most VNC servers
XPIPATH=./xprintidle  # path to 'xprintidle' binary.

for h in ${HOSTS}; do
  echo "put ${XPIPATH} /usr/local/bin/" | sftp -b- root@$h >/dev/null
  IDLE=$(ssh root@$h sudo -u ${USER} DISPLAY=${DISPLAY} /usr/local/bin/xprintidle)
  IDLE=$(echo $IDLE/1000/60 | bc)
  printf "[*] ${USER}@${h}:${DISPLAY} idle for ${IDLE} minutes\n"
done

Here’s what it looks like in practice. From the output, we could probably shut down host1 for the time being.

$ ./check_idles.sh
[*] shaner@host1:1 idle for 18564 minutes
[*] shaner@host2:1 idle for 20 minutes
[*] shaner@host3:1 idle for 108 minutes
$

Create SSL Cert and Key

Sometimes during development you may find yourself needing an SSL certificate and key to test with. I’ve had to do this so often that I went ahead and added the function below to my ~/.bashrc file.

createss () 
{ 
    openssl req -x509 -nodes -newkey rsa:4096 \
      -keyout ${1}.key -out ${1}.crt -days 365 \
      -subj "/C=US/ST=Ohio/L=Elida/O=ShanerOPS/OU=OPS/CN=${1}"
}

Now, I can create certs on-the-fly without having to look it up in my notes. Here’s how it looks in practice.

$ createss demo.site.local
Generating a RSA private key
...........snip....

$ ls -l
-rw------- 1 shane shane 3272 Jul 20 20:47 demo.site.local.key
-rw-rw-r-- 1 shane shane 2033 Jul 20 20:47 demo.site.local.crt
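
To sanity-check what was generated, openssl can print the subject and validity window (output shown is illustrative):

$ openssl x509 -in demo.site.local.crt -noout -subject -dates
subject=C = US, ST = Ohio, L = Elida, O = ShanerOPS, OU = OPS, CN = demo.site.local
notBefore=...snip...
notAfter=...snip...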

Key is stored in legacy trusted.gpg keyring

apt-key has been deprecated. Here’s a quick one-liner to silence the annoying warning during apt-get update. Ideally, you’d want to pluck out each key using apt-key list, then apt-key export <id>, placing each key in its own file under /etc/apt/trusted.gpg.d (sketched below).

sudo apt-key --keyring /etc/apt/trusted.gpg exportall | \
sudo tee /etc/apt/trusted.gpg.d/all_keys.asc
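
If you go the per-key route instead, it looks roughly like this (the key ID and file name are illustrative; use the IDs shown by apt-key list):

sudo apt-key list
sudo apt-key export 1234ABCD | sudo tee /etc/apt/trusted.gpg.d/example-vendor.asc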

Windows VM using LXD

It’s not entirely obvious how to create a Windows virtual machine with LXD. Here are the most basic steps to get one up and running. This is largely for my own documentation, but it will probably help someone else out there.

The easiest option is to embed the VirtIO drivers directly into the Windows ISO using distrobuilder (via snap), which is the method I’ll be demonstrating. Alternatively, you could attach both ISOs and, when it comes time to select a drive to install to during the Windows installer, click ‘load drivers from removable media’ and select them from there (a rough sketch of that route follows below).
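
A minimal sketch of the attach-both-ISOs route, assuming the unmodified Windows ISO and the virtio-win ISO are in ~/Downloads (paths and device names are illustrative):

# after creating the VM as shown further below, attach both ISOs as separate devices
lxc config device add win10pro winiso disk source=/home/shaner/Downloads/win10_21H2.iso boot.priority=10
lxc config device add win10pro virtio disk source=/home/shaner/Downloads/virtio-win.iso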

First, install the needed tools. Note, you’re more than welcome to compile distrobuilder from source, but using the snap is much quicker.

sudo snap install distrobuilder --classic
sudo apt install -y libguestfs-tools wimtools

Proceed to download the needed ISOs: a Windows 10 ISO from Microsoft and the VirtIO driver ISO (virtio-win).

Now we’re ready to embed the drivers directly into the Windows ISO. Here’s the command:

sudo distrobuilder repack-windows /home/shaner/Downloads/win10_21H2.iso ./win10_packed.iso --drivers=/home/shaner/Downloads/virtio-win.iso

Next, we create an empty machine (VM), set the disk size, and attach our custom ISO with the VirtIO drivers.

lxc init win10pro --empty --vm -c security.secureboot=false
lxc config device override win10pro root size=40GiB
lxc config device add win10pro iso disk source=/home/shaner/win10_packed.iso boot.priority=10

Now, start the machine and attach to the VGA console to walk-through the installer:

lxc start win10pro --console=vga

Once the install is complete, be sure to enable Remote Desktop, as it’s a much better experience than the LXD (SPICE) console.
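
Once Windows is installed and reachable over RDP, the install media can be detached (the device name matches the one added earlier):

lxc config device remove win10pro iso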