HashiCorp Vault Dev Mode

Ever needed to spin up a quick Vault cluster to test commands or functionality? Sure, you could spin up minikube and deploy a Helm chart, but what if you could do it even faster, without Kubernetes?

Vault actually has some *currently* undocumented command-line options that can save you a ton of time. Read on, brother.

I debated even writing a post about it because it’s so simple: it’s literally a command-line flag, -dev-three-node. Below, in case you’re not a Linux fan, I’m redirecting STDERR to STDOUT and sending everything to a file called output.

$ vault server -dev-three-node -dev-root-token-id="root" > output 2>&1 &

I redirect to a file because the output scrolls by too fast to catch the info we need. Let’s use head to see the useful bits.

$ head -30 output
==> Vault server configuration:

                     Cgo: disabled
 Cluster Parameters Path: /tmp/vault-test-cluster-282710121
              Go Version: go1.16.12
               Log Level: info
      Node 0 Api Address: https://127.0.0.1:8200
      Node 1 Api Address: https://127.0.0.1:8201
      Node 2 Api Address: https://127.0.0.1:8202
                 Version: Vault v1.7.9
             Version Sha: 571cd46419fe273d75de1e0d5aa46af60a222961

==> Three node dev mode is enabled

The unseal key and root token are reproduced below in case you
want to seal/unseal the Vault or play with authentication.

Unseal Key 1: +V7oGQ/q3lHGgWoVjRgKxS0OLUs9KZs8aDppOMWcYDFj
Unseal Key 2: ZlmQLgpPohGOAb7m1XUfikiHSneei+AFIwxyqmkNAq5H
Unseal Key 3: tHr08qqUd7GAtcfY+ynqo6+Go2vovj1wbdGIQtSWJ/r0

Root Token: root


Useful env vars:
VAULT_TOKEN=root
VAULT_ADDR=https://127.0.0.1:8200
VAULT_CACERT=/tmp/vault-test-cluster-282710121/ca_cert.pem

==> Vault server started! Log data will stream in below:

Alrighty, let’s just export those variables and we can begin using our cluster!

$ export VAULT_TOKEN=root
$ export VAULT_ADDR=https://127.0.0.1:8200
$ export VAULT_CACERT=/tmp/vault-test-cluster-282710121/ca_cert.pem
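
If you’d rather exercise the cluster from code instead of the CLI, here’s a minimal Python sketch (standard library only) that picks up the same environment variables and queries Vault’s sys/health endpoint, which reports the same basic info as the vault status command below. The filename is just for illustration.

#!/usr/bin/env python3
# healthcheck.py (illustrative name)

import json
import os
import ssl
import urllib.request

addr = os.environ['VAULT_ADDR']        # https://127.0.0.1:8200
cacert = os.environ['VAULT_CACERT']    # CA cert generated by the dev cluster

# Trust the throwaway CA that -dev-three-node wrote to /tmp.
ctx = ssl.create_default_context(cafile=cacert)

with urllib.request.urlopen(f'{addr}/v1/sys/health', context=ctx) as resp:
    health = json.load(resp)

print('initialized:', health['initialized'])
print('sealed:     ', health['sealed'])
print('version:    ', health['version'])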

OK, let’s make sure Vault is on the same page as us by checking its status.

$ vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    3
Threshold       3
Version         1.7.9
Storage Type    n/a
Cluster Name    vault-cluster-7a71b0b6
Cluster ID      75e763bc-78f1-9783-8cc4-505a5a5861d9
HA Enabled      true
HA Cluster      https://127.0.0.1:45555
HA Mode         active
Active Since    2022-03-09T02:12:27.947440981Z
$

Looks good! We can now start testing whatever we need. In future posts, we’ll explore more of the cluster and play with some of the available vault secrets engines.

IP Address Discovery for LXD Machines

I’m currently working on a side project that uses LXD as the primary back-end hypervisor (more back ends to come; I’m looking at libcloud). It seems pretty well thought out and, so far, it’s been really nice to work with.

I did run into a snag, though it’s not really a fault of LXD: I needed a way to discover the IP address of a VM that doesn’t have the lxd-agent running. This is the case for any machine image not provided by Canonical’s upstream image servers; Windows, FreeBSD, OpenBSD, and any custom Linux distribution certainly won’t have it available by default.

Doing some research, I discovered that the dnsmasq DHCP server can call an external program whenever it creates a new lease. This is perfect and should be easy enough to implement. Here’s a snippet from the dnsmasq man page:

--dhcp-script=<path> Whenever a new DHCP lease is created, or an old one destroyed, or a TFTP file transfer completes, the executable specified by this option is run. <path> must be an absolute pathname, no PATH search occurs. The arguments to the process are "add", "old" or "del", the MAC address of the host (or DUID for IPv6) , the IP address, and the hostname, if known.

So, I needed some sort of middleware that dnsmasq could push new leases to as it creates them. Since I already had a redis server running to handle celery tasks, I decided to just use that.

Great, let’s whip up a Python script for dnsmasq to call that updates redis.

#!/usr/bin/env python3
# /usr/local/bin/dhcpredis.py
#
# dnsmasq invokes this as: dhcpredis.py <add|old|del> <MAC> <IP> [hostname]

import sys

import redis

if len(sys.argv) < 4:
    sys.exit(1)

op = sys.argv[1]
mac = sys.argv[2]
ip = sys.argv[3]
# hostname = sys.argv[4]  # Not interested at this time.

RHOST = '172.16.0.252'
RPORT = 6379
RDB = 0

r = redis.Redis(host=RHOST, port=RPORT, db=RDB)
if op == 'add':
    # New lease: remember which IP this MAC was handed.
    r.set(mac, ip)
elif op == 'del':
    # Lease expired or released: drop the mapping.
    r.delete(mac)

Certainly some room for improvement in the script above, but it gets the point across. If it’s not apparent, we’re using the MAC address as the redis key and the IP as its value. Now, make the script executable, point dnsmasq at it, and restart dnsmasq to pick up the changes.

# echo "dhcp-script=/usr/local/bin/dhcpredis.py" >> /etc/dnsmasq.conf
# systemctl restart dnsmasq
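
To sanity-check that leases are actually landing in redis, a quick throwaway sketch like the one below does the job (same host, port, and database as dhcpredis.py above; the filename is hypothetical).

#!/usr/bin/env python3
# showleases.py (hypothetical name)

import redis

r = redis.Redis(host='172.16.0.252', port=6379, db=0)

# Every key is a MAC address, every value is the IP dnsmasq handed out.
for mac in r.keys('*'):
    print(mac.decode(), '->', r.get(mac).decode())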

Awesome, we’re good to go! The last thing I had to do was update my code to pull the KEY:VALUE pair out of redis. If you’re curious what that might look like, here’s the (poorly-coded) function I used. Note, I’m using the Python LXD library (pylxd) and getting the MAC address from LXD directly.

import logging
import time

import redis


def get_mach_ip(mach):
    """Poll redis (up to ~5 minutes) for the IP matching the machine's MAC."""
    retries = 60
    sleep_time = 5
    hwaddr = None
    # app is the web application object carrying the redis connection settings.
    r = redis.Redis(host=app.config['REDIS_HOST'],
                    port=app.config['REDIS_PORT'])
    while retries > 0:
        retries -= 1
        try:
            # LXD stores the NIC's MAC in the instance config.
            hwaddr = mach.config['volatile.eth0.hwaddr']
            cip = r.get(hwaddr)
            if cip:
                return cip.decode('utf-8')
        except Exception:
            pass
        logging.info(f'Waiting on IP for ether {hwaddr}')
        time.sleep(sleep_time)
    return None
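
For completeness, calling the function might look something like the sketch below. The instance name and the client.instances attribute are assumptions on my part (older pylxd releases expose containers and virtual_machines instead), so adjust for your setup.

import pylxd

# 'win2019' is a stand-in for whatever agent-less VM you launched.
client = pylxd.Client()
mach = client.instances.get('win2019')

print(get_mach_ip(mach))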

That’s pretty much it!

Syncthing on SmartOS

This is a quick tutorial on how to get syncthing running on SmartOS by means of an LX branded zone (ubuntu-16.04-20170403).

I initially tried using the Joyent brand, but when starting syncthing I received ‘Watching is not supported’, which is related to an fsnotify issue on GitHub: https://github.com/fsnotify/fsnotify/pull/263. An alternative would be to simply use a bhyve VM, which would get around this, but a container has a little less overhead than a full virtual machine, so that’s what I’m going with for now.

The setup is pretty straightforward, but there are a few gotchas.

First, we’ll start with the zone definition. Note, this reserves a 100GB delegated dataset for the container, which we’ll use for syncing our files to and from.

$ cat > syncthing.json <<EOF
{
  "brand": "lx",
  "kernel_version": "4.3.0",
  "alias": "sync.shaner.life",
  "image_uuid": "7b5981c4-1889-11e7-b4c5-3f3bdfc9b88b",
  "quota": 100,
  "delegate_dataset": true,
  "max_physical_memory": 1024,
  "resolvers": [
    "192.168.1.1",
    "1.1.1.1"
  ],
  "nics": [
      {
        "nic_tag": "external",
        "ip": "192.168.1.30",
        "netmask": "255.255.255.0",
        "gateway": "10.40.0.1",
        "primary": true
      }
  ]
}
EOF

Next, we’ll create and log in to the zone, change the mountpoint of the delegated dataset to /data, and put a hold on a couple of packages (systemd-sysv and udev) that don’t play well with LX zones when upgraded.

$ vmadm create -f syncthing.json
Successfully created VM fa986110-8fef-6110-bc37-a27b1d70cd3f
$ zlogin fa986110-8fef-6110-bc37-a27b1d70cd3f

# PATH=/native/usr/sbin:/native/usr/bin:$PATH
# zfs set mountpoint=/data zones/$(zonename)/data
# apt-mark hold systemd-sysv udev

Ok, we should be all set. Let’s add the syncthing repo and install it.

# curl -s https://syncthing.net/release-key.txt | sudo apt-key add -
# echo "deb https://apt.syncthing.net/ syncthing stable" > /etc/apt/sources.list.d/syncthing.list
# apt-get update
# apt-get install -y syncthing

When I tried enabling the syncthing service, I got the error below.

# systemctl enable syncthing
Failed to execute operation: No such file or directory

So, to fix this, I did the following:

# cp /usr/lib/systemd/user/syncthing.service /etc/systemd/system/

If you try to start syncthing now, it will fail. We need to modify the syncthing service file and strip out the process-hardening settings, since this is an LX container and they won’t work there. Because we’re disabling all the hardening, we should set up a non-privileged user to run it under.

# useradd -d /data -M syncthing

Another thing we need to do is set a HOME environment variable in the service definition. All said and done, it should look like this:

[Unit]
Description=Syncthing - Open Source Continuous File Synchronization
Documentation=man:syncthing(1)

[Service]
User=syncthing
Group=syncthing
Environment=HOME=/data
ExecStart=/usr/bin/syncthing -no-browser -no-restart -logflags=0
Restart=on-failure
SuccessExitStatus=3 4
RestartForceExitStatus=3 4

[Install]
WantedBy=default.target

Now, we should be all set. Let’s start it up and check status.

# systemctl daemon-reload
# systemctl enable syncthing
# systemctl start syncthing
# systemctl status syncthing
● syncthing.service - Syncthing - Open Source Continuous File Synchronization
   Loaded: loaded (/etc/systemd/system/syncthing.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-06-18 20:21:14 UTC; 3s ago
     Docs: man:syncthing(1)
 Main PID: 571357 ((yncthing))
   CGroup: /system.slice/syncthing.service
           └─571357 /usr/bin/syncthing -no-browser -no-restart -logflags=0
           ‣ 571357 /usr/bin/syncthing -no-browser -no-restart -logflags=0

Now, since the syncthing admin GUI only listens on localhost, we’ll use SSH port forwarding so we can access the admin page from our workstation.

shaner@prec:~$ ssh -N -L 8384:127.0.0.1:8384 192.168.1.30
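
Once the tunnel is up, browse to http://127.0.0.1:8384 on your workstation. If you’d like a quick scripted check that the GUI is answering on the forwarded port, something like this works (Python only to stay consistent with the rest of the post; curl does the job just as well):

#!/usr/bin/env python3
# checkgui.py (hypothetical name)

import urllib.request

# A plain GET against the forwarded port; 200 means the GUI is up.
with urllib.request.urlopen('http://127.0.0.1:8384/') as resp:
    print('Syncthing GUI responded with HTTP', resp.status)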