Syncthing on SmartOS

This is a quick tutorial on getting Syncthing running on SmartOS by means of an LX-branded zone (ubuntu-16.04-20170403).

I initially tried using a joyent-branded (native) zone, but when starting Syncthing I received a ‘Watching is not supported’ error, which is related to an fsnotify issue on GitHub: https://github.com/fsnotify/fsnotify/pull/263 . An alternative would be to simply use a bhyve VM, which would get around this, but a container has less overhead than a full virtual machine, so that’s what I’m going with for now.

The setup is pretty straightforward, but there are a few gotchas.

First, we’ll start with the zone definition. Note that this reserves a 100GB delegated dataset for the container, which we’ll use for syncing our files to/from.

$ cat > syncthing.json <<EOF
{
  "brand": "lx",
  "kernel_version": "4.3.0",
  "alias": "sync.shaner.life",
  "image_uuid": "7b5981c4-1889-11e7-b4c5-3f3bdfc9b88b",
  "quota": 100,
  "delegate_dataset": true,
  "max_physical_memory": 1024,
  "resolvers": [
    "192.168.1.1",
    "1.1.1.1"
  ],
  "nics": [
      {
        "nic_tag": "external",
        "ip": "192.168.1.30",
        "netmask": "255.255.255.0",
        "gateway": "10.40.0.1",
        "primary": true
      }
  ]
}
EOF

Next, we’ll create and log in to the zone and change the mountpoint of the delegated dataset.

$ vmadm create -f syncthing.json
Successfully created VM fa986110-8fef-6110-bc37-a27b1d70cd3f
$ zlogin fa986110-8fef-6110-bc37-a27b1d70cd3f

# PATH=/native/usr/sbin:/native/usr/bin:$PATH
# zfs set mountpoint=/data zones/$(zonename)/data
# apt-mark hold systemd-sysv udev
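The apt-mark hold keeps systemd-sysv and udev from being upgraded, since newer versions of those packages tend to misbehave inside LX zones. As a quick sanity check that the delegated dataset is now mounted where we expect (using the native zfs tooling we just put on PATH):

# zfs get -H -o value mountpoint zones/$(zonename)/data
/data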

Ok, we should be all set. Let’s add the syncthing repo and install it.

# curl -s https://syncthing.net/release-key.txt | apt-key add -
# echo "deb https://apt.syncthing.net/ syncthing stable" > /etc/apt/sources.list.d/syncthing.list
# apt-get update
# apt-get install -y syncthing
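
A quick way to confirm the install worked (the version string will differ depending on when you run this):

# syncthing -version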

When I tried enabling the syncthing service, I got the error below.

# systemctl enable syncthing
Failed to execute operation: No such file or directory

This is because the package ships a per-user unit (and a syncthing@.service system template), but no plain syncthing.service system unit for systemctl to find. So to fix this, I copied the user unit into place:

# cp /usr/lib/systemd/user/syncthing.service /etc/systemd/system/

If you try to start Syncthing now, it will fail. We need to modify the service file and comment out the process-hardening settings, as those directives aren’t supported inside an LX container. And because we’re disabling all the hardening, we should set up a non-privileged user to run under.

# useradd -d /data -M syncthing
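
Since the service’s HOME will point at /data, the new user also needs to own that directory (assuming the delegated dataset is still owned by root, as it will be on a fresh zone):

# chown syncthing:syncthing /data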

Another thing we need to do is set a HOME environment variable in the service definition, so Syncthing knows where to keep its configuration. All said and done, it should look like this:

[Unit]
Description=Syncthing - Open Source Continuous File Synchronization
Documentation=man:syncthing(1)

[Service]
User=syncthing
Group=syncthing
Environment=HOME=/data
ExecStart=/usr/bin/syncthing -no-browser -no-restart -logflags=0
Restart=on-failure
SuccessExitStatus=3 4
RestartForceExitStatus=3 4

[Install]
WantedBy=default.target

Now, we should be all set. Let’s start it up and check status.

# systemctl daemon-reload
# systemctl enable syncthing
# systemctl start syncthing
# systemctl status syncthing
● syncthing.service - Syncthing - Open Source Continuous File Synchronization
   Loaded: loaded (/etc/systemd/system/syncthing.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-06-18 20:21:14 UTC; 3s ago
     Docs: man:syncthing(1)
 Main PID: 571357 ((yncthing))
   CGroup: /system.slice/syncthing.service
           └─571357 /usr/bin/syncthing -no-browser -no-restart -logflags=0

Now, since the Syncthing admin GUI listens only on localhost by default, we’ll use an SSH port forward so we can access the admin page; with the tunnel below running, it’s available at http://localhost:8384 on our workstation.

shaner@prec:~$ ssh -N -L 8384:127.0.0.1:8384 192.168.1.30
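
If you’d rather not keep a tunnel open, you can point the GUI at all interfaces instead. This is a hedged sketch, assuming Syncthing’s default config location under our HOME of /data; note that it exposes the admin GUI to the whole network, so only do this on a trusted LAN.

# sed -i 's|<address>127.0.0.1:8384</address>|<address>0.0.0.0:8384</address>|' /data/.config/syncthing/config.xml  # path assumes the default config dir under /data
# systemctl restart syncthing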

Using cloud-init with SmartOS

SmartOS provides the ability to inject cloud-init data into a zone or VM. This is extremely useful for automating some of the menial tasks one would normally have to perform manually, like setting up users, installing packages, or pulling down a git repo. Basically, anything you can stuff into cloud-init user-data is at your disposal.

However, since SmartOS zone definitions are JSON and cloud-init data is YAML, it’s not immediately obvious how to supply this information. What it boils down to is: escape all double quotes (") and turn each line break into \n.

Here’s our cloud-init config, which creates a new user and imports their SSH key from launchpad.net.

#cloud-config

users:
  - default
  - name: shaner
    ssh_import_id: shaner
    lock_passwd: false
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    shell: /bin/bash
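
Rather than applying the escape rules by hand, you can let a JSON encoder do the work. A minimal sketch, assuming the cloud-config above is saved as user-data.yaml (a hypothetical filename):

$ python3 -c 'import json,sys; print(json.dumps(sys.stdin.read()))' < user-data.yaml

The output is exactly the string to paste in as the value of the cloud-init:user-data key.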

So, following the escape rules above, here’s our full SmartOS zone spec, including the cloud-init data. Note the cloud-init:user-data key.

{
  "brand": "kvm",
  "alias": "ubuntu-xenial",
  "ram": "2048",
  "vcpus": "2",
  "resolvers": [
    "192.168.1.1",
    "1.1.1.1"
  ],
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "192.168.1.50",
      "netmask": "255.255.255.0",
      "gateway": "192.168.1.1",
      "model": "virtio",
      "primary": true
    }
  ],
  "disks": [
    {
      "image_uuid": "429bf9f2-bb55-4c6f-97eb-046fa905dd03",
      "boot": true,
      "model": "virtio"
    }
  ],
  "customer_metadata": {
    "cloud-init:user-data": "#cloud-config\n\nusers:\n  - default\n  - name: shaner\n    ssh_import_id: shaner\n    lock_passwd: false\n    sudo: \"ALL=(ALL) NOPASSWD:ALL\"\n    shell: /bin/bash"
  }
}
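
Before creating the VM, it’s worth sanity-checking the spec; the json tool that ships with SmartOS can both validate the file and pull out the embedded user-data (assuming the spec is saved as ubuntu-xenial.json, as below):

[root@vmm01 /opt/templates]# json -f ubuntu-xenial.json customer_metadata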

Let’s go ahead and create the zone on our SmartOS box.

[root@vmm01 /opt/templates]# vmadm create < ubuntu-xenial.json
Successfully created VM 0e908925-600a-4365-f161-b3a51467dc08
[root@vmm01 /opt/templates]# vmadm list 
UUID                                  TYPE  RAM      STATE             ALIAS
0e908925-600a-4365-f161-b3a51467dc08  KVM   2048     running           ubuntu-xenial

After a bit of time, we can try logging in as the new user we requested. Recall that we asked cloud-init to pull in our public SSH key from Launchpad, so if you get prompted for a password, something is wrong.

shaner@tp25:~$ ssh 192.168.1.50
The authenticity of host '192.168.1.50 (192.168.1.50)' can't be established.
ECDSA key fingerprint is SHA256:hFPjwUJjd7N/Gb9EE37fTVt2Lk6NVzoLKvhFN7wYw2M.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.50' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-116-generic x86_64)
Certified Ubuntu Cloud Image

   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-| `-' '  ' `-'
                   /  ;  Instance (Ubuntu 16.04.3 LTS 20180222)
                   `-'   https://docs.joyent.com/images/linux/ubuntu-certified
                         http://www.ubuntu.com/cloud#joyent

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

shaner@0b8d7a26-ffe4-e859-eb56-d96d02bf213e:~$ sudo ls
shaner@0b8d7a26-ffe4-e859-eb56-d96d02bf213e:~$ sudo apt update && sudo apt upgrade -y
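
If a package install or user setup doesn’t pan out, cloud-init’s logs inside the guest are the first place to look:

shaner@0b8d7a26-ffe4-e859-eb56-d96d02bf213e:~$ sudo less /var/log/cloud-init-output.log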

There’s a LOT you can do with cloud-init data. See the links below for more info.

Cloud-init examples: https://cloudinit.readthedocs.io/en/latest/topics/examples.html
Joyent Datasource: https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceSmartOS.py
Joyent Ubuntu Image documentation: https://docs.joyent.com/public-cloud/instances/virtual-machines/images/linux/ubuntu-certified

Deploying Graylog and Filebeat for Suricata event logging using Juju

I <3 Graylog, and with Graylog 3.0 almost GA, it promises to be even better. In this post (part of a multi-post series) I’ll walk you through setting up Graylog to ingest logs from Suricata.

First, let’s install Graylog. While there are almost too many ways to deploy software these days, I’ve been playing with Juju lately, so that’s what I’ll use here to deploy Graylog, MongoDB, and Elasticsearch into individual LXD containers. As we’ll be deploying to our local machine for demonstration purposes, I’ve already installed LXD (snap install lxd) and the Juju client (snap install juju).

Let’s bootstrap our juju controller and deploy our charm bundle.

shaner@tp25:~$ juju bootstrap

Select a cloud [localhost]:       

Enter a name for the Controller [localhost-localhost]: lxd

Creating Juju controller "lxd" on localhost/localhost
Looking for packaged Juju agent version 2.4.7 for amd64
To configure your system to better support LXD containers, please see: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
Launching controller instance(s) on localhost/localhost...
 - Retrieving image: rootfs: 100% (394.82kB/s)
Installing Juju agent on bootstrap instance
Fetching Juju GUI 2.14.0
Waiting for address
Attempting to connect to 10.131.74.104:22
Connected to 10.131.74.104
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.131.74.104 to verify accessibility...
Bootstrap complete, "lxd" controller now available
Controller machines are in the "controller" model
Initial model "default" added

OK, our juju controller is bootstrapped and a default model has been added. Let’s proceed with deploying our software stack. Below is our charm bundle; copy it into a file called graylog.yaml.

applications:
  graylog:
    charm: 'cs:graylog-23'
    num_units: 1
    series: bionic
    to:
      - '0'
  elasticsearch:
    charm: 'cs:elasticsearch-32'
    num_units: 1
    series: bionic
    to:
      - '1'
  mongodb:
    charm: 'cs:mongodb-51'
    num_units: 1
    series: bionic
    to:
      - '2'
relations:
  - - 'mongodb:database'
    - 'graylog:mongodb'
  - - 'elasticsearch:client'
    - 'graylog:elasticsearch'
machines:
  '0': {}
  '1': {}
  '2': {}

Now we can deploy it using juju.

shaner@tp25:~$ juju deploy graylog.yaml
Resolving charm: cs:elasticsearch-32
Resolving charm: cs:graylog-23
Resolving charm: cs:mongodb-51
Executing changes:
- upload charm cs:elasticsearch-32 for series bionic
- deploy application elasticsearch on bionic using cs:elasticsearch-32
- upload charm cs:graylog-23 for series bionic
- deploy application graylog on bionic using cs:graylog-23
  added resource graylog
- upload charm cs:mongodb-51 for series bionic
- deploy application mongodb on bionic using cs:mongodb-51
- add new machine 3 (bundle machine 0)
- add new machine 4 (bundle machine 1)
- add new machine 5 (bundle machine 2)
- add relation mongodb:database - graylog:mongodb
- add relation elasticsearch:client - graylog:elasticsearch
- add unit elasticsearch/1 to new machine 4
- add unit graylog/1 to new machine 3
- add unit mongodb/0 to new machine 5
Deploy of bundle completed.
shaner@tp25:~$

We can watch the status of the deployment using ‘watch -c juju status --color’. While the stack is coming up, the units will cycle through pending and installing states; when the deployment is complete, every unit should report active/idle in the output.

Now that Graylog is up and running, we should be able to access the WebUI at the IP address shown in the ‘juju status’ output. However, we need to get the admin password first. We’ll run the charm action below to get it:

shaner@tp25:~$ juju run-action --wait graylog/1 show-admin-password
unit-graylog-1:
  id: 188f2eda-2293-462d-8536-ded94afad957
  results:
    admin-password: zL9dkq3c6BMtrfqbyn
  status: completed
  timing:
    completed: 2018-12-21 03:58:05 +0000 UTC
    enqueued: 2018-12-21 03:58:04 +0000 UTC
    started: 2018-12-21 03:58:05 +0000 UTC
  unit: graylog/1

Now, armed with the password and the IP of our Graylog server, go ahead and open a web browser to http://<ip_address>:9000. You should be greeted by the Graylog login page.

Switching gears a bit, we need to go set up Suricata. To do this, we’ll go ahead and launch a new LXD container using the Ubuntu charm and the Filebeat subordinate charm, adding the Juju relations where necessary.

shaner@tp25:~$ juju deploy ubuntu
Located charm "cs:ubuntu-12".
Deploying charm "cs:ubuntu-12".
shaner@tp25:~$ juju deploy filebeat
Located charm "cs:filebeat-20".
Deploying charm "cs:filebeat-20".
shaner@tp25:~$ juju add-relation filebeat:beats-host ubuntu
shaner@tp25:~$ juju add-relation filebeat graylog
shaner@tp25:~$ juju config filebeat logpath=/var/log/suricata/eve.json
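
To confirm the setting took:

shaner@tp25:~$ juju config filebeat logpath
/var/log/suricata/eve.json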

Using the power of Juju, filebeat will automatically configure a Graylog input and start sending our Suricata logs from /var/log/suricata/eve.json to it; you can see the new input under ‘System’->‘Inputs’ in the Graylog WebUI.

All that’s left now is to log in to the container and set up Suricata.

shaner@tp25:~$ juju ssh ubuntu/0
ubuntu@juju-b43ea2-6:~$ sudo apt-get install -y suricata filebeat

Because this is a demo and we’re in an unprivileged container, we’ll configure Suricata to use the good old pcap method for packet acquisition. Become root (sudo -i) and switch the listen mode:

root@juju-b43ea2-6:~# sed -i 's/LISTENMODE=nfqueue/LISTENMODE=pcap/g' /etc/default/suricata
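
And verify the change:

root@juju-b43ea2-6:~# grep ^LISTENMODE /etc/default/suricata
LISTENMODE=pcap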

Awesome, we’re ready to start things up.

root@juju-b43ea2-6:~# systemctl enable suricata filebeat
root@juju-b43ea2-6:~# systemctl start suricata filebeat

Go ahead and kick off a web request to generate some traffic for Suricata to see.

root@juju-b43ea2-6:~# curl -s https://www.google.com > /dev/null
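
Before heading back to the WebUI, you can confirm Suricata is actually writing events locally (assuming the Ubuntu package’s default eve-log output at /var/log/suricata/eve.json, which is what we pointed filebeat at):

root@juju-b43ea2-6:~# tail -n 5 /var/log/suricata/eve.json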

If we did everything right, we should be able to switch back to the Graylog WebUI, click on ‘Search’ at the top, and see some messages coming in.

You’ll notice, however, that the message field is one big jumble of JSON text. We’ll want to configure an extractor in order to map the JSON message string coming in from filebeat to actual fields in Graylog.

Go ahead and navigate back to ‘System’->‘Inputs’ and click on ‘Manage extractors’ for the input you just created. Then click the ‘Get Started’ button and load a message to work with.

Once you load a message from the input, scroll down to the ‘message’ field and select a new JSON extractor.

After you’ve selected ‘JSON’ from the drop-down menu, scroll to the bottom of the page, give the extractor a title, and click ‘Create extractor’.

Switch back to your Ubuntu container and re-issue the curl command to generate some more traffic. Afterwards, switch back to the Graylog WebUI and return to the Search dashboard. You should now see the different JSON values mapped to their respective fields.

In our next article, we’ll dive into setting up Streams, pipelines, and more.

Thanks for reading!