With the help of the Bro Kafka plug-in, we’ll configure Bro to stream JSON-formatted logs through Kafka and use Python to subscribe to and print events from the stream.
This tutorial uses FreeBSD 11.1-RELEASE but can easily be adapted to Linux installations.
How do you monitor events from multiple Bro sensors throughout a network? Do you go to each one and search logs ad-hoc? Maybe fire up a tmux session with multiple synced panes and search them all at once?
With tools like filebeat (previously logstash-forwarder), we’ve been able to ship Bro logs off to remote systems without much effort for a number of years now. However, the way I see it, you’re left with two options.
1. Enable policy/tuning/json-logs.bro to produce JSON logs instead of the standard tab-delimited logs (a one-liner, shown just after this list).
– No need to normalize/convert logs to JSON upstream.
– Easier to set up filebeat and tag with extra info.
– Can’t use bro-cut and other CLI tools to parse Bro logs on the system.
2. Use the Kafka plug-in and ship logs through Apache Kafka.
– Logs are written to the host system as normal (tab-delimited), but are sent in JSON format to the specified Kafka topic(s).
– You can choose which logs are sent to Kafka (conn, dns, http, notice, etc.).
– You can subscribe to a Kafka topic and receive logs from all sensors publishing to it as a single stream.
– You’ll need to manage a Kafka cluster.
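For reference, option 1 really is a one-liner. If all you want is JSON on disk, adding something like the following to local.bro should do it (script paths can differ slightly between Bro versions, so treat this as a sketch):
# Switch Bro's log writers from tab-delimited to JSON output.
@load tuning/json-logs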
Depending on your needs, both are decent options. For this tutorial, however, we’re going to set up Kafka and push our logs into it.
To start, we’ll want to get Bro installed. Refer to my previous tutorial on using Bro with Netmap to get up and running. Similar to compiling the netmap plug-in, we’ll need to compile the Kafka plug-in.
pkg install -y librdkafka
cd $BRO_SRC/aux/plugins/kafka
make && make install
The `make install` step isn’t needed if you’re building the plug-in for another system (matching FreeBSD version). You’ll find the compiled plug-in under $BRO_SRC/aux/plugins/kafka/build/BRO_KAFKA.tgz for this purpose.
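Whichever route you take, it’s worth confirming Bro actually picked up the plug-in before going further. `bro -N` lists loaded plugins; the namespace below is an assumption based on the aux/plugins source tree, so adjust it if yours differs:
bro -N Bro::Kafka
If that prints nothing, the plug-in didn’t register and you’ll want to revisit the build.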
Next, we’ll want to get Kafka up and running. Here, we’ll use iocage to create a Kafka (+ZooKeeper) jail. Since Kafka runs on Java, we’ll want to have fdescfs and procfs available inside the jail as well. Replace ‘kafka’ in the last line below with whatever hostname you chose for your jail; Kafka will attempt to resolve it on start-up and will generate an error if it’s unable to.
iocage create -r 11.1-RELEASE -n kafka ip4_addr="igb0|10.0.0.10/24" boot=on mount_fdescfs=1 mount_procfs=1
iocage console kafka
pkg install -y kafka zookeeper
echo "10.0.0.10 kafka" >> /etc/hosts
For development purposes, this will be the only node in the Kafka cluster, so you shouldn’t need to change much. Go ahead and edit /usr/local/etc/kafka/server.properties and set the options below. Note: be sure to use whatever IP address you’ve configured for your jail:
delete.topic.enable=true
listeners=PLAINTEXT://10.0.0.10:9092
zookeeper.connect=10.0.0.10:2181
Now, let’s enable all the things and fire up ZooKeeper and Kafka. There’s a small first-time startup bug we’ll need to work around before starting Kafka: the init script attempts to chown a file that doesn’t exist (yet).
sysrc zookeeper_enable=YES
sysrc kafka_enable=YES
touch /var/log/kafka/kafkaServer.out
service zookeeper start; service kafka start
Make sure ZooKeeper and Kafka are running; 2181/tcp and 9092/tcp should be listening, respectively. If they’re not, you can check the logs under /var/log/zookeeper and /var/log/kafka to see what’s going on.
root@kafka:~ # netstat -an | grep LISTEN
tcp4       0      0 10.0.0.10.9092        *.*                    LISTEN
tcp4       0      0 10.0.0.10.2181        *.*                    LISTEN
If everything looks good, go ahead and exit the jail, we’re done here for now.
Let’s tie the two together by configuring Bro to send logs to Kafka. Go ahead and log into your Bro system and add the following to your local.bro file. On my system (installed from source), it’s located under /usr/local/bro/share/bro/site/.
@load Bro/Kafka/logs-to-kafka.bro
redef Kafka::topic_name = "THREATLINE";
redef Kafka::tag_json = T;
redef Kafka::logs_to_send = set(Conn::LOG, DHCP::LOG, DNS::LOG, FTP::LOG, HTTP::LOG, SMTP::LOG, SSL::LOG, Notice::LOG, Software::LOG, Weird::LOG);
redef Kafka::kafka_conf = table(["metadata.broker.list"] = "10.0.0.10:9092");
From the above, you can see we’re sending the following logs to Kafka: conn, dhcp, dns, ftp, http, smtp, ssl, notice, software, and weird. There are plenty more logs available to send, depending on which Bro scripts you’ve enabled. Be sure to use the IP address of the Kafka jail you created earlier in the `metadata.broker.list` setting.
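For context, with `tag_json` enabled each message published to the topic should be a JSON object keyed by the log stream it came from. A dns event would look roughly like this (field values here are made up for illustration):
{"dns": {"ts": 1509119416.345, "uid": "C4J4Th3PJpwUYZZ6gc", "id.orig_h": "10.0.0.5", "id.resp_h": "8.8.8.8", "proto": "udp", "query": "example.com", "qtype_name": "A", "rcode_name": "NOERROR"}}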
Have Bro check our config before deploying it:
broctl check
If everything looks good, go ahead and deploy the new config. If you get any errors, double-check your config before running the next `deploy` command.
broctl deploy
Go ahead and generate some traffic for Bro to log. Bro will automatically create the topic if it doesn’t already exist.
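If your sensor sees your own workstation’s traffic, a quick DNS lookup or HTTP request is enough to produce a few events (assuming dig and curl are installed):
dig example.com
curl -s http://example.com/ > /dev/null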
Switch back to your Kafka system and run the below command to see if the topic you specified in the Bro config was created.
/usr/local/share/java/kafka/bin/kafka-topics.sh --list --zookeeper 10.0.0.10:2181
THREATLINE
If this doesn’t produce any output, the topic hasn’t been created yet, and you’ll probably want to check that Bro is running and logging traffic.
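You can also eyeball the raw stream with the console consumer that ships with Kafka; assuming the same install path as kafka-topics.sh above:
/usr/local/share/java/kafka/bin/kafka-console-consumer.sh --bootstrap-server 10.0.0.10:9092 --topic THREATLINE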
Alright, if you’ve made it this far, you’re doing well. Let’s use a bit of Python to connect to the Kafka topic and print the events to the screen.
First, we’ll install kafka-python. While there are many Python Kafka libraries out there nowadays, this one seems to work pretty well. You can install it using pip or your package manager.
pkg install py27-kafka-python
fetch https://gist.githubusercontent.com/shanerman/746f79771702bd2ff0a9eb23de0343d3/raw/43437b2b0cb319d755d036eb33e037fe5b1dfeab/print_bro_stream.py
python2.7 print_bro_stream.py
At this point, logs should start printing to your screen. If you’re not seeing anything, you may have to (again) generate some traffic for Bro to log.
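In case the gist above ever moves, the script boils down to something like this minimal sketch using kafka-python’s KafkaConsumer (topic and broker address taken from our config; this isn’t the gist verbatim):
import json

from kafka import KafkaConsumer

# Subscribe to the topic our Bro sensors publish to.
consumer = KafkaConsumer(
    'THREATLINE',
    bootstrap_servers=['10.0.0.10:9092'],
    auto_offset_reset='earliest',
)

for message in consumer:
    # Each message value is a JSON-encoded Bro log entry.
    event = json.loads(message.value)
    print(json.dumps(event, indent=2))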
OK, so we have all our Bro sensors pushing various log data into a unified stream of events, ready for consumption. Now what? Well, the sky is the limit at this point. Here are a few ideas:
– Have logstash subscribe to the Kafka topic and push events into Elasticsearch.
– Monitor `dns` events and check for evil domain names (a sketch follows this list).
– Watch the `conn` events and look for compromised IP addresses.
– Watch `software` events to get an idea of what software is running on your network.
– Monitor `ssl` events for bad SSL certificates.
– etc…
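To make the `dns` idea concrete, here’s a rough sketch. The blacklist is hypothetical (in practice you’d load a real threat feed), and the field names assume Bro’s standard JSON dns log with `tag_json` enabled:
import json

from kafka import KafkaConsumer

# Hypothetical blacklist; swap in a real threat feed.
EVIL_DOMAINS = {'evil.example.com', 'malware.example.net'}

consumer = KafkaConsumer('THREATLINE', bootstrap_servers=['10.0.0.10:9092'])

for message in consumer:
    event = json.loads(message.value)
    # With tag_json enabled, each event is keyed by its log stream name.
    dns = event.get('dns')
    if dns and dns.get('query') in EVIL_DOMAINS:
        print('ALERT: %s queried evil domain %s' % (
            dns.get('id.orig_h'), dns['query']))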
Once you have the Bro events in a Python data structure, the sky really is the limit. In future posts, we’ll dive deeper into processing these events using Python and do some alerting.