I needed to analyze Sensu client log files in /var/log/sensu for historical CPU, disk and network data.

A web search suggested I give Logstash a try.

I decided I would also try installing Elasticsearch and Kibana. My distro of choice is of course Ubuntu, in this case Ubuntu 14.04.

Install Logstash

Download Logstash deb and install it using dpkg

# sudo dpkg -i logstash_1.5.0-1_all.deb

The Logstash service did not work for me, so for the rest of this post I just ran Logstash manually from a terminal.

Install ElasticSearch

Download ElasticSearch deb and install it using dpkg

sudo dpkg -i elasticsearch-1.5.2.deb

Elasticsearch did start as a service (sudo service elasticsearch start), and I went to http://localhost:9200 to confirm it worked.

Install Kibana

Download Kibana Linux tar archive and install it using tar

tar xvfz kibana-4.0.2-linux-x64.tar.gz -C $HOME/apps

Kibana is configured by default, in config/kibana.yml, to find an Elasticsearch server running at localhost:9200, so just start Kibana using ./bin/kibana in the expanded tar archive.
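For reference, this is the relevant setting in config/kibana.yml — shown here with its default value (setting name as used by Kibana 4.x; check your copy of the file if it differs):

```
# config/kibana.yml -- where Kibana looks for Elasticsearch by default
elasticsearch_url: "http://localhost:9200"
```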

Delete any sincedb files

Remove any $HOME/.sincedb* files. These files keep track of the current position in monitored log files. When I first started experimenting with Logstash, it created sincedb files that prevented the Sensu files from being reparsed after I fixed the Logstash config. After deleting them, Logstash correctly reparsed the Sensu log files and I was happy.
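Concretely, this is all it takes (note that sincedb files live under the home directory of whichever user runs Logstash):

```shell
# Remove Logstash's sincedb bookkeeping files so monitored
# log files are read from the beginning again on the next run
rm -f "$HOME"/.sincedb*
```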

Create Logstash Config file


# delete $HOME/.sincedb* files to reparse files
input {
  file {
    type => "sensu_logs"
    path => "/home/stanley/tmp/sensu/sensu-client*"
    start_position => "beginning"
  }
}

filter {
  json {
    source => "message"
  }
  grok {
    match => { "[payload][check][output]" => "cpu.total.user %{NUMBER:total_user_cpu:int} %{NUMBER:total_user_cpu2:int}" }
  }
}

output {
  elasticsearch {
    host => ''
  }
}

Run Logstash

# /opt/logstash/bin/logstash agent -f logstash_config

This is a breakdown of what Logstash is doing:

Parse every file in /home/stanley/tmp/sensu whose name starts with sensu-client, from the beginning. The file input plugin splits the file at the delimiter, by default \n (a newline), and returns a structure for each entry that looks like this:

{
       "message" => "... /etc/sensu/plugins/cpu-metrics.rb -s ... 844451226 1431681440\\n\",\"status\":0}}}",
      "@version" => "1",
    "@timestamp" => "2015-05-17T12:08:57.237Z",
          "type" => "sensu_logs",
          "host" => "stanleyk-pc",
          "path" => "/home/stanleyk/tmp/sensu/sensu-client.log"
}

Logstash then takes the value of the message key and applies the json filter to it. The filter is smart in that it recursively walks the JSON and emits each field and its value. The grok filter then takes the payload.check.output field and puts the two numbers after cpu.total.user into their own fields. The resulting event looks like this:

{
            "message" => "publishing check result",
           "@version" => "1",
         "@timestamp" => "2015-05-17T12:13:13.167Z",
               "type" => "sensu_logs",
               "host" => "stanleyk-pc",
               "path" => "/home/stanleyk/tmp/sensu/sensu-client.log",
          "timestamp" => "2015-05-15T02:17:20.561814-0700",
              "level" => "info",
            "payload" => {
        "client" => "my-server",
         "check" => {
              "handlers" => [
                [0] "graphite"
              ],
                  "type" => "metric",
               "command" => "/opt/sensu/embedded/bin/ruby /etc/sensu/plugins/cpu-metrics.rb -s os.inf.net.sw.my-server.cpu",
            "standalone" => true,
              "interval" => 60,
                  "name" => "cpu-metrics",
                "issued" => 1431681438,
              "executed" => 1431681438,
              "duration" => 2.0319999999999996,
                "output" => "os.inf.net.sw.my-server.cpu.total.user 844451226 1431681440\n",
                "status" => 0
        }
    },
     "total_user_cpu" => 844451226,
    "total_user_cpu2" => 1431681440
}
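Outside of Logstash, the same extraction the grok pattern performs can be sketched with a plain shell one-liner (illustrative only; grok's NUMBER pattern also accepts signs and decimals, which this simplified regex ignores):

```shell
# Pull out the two numbers that follow cpu.total.user, as the grok pattern does
echo 'os.inf.net.sw.my-server.cpu.total.user 844451226 1431681440' \
  | sed -n 's/.*cpu\.total\.user \([0-9]*\) \([0-9]*\).*/\1 \2/p'
# prints: 844451226 1431681440
```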


I really should put some conditionals around the grok filter, because not all entries in the Sensu logs have a cpu.total.user field. When I figure that out, I'll update the post.
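One possibility I haven't tested yet is Logstash's if-conditional syntax, guarding the grok on the field the json filter expands (assuming the field reference [payload][check][output] matches how json nested the event):

```
filter {
  json {
    source => "message"
  }
  # Only attempt the grok when the entry actually carries cpu metrics
  if [payload][check][output] =~ "cpu.total.user" {
    grok {
      match => { "[payload][check][output]" => "cpu.total.user %{NUMBER:total_user_cpu:int} %{NUMBER:total_user_cpu2:int}" }
    }
  }
}
```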

These entries are fed into the elasticsearch output plugin, which adds them to the Elasticsearch datastore.

Viewing Data in Kibana

This Kibana tutorial can help you set up Kibana.

The next picture shows the graph settings that generated the time series graph.

Kibana visual example