There is no silver bullet. This is how I always answer those asking about the best logging solution for Docker. A bit pessimistic of me, I know. But what I’m implying with that statement is that there is no perfect method for gaining visibility into containers. Dockerized environments are distributed, dynamic, and multi-layered in nature, so they are extremely difficult to log.
That’s not to say that there are no solutions — to the contrary. From Docker’s logging drivers to logging containers to data volumes, there are plenty of ways to log Docker, but all have certain limitations or pitfalls. Logz.io users, for example, rely on a dedicated container that acts as a log collector, pulling Docker daemon events, stats, and logs into our ELK Stack (Elasticsearch, Logstash, and Kibana).
That’s why I was curious to hear about the first release of Dockbeat (called Dockerbeat prior to Docker’s new repo naming conventions) — the latest addition to Elastic’s family of beats, a group of lightweight shippers developed for different environments and purposes. Dockbeat was contributed by the ELK community and focuses on using the Docker stats API to push container resource usage metrics such as memory, IO, and CPU to either Elasticsearch or Logstash.
Below is a short review of how to get Dockbeat up and running, as well as a few personal first impressions. My environment was a locally installed ELK Stack and Docker on Ubuntu 14.04.
To get Dockbeat up and running, you can either build the project yourself or use the binary release on the GitHub repository. The former requires some additional setup steps (installing Go and Glide, for starters), so I eventually opted for the latter. It took just a few steps and proved to be pretty painless (an additional method is to run Dockbeat as a container — see the repo’s readme for more details).
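For the curious, building from source looks roughly like the following. This is a hedged sketch only: it assumes Go and Glide are already installed and on your PATH, and the repo’s readme is the authoritative reference for the exact steps.

```shell
# Hypothetical build-from-source sketch -- assumes Go and Glide are installed.
git clone https://github.com/Ingensi/dockbeat.git
cd dockbeat
glide install   # fetch the project's vendored Go dependencies
go build        # compile, producing a dockbeat binary in the working directory
```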
You will first need to download the source code and release package from: https://github.com/Ingensi/dockbeat/releases
[code]$ git clone https://github.com/Ingensi/dockbeat.git
[/code]

Configuring and Running Dockbeat
Before you start Dockbeat, there is the matter of configuration. Since I used a vanilla installation with Docker and ELK installed locally, I did not need to change a thing in the supplied dockbeat.yml file.
Dockbeat is configured to connect to the default Docker socket (unix:///var/run/docker.sock) out of the box, and my local Elasticsearch was already defined in the output section:
[code]### Elasticsearch as output
[/code] Of course, if you’re using a remotely installed Elasticsearch or Logstash instance, you will need to change these configurations accordingly.
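A remote setup might look something like the following in dockbeat.yml. This is a sketch based on the libbeat output conventions of that era; the hostnames are placeholders, and you would enable only one of the two outputs:

```yaml
output:
  ### Elasticsearch as output
  elasticsearch:
    hosts: ["elasticsearch.example.com:9200"]

  ### Logstash as output -- uncomment to ship via Logstash instead
  #logstash:
  #  hosts: ["logstash.example.com:5044"]
```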
Before you start Dockbeat, you will need to grant execution permissions to the binary file:
[code]$ chmod +x dockbeat-v1.0.0-x86_64
[/code] Then, to start Dockbeat, use the following run command:
[code]./dockbeat-v1.0.0-x86_64 -c dockbeat-1.0.0/dockbeat.yml -v -e
[/code] Please note: I used the two optional flags (‘-v’ for verbose logging and ‘-e’ to log to stderr) to see the output of the run command, but these, of course, are not mandatory.
Dockbeat then runs, and if all goes as expected, you should see the following lines in the debug output:
[code]2016/09/18 09:13:40.851229 beat.go:173: INFO dockbeat successfully setup. Start running.
2016/09/18 09:13:40.851278 dockbeat.go:196: INFO dockbeat %!(EXTRA string=dockbeat is running! Hit CTRL-C to stop it.)
2016/09/18 09:14:47.101231 dockbeat.go:320: INFO dockbeat %!(EXTRA string=Publishing %v events, int=5)
[/code] It seems like all is working as expected, so my next step is to ping Elasticsearch and list its indices:
[code]$ curl localhost:9200/_cat/indices
[/code] The output displays a cross-section of Elasticsearch indices:
[code]yellow open dockbeat-2016.09.18 5 1 749 0 773.7kb 773.7kb
yellow open .kibana             1 1   1 0   3.1kb   3.1kb
[/code] The next step is to define the index pattern in Kibana. After clicking on the Setting tab in Kibana, I entered dockbeat.* in the index name/pattern field and selected the @timestamp filter to create the new index pattern:
Now, all the metrics collected by Dockbeat and stored in Elasticsearch are available in the Visualize tab in Kibana: