Logging in Clojure / JVM – Part 4

In the previous part of this series, we learnt how we could store log data so it’s easy to get insights from it later. For instance, we use the GELF log format on Zolodeck, our side project. In this part, we’ll look at how to actually get insights from our logs, using a bunch of open source tools.

Here are the three simple steps to getting insights from our logs:

1) Write Logs
2) Transport Logs
3) Process Logs

Simple enough!

Write Logs:

In part 3, we saw how logging in a standard JSON format is beneficial. Some readers asked why we don’t use Clojure or Ruby data structures instead of JSON. Here’s why JSON is the better choice:

  • JSON is accessible from all languages
  • There are already a bunch of tools available to transport and process logs that accept JSON
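
To make this concrete, here’s a minimal sketch of writing JSON log lines from Clojure. It assumes clojure.tools.logging and cheshire as the logging and JSON libraries (the series doesn’t mandate either), and the log-event helper and its fields are purely illustrative:

    (require '[clojure.tools.logging :as log]
             '[cheshire.core :as json])

    (defn log-event
      "Logs a map as a single JSON line, so downstream tools can parse it."
      [level event]
      (log/log level
               (json/generate-string
                 (assoc event
                        :timestamp (System/currentTimeMillis)
                        :host (.getHostName (java.net.InetAddress/getLocalHost))))))

    ;; usage
    (log-event :info {:module "sync" :message "contact sync started" :user-id 42})

Because each log statement is one self-contained JSON object per line, downstream tools don’t care what extra fields we decide to add later.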

Transport Logs:

Always write logs to local disk. It is tempting to have a log4j appender that sends logs directly to a remote server over UDP or HTTP, but unfortunately you can’t guarantee that you’ll consistently reach those servers. So it’s better to write to local disk first, and then transport your logs to wherever they’re going to be processed. There are many open source tools available for transporting logs, and your choice will depend on what tool you end up using to process them. Some tools that you can use for transporting logs are:

  • Scribe – Facebook open-sourced this. This tool does more than just transporting logs.
  • Lumberjack
  • rsync
  • Logstash – This tool does a lot more than transporting logs.
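
For example, the simplest transport that works for many setups is a periodic rsync from each app server to the machine where the logs get processed (the schedule, paths, and host below are made up):

    # crontab entry: ship local logs to the processing box every 5 minutes
    */5 * * * * rsync -az /var/log/myapp/ logs@log-processor:/data/logs/myapp/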

Process Logs:

We need to be able to collect, index, and search through our log data for anything we care to find. Some open-source tools out there for processing logs are:

  • Logstash – Like I said, this tool does a lot more than just transport logs :)
  • Graylog2

Both Logstash (via Kibana) and Graylog2 provide web interfaces that make analyzing and searching the underlying logs easy.
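
As an illustration, a bare-bones Logstash setup that tails local JSON log files and indexes them into Elasticsearch (which Kibana then searches) looks roughly like this; option names vary between Logstash versions, so treat it as a sketch rather than a copy-paste config:

    input {
      file {
        path  => "/var/log/myapp/*.log"
        codec => "json"
      }
    }

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
      }
    }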

As you can see, there are many options for managing and analyzing logs. Here’s what we currently do on our project:

[Image: “Logging in Clojure 4.001” – diagram of our current logging setup]

It is simple for now and we’re hoping to keep it that way :)

Conclusion

Logs are useful. Logs that provide us with insights are more useful :), and if they do this easily, they’re even more so. When we start a new project, we need to spend some time thinking about logging up front, since this is a crucial part of managing growth. Thanks to a variety of open-source tools and libraries, it is not that expensive to try out different logging strategies. A properly thought-out logging architecture will save you a lot of time later on. I hope this series has shed some light on logging and why it’s important in this age of distributed computing.

Please do share your experiences with how you’re handling logging on your projects. What went well? What didn’t? I’d love to see what folks are doing out there, and document it here, to make this knowledge available for others. Onward!

Logging in Clojure / JVM – Part 3

In parts 1 and 2, we looked at the history of logging and how to use SLF4J (a library I’m using with Zolodeck). In this part, we’re going to learn about the different formats we can use for our logs. We need to choose the right log format so we can get insights from our logs when we want them. If we cannot easily process the logs to get at the insights we need, then it doesn’t matter what logging framework we use or how many gigabytes of logs we collect every day.

Purpose of logs:

  • Debugging issues
  • Historical analysis
  • Business and Operational Analysis

If the logs are not adequate for these purposes, it means we’re doing something wrong. Unfortunately, I’ve seen this happen on many projects.

Consumer of logs:

Before we look into what format to log in, we need to know who is going to consume our logs for insights. Most logging implementations assume that humans will be consuming log statements, so log messages are essentially formatted strings (think printf) that humans can easily read. In practice, what we’re really doing is creating far too much log data for humans to consume and still get any particularly useful insights. People then try to solve this overload problem by being cautious about what they log, the idea being that less information will be easier for humans to handle. Unfortunately, we can’t know beforehand what information we may need to debug an issue, so what always ends up happening is that some important piece of information gets left out.

Remember how we can program machines to consume lots of data and provide better insights? So instead of creating log files for humans to consume, we need to create them for machines.

Format of logs:

Now that we know that machines will be consuming our logs, we need to decide what format our logs should be in. Optimizing for machine readability makes sense, of course.

Formatted strings:

We can easily write a program that uses regexes to parse formatted-string log messages. Formatted strings, however, are still not a good fit, for the following reasons:

  • Logging Java stack traces can break our parser thanks to new line characters
  • Developers can’t remove or add fields without breaking the parser.

What is a better way then?

JSON Objects:

JSON objects aren’t particularly human readable, but machines love them. We can use any JSON library to parse our logs. Developers can add and remove fields, and our parser will still work fine. We can also log Java stack traces without breaking our parser, by treating the stack trace as just another field of data.
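
For example, here’s one way (a sketch, not the only way) to capture an exception as a plain string field in the JSON object, again assuming clojure.tools.logging and cheshire; sync-contacts! is just a placeholder for whatever might throw:

    (require '[clojure.tools.logging :as log]
             '[cheshire.core :as json])

    (defn throwable->string
      "Render a Throwable's stack trace as a single string."
      [^Throwable t]
      (let [sw (java.io.StringWriter.)]
        (.printStackTrace t (java.io.PrintWriter. sw))
        (str sw)))

    (try
      (sync-contacts!)  ; placeholder for the code that might fail
      (catch Exception e
        (log/error (json/generate-string {:message    "contact sync failed"
                                          :stacktrace (throwable->string e)}))))

The newlines in the stack trace end up safely escaped inside one JSON string, so the log file stays one parseable object per line.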

JSON log object fields:

Now that we’ve established that it makes sense to use JSON objects as logs, the question is what basic fields ought to be included. Obviously, this will depend on the application and business requirements, but at a minimum we’d need the following fields:

  • Host
  • Message
  • Timestamp
  • Log Level
  • Module / Facility
  • File
  • Line Number
  • Trace ID
  • Environment
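
Put together, a single log line might look something like this (the field names follow the list above; the values are made up):

    {"host": "app-server-1",
     "message": "contact sync started",
     "timestamp": "2013-02-22T08:27:00Z",
     "level": "INFO",
     "module": "sync",
     "file": "sync.clj",
     "line": 42,
     "trace_id": "a1b2c3d4",
     "environment": "production"}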

Standard JSON log format:

Instead of coming up with a custom JSON log format, we ought to just use a standard one. One option is GELF, which is supported by many log analysis tools. GELF stands for Graylog Extended Log Format. There are a lot of open source log appenders that create logs in GELF format; on my side project Zolodeck, we use logback-gelf.
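
For comparison, here is roughly what the same event looks like as a GELF message. The unprefixed fields come from the GELF spec itself, and application-specific fields get a leading underscore; the values are again made up:

    {"version": "1.1",
     "host": "app-server-1",
     "short_message": "contact sync started",
     "timestamp": 1361521620.123,
     "level": 6,
     "_module": "sync",
     "_trace_id": "a1b2c3d4",
     "_environment": "production"}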

In this part of the blog series, we learnt why we need to think about machine-readable logs, and why we ought to use JSON as the log format. In the next part, we will look at how to get insights from logs, using a bunch of open source tools.