Structuring data with Logstash

Given the trend around microservices, it has become mandatory to be able to follow a transaction across multiple microservices. Spring Cloud Sleuth is such a distributed tracing system, fully integrated into the Spring Boot ecosystem. By adding spring-cloud-starter-sleuth to a project’s POM, it instantly becomes Sleuth-enabled, and every standard log call automatically adds additional data, such as spanId and traceId, to the usual data:
  2016-11-25 19:05:53.221  INFO [demo-app,b4d33156bc6a49ec,432b43172c958450,false] 25305 ---\n
  [nio-8080-exec-1] ch.frankel.blog.SleuthDemoApplication      : this is an example message
(broken on 2 lines for better readability)
Now, instead of sending the data to Zipkin, let’s say I need to store it into Elasticsearch instead. A product is only as good as the way it’s used, and indexing unstructured log messages is not very useful. Logstash configuration makes it possible to pre-parse unstructured data and send structured data instead.
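As an illustration, a minimal pipeline skeleton for such a setup could look like the following. This is only a sketch: the file path is the one appearing in the outputs below, while the Elasticsearch host is an assumption.

  input {
    file {
      path => "/tmp/logstash.log"
    }
  }
  filter {
    # grok or dissect filter, detailed below
  }
  output {
    elasticsearch {
      hosts => [ "localhost:9200" ]    # assumed local instance
    }
  }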
  Grok

Grokking data is the usual way to structure data with pattern matching.
Last week, I wrote about some hints for the configuration. Unfortunately, the hard part comes in writing the matching pattern itself, and those hints don’t help. While it might be possible to write a perfect Grok pattern on the first draft, the above log is complicated enough that it’s far from a certainty, and chances are high to stumble upon such a message when starting Logstash with an unfit Grok filter:
  1. "tags" => [
  2.     [0] "_grokparsefailure"
  3. ]
复制代码
However, there’s “an app for that” (sounds familiar?). It offers three fields:

  • The first field accepts one (or more) log line(s)
  • The second accepts the Grok pattern
  • The third is the result of filtering the first by the second

(screenshot of the app, showing the three fields)
  
The process is now to match fields one by one, from left to right. The first data field, e.g. 2016-11-25 19:05:53.221, is obviously a timestamp. Among the common grok patterns, it looks as if the TIMESTAMP_ISO8601 pattern would be the best fit.
Enter %{TIMESTAMP_ISO8601:timestamp} into the Pattern field. The result is:
  {
    "timestamp": [
      [
        "2016-11-25 17:05:53.221"
      ]
    ]
  }
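The same iteration can also be reproduced locally with a throwaway pipeline: feed lines through stdin and print the parsed event with the rubydebug codec - the codec responsible for the key => value rendering visible in the snippets of this post. A minimal sketch:

  input { stdin {} }
  filter {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}" }
    }
  }
  output { stdout { codec => rubydebug } }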
The next field to handle looks like the log level. Among the patterns, there’s one called LOGLEVEL. The pattern now becomes %{TIMESTAMP_ISO8601:timestamp} *%{LOGLEVEL:level} and the result:
  {
    "timestamp": [
      [
        "2016-11-25 17:05:53.221"
      ]
    ],
    "level": [
      [
        "INFO"
      ]
    ]
  }
Rinse and repeat until all fields have been structured. Given the initial log line, the final pattern should look something along these lines:
  %{TIMESTAMP_ISO8601:timestamp} *%{LOGLEVEL:level} \[%{DATA:application},%{DATA:traceId},%{DATA:spanId},%{DATA:zipkin}]\n
  %{DATA:pid} --- *\[%{DATA:thread}] %{JAVACLASS:class} *: %{GREEDYDATA:log}
(broken on 2 lines for better readability)
  And the associated result:
  {
          "traceId" => "b4d33156bc6a49ec",
           "zipkin" => "false",
            "level" => "INFO",
              "log" => "this is an example message",
              "pid" => "25305",
           "thread" => "nio-8080-exec-1",
             "tags" => [],
           "spanId" => "432b43172c958450",
             "path" => "/tmp/logstash.log",
       "@timestamp" => 2016-11-26T13:41:07.599Z,
      "application" => "demo-app",
         "@version" => "1",
             "host" => "LSNM33795267A",
            "class" => "ch.frankel.blog.SleuthDemoApplication",
        "timestamp" => "2016-11-25 17:05:53.221"
  }
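In production, events that still fail to match end up with the _grokparsefailure tag seen above. Rather than indexing them unparsed, they can be routed aside with a conditional output - a sketch, where the dump file path is hypothetical and the Elasticsearch host an assumption:

  output {
    if "_grokparsefailure" in [tags] {
      # hypothetical dump file for unparsed events
      file { path => "/tmp/grok_failures.log" }
    } else {
      elasticsearch { hosts => [ "localhost:9200" ] }    # assumed local instance
    }
  }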
Dissect

The Grok filter gets the job done. But it seems to suffer from performance issues, especially if the pattern doesn’t match. An alternative is to use the dissect filter instead, which is based on separators.
  Unfortunately, there’s no app for that - but it’s much easier to write a separator-based filter than a regex-based one. The mapping equivalent to the above is:
  %{timestamp} %{+timestamp} %{level}[%{application},%{traceId},%{spanId},%{zipkin}]\n
  %{pid} %{}[%{thread}] %{class}:%{log}
(broken on 2 lines for better readability)
  This outputs the following:
  {
          "traceId" => "b4d33156bc6a49ec",
           "zipkin" => "false",
              "log" => " this is an example message",
            "level" => "INFO ",
              "pid" => "25305",
           "thread" => "nio-8080-exec-1",
             "tags" => [],
           "spanId" => "432b43172c958450",
             "path" => "/tmp/logstash.log",
       "@timestamp" => 2016-11-26T13:36:47.165Z,
      "application" => "demo-app",
         "@version" => "1",
             "host" => "LSNM33795267A",
            "class" => "ch.frankel.blog.SleuthDemoApplication      ",
        "timestamp" => "2016-11-25 17:05:53.221"
  }
Notice the slight differences: by moving from a regex-based filter to a separator-based one, some strings end up padded with spaces. There are two ways to handle that:
  
       
  • change the logging pattern in the application - which might make direct log reading harder   
  • strip additional spaces with Logstash  
  With the second option, the final filter configuration snippet is:
  filter {
    dissect {
      mapping => { "message" => ... }
    }
    mutate {
      strip => [ "log", "class" ]
    }
  }
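A possible follow-up step, not strictly required for structuring: in the results above, @timestamp reflects ingestion time, not the application’s own timestamp field. The date filter can align them - a sketch, assuming the format matching the sample timestamp:

  filter {
    date {
      # "yyyy-MM-dd HH:mm:ss.SSS" matches e.g. 2016-11-25 17:05:53.221
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
    }
  }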
Conclusion

In order to structure data, the grok filter is powerful and used by many. However, depending on the specific log format to parse, writing the filter expression might be quite a complex task. The dissect filter, based on separators, is an alternative that makes it much easier - at the price of some additional handling. It is also an option to consider in case of performance issues.


