More Zhixiang ops tips! This time: writing logs collected by logstash into redis in an ELK stack. In ELK we usually have logstash collect the logs and hand them straight to ES for storage, but under heavy log volume the storage side can fall behind collection. When that happens we need a queue to buffer events in order, and that is where redis comes in.
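The buffering pattern itself is simple: the shipper pushes events onto the tail of a redis list and the indexer pops them from the head, so bursts are absorbed in FIFO order. A toy Python sketch of that handoff, using an in-memory deque as a stand-in for the real redis list (the actual Logstash plugins do this over the network against a redis server):

```python
from collections import deque

# In-memory stand-in for the redis list "nginx_log"; the real setup
# uses a redis server, this only illustrates the FIFO buffering.
buffer = deque()

def producer(event):
    """Shipper side: append to the tail (roughly what RPUSH does)."""
    buffer.append(event)

def consumer():
    """Indexer side: pop from the head (roughly what BLPOP does)."""
    return buffer.popleft() if buffer else None

for line in ["log line 1", "log line 2", "log line 3"]:
    producer(line)

# Events drain in the same order they arrived, even if the
# consumer temporarily falls behind the producer.
drained = [consumer() for _ in range(3)]
print(drained)  # ['log line 1', 'log line 2', 'log line 3']
```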
Edit /etc/logstash/conf.d/logstash.conf:
input {
  beats {
    port => 5044
    codec => plain {
      charset => "UTF-8"
    }
  }
}
filter {
  grok {
    match => ["message", '%{IP:Client} - - \[%{HTTPDATE:timestamp}\] \"(%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER:httpversion})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent}']
  }
  if "beats_input_codec_plain_applied" in [tags] {
    mutate {
      remove_tag => ["beats_input_codec_plain_applied"]
    }
  }
}
output {
  redis {
    data_type => "list"
    host => "172.26.61.58"
    port => "6379"
    db => "0"
    #password => "123456"
    key => "nginx_log"
  }
}
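The grok expression in the filter above parses standard nginx access-log lines. A quick way to sanity-check it offline is a hypothetical plain-regex translation in Python (the named groups mirror the grok captures, and the sample line below is made up for illustration):

```python
import re

# Hand-written regex equivalent of the grok pattern above,
# only for checking the expected field layout locally.
NGINX_LINE = re.compile(
    r'(?P<client>\d{1,3}(?:\.\d{1,3}){3}) - - '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\w+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

sample = ('172.26.61.10 - - [29/Oct/2019:16:30:54 +0800] '
          '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"')

m = NGINX_LINE.match(sample)
print(m.group("client"), m.group("method"), m.group("response"))
# 172.26.61.10 GET 200
```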
Then test the configuration:
logstash -f /etc/logstash/conf.d/logstash.conf -t
The output looks like this:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-10-29 16:30:54.152 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-10-29 16:30:56.037 [LogStash::Runner] Reflections - Reflections took 55 ms to scan 1 urls, producing 20 keys and 40 values
Configuration OK
Then we add a new configuration that reads from the redis list and writes to Elasticsearch:
input {
  redis {
    data_type => "list"    # read mode: consume the key as a list
    host => "172.26.61.58"
    port => "6379"
    db => "0"
    #password => "123456"
    key => "nginx_log"
    threads => 1
  }
}
output {
  elasticsearch {
    hosts => "172.26.61.61:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
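The index setting above uses Logstash's sprintf format: %{[@metadata][beat]} is filled from the event's @metadata and %{+YYYY.MM.dd} from its @timestamp. A small Python sketch of how a daily index name is derived (the beat name "filebeat" is an assumption about the shipper, not something the config guarantees):

```python
from datetime import datetime, timezone

# A minimal event with the two fields the index pattern reads.
event = {
    "@timestamp": datetime(2019, 10, 29, 16, 30, 54, tzinfo=timezone.utc),
    "@metadata": {"beat": "filebeat"},  # assumption: the shipper is filebeat
}

# "%{[@metadata][beat]}-%{+YYYY.MM.dd}" resolved for this event:
index = "{}-{}".format(
    event["@metadata"]["beat"],
    event["@timestamp"].strftime("%Y.%m.%d"),  # %{+YYYY.MM.dd}
)
print(index)  # filebeat-2019.10.29
```

Because the date comes from each event's own timestamp, this yields one index per day, which keeps retention and deletion cheap.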
Then confirm in redis and kibana.
In redis we can see the nginx_log key was created but holds no data: the entries have already been consumed into ES. Next, check kibana for the most recent log entries.
If new logs keep flowing into ES and kibana shows the latest data, everything is working.