Building a Centralized Log Analysis Platform with ELK (Elasticsearch + Logstash + Kibana)

Preface

Elasticsearch + Logstash + Kibana (ELK) is an open-source log management stack. To analyze a website's traffic we usually embed JavaScript counters from Google/Baidu/CNZZ, but when the site misbehaves or comes under attack we need to analyze backend logs such as Nginx's. Nginx log rotation, GoAccess, and Awstats are relatively simple single-node solutions that fall short for distributed clusters or larger data volumes; ELK lets us meet that challenge with confidence. The three components divide the work as follows:
Logstash: collects, processes, and stores logs
Elasticsearch: searches and analyzes logs
Kibana: visualizes logs
ELK Overview

ELK (see the official ELK documentation) is a distributed, scalable, real-time search and data-analytics engine. At the moment I use it at work only to collect server logs, where it is a great helper for our developers when debugging.
Installing and Configuring the Cluster

ELK version: 6.7 (see the ELK installation docs). The cluster's main purpose is high availability, and a multi-node Elasticsearch can also be scaled out. This article uses the official Docker images; the base image is centos:7.
Elasticsearch Multi-Node Installation (official installation docs: Elasticsearch)
```
mkdir -p /data/elk-data && chmod 755 /data/elk-data
chmod g+rwx /data/elk-data
chown -R 1000 /data/elk-data
docker run -p WAN_IP:9200:9200 -p 10.66.236.116:9300:9300 \
    -v /data/elk-data:/usr/share/elasticsearch/data \
    --name test_elk \
    docker.elastic.co/elasticsearch/elasticsearch:6.7.0
```
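One host-level prerequisite the commands above do not show, from the official Docker instructions for Elasticsearch: the kernel's vm.max_map_count must be raised, or a node bound to a non-loopback address will fail its bootstrap checks. Run this on every Elasticsearch host:

```
# Elasticsearch requires vm.max_map_count >= 262144 (the default 65530 is too low).
sysctl -w vm.max_map_count=262144
# Persist the setting across reboots:
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
```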
Run one node per host; the same command is repeated on each of the three machines:

```
docker run -d -p 160.255.0.227:9200:9200 -p 160.255.0.227:9300:9300 \
    -v /data/elk-data:/usr/share/elasticsearch/data \
    --name test_elasticsearch docker.elastic.co/elasticsearch/elasticsearch:6.7.0
docker run -d -p 160.255.0.165:9200:9200 -p 160.255.0.165:9300:9300 \
    -v /data/elk-data:/usr/share/elasticsearch/data \
    --name test_elasticsearch docker.elastic.co/elasticsearch/elasticsearch:6.7.0
docker run -d -p 160.255.0.133:9200:9200 -p 160.255.0.133:9300:9300 \
    -v /data/elk-data:/usr/share/elasticsearch/data \
    --name test_elasticsearch docker.elastic.co/elasticsearch/elasticsearch:6.7.0
```
elasticsearch.yml for each of the three nodes:

```
# node2 (160.255.0.227)
cluster.name: "test_elk"
network.host: 0.0.0.0
node.master: true
node.data: true
node.name: node2
network.publish_host: 160.255.0.227
discovery.zen.ping.unicast.hosts: ["160.255.0.227:9300","160.255.0.165:9300","160.255.0.133:9300"]

# node3 (160.255.0.165)
cluster.name: "test_elk"
network.host: 0.0.0.0
node.master: true
node.data: true
node.name: node3
network.publish_host: 160.255.0.165
discovery.zen.ping.unicast.hosts: ["160.255.0.227:9300","160.255.0.165:9300","160.255.0.133:9300"]

# node4 (160.255.0.133)
cluster.name: "test_elk"
network.host: 0.0.0.0
node.master: true
node.data: true
node.name: node4
network.publish_host: 160.255.0.133
discovery.zen.ping.unicast.hosts: ["160.255.0.227:9300","160.255.0.165:9300","160.255.0.133:9300"]
```
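One setting the snippets above do not include, added here as a suggestion rather than part of the original setup: with three master-eligible nodes, the Elasticsearch 6.x docs recommend setting the master quorum to (master-eligible nodes / 2) + 1 = 2 to avoid split-brain. A minimal addition to each elasticsearch.yml:

```
# Quorum of master-eligible nodes: 3 / 2 + 1 = 2 (prevents split-brain on 6.x).
discovery.zen.minimum_master_nodes: 2
```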
Check the cluster's node count, status, and so on.
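A minimal health check, assuming 160.255.0.227 is reachable from where you run it (any of the three nodes works); the first command returns the JSON shown below:

```
curl -s 'http://160.255.0.227:9200/_cluster/health?pretty'
# Per-node view:
curl -s 'http://160.255.0.227:9200/_cat/nodes?v'
```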
```
{
  "cluster_name" : "test_elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
```
Kibana Installation (official installation docs: Kibana)
```
# Linked to the Elasticsearch container (legacy --link, see the note below):
docker run -d --restart always -p 160.255.0.227:5601:5601 \
    --link test_elk:elasticsearch --name kibana docker.elastic.co/kibana/kibana:6.7.0

# Standalone, bound to the host's public IP (replace PUBLIC_IP accordingly):
docker run -d --restart always -p PUBLIC_IP:5601:5601 \
    --name test_kibana docker.elastic.co/kibana/kibana:6.7.0
```
"We recommend that you use user-defined networks to facilitate communication between two containers instead of using --link."
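A minimal sketch of that recommendation (the network name elk-net is an illustrative assumption):

```
# Create a user-defined bridge network, attach the running Elasticsearch
# container, then start Kibana on the same network. Containers on a
# user-defined network can reach each other by name, e.g. http://test_elk:9200.
docker network create elk-net
docker network connect elk-net test_elk
docker run -d --restart always --network elk-net -p 5601:5601 \
    --name kibana docker.elastic.co/kibana/kibana:6.7.0
```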
kibana.yml:

```
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://172.17.0.2:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
```

Restart the container so the changes take effect:

```
docker restart [container_ID]
```
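One way to apply that edit without rebuilding the image (a sketch, assuming the container is named kibana; the official image keeps its config at /usr/share/kibana/config/kibana.yml):

```
# Copy the config out, edit it locally, copy it back, then restart.
docker cp kibana:/usr/share/kibana/config/kibana.yml ./kibana.yml
# ... edit ./kibana.yml ...
docker cp ./kibana.yml kibana:/usr/share/kibana/config/kibana.yml
docker restart kibana
```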
Logstash Installation (official installation docs: Logstash)
```
docker run -p 5044:5044 -d --name test_logstash docker.elastic.co/logstash/logstash:6.7.0
docker run -p 160.255.0.227:5044:5044 -d --name test_logstash docker.elastic.co/logstash/logstash:6.7.0
```

In the pipeline configuration, point the elasticsearch output plugin at the cluster:

```
output {
  elasticsearch {
    hosts => ["160.255.0.227", "160.255.0.165", "160.255.0.133"]
  }
}
```
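How that pipeline file reaches the container: the official image loads pipeline configs from /usr/share/logstash/pipeline/, so a bind mount works. The host path /data/logstash/pipeline/ below is an assumption for illustration:

```
docker run -d --name test_logstash -p 160.255.0.227:5044:5044 \
    -v /data/logstash/pipeline/:/usr/share/logstash/pipeline/ \
    docker.elastic.co/logstash/logstash:6.7.0
```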
Configuration file structure

A Logstash configuration file can configure all plugin types: input, filter, output, and codec. When multiple filters are present, they are applied in order. Every plugin supports a rich set of options.
For example:
```
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
  }
  file {
    path => "/var/log/apache/access.log"
    type => "apache"
  }
}
```
Logstash filter rules: see the configuration above and the grok syntax rules.
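As a concrete sketch (not from the original post): a grok filter for the apache input declared above, using the COMBINEDAPACHELOG pattern that ships with the plugin, plus a date filter so the event keeps the log's own timestamp:

```
filter {
  if [type] == "apache" {
    grok {
      # Built-in pattern for combined-format access logs (Apache/Nginx).
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      # Use the timestamp parsed from the log line as the event time.
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}
```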
logstash.yml:

```
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://elasticsearch_master_IP:9200
node.name: "node2"
pipeline.workers: 24
```
Filebeat Installation (official installation docs: Filebeat)
```
docker run \
    -d --name filebeat \
    -v /data/prod/seat/tomcat/logs:/var/log/tomcat \
    -v /data/filebeat/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro \
    -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    docker.elastic.co/beats/filebeat:7.1.1
```
Structure of filebeat.yml:
```
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/tomcat/applogs.log
output.elasticsearch:
  hosts: ["160.255.0.240:9200"]
setup.kibana:
  host: "160.255.0.240:5601"
```
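To confirm events are flowing, check for Filebeat's indices on the Elasticsearch host configured above (Filebeat writes to filebeat-<version>-<date> indices by default):

```
curl -s 'http://160.255.0.240:9200/_cat/indices/filebeat-*?v'
```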