From ELK to EFK: Building a Log Platform in Practice


Overall Architecture

Component versions used in this walkthrough:

Filebeat: 6.2.4
Kafka: 2.11-1.0.0
Logstash: 6.2.4
Elasticsearch: 6.2.4
Kibana: 6.2.4

It's best to download plugins that match these versions.
Hands-on Practice
We'll illustrate with the very common case of Nginx logs; the log entries are in JSON format:
{"@timestamp":"2017-12-27T16:38:17+08:00","host":"192.168.56.11","clientip":"192.168.56.11","size":26,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.56.11","url":"/nginxweb/index.html","domain":"192.168.56.11","xff":"-","referer":"-","status":"200"}{"@timestamp":"2017-12-27T16:38:17+08:00","host":"192.168.56.11","clientip":"192.168.56.11","size":26,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.56.11","url":"/nginxweb/index.html","domain":"192.168.56.11","xff":"-","referer":"-","status":"200"}{"@timestamp":"2017-12-27T16:38:17+08:00","host":"192.168.56.11","clientip":"192.168.56.11","size":26,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.56.11","url":"/nginxweb/index.html","domain":"192.168.56.11","xff":"-","referer":"-","status":"200"}{"@timestamp":"2017-12-27T16:38:17+08:00","host":"192.168.56.11","clientip":"192.168.56.11","size":26,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.56.11","url":"/nginxweb/index.html","domain":"192.168.56.11","xff":"-","referer":"-","status":"200"}{"@timestamp":"2017-12-27T16:38:17+08:00","host":"192.168.56.11","clientip":"192.168.56.11","size":26,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.56.11","url":"/nginxweb/index.html","domain":"192.168.56.11","xff":"-","referer":"-","status":"200"}
Filebeat
Why use Filebeat here rather than Logstash itself? Logstash runs on the JVM and is comparatively heavy on resources, while Filebeat is a lightweight Go binary, which makes it far cheaper to run on every application server.

Download
$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-darwin-x86_64.tar.gz
Extract
$ tar -zxvf filebeat-6.2.4-darwin-x86_64.tar.gz
$ mv filebeat-6.2.4-darwin-x86_64 filebeat
$ cd filebeat
Modify the configuration

Modify the Filebeat configuration so that it collects logs from a local directory and ships them to the Kafka cluster:
$ vim filebeat.yml

filebeat.prospectors:
- type: log
  paths:
    - /opt/logs/server/nginx.log
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: log

output.kafka:
  hosts: ["192.168.0.1:9092", "192.168.0.2:9092", "192.168.0.3:9092"]
  topic: 'nginx'
Note that several configuration parameters changed significantly after Filebeat 6.0; for example, document_type is no longer supported and has to be replaced with fields, among other changes.
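A minimal sketch of the fields replacement; the log_topic field name here is arbitrary, chosen purely for illustration:

filebeat.prospectors:
- type: log
  paths:
    - /opt/logs/server/nginx.log
  fields:
    log_topic: nginx       # replaces the removed document_type
  fields_under_root: true  # lift the custom field to the event's top level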
Start
$ ./filebeat -e -c filebeat.yml
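Before starting it for real, the configuration and the Kafka output can be sanity-checked with Filebeat's test subcommands (available in the 6.x series):

$ ./filebeat test config -c filebeat.yml
$ ./filebeat test output -c filebeat.yml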
Kafka
In production, the recommended number of nodes in a Kafka cluster is (2N + 1); we'll use 3 nodes as the example here.
Download

Download Kafka directly from the official site:
$ wget http://mirror.bit.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz
Extract
$ tar -zxvf kafka_2.11-1.0.0.tgz
$ mv kafka_2.11-1.0.0 kafka
$ cd kafka
Modify the ZooKeeper configuration

Modify the ZooKeeper configuration to build a ZooKeeper cluster, again with (2N + 1) nodes.

Using the ZooKeeper bundled with Kafka is recommended for this cluster, as it reduces interference from network-related factors.
$ vim ./config/zookeeper.properties

tickTime=2000
dataDir=/opt/zookeeper
clientPort=2181
maxClientCnxns=50
initLimit=10
syncLimit=5
server.1=192.168.0.1:2888:3888
server.2=192.168.0.2:2888:3888
server.3=192.168.0.3:2888:3888
Under the ZooKeeper data directory, add a myid file whose content is that node's ZooKeeper id (1, 2, or 3); make sure the ids do not repeat.
$ vim /opt/zookeeper/myid
1
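Equivalently, the file can be written non-interactively, using each node's own id:

$ echo 1 > /opt/zookeeper/myid   # use 2 and 3 on the other two nodes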
Start the ZooKeeper nodes

Start all 3 ZooKeeper nodes so the cluster stays highly available.
$ ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties
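Once all three nodes are up, each one's role can be checked with ZooKeeper's four-letter srvr command; one node should report Mode: leader and the other two Mode: follower:

$ echo srvr | nc 192.168.0.1 2181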
Modify the Kafka configuration

The Kafka cluster here has 3 nodes. Modify the Kafka configuration on each one in turn, taking care that broker.id is set to 1, 2, and 3 respectively.
$ vim ./config/server.properties

broker.id=1
port=9092
host.name=192.168.0.1
num.replica.fetchers=1
log.dirs=/opt/kafka_logs
num.partitions=3
zookeeper.connect=192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181
zookeeper.connection.timeout.ms=6000
zookeeper.sync.time.ms=2000
num.io.threads=8
num.network.threads=8
queued.max.requests=16
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
delete.topic.enable=true
Start the Kafka cluster

Start all 3 Kafka nodes so the cluster stays highly available.
$ ./bin/kafka-server-start.sh -daemon ./config/server.properties
Check that the topic was created successfully:
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
nginx
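To confirm that Filebeat's events are actually flowing, tail the topic with the console consumer that ships with Kafka:

$ bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.1:9092 --topic nginx --from-beginning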
Monitoring: Kafka Manager

Kafka-manager is an open-source cluster management tool from Yahoo.

It can be downloaded and installed from GitHub: https://github.com/yahoo/kafka-manager

If Kafka consumers fall behind, go to the specific cluster page and increase the number of partitions; Kafka scales parallel consumption through partitions. The same can be done from the command line, as sketched below.
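The command-line equivalent, assuming the nginx topic should grow from 3 to 6 partitions (partitions can only ever be increased, not reduced):

$ bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic nginx --partitions 6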

Logstash
Logstash provides three major stages:

INPUT: data in
FILTER: filtering and transformation
OUTPUT: data out
If you use the Filter stage, I strongly recommend testing your patterns in the Grok Debugger before deploying them.
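Our Nginx logs are already JSON, so this pipeline needs no grok at all; purely as an illustration, a filter for a hypothetical plain-text access line such as "192.168.56.11 GET /nginxweb/index.html 200" could look like this (all field names are made up for the example):

filter {
    grok {
        match => { "message" => "%{IP:clientip} %{WORD:method} %{URIPATH:url} %{NUMBER:status}" }
    }
}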

Download
$ wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.tar.gz
Extract and rename

$ tar -zxvf logstash-6.2.4.tar.gz
$ mv logstash-6.2.4 logstash
Modify the configuration

$ vim nginx.conf

input {
    kafka {
        type => "kafka"
        bootstrap_servers => "192.168.0.1:9092,192.168.0.2:9092,192.168.0.3:9092"
        topics => ["nginx"]
        group_id => "logstash"
        consumer_threads => 2
    }
}

output {
    elasticsearch {
        hosts => ["192.168.0.1:9200", "192.168.0.2:9200", "192.168.0.3:9200"]
        index => "nginx-%{+YYYY.MM.dd}"
    }
}

Note that bootstrap_servers must point at the Kafka brokers on port 9092 (not ZooKeeper's 2181), and that the Logstash 6 elasticsearch output takes a hosts list of HTTP endpoints on port 9200 instead of the old host/port transport settings.
Start Logstash
$ ./bin/logstash -f nginx.conf
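The pipeline file can be validated before launch with Logstash's built-in config check:

$ ./bin/logstash -f nginx.conf --config.test_and_exit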
Elasticsearch
Download
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
Extract

$ tar -zxvf elasticsearch-6.2.4.tar.gz
$ mv elasticsearch-6.2.4 elasticsearch
Modify the configuration
$ vim config/elasticsearch.yml

cluster.name: es
node.name: es-node1
network.host: 192.168.0.1
discovery.zen.ping.unicast.hosts: ["192.168.0.1"]
discovery.zen.minimum_master_nodes: 1
Start

$ ./bin/elasticsearch -d
Open http://192.168.0.1:9200/ in a browser; if you see a response like the one below, the configuration succeeded:
{name: "es-node1",cluster_name: "es",cluster_uuid: "XvoyA_NYTSSV8pJg0Xb23A",version: {number: "6.2.4",build_hash: "ccec39f",build_date: "2018-04-12T20:37:28.497551Z",build_snapshot: false,lucene_version: "7.2.1",minimum_wire_compatibility_version: "5.6.0",minimum_index_compatibility_version: "5.0.0"},tagline: "You Know, for Search"}
Console: Cerebro

The name Cerebro may sound unfamiliar, but it used to be called kopf! Because Elasticsearch 5.0 dropped support for site plugins, the kopf author abandoned the original project and started cerebro from scratch, a standalone single-page application that continues to support managing newer versions of Elasticsearch.

Caveats

Separate master and data nodes; once there are more than 3 data nodes, splitting the two roles is recommended to reduce pressure (see the sketch after this list).
Keep data node heap no larger than 32 GB; 31 GB is the recommended setting. The detailed reason is covered in the previous article.
Set discovery.zen.minimum_master_nodes to (master-eligible nodes / 2 + 1) to avoid split-brain.
Most importantly, never expose Elasticsearch directly to the public internet; installing X-Pack is recommended to harden security.
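A minimal sketch of the master/data role split in elasticsearch.yml, assuming a cluster with 3 master-eligible nodes (the values are illustrative):

# On a dedicated master-eligible node:
node.master: true
node.data: false

# On a dedicated data node:
node.master: false
node.data: true

# With 3 master-eligible nodes: 3 / 2 + 1 = 2
discovery.zen.minimum_master_nodes: 2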
Kibana
Download
$ wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-darwin-x86_64.tar.gz
Extract

$ tar -zxvf kibana-6.2.4-darwin-x86_64.tar.gz
$ mv kibana-6.2.4-darwin-x86_64 kibana
Modify the configuration
$ vim config/kibana.yml

server.port: 5601
server.host: "192.168.0.1"
elasticsearch.url: "http://192.168.0.1:9200"
Start Kibana
$ nohup ./bin/kibana &
The interface

To create an index pattern, go to Management -> Index Patterns and specify one by prefix, e.g. nginx-* to match the nginx-%{+YYYY.MM.dd} indices that Logstash writes.

Final result

Summary

Source: https://blog.51cto.com/13527416/2117141
Reposted via: 高效運(yùn)維 (copyright belongs to the original author; will be removed on request)

