E stands for Elasticsearch, L for Logstash, K for Kibana.

Reference: SpringBoot+kafka+ELK distributed log collection, https://yq.aliyun.com/articles/645316

Docker install Elasticsearch
Reference: https://www.cnblogs.com/hackyo/p/9951684.html

Pull the image:
docker pull elasticsearch:7.5.0
Directory layout:
[root@centos01 elasticsearch]# pwd
/data/elasticsearch
[root@centos01 elasticsearch]# tree
.
├── data
└── es.yml
es.yml (elasticsearch.yml)
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Quick install:
docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name group_elasticsearch_1 elasticsearch:7.5.0
Full install (single node):
docker run -d -p 9200:9200 -p 9300:9300 --privileged=true \
  -v /data/elasticsearch/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /etc/localtime:/etc/localtime:ro \
  -e "discovery.type=single-node" \
  -e "ES_JAVA_OPTS=-Xmx256m -Xms256m" \
  -e "ELASTIC_PASSWORD=changeme" \
  --name group_elasticsearch_1 elasticsearch:7.5.0
Web console
Open http://192.168.25.129:9200 in a browser:
{
  "name" : "WmBn0H-",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "2g-VVbm9Rty7J4sksZNJEg",
  "version" : {
    "number" : "5.6.8",
    "build_hash" : "688ecce",
    "build_date" : "2018-02-16T16:46:30.010Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

(This sample response was captured from an older 5.6.8 node; a 7.5.0 container reports its own version numbers.)
Install the IK analyzer
Enter the container:
docker exec -it group_elasticsearch_1 bash
Run the install command:
/usr/share/elasticsearch/bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.5.0/elasticsearch-analysis-ik-7.5.0.zip
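After installing the plugin and restarting the container, the analyzer can be smoke-tested through the `_analyze` API. A minimal Python sketch, assuming Elasticsearch is reachable at localhost:9200 (host and sample text are illustrative):

```python
import json
from urllib import request

def build_analyze_request(host="http://localhost:9200"):
    """Build a POST /_analyze request exercising the ik_max_word analyzer."""
    body = json.dumps({
        "analyzer": "ik_max_word",   # provided by elasticsearch-analysis-ik
        "text": "中华人民共和国国歌",   # sample text to tokenize
    }).encode("utf-8")
    return request.Request(
        f"{host}/_analyze",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Requires a running Elasticsearch node with the ik plugin installed.
    with request.urlopen(build_analyze_request()) as resp:
        print(json.dumps(json.loads(resp.read()), ensure_ascii=False, indent=2))
```

If the plugin is active, the response lists the individual tokens produced by ik_max_word rather than one token per character.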
Docker install Logstash
Reference: https://www.cnblogs.com/hackyo/p/9951684.html
Logstash introduction: https://www.cnblogs.com/chadiandianwenrou/p/6478940.html
Pull the image:
docker pull logstash:7.5.0
Directory layout:
[root@centos01 logstash]# pwd
/data/logstash
[root@centos01 logstash]# tree
.
├── logstash.conf
└── logstash.yml
Copy the config file under /usr/share/logstash/config from the container to the host:
docker cp group_logstash_1:/usr/share/logstash/config/logstash.yml /data/logstash/logstash.yml
Copy the pipeline file under /usr/share/logstash/pipeline from the container to the host:
docker cp group_logstash_1:/usr/share/logstash/pipeline/logstash.conf /data/logstash/logstash.conf
Edit ./logstash.yml to the following:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.0.13:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
Edit ./logstash.conf to the following:
## Logstash configuration
## TCP -> Logstash -> Elasticsearch pipeline.
## 5044 is the default Beats port; here the data source sends over TCP port 5000
input {
  tcp {
    port => 5000
    host => "0.0.0.0"
    mode => "server"
  }
}
## Add your filters / logstash plugins configuration here
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:logThread} %{LOGLEVEL:logLevel} %{GREEDYDATA:loggerClass} - %{GREEDYDATA:logContent}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.0.13:9200"]
    user => "elastic"
    password => "changeme"
  }
}
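The grok pattern above can be approximated with a plain regular expression to check that a typical Spring Boot log line actually splits into the expected fields. A sketch in Python; the sample line and timestamp format are illustrative assumptions:

```python
import re

# Rough Python equivalent of the grok pattern:
# %{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:logThread} %{LOGLEVEL:logLevel}
# %{GREEDYDATA:loggerClass} - %{GREEDYDATA:logContent}
LOG_LINE = re.compile(
    r"(?P<logTime>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?)\s+"
    r"(?P<logThread>\S+)\s+"
    r"(?P<logLevel>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\s+"
    r"(?P<loggerClass>\S+)\s+-\s+"
    r"(?P<logContent>.*)"
)

sample = "2023-01-01 12:00:00.123 [main] INFO com.example.DemoApplication - Started DemoApplication"
m = LOG_LINE.match(sample)
print(m.groupdict())
```

If a line does not match this shape, the grok filter tags the event with _grokparsefailure, so it pays to verify the layout before wiring up the pipeline.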
Quick install:
docker run -d --name group_logstash_1 logstash:7.5.0
Full install:
docker run -d -p 5000:5000 -p 9600:9600 \
  -v /data/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml \
  -v /data/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
  --privileged=true -v /etc/localtime:/etc/localtime:ro \
  --name group_logstash_1 logstash:7.5.0
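Once the container is up, the TCP input can be smoke-tested by pushing one line that matches the grok pattern over port 5000. A hedged Python sketch; the host, port, and sample message are assumptions to adjust for your setup:

```python
import socket
from datetime import datetime

def format_log_line(thread, level, logger, content):
    """Format a line the way the grok pattern in logstash.conf expects."""
    ts = datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]
    return f"{ts} [{thread}] {level} {logger} - {content}\n"

def send_line(line, host="192.168.0.13", port=5000):
    # The tcp input in logstash.conf listens on this port in server mode.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(line.encode("utf-8"))

if __name__ == "__main__":
    # Requires the group_logstash_1 container to be running and reachable.
    send_line(format_log_line("main", "INFO", "com.example.Demo", "hello from tcp"))
```

A successfully parsed event should then appear in Elasticsearch and, after the index pattern is created below, in Kibana's Discover view.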
Docker install Kibana
Reference: https://www.cnblogs.com/hackyo/p/9951684.html
Pull the image:
docker pull kibana:7.5.0
Directory layout:
[root@centos01 kibana]# pwd
/data/kibana
[root@centos01 kibana]# tree
.
└── config
    └── kibana.yml
Copy the config directory /usr/share/kibana/config from the container to the host:
docker cp group_kibana_1:/usr/share/kibana/config /data/kibana/config
Edit the config file ./config/kibana.yml:
## ** THIS IS AN AUTO-GENERATED FILE **
## Default Kibana configuration for docker target
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://192.168.0.13:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: elastic
elasticsearch.password: changeme
Quick install:
docker run -d -p 5601:5601 --name group_kibana_1 kibana:7.5.0
Full install:
docker run -d -p 5601:5601 \
  -v /data/kibana/config:/usr/share/kibana/config \
  --privileged=true -v /etc/localtime:/etc/localtime:ro \
  --name group_kibana_1 kibana:7.5.0
Web console
Admin UI: http://192.168.0.13:5601/
Create an index pattern: http://192.168.0.13:5601/app/kibana#/management?_g=(), under Index Patterns:
1. Click Create index pattern
2. Fill in the Index pattern, e.g. logstash*
3. Click Next step
4. For Time Filter field name, click Refresh and pick @timestamp (any available field works here)
5. Click Create index pattern
View logs: http://192.168.0.13:5601/app/kibana#/discover
Basic operations in the log UI, example: http://119.23.50.122/ms/ELK/3000.png

Fuzzy search
Reference: https://blog.csdn.net/qq_16590169/article/details/87927511
For example, to query the result set where the logContent field contains "exception":
logContent: ~exception ~
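In the Kibana search bar this is Lucene query-string syntax, where a trailing `~` is the fuzzy operator. The same search can be issued directly against Elasticsearch with a query_string query; a sketch in Python (the field name and exact query string are assumptions matching the example above):

```python
import json

def fuzzy_log_query(field="logContent", term="exception"):
    """Build a query_string query equivalent to Kibana's search-bar fuzzy syntax."""
    return {
        "query": {
            "query_string": {
                # trailing ~ is Lucene's fuzzy-match operator
                "query": f"{field}: {term}~",
            }
        }
    }

print(json.dumps(fuzzy_log_query(), indent=2))
```

This JSON body can be POSTed to http://192.168.0.13:9200/logstash-*/_search with a Content-Type: application/json header to run the same search outside Kibana.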
ELK data source: Spring Boot application (simple)

pom.xml
<?xml version="1.0" encoding="UTF-8"?>
Configuration note: the value of encoder.pattern is what filter.grok.match parses, turning each log line into structured fields.
-----------------------logback-spring.xml-----------------------
Reference: https://yq.aliyun.com/articles/645316

Logstash rework
The kafka topics => input.log.stash here must stay consistent with the topic value of the appender in logback-spring.xml.
./logstash.conf
## Logstash configuration
## Kafka -> Logstash -> Elasticsearch pipeline.
## The data source is a Kafka message queue
input {
  kafka {
    id => "my_plugin_id"
    bootstrap_servers => "192.168.0.13:9092"
    topics => ["input.log.stash"]
    auto_offset_reset => "latest"
  }
}
## Add your filters / logstash plugins configuration here
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:logThread} %{LOGLEVEL:logLevel} %{GREEDYDATA:loggerClass} - %{GREEDYDATA:logContent}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.0.13:9200"]
    user => "elastic"
    password => "changeme"
  }
}
Client: Spring Boot application
pom.xml
logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>