
ELK Log Management

Date: 2023-05-12

E stands for Elasticsearch, L for Logstash, and K for Kibana.

Reference article: SpringBoot + Kafka + ELK distributed log collection, https://yq.aliyun.com/articles/645316

Install Elasticsearch with Docker

Reference article: https://www.cnblogs.com/hackyo/p/9951684.html

Pull the image

docker pull elasticsearch:7.5.0

Directory structure

[root@centos01 elasticsearch]# pwd
/data/elasticsearch
[root@centos01 elasticsearch]# tree
.
├── data
└── es.yml

es.yml (elasticsearch.yml)

cluster.name: "docker-cluster"
network.host: 0.0.0.0

Quick install

docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name group_elasticsearch_1 elasticsearch:7.5.0

Full install (single node)

docker run -d -p 9200:9200 -p 9300:9300 --privileged=true -v /data/elasticsearch/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /etc/localtime:/etc/localtime:ro --name group_elasticsearch_1 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xmx256m -Xms256m" -e ELASTIC_PASSWORD=changeme elasticsearch:7.5.0

Web console

Open in a browser: http://192.168.25.129:9200

{ "name" : "WmBn0H‐", "cluster_name" : "elasticsearch", "cluster_uuid" : "2g‐VVbm9Rty7J4sksZNJEg", "version" : {"number" : "5.6.8","build_hash" : "688ecce","build_date" : "2018‐02‐16T16:46:30.010Z","build_snapshot" : false,"lucene_version" : "6.6.1" }, "tagline" : "You Know, for Search"}

Install the IK analyzer plugin

Enter the container

docker exec -it group_elasticsearch_1 bash

Run the install command

/usr/share/elasticsearch/bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.5.0/elasticsearch-analysis-ik-7.5.0.zip
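The node has to be restarted before the plugin is loaded, after which the analyzer can be checked through the _analyze API. A minimal sketch, assuming the same host as above (add -u elastic:changeme if security is enabled):

# Restart the node so the freshly installed plugin is picked up
docker restart group_elasticsearch_1

# Ask the ik_max_word analyzer to tokenize a sample sentence
curl -H 'Content-Type: application/json' \
  -X POST 'http://192.168.25.129:9200/_analyze?pretty' \
  -d '{"analyzer": "ik_max_word", "text": "中华人民共和国"}'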

Install Logstash with Docker

Reference article: https://www.cnblogs.com/hackyo/p/9951684.html

Logstash introduction: https://www.cnblogs.com/chadiandianwenrou/p/6478940.html

Pull the image

docker pull logstash:7.5.0

Directory structure

[root@centos01 logstash]# pwd
/data/logstash
[root@centos01 logstash]# tree
.
├── logstash.conf
└── logstash.yml

Copy the config file under /usr/share/logstash/config from the container to the host

docker cp group_logstash_1:/usr/share/logstash/config/logstash.yml /data/logstash/logstash.yml

Copy the pipeline config under /usr/share/logstash/pipeline from the container to the host

docker cp group_logstash_1:/usr/share/logstash/pipeline/logstash.conf /data/logstash/logstash.conf

Edit ./logstash.yml to the following:

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.0.13:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme

Edit ./logstash.conf to the following:

## Logstash configuration
## TCP -> Logstash -> Elasticsearch pipeline.
## 5044 is the default Beats port; here the data source sends over TCP port 5000
input {
  tcp {
    port => 5000
    host => "0.0.0.0"
    mode => "server"
  }
}
## Add your filters / logstash plugins configuration here
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:logThread} %{LOGLEVEL:logLevel} %{GREEDYDATA:loggerClass} - %{GREEDYDATA:logContent}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.0.13:9200"]
    user => "elastic"
    password => "changeme"
  }
}

Quick install

docker run -d --name group_logstash_1 logstash:7.5.0

Full install

docker run -d -p 5000:5000 -p 9600:9600 -v /data/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml -v /data/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf --privileged=true -v /etc/localtime:/etc/localtime:ro --name group_logstash_1 logstash:7.5.0
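Once the container is up, the TCP input can be exercised directly from the host; a minimal sketch, assuming nc is available and a sample line shaped like the pattern the grok filter expects:

# Send one pattern-formatted line to the TCP input on port 5000
echo '2023-05-12 10:00:00.000 [main] INFO com.example.DemoService - user login ok' | nc 192.168.0.13 5000

# The grok filter above splits this line into:
#   logTime     = 2023-05-12 10:00:00.000
#   logThread   = [main]
#   logLevel    = INFO
#   loggerClass = com.example.DemoService
#   logContent  = user login ok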

Install Kibana with Docker

Reference article: https://www.cnblogs.com/hackyo/p/9951684.html

Pull the image

docker pull kibana:7.5.0

Directory structure

[root@centos01 kibana]# pwd
/data/kibana
[root@centos01 kibana]# tree
.
└── config
    └── kibana.yml

Copy the config directory /usr/share/kibana/config from the container to the host

docker cp group_kibana_1:/usr/share/kibana/config /data/kibana/config

Edit the config file ./config/kibana.yml:

## ** THIS IS AN AUTO-GENERATED FILE **
## Default Kibana configuration for docker target
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://192.168.0.13:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: elastic
elasticsearch.password: changeme

Quick install

docker run -d -p 5601:5601 --name group_kibana_1 kibana:7.5.0

Full install

docker run -d -p 5601:5601 -v /data/kibana/config:/usr/share/kibana/config --privileged=true -v /etc/localtime:/etc/localtime:ro --name group_kibana_1 kibana:7.5.0
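Kibana takes a little while to come up; a quick check from the shell, assuming the same host address as above:

# Kibana's status endpoint; returns HTTP 200 with a JSON status report once the server is ready
curl -s http://192.168.0.13:5601/api/status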

Web console

Management UI: http://192.168.0.13:5601/

Create an index pattern: http://192.168.0.13:5601/app/kibana#/management?_g=(), under Index Patterns

1. Click Create index pattern
2. Fill in the Index pattern, e.g. logstash*
3. Click Next step
4. For Time Filter field name (refresh the list if needed), choose @timestamp (any choice works here)
5. Click Create index pattern

View logs: http://192.168.0.13:5601/app/kibana#/discover

Basic operation of the log view, example screenshot: http://119.23.50.122/ms/ELK/3000.png

Fuzzy search

Reference article: https://blog.csdn.net/qq_16590169/article/details/87927511

For example, to find results whose logContent field contains "exception":

logContent: ~exception ~

ELK data source: Spring Boot application (simple setup)

pom.xml

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.3</version>
</dependency>

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?> 192.168.0.13:5000 %d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n UTF-8

Configuration note: the value of encoder.pattern is parsed by filter.grok.match, which turns each log line into structured fields.

----------------------- logback-spring.xml -----------------------
%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n
UTF-8
----------------------- Logstash logstash.conf -----------------------
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:logThread} %{LOGLEVEL:logLevel} %{GREEDYDATA:loggerClass} - %{GREEDYDATA:logContent}" }
  }
}

ELK data source: Kafka (production setup)

Reference article: https://yq.aliyun.com/articles/645316

Adapting Logstash

The kafka topics value here (input.log.stash) must match the appender topic value in logback-spring.xml.

./logstash.conf

## Logstash configuration
## Kafka -> Logstash -> Elasticsearch pipeline.
## The data source is a Kafka message queue
input {
  kafka {
    id => "my_plugin_id"
    bootstrap_servers => "192.168.0.13:9092"
    topics => ["input.log.stash"]
    auto_offset_reset => "latest"
  }
}
## Add your filters / logstash plugins configuration here
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:logThread} %{LOGLEVEL:logLevel} %{GREEDYDATA:loggerClass} - %{GREEDYDATA:logContent}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.0.13:9200"]
    user => "elastic"
    password => "changeme"
  }
}
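With this pipeline in place, a test message can be pushed straight onto the topic; a minimal sketch, assuming a Kafka broker at 192.168.0.13:9092 and the stock console producer script on the path:

# Publish one pattern-formatted line to the topic Logstash consumes from
echo '2023-05-12 10:00:00.000 [main] INFO com.example.DemoService - kafka pipeline test' | \
  kafka-console-producer.sh --broker-list 192.168.0.13:9092 --topic input.log.stash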

Client: Spring Boot application

pom.xml

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?> %d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n input.log.stash bootstrap.servers=192.168.0.13:9092
