
ELK Setup (Linux & Windows)

Date: 2023-06-20

Contents

☁️ Preface
ELK Setup – Windows 10 Environment
    Preparation
    Installation and Usage
        Install Elasticsearch
        Install Kibana
    Adding the x-pack security plugin to the ELK stack
        Generate certificates
        Add the Elasticsearch node passwords to elasticsearch-keystore
        Modify elasticsearch.yml under the config directory of each Elasticsearch node
        Restart Elasticsearch
        After all Elasticsearch nodes have restarted, set the node passwords
        Modify the Kibana configuration
    Update 2022-02-12: install version 7.17.0 to match IK analyzer 7.17
        Install Elasticsearch
        Install Kibana
        Note
ELK Setup – Linux Environment (in Docker)
    Preparation
    ☀️ Setup Steps
        Disable the firewall and SELinux on all machines
        Install Elasticsearch
        Install Kibana
        Install Kafka + ZooKeeper
        Install Logstash
        Install Filebeat to collect logs
    Syncing MySQL table data into ELK
        Configure JDBC in Logstash
    The difference between Elasticsearch ports 9200 and 9300


☁️ Preface

Design model: ELK + Kafka (ZooKeeper) + Filebeat


ELK Setup – Windows 10 Environment

ELK and JDK version compatibility reference

Preparation

Download the latest Windows zip packages from the official site.
ELK + Filebeat official download page

Netdisk link: https://pan.baidu.com/s/1nK8wVBbvvk9TH7rZBiGzOQ
Extraction code: YYDS
The version used is 7.5.2, which uses JDK 11 by default, so the local environment needs to be switched to JDK 11.

JDK netdisk address
Link: https://pan.baidu.com/s/1fnLxbzRsQ6ziwO9JCBKRqg
Extraction code: YYDS

Installation and Usage

Install Elasticsearch

First extract elasticsearch-7.5.2-windows-x86_64.zip and run the startup command from the command line.
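A minimal sketch of the startup command, run from the extracted directory in PowerShell or cmd (the batch script is the one shipped in the Windows package):

cd elasticsearch-7.5.2
.\bin\elasticsearch.bat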

Once it has started, open http://localhost:9200/ in a browser.

Install Kibana

Extract kibana-7.5.2-windows-x86_64.zip and modify the configuration file.
Edit kibana.yml under …\kibana-7.5.2-windows-x86_64\config and add the following settings:

server.port: 5601
server.name: "DESKTOP-IFO48BG"
server.host: "localhost"
elasticsearch.hosts: "http://localhost:9200/"


Start the service from the command line.
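For example, from the extracted Kibana directory:

cd kibana-7.5.2-windows-x86_64
.\bin\kibana.bat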

Adding the x-pack security plugin to the ELK stack

x-pack is an Elastic Stack extension that provides security, alerting, monitoring, reporting, machine learning, and many other features.

Generate certificates

Generate the certificates in the …\ELK\elasticsearch-7.5.2\bin directory:

a) Run ./elasticsearch-certutil ca to generate the CA certificate.
b) Enter the path and password for saving the certificate.
c) Run ./elasticsearch-certutil cert --ca .\config\elastic-stack-ca.p12 to generate the second certificate (note that the last argument is the relative path of the first certificate).
d) Enter the password of the first certificate for verification.
e) Enter the path and password for saving the second certificate.


Add the Elasticsearch node passwords to elasticsearch-keystore

a) Run ./elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
b) Enter the password of the second certificate for verification.
c) Run ./elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
d) Enter the password of the second certificate for verification.

Modify elasticsearch.yml under the config directory of each Elasticsearch node

Add the following settings; note the relative paths of the certificates.

xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: .\elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: .\elastic-certificates.p12

Restart Elasticsearch

After all Elasticsearch nodes have restarted, set the node passwords

a) Run the password setup command ./elasticsearch-setup-passwords interactive
b) Follow the PowerShell prompts and press y to continue.
c) Set the passwords for the built-in system users (you can set them all to the same value for easier recall; they can be changed later in Kibana).
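To confirm that authentication is now enforced, Elasticsearch can be queried with the elastic user; the password placeholder below stands for whatever was set in the previous step:

curl -u 'elastic:your-password-here' http://localhost:9200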


Modify the Kibana configuration

a) Stop the Kibana service.
b) Go to the Kibana config directory and add the following settings to kibana.yml (note that the password is the one set for the elastic user):
elasticsearch.username: "elastic"
elasticsearch.password: "your password"
c) Restart Kibana.


Update 2022-02-12: install version 7.17.0 to match IK analyzer 7.17

Install Elasticsearch

First extract elasticsearch-7.17.0-windows-x86_64.zip and run the startup command from the command line.

Once it has started, open http://localhost:9200/ in a browser.

Install Kibana

Extract kibana-7.17.0-windows-x86_64.zip and modify the configuration file.
Edit kibana.yml under …\kibana-7.17.0-windows-x86_64\config and add the following settings:

server.port: 5601
server.name: "DESKTOP-IFO48BG"
server.host: "localhost"
elasticsearch.hosts: "http://localhost:9200/"


Start the service from the command line.

Note: this installation was done in a Windows virtual machine, and after installation ELK could only be reached via localhost, not via the machine's IP address. Fixing this requires a few configuration changes.

1. Add the following settings to elasticsearch.yml under the Elasticsearch config directory:

http.host: 0.0.0.0
network.host: 0.0.0.0
discovery.seed_hosts: ["0.0.0.0", "[::1]"]

2. In kibana.yml under the Kibana config directory:

server.host: "localhost"改为server.host: "0.0.0.0"


ELK Setup – Linux Environment (in Docker)

Preparation

First download the tar packages for the ELK trio, the Kafka middleware, and Filebeat (this article uses version 7.5.1).

ELK + Filebeat official download page
Kafka official download page

☀️ Setup Steps

Disable the firewall and SELinux on all machines

systemctl stop firewalld
systemctl disable firewalld
vim /etc/sysconfig/selinux
SELINUX=disabled
setenforce 0

Install Elasticsearch

Upload the downloaded tar package, extract it, and install.

Elasticsearch 7.2 and later require Java 11 or newer. According to the official documentation, Elasticsearch ships with its own Java 11, so it is enough to comment out the local Java environment variables and restart. If other projects on the machine still need Java 8, modify the Elasticsearch startup script so that it uses its bundled Java instead.
Edit bin/elasticsearch with vim and add the JAVA_HOME line before the existing first line:

JAVA_HOME='/usr/local/elasticsearch-7.5.1/jdk'
source "`dirname "$0"`"/elasticsearch-env

Configure and start

cd /usr/local/elasticsearch-7.5.1/config
vim elasticsearch.yml

The following items need to be configured:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: IMP
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: master
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elasticsearch/data
#
# Path to log files:
#
path.logs: /data/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this limit.
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["127.0.0.1", "[::1]"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["master"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Create the log and data directories

mkdir -p /data/elasticsearch/{data,logs}
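Elasticsearch refuses to start as the root user, and the later steps assume an elastic user exists (su - elastic). A minimal sketch of that extra preparation, assuming the install path and data directories used above:

# Create a dedicated user and hand it the install and data directories
useradd elastic
chown -R elastic:elastic /usr/local/elasticsearch-7.5.1 /data/elasticsearch
# Raise the mmap count required by Elasticsearch's bootstrap checks
sysctl -w vm.max_map_count=262144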

Start

cd /usr/local/elasticsearch-7.5.1/bin
./elasticsearch -d
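A quick check that the node is up, using the HTTP address and port configured above:

curl http://localhost:9200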

Install Kibana

Upload and extract the installation tar package, then configure and start.

cd /usr/local/kibana-7.5.1-linux-x86_64
vi config/kibana.yml

Note: elasticsearch.hosts must point to the Elasticsearch instance installed above.

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
# Chinese locale for the UI
i18n.locale: "zh-CN"

Start

cd /usr/local/kibana-7.5.1-linux-x86_64/bin/
nohup ./kibana &

Verify
Open 192.168.9.168:5601 in a browser; the Kibana home page should appear.

Install Kafka + ZooKeeper

The Kafka package already bundles ZooKeeper, so there is no need to download it separately.

cd /usr/local/src
tar -xzvf kafka_2.12-2.3.1.tgz -C /usr/local

Install and configure ZooKeeper

mkdir -p /data/zookeeper/{data,logs}
vi /usr/local/kafka_2.12-2.3.1/config/zookeeper.properties

The ZooKeeper configuration is as follows:

dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
maxClientCnxns=100
tickTime=2000
initLimit=10

Start ZooKeeper

# Make the Java environment take effect
source /etc/profile
nohup /usr/local/kafka_2.12-2.3.1/bin/zookeeper-server-start.sh /usr/local/kafka_2.12-2.3.1/config/zookeeper.properties > /data/zookeeper/logs/zk_output.log 2>&1 &

Verify ZooKeeper
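For example, the shell script bundled with the Kafka package can list the root znode (a minimal check, assuming ZooKeeper is listening on the default port 2181):

/usr/local/kafka_2.12-2.3.1/bin/zookeeper-shell.sh localhost:2181 ls /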

Configure Kafka

cd /usr/local/kafka_2.12-2.3.1/config/
vi server.properties

The configuration is as follows:

broker.id=0
listeners=PLAINTEXT://192.18.2.15:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/kafka-logs-server
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000

Start Kafka
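Before starting, create the directories referenced by log.dirs above and by the log redirect in the command below:

mkdir -p /data/kafka/kafka-logs-server /data/kafka/logs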

source /etc/profile
nohup /usr/local/kafka_2.12-2.3.1/bin/kafka-server-start.sh /usr/local/kafka_2.12-2.3.1/config/server.properties > /data/kafka/logs/run_output.log 2>&1 &

Verify Kafka

Verify on its own that Kafka is reachable:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
bin/kafka-topics.sh --list --zookeeper localhost:2181
bin/kafka-console-producer.sh --broker-list 192.18.2.15:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server 192.18.2.15:9092 --topic test --from-beginning


Install Logstash

Download and extract Logstash

cd /usr/local/src
tar -xzvf logstash/logstash-7.5.1.tar.gz -C /usr/local

Edit the configuration file

cd /usr/local/logstash-7.5.1/config/
vi cyk_logstash.conf

input {
  kafka {
    bootstrap_servers => ["192.18.2.15:9092,192.18.2.15:9093"]
    topics => "topic_cyk_filebeat_config_output.kafka_cyk215"
    codec => "json"
  }
}
output {
  if "cas" in [tags] {
    elasticsearch {
      hosts => ["192.168.9.168:9200"]
      index => "cas-%{+YYYY.MM.dd}"
    }
  }
  if "uum" in [tags] {
    elasticsearch {
      hosts => ["192.168.9.168:9200"]
      index => "uum-%{+YYYY.MM.dd}"
    }
  }
}
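Before starting Logstash, the pipeline file can be syntax-checked with the standard --config.test_and_exit option:

/usr/local/logstash-7.5.1/bin/logstash -f /usr/local/logstash-7.5.1/config/cyk_logstash.conf --config.test_and_exit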

Start Logstash

su - elastic
/usr/local/logstash-7.5.1/bin/logstash -f /usr/local/logstash-7.5.1/config/cyk_logstash.conf &

Install Filebeat to collect logs

Download and extract Filebeat

cd /usr/local/src
tar -xzvf filebeat-7.5.1-linux-x86_64.tar.gz -C /usr/local
cd /usr/local/ && mv filebeat-7.5.1-linux-x86_64/ filebeat-7.5.1

Configure Filebeat

First change the owner of the Filebeat directory:

chown elastic:root -R /usr/local/filebeat-7.5.1
cd /usr/local/filebeat-7.5.1/
vi cyk_filebeat.yml

filebeat.inputs:
- type: log
  tail_files: true
  paths:
    - /opt/apache-tomcat-6.0.39/logs/catalina.out
  fields:
    appid: appid_cyk_filebeat_config_tag_input_cyk215
  tags: ["uum"]
output.kafka:
  hosts: ["192.18.2.15:9092","192.18.2.15:9093"]
  topic: 'topic_cyk_filebeat_config_output.kafka_cyk215'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
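Filebeat can validate this file and its connectivity to Kafka before being started, using its standard test subcommands:

cd /usr/local/filebeat-7.5.1
./filebeat test config -c cyk_filebeat.yml
./filebeat test output -c cyk_filebeat.yml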

Start Filebeat

su - elastic
cd /usr/local/filebeat-7.5.1
nohup ./filebeat -c cyk_filebeat.yml &

Syncing MySQL table data into ELK

Prepare the JDBC connector driver and upload it to the server.

Configure JDBC in Logstash

vi cyk_logstash.conf

input {
  kafka {
    bootstrap_servers => ["192.18.2.15:9092,192.18.2.15:9093"]
    topics => "topic_cyk_filebeat_config_output.kafka_cyk215"
    codec => "json"
  }
  # ----- full sync -----
  jdbc {
    type => "all"
    jdbc_driver_library => "/opt/mysql-connector-java-5.1.48.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://192.18.2.11:3306/jh_uum2"
    jdbc_user => "root"
    jdbc_password => "root"
    schedule => "* * * * *"
    statement => "select * from tbl_sys_login_log"
    clean_run => false
    jdbc_paging_enabled => "true"
    jdbc_page_size => "100"
    jdbc_default_timezone => "Asia/Shanghai"
  }
}
filter {
  if [type] == "all" {
    mutate {
      remove_field => "@version"
    }
    json {
      source => "message"
      target => "all"
      remove_field => ["message"]
    }
  }
}
output {
  if "cas" in [tags] {
    elasticsearch {
      hosts => ["192.168.9.168:9200"]
      index => "cas-%{+YYYY.MM.dd}"
    }
  }
  if "uum" in [tags] {
    elasticsearch {
      hosts => ["192.168.9.168:9200"]
      index => "uum-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "all" {
    elasticsearch {
      hosts => ["192.168.9.168:9200"]
      index => "uumdb"
      # set "_id" to the MySQL id column
      document_id => "%{id}"
      #document_type => "base"
    }
  }
}

Verify the data
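For example, the uumdb index defined in the output section above can be queried directly over the HTTP port (host and index name are the ones used in this article):

curl 'http://192.168.9.168:9200/uumdb/_search?pretty&size=1'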

The difference between Elasticsearch ports 9200 and 9300

Port 9200 is used for external communication over HTTP; applications talk to Elasticsearch through port 9200.
Port 9300 carries TCP traffic: Java clients that use the transport protocol communicate over it, and the nodes of an Elasticsearch cluster also talk to each other through port 9300.
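Both ports can be seen on a running node via the standard node-info API over the HTTP port; the transport section of the response reports the 9300 publish address:

curl 'http://localhost:9200/_nodes/http,transport?pretty'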

In the Elasticsearch configuration file:

# TCP port for inter-node communication; it must not be the same as the HTTP port
transport.tcp.port: 9300
# Or set a custom value:
#transport.tcp.port: <custom port>
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when a new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
# Initial list of master-eligible nodes, used to discover nodes joining the cluster
#discovery.zen.ping.unicast.hosts: ["ip:exposed port", "ip:exposed port"]  # add entries as appropriate
discovery.zen.ping_timeout: 3s
#
# Make sure each node can see at least N master-eligible nodes (prevents split brain)
discovery.zen.minimum_master_nodes: 2
