Download the ELK packages

下载地址:https://www.elastic.co/cn/downloads/

https://elasticsearch.cn/download/


Here I download the latest version (this release recommends JDK 11, but it still works with JDK 8).

 

Install Elasticsearch

Extract the archive:  tar -zxvf elasticsearch-7.10.1-linux-x86_64.tar.gz

Open the elasticsearch.yml file in the config directory and modify the configuration:


# single-node cluster: elect node-1 as the initial master
cluster.initial_master_nodes: ["node-1"]

cluster.name: es-application
node.name: node-1

# listen on all interfaces so other hosts can reach it
network.host: 0.0.0.0
http.port: 9200

path.data: /usr/elk/elasticsearch-7.10.1/data
path.logs: /usr/elk/elasticsearch-7.10.1/logs

# allow cross-origin requests (e.g. from browser-based Elasticsearch clients)
http.cors.enabled: true
http.cors.allow-origin: "*"

 

After configuring, note that Elasticsearch must be started as a non-root user, so create one. Here I create a user named elk with password elk, then grant it ownership of the install directory.

# create the user
useradd elk
# set its password
passwd elk
# grant the user ownership of the install directory
chown -R elk:elk /usr/elk/elasticsearch-7.10.1/

Then switch to that user and start Elasticsearch:

# switch user
su elk
# start; -d runs it in the background (daemon mode)
./bin/elasticsearch -d

 

Check the listening ports with netstat -nltp:


Open  http://192.168.104.45:9200/  in a browser; if you see the cluster information shown below, the installation succeeded.
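
You can also check from the command line; a minimal request against the address configured above:

curl http://192.168.104.45:9200/

The response is a small JSON document containing the node name, cluster name and Elasticsearch version.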

 

Install Logstash

Extract the archive:  tar -zxvf logstash-7.10.1-linux-x86_64.tar.gz

Open the logstash-sample.conf file in the config directory and modify the configuration:

 

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

 

input {
  file {
    path => ["/home/smarthome/servers/*.log"]
    type => "user_log"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "user-%{+YYYY.MM.dd}"
  }
}

 

input defines the data source and output the destination; you can also configure a filter stage between them. The overall pipeline is input -> filter -> output (architecture diagram omitted); a minimal filter sketch follows below.
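
A filter is where raw log lines get parsed into structured fields before they are indexed. A minimal sketch using grok (the pattern and field names here are assumptions; adapt them to your actual log format):

filter {
  grok {
    # assumed layout of each line: "<ISO8601 timestamp> <LEVEL> <message>"
    match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}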


Start Logstash:

nohup ./bin/logstash -f /usr/elk/logstash-7.10.1/config/logstash-sample.conf &



You can also start it directly with: sh logstash -f logstash1.conf  &

If it complains about --path.data, specify a path.data directory explicitly; any writable directory will do. On an older install of mine (logstash-6.4.2) the command was:

sh logstash -f logstash1.conf  --path.data=/home/elk/logstash-6.4.2/logs &

 

Install Kibana

Extract the archive:  tar -zxvf kibana-7.10.1-linux-x86_64.tar.gz

Open the kibana.yml file in the config directory and modify the configuration:

 

server.port: 5601
server.host: "192.168.104.45"
elasticsearch.hosts: ["http://192.168.104.45:9200"]

 

Like Elasticsearch, Kibana cannot be started as root, so a non-root user is needed.

Here I simply reuse the elk user created earlier:

# grant the user ownership of the install directory
chown -R elk:elk /usr/elk/kibana-7.10.1-linux-x86_64/

Then start it:

# switch user
su elk
# foreground start; the process exits when the shell window is closed
./bin/kibana
# background start
nohup ./bin/kibana &


# stop: find the process listening on port 5601
fuser -n tcp 5601

Then kill -9 the corresponding process ID.

 

After it starts, open  http://192.168.104.45:5601  in a browser to see the Kibana web UI.

 




# open the port
firewall-cmd --zone=public --add-port=5601/tcp --permanent

# check whether port 5601 is open
firewall-cmd --query-port=5601/tcp

# reload the firewall so the permanent rule takes effect
firewall-cmd --reload

# list all open ports
firewall-cmd --list-port

# close the port
firewall-cmd --zone=public --remove-port=5601/tcp --permanent
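
If Elasticsearch (9200) or the Logstash TCP input used further below (4567) need to be reachable from other hosts, the same commands apply; for example (these ports are taken from the configuration in this article, adjust as needed):

firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=4567/tcp --permanent
firewall-cmd --reload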




That completes the ELK setup. Below is a summary of the problems I ran into during installation:

  

root@test8:/usr/elk/elasticsearch-7.10.1# uncaught exception in thread [main]

java.lang.RuntimeException: can not run elasticsearch as root

at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:111)

at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178)

at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393)

at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170)

at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161)

at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)

at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127)

at org.elasticsearch.cli.Command.main(Command.java:90)

at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126)

at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)

For complete error details, refer to the log at /usr/elk/elasticsearch-7.10.1/logs/es-application.log

2021-01-12 08:59:39,803026 UTC [3655] INFO Main.cc@103 Parent process died - ML controller exiting

 

Solution:
Create a new user, then start again.
# create the user
useradd elk
# set its password
passwd elk
# grant the user ownership of the install directory
chown -R elk:elk /usr/elk/elasticsearch-7.10.1/

Then switch to that user and start Elasticsearch:

# switch user
su elk
# start; -d runs it in the background
./bin/elasticsearch -d

 


elk@test8:/usr/elk/elasticsearch-7.10.1$ ERROR: [2] bootstrap checks failed

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

[2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

ERROR: Elasticsearch did not exit normally - check the logs at /usr/elk/elasticsearch-7.10.1/logs/es-application.log

 

Solution:
Edit /etc/sysctl.conf and append the following line:
vm.max_map_count=262144

Save it, then apply the change with:
sysctl -p
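
You can confirm the new value has taken effect:

# should print: vm.max_map_count = 262144
sysctl vm.max_map_count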


future versions of Elasticsearch will require Java 11; your Java version from [/usr/java/jdk1.8.0_144/jre] does not meet this requirement
future versions of Elasticsearch will require Java 11; your Java version from [/usr/java/jdk1.8.0_144/jre] does not meet this requirement
elk@test8:/usr/elk/elasticsearch-7.10.1$ ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /usr/elk/elasticsearch-7.10.1/logs/es-application.log

Solution:
Edit elasticsearch.yml and configure the initial master nodes, keeping a single node:
cluster.initial_master_nodes: ["node-1"]
Here node-1 is the default name from the sample config shown earlier; just uncomment the line (the value must match node.name).

 

References:

https://www.elastic.co/guide/cn/kibana/current/introduction.html

https://blog.csdn.net/yehongzhi1994/article/details/109459225


Configuring a Spring Boot application to ship logs to the ELK stack

https://blog.csdn.net/weixin_43184769/article/details/84971532


Add the following to the Logstash configuration file:

input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    type => "carduser"
    port => 4567
    codec => json_lines
  }
}

output {
  if [type] == "carduser" {
    elasticsearch {
      action => "index"
      hosts  => "http://localhost:9200"
      index  => "carduser"
      codec  => "json"
    }
  }
}
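
One way to sanity-check that the TCP input is listening is to send it a single JSON line by hand, for example (assuming nc is installed and 192.168.104.45 is the Logstash host):

echo '{"message":"tcp input test"}' | nc 192.168.104.45 4567

Because the input tags every event with type carduser, the test line should show up as a document in the carduser index.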


Add the following dependency to pom.xml:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.2</version>
</dependency>



Create a configuration file named logback-spring.xml under the resources folder:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />

    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Logstash host and port, as configured in the Logstash tcp input above -->
        <destination>192.168.104.45:4567</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>



In application.yml, activate the desired profile and point logging.config at the matching logback file. Note that the file referenced here must actually exist on the classpath; with the logback-spring.xml created above, Spring Boot picks it up automatically and this property can be omitted.

spring:
  profiles:
    # which profile's configuration to load: dev (development), prod (production), qa (test)
    active: qa

# logging
logging:
  config: classpath:logback-${spring.profiles.active}.xml
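
With this in place, any SLF4J log statement in the application is sent to Logstash over TCP and indexed into carduser. A minimal sketch of what that looks like in code (the controller class and endpoint below are made-up examples, not part of the setup above):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogDemoController {

    private static final Logger log = LoggerFactory.getLogger(LogDemoController.class);

    @GetMapping("/hello")
    public String hello() {
        // written to the console appender and, through the LOGSTASH appender,
        // sent to the Logstash tcp input on port 4567 and indexed into "carduser"
        log.info("hello endpoint called");
        return "ok";
    }
}

After calling the endpoint a few times, create an index pattern for carduser in Kibana and the log entries should appear under Discover.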