Building an ELK Log Analysis System from Scratch

2020-06-01

Software to prepare before building:

CentOS 6.8

JDK 1.8+

zookeeper-3.4.10.tar.gz

apache-flume-1.7.0-bin.tar.gz

kafka_2.10-0.10.2.2.tgz

elasticsearch-5.4.0.tar.gz

kibana-5.4.0-linux-x86_64.tar.gz

logstash-5.4.0.tar.gz
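
All of these ship as tarballs. A minimal sketch for unpacking them, assuming everything is installed under /usr/local (a hypothetical layout, chosen to match the /usr/local/flume paths used later in this post; adjust to your environment):

# hypothetical install layout: unpack and rename to short directory names
tar -zxf zookeeper-3.4.10.tar.gz -C /usr/local && mv /usr/local/zookeeper-3.4.10 /usr/local/zookeeper
tar -zxf kafka_2.10-0.10.2.2.tgz -C /usr/local && mv /usr/local/kafka_2.10-0.10.2.2 /usr/local/kafka
tar -zxf apache-flume-1.7.0-bin.tar.gz -C /usr/local && mv /usr/local/apache-flume-1.7.0-bin /usr/local/flume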


Architecture diagram (a bit rough, hehe):

1. Flume collects log data into Kafka

2. Logstash filters the data in Kafka and ships it into Elasticsearch

3. Kibana displays the Elasticsearch data in the front end


(1) Create a file named access.log in any location; this is the log file that Flume will collect. A quick way to set it up is sketched below.
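
A minimal sketch for creating the file and appending a test line; the path comes from the Flume config later in this post:

# create the log file that Flume will tail (path matches the config below)
touch /usr/local/flume/access.log
# append a test line so there is something to collect
echo "$(date) test access entry" >> /usr/local/flume/access.log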

(2) Start ZooKeeper, then start Kafka. (This is a single-node setup; a cluster also works.) Typical commands are sketched below.
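
A minimal sketch, assuming ZooKeeper and Kafka are unpacked under /usr/local/zookeeper and /usr/local/kafka (hypothetical paths) with their stock configs:

# ZooKeeper 3.4.x ships only zoo_sample.cfg; copy it before first start
cp /usr/local/zookeeper/conf/zoo_sample.cfg /usr/local/zookeeper/conf/zoo.cfg
/usr/local/zookeeper/bin/zkServer.sh start

# start Kafka as a background daemon
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

# create the topic the Flume sink writes to (name comes from the config below)
/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic account-access-log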

(3) Create the Flume config file (flume-access-log-kafka.properties) that collects the access.log file into Kafka. I placed it under /usr/local/flume/conf:

# Agent components: one exec source, one memory channel, one Kafka sink
access-log-agent.sources=access-log
access-log-agent.sinks=kafka
access-log-agent.channels=memory

# Source: run tail -f on the log file and turn each new line into an event
access-log-agent.sources.access-log.type=exec
access-log-agent.sources.access-log.channels=memory
access-log-agent.sources.access-log.command=tail -f /usr/local/flume/access.log
access-log-agent.sources.access-log.fileHeader=false

# Channel: buffer events in memory between source and sink
access-log-agent.channels.memory.type=memory
access-log-agent.channels.memory.capacity=1000
access-log-agent.channels.memory.transactionCapacity=1000
access-log-agent.channels.memory.byteCapacityBufferPercentage=20
access-log-agent.channels.memory.byteCapacity=800000

# Sink: publish events to the Kafka topic account-access-log
access-log-agent.sinks.kafka.type=org.apache.flume.sink.kafka.KafkaSink
access-log-agent.sinks.kafka.channel=memory
access-log-agent.sinks.kafka.kafka.bootstrap.servers=localhost:9092
access-log-agent.sinks.kafka.kafka.topic=account-access-log
access-log-agent.sinks.kafka.serializer.class=kafka.serializer.StringEncoder
access-log-agent.sinks.kafka.kafka.producer.acks=1
access-log-agent.sinks.kafka.custom.encoding=UTF-8
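
With the config in place, start the agent and confirm that events arrive in Kafka. A minimal sketch, assuming Flume lives under /usr/local/flume (hypothetical path); note that the --name argument must match the access-log-agent prefix used throughout the config:

# start the Flume agent defined above
/usr/local/flume/bin/flume-ng agent --conf /usr/local/flume/conf \
  --conf-file /usr/local/flume/conf/flume-access-log-kafka.properties \
  --name access-log-agent -Dflume.root.logger=INFO,console

# in another shell, consume the topic to confirm log lines flow through
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic account-access-log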
