How do you build an ELK stack in a CentOS 7 environment?

1. Environment and installation packages

The ELK installation packages:

elasticsearch-7.8.0-x86_64.rpm
kibana-7.8.0-x86_64.rpm
logstash-7.8.0-x86_64.rpm

OS: CentOS 7.4, 64-bit.
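A minimal installation sketch, assuming the three rpm files have been downloaded into the current directory:

    # install the three packages from the local rpm files
    sudo rpm -ivh elasticsearch-7.8.0-x86_64.rpm
    sudo rpm -ivh kibana-7.8.0-x86_64.rpm
    sudo rpm -ivh logstash-7.8.0-x86_64.rpm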

After becoming familiar with Docker, there is actually not much difference between using it and not using it. Since space is limited, only the key points are recorded here; installation and configuration under Docker are not covered.

Note: every operation mentioned here has been tested by me.

2. Installation (ES and Kibana)

1. Install ES

After installing ES, you need to edit its configuration; in particular, ES must be configured to be accessible from remote machines. Key points (a configuration sketch follows this list):

  1. ES depends on a Java environment, so install a JDK first (details left to you).
  2. After the installation completes, you may need to modify the configuration file, e.g. the port.
  3. ES may report errors at startup, e.g. "can not run elasticsearch as root" (create a non-root user and fix the permissions of the relevant directories; the fix is one search away), etc.
  4. The error "max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]" requires system-level configuration; these two posts cover the fix: https://www.jianshu.com/p/692608b3b6f9 and https://blog.csdn.net/weixin_40143280/article/details/105273199
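A sketch of the configuration changes behind these points: the JDK from point 1, the remote-access setting mentioned above, and the file-descriptor limit from point 4. The file paths are the standard ones for an rpm install, but the exact values (0.0.0.0, single-node) are assumptions to adapt to your own setup.

    # Point 1: ES needs Java; OpenJDK 8 from the CentOS 7 repos is one option.
    sudo yum install -y java-1.8.0-openjdk

    # Remote access: bind ES to all interfaces in elasticsearch.yml.
    # (single-node sidesteps the discovery bootstrap check that a
    # non-loopback bind address triggers in ES 7.x)
    sudo tee -a /etc/elasticsearch/elasticsearch.yml <<'EOF'
    network.host: 0.0.0.0
    discovery.type: single-node
    EOF

    # Point 4: raise the file-descriptor limit for manual, non-root startup.
    # (the systemd unit shipped in the rpm sets its own LimitNOFILE)
    sudo tee -a /etc/security/limits.conf <<'EOF'
    elasticsearch soft nofile 65535
    elasticsearch hard nofile 65535
    EOF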

2. Install Kibana

After the installation is complete, you need to configure Kibana; in particular, Kibana must be configured to be accessible from remote machines.
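A sketch of the matching kibana.yml change (standard rpm-install path; the 0.0.0.0 bind and the ES address are assumptions to adapt):

    # bind Kibana to all interfaces and point it at the local ES
    sudo tee -a /etc/kibana/kibana.yml <<'EOF'
    server.host: "0.0.0.0"
    elasticsearch.hosts: ["http://localhost:9200"]
    EOF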

One error I encountered (the fix is a search away): "The Reporting plugin encountered issues launching Chromium in a self-test".

My Kibana started relatively smoothly: running bin/kibana --allow-root from /usr/share/kibana succeeds. There is some WARN output, but it doesn't matter. Of course, if you don't want the output tied to your console, you can do this instead: nohup bin/kibana --allow-root &
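Since these are rpm installs, an alternative to launching the binaries by hand is to manage both services through systemd:

    # register and start the services installed by the rpm packages
    sudo systemctl enable elasticsearch kibana
    sudo systemctl start elasticsearch
    sudo systemctl start kibana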

With that, the key parts are deployed. The goal is to store data in ES via .NET, Java, or Logstash, and then display and use it flexibly in Kibana.

At this point, Kibana can connect directly to the freshly deployed ES to view and manipulate its indexes.
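A quick way to confirm ES itself is reachable before wiring Kibana to it is the root endpoint, which returns the cluster name and version as JSON:

    curl -s http://localhost:9200/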

3. Usage

My own approach:

Install only ES and Kibana first, then:

  1. Use ES to store data (start with Kibana's sample data, or play with data imported from CSV files; it doesn't matter how the data gets into ES).
  2. With the sample or imported data, familiarize yourself with concepts such as Dashboard, Canvas, and Visualize, and build a general understanding of what to expect. You can generate all kinds of flexible pie charts, bar charts, line charts, heat maps, and so on.


4. Installing and using Logstash

After the installation completes, you need to configure Logstash and then start it. It is of course entirely normal to hit errors at startup; deal with each one as it comes.

As with ES and Kibana above, files appear under /usr/share/logstash and /etc/logstash after the installation completes.

My current goal with Logstash is to continuously read data from a MySQL table and write it into an ES index.

The two conf files I need for the two imports are modeled on logstash-sample.conf, using a MySQL data source. For example, the content of ph3.conf is as follows:

input {
        jdbc {
                # jdbc_driver_library => "./mysql-connector-java-5.1.49.jar"
                jdbc_driver_library => "/usr/share/logstash/mylib/5.1.45/mysql-connector-java-5.1.45.jar"
                jdbc_driver_class => "com.mysql.jdbc.Driver"
                jdbc_connection_string => "jdbc:mysql://1.2.3.4:13308/test?characterEncoding=UTF-8&useSSL=false"
                jdbc_user => "root"
                jdbc_password => "123456"
                jdbc_paging_enabled => "true"    # whether to paginate the query
                # jdbc_page_size => "50000"
                tracking_column => "id"
                use_column_value => true
                # statement_filepath => "path to a SQL file; use either this or the statement below"
                statement => "SELECT * FROM ph3 where id > 0"
                # Polling schedule. Fields, left to right: seconds, minutes, hours,
                # day of month, month, day of week; all * means run every minute.
                # schedule => "10 * * * * *"
                schedule => "5 * * * * *"
        }
}

output {
        elasticsearch {
                document_id => "%{id}"
                # document_type => ""
                index => "ph4-new-index"
                hosts => ["localhost:9200"]
        }
        stdout { codec => rubydebug }
}

Then execute the command that starts the Logstash process:

sudo bin/logstash -f /etc/logstash/ph3.conf --path.settings=/etc/logstash

What the flags mean:

-f specifies a configuration file.
--path.settings specifies the directory holding the other common settings; don't worry about it.

You may encounter some errors, such as logstash.yml not being found, --path.settings not pointing to a directory, insufficient permissions, and so on, all of which a web search will solve.

Pay special attention to the lines tracking_column => "id" and document_id => "%{id}" and understand what they mean, so that you don't run the import for half a day and end up with only one document in ES. This problem is easy to hit and easy to fix once you spot it (again, one search away). The point is that this id field is not arbitrary: it must be a column in the MySQL table that can serve as a unique identifier, such as an int primary key.
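As an aside: if you want truly incremental imports instead of re-reading the whole table on every run, the jdbc input can combine tracking_column with the :sql_last_value placeholder. A sketch, reusing the ph3 table and id column from the example above:

        # only fetch rows newer than the last recorded id
        statement => "SELECT * FROM ph3 WHERE id > :sql_last_value"
        use_column_value => true
        tracking_column => "id"
        tracking_column_type => "numeric"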

At startup, the log output indicates that the task has begun executing. Then, on the index-pattern page of the Kibana web UI, you can add this index to Kibana's management, after which you can work with it as freely as with any other data. From this point on, the road is fully open.

That is, once ELK is installed and running normally:

Logstash continuously reads data from sources such as MySQL, on a schedule determined by a cron expression ==> writes the data to ES ==> Kibana manages that data, and you can build custom charts and tables, export statistics, and so on, with great flexibility.
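To verify the pipeline end to end, you can ask ES directly how many documents have landed (the index name comes from the ph3.conf example above; _count and _cat/indices are standard ES APIs):

    curl -s 'http://localhost:9200/ph4-new-index/_count?pretty'
    curl -s 'http://localhost:9200/_cat/indices?v'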



