
Install ELK_F in Docker
September 25, 2020
The ability to search and report analytics on an organization’s data is crucial to any digital strategy. Several platforms can ingest, search, analyze, and visualize organizational data, but the ELK (Elastic) Stack is probably the most popular of them all, and your organization should be using it.

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications by providing extremely lightweight operating-system-level virtualization, also known as containers.

What is the Elastic Stack?

The ELK Stack is a collection of three open-source products: Elasticsearch, Logstash, and Kibana. Elasticsearch is a NoSQL database based on the Lucene search engine. Logstash is a log pipeline tool that accepts inputs from various sources, executes different transformations, and exports the data to various targets. Kibana is a visualization layer that works on top of Elasticsearch.

Filebeat is a lightweight shipper that sends data directly to either Logstash or Elasticsearch. It is worth using because it separates where logs are generated from where they are processed, which distributes load away from single servers. Filebeat also keeps CPU overhead low: prospectors locate log files in the specified paths, harvesters read each log file, and new content is sent to a spooler that combines the data and forwards it to the output you have configured.

In this guide, we will cover:

  • Building an image for each component.
  • Parameterizing the configuration and avoiding hardcoded credentials.
  • Setting up Elasticsearch as a production single-node cluster ready to be scaled.
  • Setting up the Logstash configuration and pipelines.
  • Setting up the Filebeat configuration to read the log file(s).
  • Composing everything together with Docker Compose.

The best method is to use a docker-compose.yml file that runs all of the ELK_F services, like this:

{% c-block language="markdown" %}
version: '3.2'

services:
 filebeat:
   build:
     context: filebeat/
     args:
       FILEBEAT_VERSION: $FILEBEAT_VERSION
   volumes:
   - type: bind
     source: ./filebeat/config/filebeat.yml
     target: /usr/share/filebeat/filebeat.yml
     read_only: true
   ports:
     - "5044:5044"
   command: ["--strict.perms=false"]
   depends_on:
   - logstash
   - elasticsearch
   - kibana
   networks:
   - elk

 elasticsearch:
   build:
     context: elasticsearch/
     args:
       ELK_VERSION: $ELK_VERSION
   volumes:
   - type: bind
     source: ./elasticsearch/config/elasticsearch.yml
     target: /usr/share/elasticsearch/config/elasticsearch.yml
     read_only: true
   - type: volume
     source: elasticsearch
     target: /usr/share/elasticsearch/data
   restart: always
   ipc: host
   ports:
   - "9200:9200"
   - "9300:9300"
   environment:
     ES_JAVA_OPTS: "-Xmx256m -Xms256m"
     ELASTIC_PASSWORD: changeme
     discovery.type: single-node
   networks:
   - elk

 kibana:
   build:
     context: kibana/
     args:
       ELK_VERSION: $ELK_VERSION
   volumes:
   - type: bind
     source: ./kibana/config/kibana.yml
     target: /usr/share/kibana/config/kibana.yml
     read_only: true
   restart: always
   ports:
   - "5601:5601"
   networks:
   - elk
   depends_on:
   - elasticsearch

 logstash:
   build:
     context: logstash/
     args:
       ELK_VERSION: $ELK_VERSION
   volumes:
   - type: bind
     source: ./logstash/config/logstash.yml
     target: /usr/share/logstash/config/logstash.yml
     read_only: true
   - type: bind
     source: ./logstash/pipeline
     target: /usr/share/logstash/pipeline
     read_only: true
   ipc: host
   restart: always
   ports:
   - "5000:5000/tcp"
   - "5000:5000/udp"
   - "9600:9600"
   environment:
     LS_JAVA_OPTS: "-Xmx256m -Xms256m"
   networks:
   - elk
   depends_on:
   - elasticsearch

networks:
 elk:
   driver: bridge

volumes:
 elasticsearch:
{% c-block-end %}

You have to define the variables ELK_VERSION and FILEBEAT_VERSION in a separate “.env” file.
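
For example, a minimal “.env” could look like this (the version numbers below are just examples; use whichever release you need):

{% c-block language="markdown" %}
ELK_VERSION=7.9.2
FILEBEAT_VERSION=7.9.2
{% c-block-end %}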

Now you can run the “docker-compose up -d” command, which will download four images, create four running containers, and create the elk Docker network. Your log data, however, is not flowing yet.
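
You can check that all four containers are up with:

{% c-block language="markdown" %}
docker-compose ps
{% c-block-end %}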

Example: you have a JSON-formatted log in the file my.log in /var/log/my/my.log.
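
A line in that file might look like this (the content here is purely illustrative):

{% c-block language="markdown" %}
{"timestamp": "2020-09-25T10:15:00Z", "level": "info", "message": "user logged in"}
{% c-block-end %}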

Now you have to update the docker-compose.yml file to mount my.log into the Filebeat container, like this:

{% c-block language="markdown" %}
version: '3.2'

services:
 filebeat:
   build:
     context: filebeat/
     args:
       FILEBEAT_VERSION: $FILEBEAT_VERSION
   volumes:
   - type: bind
     source: ./filebeat/config/filebeat.yml
     target: /usr/share/filebeat/filebeat.yml
     read_only: true
    - /var/log/my:/data/logs:ro
{% c-block-end %}

Now recreate the Filebeat container with the command “docker-compose up -d”.

You can check that your data is inside the container like this:

docker exec -ti <containerID> /bin/bash

ls /data/logs

The listing should show my.log. Type “exit” to return to your host shell.

Configure Filebeat:

To configure Filebeat, edit filebeat.yml in the filebeat/config folder like this:

{% c-block language="markdown" %}
filebeat.inputs:
 - type: log
   enabled: true
   paths:
     - /data/logs/my.log
   tags: ["my-awesome-log"]
{% c-block-end %}

The output section is just as simple. You want Filebeat to ship to Logstash:

{% c-block language="markdown" %}
output.logstash:
  hosts: ["logstash:5044"]
{% c-block-end %}
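
Putting both pieces together, the complete filebeat/config/filebeat.yml looks like this:

{% c-block language="markdown" %}
filebeat.inputs:
 - type: log
   enabled: true
   paths:
     - /data/logs/my.log
   tags: ["my-awesome-log"]

output.logstash:
  hosts: ["logstash:5044"]
{% c-block-end %}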

Configure Logstash:

The complete logstash/config/logstash.yml looks like this:

{% c-block language="markdown" %}
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
{% c-block-end %}

Now you can configure your Logstash pipelines in logstash/pipeline/logstash.conf like this:

{% c-block language="markdown" %}
input {
   beats {
       port => 5044
   }
}
output {
   elasticsearch {
       hosts => "elasticsearch:9200"
       user => "elastic"
       password => "changeme"
   }
}
{% c-block-end %}
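
Because my.log contains JSON, you will probably also want Logstash to parse each line into fields. A sketch using the standard json filter plugin, added between the input and output blocks:

{% c-block language="markdown" %}
filter {
   json {
       source => "message"
   }
}
{% c-block-end %}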

For more than one Filebeat, e.g. one shipping from each server, you can use multiple inputs:

{% c-block language="markdown" %}
input {
   beats {
       port => 5044
   }
   beats {
       port => 5045
   }
   beats {
       port => 5046
   }
}
{% c-block-end %}

But your Logstash container does not publish ports 5045 and 5046 yet, so you will have to make them public.

Edit your docker-compose.yml file like this:

{% c-block language="markdown" %}
..
..
  logstash:
..
..
   ipc: host
   restart: always
   ports:
   - "5000:5000/tcp"
   - "5000:5000/udp"
   - "9600:9600"
   - "5045:5045"
   - "5046:5046"
   environment:
     LS_JAVA_OPTS: "-Xmx256m -Xms256m"
   networks:
..
..
{% c-block-end %}

Do not forget to open your firewall on the Logstash host. For Ubuntu: "sudo ufw allow from xxx.xxx.xxx.xxx to any port 504x".
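
On a remote server, Filebeat then has to point at the published Logstash address instead of the Docker network alias. A sketch, where the host name is a placeholder for your Logstash machine:

{% c-block language="markdown" %}
output.logstash:
  hosts: ["your-logstash-host:5045"]
{% c-block-end %}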

Now you are ready to re-run “docker-compose up -d”. Docker will recreate the Filebeat and Logstash containers; the rest will stay up-to-date.

Now you can go to the Kibana URL (http://localhost:5601) and set up Kibana to view your log. Set up a Filebeat index pattern to begin searching the data in Kibana.
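
Before creating the index pattern, you can confirm that documents have actually reached Elasticsearch, using the credentials from the compose file:

{% c-block language="markdown" %}
curl -u elastic:changeme "http://localhost:9200/_cat/indices?v"
{% c-block-end %}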

(Screenshot: Run the ELK stack on Docker Container – Filebeat Index.)

Click on Discover to start searching the latest logs from Filebeat.

(Screenshot: Run the ELK stack on Docker Container – Logs.)

That’s all for now.

If you have made it this far: congratulations! You have successfully Dockerized the ELK stack. Proceed with creating a few more Logstash pipelines to stream log events from different sources.

On the internet there are a lot of manuals for localhost installation and fewer for Docker. If you want to use Docker for ELK_F, be prepared for some trouble, because most ops people prefer a conventional installation on localhost. But what is better than Docker? I like clean installations and a clean OS on my servers, and that is the reason why I chose Docker.

What surprised me?

  • You have to define volumes in the compose file so Filebeat can read the log files.
  • Filebeat doesn’t listen on ports 504x; Logstash does, as the receiving server.
  • Inside a private network you don’t have to publish port 5044 (the default Beats port).
  • Don’t forget to publish the additional Logstash ports when Filebeat connects from outside the Docker network.
  • If you use localhost, you will have to play with iptables, and that is… bad :-)
  • Reading logs via “docker logs” works for all containers (very important); see the example below.
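
For example, to follow the Logstash container’s output:

{% c-block language="markdown" %}
docker-compose logs -f logstash
{% c-block-end %}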

I would like to thank Ondra Karas from CROWNINGARTS, who introduced me to this idea.

Pavel is a trained DevOps guru and keen SysAdmin in charge of the application lifecycle, continuous integration, continuous deployment, and continuous delivery for both customer and internal SABO projects. He trains software developers in our team and coordinates all DevOps activities. He always has some puns up his sleeve, unpredictably switches to Polish, and likes to cook and bake.
