Elastic Stack, previously known as the ELK stack, is a tech stack consisting of Elasticsearch, Logstash, Kibana, and Beats. Elasticsearch is a distributed search and analytics engine based on Lucene. Logstash is a server-side data processing pipeline that ingests and transforms data, then sends it to a destination. Kibana is used for visualization, providing a GUI that makes it easy for users to find and analyze the data they need. Beats are lightweight data shippers that send data to Logstash or Elasticsearch. Currently there are six kinds of Beats: Filebeat, Metricbeat, Packetbeat, Winlogbeat, Auditbeat, and Heartbeat.
To automate the Elastic Stack setup process, we can use a configuration management tool; one of them is Ansible. This tutorial gives you an example of Ansible playbooks for setting up the stack. As for the Beats, the example in this tutorial covers only Filebeat.
Structure
hosts/
    elastic.hosts
playbooks/
    roles/
        elastic_stack/
        elasticsearch/
        filebeat/
        java8/
        kibana/
        logstash/
    elk_setup.yml
    filebeat.yml
Each role has a tasks folder containing a main.yml file. Some of them also have a templates folder containing a configuration example in j2 format.
First, add the IP addresses of the machines on which you want to install the stack.
hosts/elastic.hosts
[elk_stack]
127.0.0.1
[filebeat]
127.0.0.1
If you're going to run the playbook against a remote server, just replace 127.0.0.1 with the IP addresses where you want to install ELK (Elasticsearch, Logstash, and Kibana) and Filebeat respectively. In this tutorial, ELK will be installed on a single server, but you can easily break the ELK playbook down into multiple files and select the appropriate roles for each server.
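For example, if ELK lives on one remote server and Filebeat runs on two application servers, the inventory might look like the sketch below (the IP addresses are placeholders from the documentation range and would be replaced with your own):

```ini
; hosts/elastic.hosts - hypothetical multi-server layout
[elk_stack]
192.0.2.10

[filebeat]
192.0.2.20
192.0.2.21
```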
Logstash and Elasticsearch require Java 8, so you need to have Java 8 installed first. Below is the role for installing Java 8.
playbooks/roles/java8/tasks/main.yml
---
########################################################
# Set up Java 8
########################################################
- name: Install add-apt-repository
  apt: name=software-properties-common state=latest

- name: Add Oracle Apt Repository
  apt_repository: repo='ppa:webupd8team/java' update_cache=yes

- name: Accept Oracle License
  shell: echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | tee /etc/oracle-java-8-licence-acceptance | /usr/bin/debconf-set-selections
  args:
    creates: /etc/oracle-java-8-licence-acceptance

- name: Install Oracle Java 8
  apt: name={{ item }} state=latest
  with_items:
    - oracle-java8-installer
    - ca-certificates
    - oracle-java8-set-default

- name: Symlink Default Java
  file: src=/usr/lib/jvm/java-8-oracle dest=/usr/lib/jvm/default-java state=link

- name: Set JAVA_HOME
  lineinfile: dest=/etc/environment
              regexp=''
              insertafter=EOF
              line="JAVA_HOME=/usr/lib/jvm/default-java"

- name: Verify Java Version
  shell: /usr/bin/java -version
  register: java
  changed_when: no
  tags:
    - verify

- assert:
    that:
      - "'java version' in java.stderr"
We're going to install each member of the stack using APT. Before proceeding, we need to import the Elastic public signing key, install apt-transport-https, and add the Elastic repository to the source list.
playbooks/roles/elastic_stack/tasks/main.yml
---
########################################################
# Prepare to set up Elastic Stack
########################################################
- name: Import Elastic public signing key
  shell: wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

- name: Install apt-transport-https
  apt: name=apt-transport-https state=present

- name: Add Elastic repo into source list
  apt_repository:
    repo: deb https://artifacts.elastic.co/packages/6.x/apt stable main
    state: present
Next, create a role for installing Logstash by adding the file below.
playbooks/roles/logstash/tasks/main.yml
---
########################################################
# Set up Logstash
########################################################
- name: Install Logstash from APT repository
  apt: name=logstash state=present force=true update_cache=yes

- name: Copy config
  template: src=logstashConf.j2 dest=/etc/logstash/conf.d/logstash.conf

- name: Force systemd to reread configs
  systemd: daemon_reload=yes

- name: Enable logstash.service
  systemd:
    name: logstash.service
    enabled: yes

- name: Start logstash.service
  systemd: name=logstash.service state=restarted
  become: true
The Logstash config below listens for incoming input from Beats on port 5044. It then filters every incoming event, and if the event matches a certain format, it adds a tag and renames one of the fields. After that, the processed data is sent to Elasticsearch on port 9200. You can read more about configuring Logstash in this guide.
playbooks/roles/logstash/templates/logstashConf.j2
input {
  beats {
    port => 5044
    host => "127.0.0.1"
  }
}

filter {
  # nginx access log
  if [source] =~ /\/(access)\d{0,10}\.(log)/ {
    grok {
      match => {"message" => "%{COMBINEDAPACHELOG}"}
      add_tag => ["nginx_access_log"]
    }
    mutate {
      rename => {"timestamp" => "log_timestamp"}
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
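To illustrate what the grok filter does, suppose Filebeat ships one line from an nginx access log in the combined format. The log line below is a made-up example; the field names are the ones the %{COMBINEDAPACHELOG} pattern produces:

```text
# Hypothetical incoming log line (combined format):
127.0.0.1 - frank [10/Oct/2018:13:55:36 +0700] "GET /index.html HTTP/1.1" 200 2326 "http://example.com/" "Mozilla/5.0"

# Selected fields extracted by %{COMBINEDAPACHELOG}:
#   clientip  => "127.0.0.1"
#   verb      => "GET"
#   request   => "/index.html"
#   response  => "200"
#   timestamp => "10/Oct/2018:13:55:36 +0700"
# The mutate filter then renames timestamp to log_timestamp.
```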
Below is the role for setting up Elasticsearch.
playbooks/roles/elasticsearch/tasks/main.yml
---
########################################################
# Set up Elasticsearch
########################################################
- name: Install elasticsearch from APT repository
  apt: name=elasticsearch state=present update_cache=yes

- name: Copy config
  template: src=elasticsearchConf.j2 dest=/etc/elasticsearch/elasticsearch.yml

- name: Force systemd to reread configs
  systemd: daemon_reload=yes

- name: Enable elasticsearch.service
  systemd:
    name: elasticsearch.service
    enabled: yes

- name: Start elasticsearch.service
  systemd: name=elasticsearch.service state=restarted
  become: true
And here is a config example for Elasticsearch. For more advanced configuration, you can read this page.
playbooks/roles/elasticsearch/templates/elasticsearchConf.j2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
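The two path settings above are the package defaults. If you want to reach Elasticsearch from other machines or give the cluster a name, the following standard Elasticsearch settings can be added as well (the values shown are examples, not requirements):

```yaml
# elasticsearchConf.j2 - optional extra settings, example values
cluster.name: my-elk-cluster
node.name: elk-node-1
network.host: 0.0.0.0   # bind to all interfaces instead of localhost only
http.port: 9200
```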
Below is the role for setting up Kibana.
playbooks/roles/kibana/tasks/main.yml
---
########################################################
# Set up Kibana
########################################################
- name: Install kibana from APT repository
  apt: name=kibana state=present force=yes update_cache=yes

- name: Copy config
  template: src=kibanaConf.j2 dest=/etc/kibana/kibana.yml

- name: Force systemd to reread configs
  systemd: daemon_reload=yes

- name: Enable kibana.service
  systemd:
    name: kibana.service
    enabled: yes

- name: Start kibana.service
  systemd: name=kibana.service state=restarted
  become: true
The following is a very basic Kibana configuration. Read this guide for configuring Kibana.
playbooks/roles/kibana/templates/kibanaConf.j2
elasticsearch.url: "http://localhost:9200"
elasticsearch.requestTimeout: 180000
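By default Kibana only listens on localhost. To reach the dashboard from a browser on another machine, you can also set the standard server.host and server.port options (example values shown):

```yaml
# kibanaConf.j2 - optional extra settings, example values
server.host: "0.0.0.0"   # listen on all interfaces instead of localhost only
server.port: 5601
```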
And this is the main playbook you need to run to install ELK.
playbooks/elk_setup.yml
---
# This playbook is for setting up Elasticsearch, Logstash and Kibana
# You can run this playbook with the following command:
# sudo ansible-playbook -i hosts/elastic.hosts playbooks/elk_setup.yml
- hosts: elk_stack
  connection: local
  remote_user: root
  become: true
  roles:
    - role: java8
    - role: elastic_stack
    - role: logstash
    - role: elasticsearch
    - role: kibana
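If Elasticsearch, Logstash, and Kibana run on separate servers, the single play above can be split into one play per inventory group. Here is a sketch of that idea; the group names es_servers, logstash_servers, and kibana_servers are hypothetical and would need matching entries in elastic.hosts:

```yaml
# Hypothetical multi-server variant of elk_setup.yml
- hosts: es_servers
  remote_user: root
  become: true
  roles:
    - role: java8
    - role: elastic_stack
    - role: elasticsearch

- hosts: logstash_servers
  remote_user: root
  become: true
  roles:
    - role: java8
    - role: elastic_stack
    - role: logstash

- hosts: kibana_servers
  remote_user: root
  become: true
  roles:
    - role: elastic_stack
    - role: kibana
```

Note that connection: local is dropped here, since Ansible would connect to each remote group over SSH.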
Next, we install Filebeat. Below is the role for installing it.
playbooks/roles/filebeat/tasks/main.yml
---
########################################################
# Set up Filebeat
########################################################
- name: Install filebeat from APT repository
  apt: name=filebeat state=present force=true update_cache=yes

- name: Copy config
  template: src=filebeatConf.j2 dest=/etc/filebeat/filebeat.yml

- name: Force systemd to reread configs
  systemd: daemon_reload=yes

- name: Enable filebeat.service
  systemd:
    name: filebeat.service
    enabled: yes

- name: Start filebeat.service
  systemd: name=filebeat.service state=restarted
  become: true

- name: Export Filebeat template
  shell: filebeat export template > /tmp/filebeat.template.json
Below is a basic configuration example for Filebeat. It reads logs from files whose paths match /var/log/*.log or /var/log/supervisor/*.out and sends the data to Logstash. You can read this guide for advanced configuration.
playbooks/roles/filebeat/templates/filebeatConf.j2
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/supervisor/*.out

#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3

#============================== Kibana =====================================
setup.kibana:

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["127.0.0.1:5044"]
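If you don't need Logstash's filtering, Filebeat can also ship directly to Elasticsearch; output.elasticsearch is the standard alternative to output.logstash (only one output may be enabled at a time):

```yaml
#-------------------------- Elasticsearch output ------------------------------
# Replaces the output.logstash section above; do not enable both.
output.elasticsearch:
  hosts: ["localhost:9200"]
```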
Below is the playbook you need to run for installing and configuring Filebeat.
playbooks/filebeat.yml
---
# This playbook is for setting up Filebeat
# You can run this playbook with the following command:
# sudo ansible-playbook -i hosts/elastic.hosts playbooks/filebeat.yml
- hosts: filebeat
  connection: local
  remote_user: root
  become: true
  roles:
    - role: filebeat
  tasks:
    # If you're using Filebeat installed on another server(s), you need to load the Filebeat template
    - name: Load Filebeat template
      uri:
        url: http://localhost:9200/_template/filebeat-6.3.1
        method: PUT
        headers:
          Content-Type: "application/json"
        body: "{{ lookup('file','/tmp/filebeat.template.json') }}"
        body_format: json
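Once everything is running, a quick way to confirm that Elasticsearch is up is its cluster health API. Below is a sketch of an optional verification task using Ansible's uri and debug modules; it assumes Elasticsearch is reachable on localhost:9200:

```yaml
# Optional verification task - not part of the original playbooks
- name: Check Elasticsearch cluster health
  uri:
    url: http://localhost:9200/_cluster/health
    method: GET
    return_content: yes
  register: es_health

- name: Show cluster status (green, yellow, or red)
  debug:
    msg: "{{ es_health.json.status }}"
```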
The files in this tutorial are available for download from GitHub.