ELK (Elasticsearch+Logstash+Kibana) with GeoIP Using Docker

Jim Wu
4 min read · Jun 12, 2018


Nowadays, Docker is an easy way to launch the services you want, and the resulting deployments are lightweight. ELK is an acronym for Elasticsearch + Logstash + Kibana. Elasticsearch is a RESTful search and analytics engine that can also run distributed. Logstash is a server-side data-processing pipeline that supports a variety of inputs; according to the official introduction, Logstash filters can parse and transform your data on the fly. Kibana lets you visualize the data stored in Elasticsearch and provides powerful dashboards. Together, the three services make for a compelling analysis stack.

Prerequisites

Before installing the Dockerized ELK stack, you need to install Docker and Docker Compose.
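
For example, you can confirm both are installed from your shell:

docker --version
docker-compose --version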

Create Docker Containers

Once Docker and Docker Compose are installed, create a file named docker-compose.yml in your workspace and fill it in as follows.

version: '3'
services:
  logstash:
    build: ./logstash
    volumes:
      - ./your_db_dir/db:/opt/geoip
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5044:5044"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - esnet
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.2.4
    ports:
      - "5601:5601"
    networks:
      - esnet
volumes:
  esdata1:
networks:
  esnet:
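
For reference, here is the workspace layout the rest of this post builds toward (the GeoLite2 database file is downloaded in the GeoIP section below):

.
├── docker-compose.yml
├── your_db_dir/
│   └── db/
│       └── GeoLite2-City.mmdb
└── logstash/
    ├── Dockerfile
    ├── config/
    │   └── logstash.yml
    └── pipeline/
        └── logstash.conf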

The compose file above creates three Docker containers and launches Elasticsearch, Logstash, and Kibana. Note that for Logstash you have to create a directory (./logstash) in your workspace and add three files to it. The first is logstash.yml under ./logstash/config, as follows.

http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline

The second file is logstash.conf under ./logstash/pipeline. This pipeline reads events from a TCP input and ships the filtered data to Elasticsearch.

input {
  tcp {
    port => 5044   # matches the port published in docker-compose.yml
  }
}
# Add your filters / Logstash plugin configuration here
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}

The last file is a Dockerfile in the ./logstash directory, which the build: ./logstash entry in the compose file uses. Since the geoip filter plugin already ships with Logstash, a minimal image based on the same version as the rest of the stack is enough.

FROM docker.elastic.co/logstash/logstash-oss:6.2.4
# The logstash-filter-geoip plugin is bundled with Logstash by default,
# so no extra installation step is required. To pull a newer plugin
# version, you could uncomment the line below:
# RUN bin/logstash-plugin update logstash-filter-geoip
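
If you want to confirm that the geoip filter is present in the built image, one option (this assumes the image entrypoint passes arbitrary commands through, which the official Elastic images do) is:

docker-compose run --rm logstash logstash-plugin list | grep geoip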

With that, the stack is ready to start. You can bring everything up with the following command.

docker-compose up

docker-compose up creates all the containers and starts the services. Note that if you want to stop a service, use docker-compose stop "your service name". Of course, you can also create and launch services one at a time. For example, you can bring up just the logstash container with the docker-compose command below.

docker-compose up -d --no-deps logstash

Once everything has launched, you can check whether the containers are running with the command below; all three services should be listed.

docker ps
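
You can also hit the published ports directly as a quick sanity check (ports as declared in the compose file above); Kibana itself should be reachable in a browser at http://localhost:5601.

curl http://localhost:9200            # Elasticsearch node and cluster info
curl 'http://localhost:9600/?pretty'  # Logstash monitoring API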

Launch GeoIP for Logstash

You may be curious about the "- ./your_db_dir/db:/opt/geoip" line in docker-compose.yml. It mounts a host path into the container: your local directory ./your_db_dir/db maps to the container directory /opt/geoip.

Fortunately, Logstash ships with a geoip filter by default. We can use a Logstash filter to transform the data and add a new geoip field, then send the result on to Elasticsearch. To activate geoip filtering, you need to download a GeoIP database from MaxMind and place it in the local directory that the compose file maps into the container.
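A sketch of the download steps, assuming the free GeoLite2 City database (this MaxMind URL was current when this post was written and may change):

curl -LO http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
tar -xzf GeoLite2-City.tar.gz
mkdir -p ./your_db_dir/db
cp GeoLite2-City_*/GeoLite2-City.mmdb ./your_db_dir/db/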

Below is an example of logstash.conf with the filters added. It assumes each log line is JSON and contains a src_ip field holding an IP address, which the json filter parses out of the message. You can find the details in the Logstash geoip filter documentation.

filter {
  json {
    source => "message"
  }
  date {
    match => [ "timestamp", "ISO8601" ]
  }
  if [src_ip] {
    geoip {
      source => "src_ip"                          # read the src_ip field
      target => "geoip"                           # write results under geoip
      database => "/opt/geoip/GeoLite2-City.mmdb"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

Once this filter is in place, each event carries a geoip field, and you can plot geoip.coordinates on a Kibana coordinate map.
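
To try the pipeline end to end, you can push a sample JSON event into the TCP input and then query Elasticsearch for enriched documents. This is just a sketch; 8.8.8.8 is a well-known public IP used purely for illustration:

echo '{"timestamp":"2018-06-12T08:00:00Z","src_ip":"8.8.8.8"}' | nc localhost 5044
curl 'http://localhost:9200/logstash-*/_search?q=_exists_:geoip&pretty'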

References

Docker-ELK

Logstash plugin filter geoip

Geoip in the Elastic stack
