This GitHub repository is:
https://gist.github.com/faxm0dem/40660faee68bec148b7df25369f72b28
Exploring Netdata and Elasticsearch
=============================================
Introduction
------------
***
The aim is to send both metrics and logs to an Elasticsearch instance and then explore them via Kibana.
This workflow uses two servers: Server A is the application server, where metrics and logs are collected and sent to Server B, which hosts the Elasticsearch and Kibana instances.
Workflow
--------
***
![Netdata - Elasticsearch workflow](netdata-elasticsearch.png "Netdata - Elasticsearch workflow")
Install (on Server A)
-------
***
**Netdata**
following these instructions:
https://github.com/firehol/netdata/wiki/Installation#1-prepare-your-system
```bash
# CentOS / Red Hat Enterprise Linux
yum install autoconf automake curl gcc git libmnl-devel libuuid-devel lm_sensors make MySQL-python nc pkgconfig python python-psycopg2 PyYAML zlib-devel
# clone the netdata github repo
git clone https://github.com/firehol/netdata.git --depth=1
cd netdata
# run script with root privileges to build, install, start netdata
./netdata-installer.sh
# netdata as a service
# stop netdata
killall netdata
# copy netdata.service to systemd
cp system/netdata.service /etc/systemd/system/
# let systemd know there is a new service
systemctl daemon-reload
# enable netdata at boot
systemctl enable netdata
# start netdata
systemctl start netdata
```
A web server is started and can be accessed at http://serverA:19999 by default.
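A quick smoke test (assuming the default port and that `curl` is available) is to ask the Netdata REST API for its chart list:

```bash
# query the local Netdata API; /api/v1/charts lists every collected chart
# assumes the default port 19999 -- prints a notice if nothing is listening
curl -sS --max-time 5 http://localhost:19999/api/v1/charts > /dev/null \
  && echo "netdata is up" \
  || echo "netdata not reachable on :19999"
```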
<br />
**Syslog-ng**
```bash
# install syslog-ng
cat <<'EOF' | tee /etc/yum.repos.d/czanik-syslog-ng39-epel-7.repo
[czanik-syslog-ng39]
name=Copr repo for syslog-ng39 owned by czanik
baseurl=https://copr-be.cloud.fedoraproject.org/results/czanik/syslog-ng39/epel-7-$basearch/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/czanik/syslog-ng39/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1
EOF
yum install syslog-ng
systemctl enable syslog-ng
systemctl start syslog-ng
```
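To confirm the installation, syslog-ng can report its version and check its own config for syntax errors (the `--syntax-only` flag parses the config without starting the daemon; the fallback message is just for machines where it is not installed):

```bash
# verify syslog-ng is installed and that its config parses cleanly
if command -v syslog-ng > /dev/null; then
  syslog-ng --version | head -1
  syslog-ng --syntax-only && echo "config OK"
else
  echo "syslog-ng not installed"
fi
```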
<br />
Install (on Server B)
-------
***
**ElasticSearch**
```bash
# install java JRE
yum install java-1.8.0-openjdk.x86_64
# install elasticsearch
cat <<EOF | tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
yum install elasticsearch
# elasticsearch as a service
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
```
The Elasticsearch instance can be accessed via its REST API at http://serverB:9200 by default.
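Once the service is up, the cluster health endpoint gives a quick sanity check of the node (assuming the default port and `curl` on Server B; the fallback echo just covers the case where the node is not up yet):

```bash
# ask the local Elasticsearch node for cluster health (status, node count, shards)
curl -sS --max-time 5 "http://localhost:9200/_cluster/health?pretty" \
  || echo "elasticsearch not reachable on :9200"
```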
**Kibana**
```bash
# install kibana
cat <<EOF | tee /etc/yum.repos.d/kibana.repo
[kibana-5.x]
name=Kibana repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
yum install kibana
# kibana as a service
systemctl enable kibana.service
systemctl start kibana.service
```
The Kibana instance can be accessed at http://serverB:5601 by default.
Config (on Server A)
------
***
**Netdata**
Replace the [backend] section in /etc/netdata/netdata.conf
with the one in [netdata.conf](netdata.conf)
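The linked netdata.conf carries the actual settings; purely as an illustration, a [backend] section using Netdata's JSON backend might look like the sketch below (the destination port 5170 is a placeholder for wherever syslog-ng listens):

```
[backend]
    enabled = yes
    # type can be graphite, opentsdb or json; json is easiest for syslog-ng to parse
    type = json
    # placeholder: point this at the syslog-ng listener on Server A
    destination = localhost:5170
    data source = average
    update every = 10
```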
**Syslog-ng**
Syslog-ng will receive the Netdata metrics and the log streams and forward them to Elasticsearch.
Add these lines to /etc/syslog-ng/syslog-ng.conf:
[syslog-ng.conf](syslog-ng.conf)
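The linked syslog-ng.conf is the authoritative config; as an illustration only, a network source paired with the Java-based `elasticsearch2()` destination shipped with syslog-ng 3.9 could be sketched like this (source name, port, and index pattern are hypothetical):

```
# illustration only -- the real settings live in the linked syslog-ng.conf
source s_netdata {
    # placeholder: TCP listener matching the netdata [backend] destination
    network(ip(0.0.0.0) port(5170) flags(no-parse));
};
destination d_elastic {
    elasticsearch2(
        cluster("elasticsearch")
        client-mode("http")
        server("serverB") port("9200")
        index("netdata-${YEAR}.${MONTH}.${DAY}")
        type("netdata")
    );
};
log { source(s_netdata); destination(d_elastic); };
```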
Config (on Server B)
------
***
Edit elasticsearch config file:
```bash
vi /etc/elasticsearch/elasticsearch.yml
```
set the network.host parameter such as:
```yaml
network.host:
  - [serverB_IP]
  - 127.0.0.1
```
Elasticsearch controls how data is stored and indexed using [index templates](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html).
The following two templates will ensure netdata and syslog data have the correct settings:
* [netdata-template.json](netdata-template.json)
* [syslog-template.json](syslog-template.json)
One can use the REST API to push them to Elasticsearch:
```bash
curl -XPUT 0:9200/_template/netdata -d@netdata-template.json
curl -XPUT 0:9200/_template/syslog -d@syslog-template.json
```
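Before pushing, the template files can be sanity-checked locally; this sketch uses the stdlib `json.tool` module of whichever Python is on the PATH (any JSON validator works):

```bash
# validate each template file locally before pushing it to Elasticsearch
PY=$(command -v python3 || command -v python)
for f in netdata-template.json syslog-template.json; do
  if "$PY" -m json.tool < "$f" > /dev/null 2>&1; then
    echo "$f: valid JSON"
  else
    echo "$f: missing or invalid JSON"
  fi
done
```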
Edit kibana config file:
```bash
vi /etc/kibana/kibana.yml
```
set parameters such as:
```yaml
server.port: 5601
server.host: "[serverB_name]"
server.name: "A_GREAT_TITLE_FOR_MY_LOGS"
elasticsearch.url: "http://127.0.0.1:9200"
```
--
CaryWhitneyExternal - 2017-10-12