Create our own plugins for Check_MK and WATO

For a long time I used Nagios Core without Check_MK or any other GUI for configuration. I used pynag for bulk changes, but nothing else.

When I needed to check for something specific, I just wrote what I needed in Bash, added as many arguments/parameters/variables as I wanted, and added it to the commands.cfg file.

But with Check_MK and WATO, that’s a little different. We can still add whatever we want as a script, but configuring the arguments is not so easy (it’s not hard either). Mathias Kettner explains it very well in the documentation, but I wanted to record my own experience on my blog.

I won’t cover how to write the script itself; I will just give an example of what I did.

Necessary files

We will create three files: the plugin itself, the check function, and the manual page.
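To give an idea of what the check-function file looks like, here is a minimal sketch of a legacy Check_MK check, assuming a hypothetical agent section <<<myservice>>> whose lines look like "NAME COUNT" (the section name, item names, and the threshold of 10 are all made up for illustration):

```python
# In Check_MK's runtime the check_info dictionary is predefined; it is
# declared here only so this sketch is self-contained.
check_info = {}

def inventory_myservice(info):
    # One service per line of agent output; the first column is the item.
    for line in info:
        yield line[0], None

def check_myservice(item, params, info):
    # Return a (state, message) tuple: 0=OK, 1=WARN, 2=CRIT, 3=UNKNOWN.
    for line in info:
        if line[0] == item:
            count = int(line[1])
            if count > 10:
                return 2, "count is %d (threshold 10 exceeded)" % count
            return 0, "count is %d" % count
    return 3, "item %s not found in agent output" % item

check_info["myservice"] = {
    "inventory_function": inventory_myservice,
    "check_function": check_myservice,
    "service_description": "MyService %s",
}
```

The file goes in the checks directory, and the agent on the monitored host has to print the matching <<<myservice>>> section.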


We will also modify this one:

Continue Reading

Configure Rsyslog to send our logs to ELK


We have seen before how to add filters and indexes for Filebeat and Topbeat. But in some cases we won’t be able to install additional software to manage our logs. That’s when Rsyslog is our best option. In this post we will configure an external log from Apache that is not managed by default by Rsyslog.

Configuring Rsyslog (client side)

We are going to create a new file in /etc/rsyslog.d (on most distributions any file name ending in .conf is included automatically) that will contain our new input log configuration.

$ModLoad imfile  #load the file-input module (needed once per config)

$InputFileName /var/log/apache2/access.log #can NOT use wildcards - this is where logstash-forwarder would be nice
$InputFileTag apache-access-rs:  #Logstash throws grok errors if the ":" is anywhere besides at the end; shows up as "Program" in Logstash
$InputFileStateFile apache-access-rs  #can be anything; unique id used by rsyslog
$InputFileSeverity info
$InputFileFacility apacheaccess
$InputFilePollInterval 10
$InputFilePersistStateInterval 1000
$InputRunFileMonitor  #activates the monitor for the file defined above

apacheaccess.* @@ELK_server_private_IP:5544  #the 2 "@" signs tell rsyslog to use TCP; a single "@" sign 
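Once rsyslog is forwarding over TCP, it is handy to confirm that the port is reachable and that lines arrive newline-terminated. A small Python sketch (the host placeholder and port come from the config above; send_test_line is a made-up helper, not part of rsyslog):

```python
import socket

def send_test_line(host, port, message):
    """Send one newline-terminated syslog-style line over TCP."""
    frame = message.encode("utf-8") + b"\n"
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(frame)
    return frame

# Example call against the Logstash TCP input configured above:
# send_test_line("ELK_server_private_IP", 5544,
#                "<134>Jun  3 12:17:01 client01 apache-access-rs: test line")
```

If the line shows up in Kibana, the transport path is fine and any remaining problems are on the filter side.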
Continue Reading

Gather Infrastructure Metrics with Topbeat and ELK on CentOS 7


Topbeat, which is one of the several “Beats” data shippers that help send various types of server data to an Elasticsearch instance, allows you to gather information about the CPU, memory, and process activity on your servers. In conjunction with an ELK server (Elasticsearch, Logstash, and Kibana), the data that Topbeat gathers can be used to easily visualize metrics so that you can see the status of your servers in a centralized place.

In this tutorial, we will show you how to use an ELK stack to gather and visualize infrastructure metrics by using Topbeat on a CentOS 7 server.


Load Topbeat Index Template in Elasticsearch

Because we are planning on using Topbeat to ship logs to Elasticsearch, we should load the Topbeat index template. The index template will configure Elasticsearch to analyze incoming Topbeat fields in an intelligent way.
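Under the hood, loading the template is just an HTTP PUT of the template JSON to Elasticsearch’s _template endpoint. A minimal Python sketch, assuming a local Elasticsearch on port 9200 and a template file named topbeat.template.json (both are placeholders for this kind of setup, not verified values):

```python
import json
import urllib.request

def template_request(es_url, name, template):
    """Build a PUT request that registers an index template."""
    body = json.dumps(template).encode("utf-8")
    return urllib.request.Request(
        "%s/_template/%s" % (es_url, name),
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# To actually send it (requires a reachable Elasticsearch):
# with open("topbeat.template.json") as f:
#     urllib.request.urlopen(
#         template_request("http://localhost:9200", "topbeat", json.load(f)))
```

The same thing is usually done with a one-line curl, but seeing the request spelled out makes it clear what the template load actually is.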

First, download the Topbeat index template on your

Continue Reading

Adding Filters to Logstash (ELK stack)


This post contains a couple of configurations I needed for a particular environment. I already have my stack working. There are many other filters, patterns, and configurations; I will be adding more over time.

Default PATHS

Logstash configuration directory: /etc/logstash/conf.d
Logstash patterns directory: /opt/logstash/patterns

Specific Configuration


Prospector (client side – Filebeat)

This block must go beneath the prospectors section, keeping the indentation.

      paths:
        - /var/log/auth.log
        - /var/log/syslog
      input_type: log
      document_type: syslog

Log example

Jun  3 12:17:01 server01 /USR/SBIN/CRON[15365]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
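To illustrate which fields the grok pattern further down extracts from this line, here is an approximate Python regex analogue (Logstash’s grok filter does the real parsing, and its SYSLOGTIMESTAMP/SYSLOGHOST/DATA patterns are more permissive than this simplified stand-in):

```python
import re

# Rough equivalents of the grok fields: timestamp, hostname, program,
# optional pid in brackets, and the remaining message.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[^\[:]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

line = ("Jun  3 12:17:01 server01 /USR/SBIN/CRON[15365]: "
        "(root) CMD (   cd / && run-parts --report /etc/cron.hourly)")
m = SYSLOG_RE.match(line)
# m.group("syslog_program") -> "/USR/SBIN/CRON"
# m.group("syslog_pid")     -> "15365"
```

This is why no custom pattern file is needed for this log: the stock syslog-style patterns already cover it.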


There is no specific pattern you should add.


This configuration is inside 10-syslog-filter.conf

filter { 
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", 
Continue Reading

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7


In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on CentOS 7—that is, Elasticsearch 2.3.x, Logstash 2.3.x, and Kibana 4.5.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.1.x.

Logstash is an open source tool for collecting, parsing, and storing logs for future use.

Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during

Continue Reading