JMX Data to the Elastic Stack (ELK)

Last modified: November 15, 2017

1. Overview

In this quick tutorial, we’re going to have a look at how to send JMX data from our Tomcat server to the Elastic Stack (formerly known as ELK).

We’ll discuss how to configure Logstash to read data from JMX and send it to Elasticsearch.

2. Install the Elastic Stack

First, we need to install the Elastic Stack (Elasticsearch, Logstash, and Kibana).

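If we don’t have them installed yet, one quick way to get all three on Ubuntu – assuming the Elastic apt repository and the 5.x package line, which may differ for your environment – looks roughly like this:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-5.x.list
sudo apt-get update
sudo apt-get install elasticsearch logstash kibana
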
Then, to make sure everything is connected and working properly, we’ll send the JMX data to Logstash and visualize it over on Kibana.

2.1. Test Logstash

First, we will go to the Logstash installation directory, which varies by operating system (in our case Ubuntu):

cd /opt/logstash

We can pass a simple configuration to Logstash from the command line:

bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost:9200"] } }'

Then, we can simply type some sample data in the console – and use the CTRL-D command to close the pipeline when we’re done.

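For example, we might type a couple of arbitrary lines such as the following – the content doesn’t matter here, it just gives Logstash something to index:

this is a sample log line
another sample log line
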
2.2. Test Elasticsearch

After adding the sample data, a Logstash index should be available on Elasticsearch – which we can check as follows:

curl -X GET 'http://localhost:9200/_cat/indices'

Sample Output:

yellow open logstash-2017.11.10 5 1 3531 0 506.3kb 506.3kb 
yellow open .kibana             1 1    3 0   9.5kb   9.5kb 
yellow open logstash-2017.11.11 5 1 8671 0   1.4mb   1.4mb

2.3. Test Kibana

Kibana runs by default on port 5601 – we can access the homepage at:

http://localhost:5601/app/kibana

We should be able to create a new index with the pattern “logstash-*” – and see our sample data there.

3. Configure Tomcat

Next, we need to enable JMX by adding the following to CATALINA_OPTS:

-Dcom.sun.management.jmxremote
  -Dcom.sun.management.jmxremote.port=9000
  -Dcom.sun.management.jmxremote.ssl=false
  -Dcom.sun.management.jmxremote.authenticate=false

Note that:

  • You can configure CATALINA_OPTS by modifying setenv.sh (a minimal sketch follows below)
  • For Ubuntu users, setenv.sh can be found in ‘/usr/share/tomcat8/bin’

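Here’s a minimal sketch of what that could look like – assuming setenv.sh is used only for these options; an existing file may already set other values that should be kept:

#!/bin/sh
# Append the JMX settings to whatever CATALINA_OPTS already contains
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9000 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false"
export CATALINA_OPTS
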
4. Connect JMX and Logstash

Now, let’s connect our JMX metrics to Logstash – for which we’ll need to have the JMX input plugin installed there (more on that later).

4.1. Configure JMX Metrics

First, we need to configure the JMX metrics we want to stash; we’ll provide the configuration in JSON format.

Here’s our jmx_config.json:

{
  "host" : "localhost",
  "port" : 9000,
  "alias" : "reddit.jmx.elasticsearch",
  "queries" : [
  {
    "object_name" : "java.lang:type=Memory",
    "object_alias" : "Memory"
  }, {
    "object_name" : "java.lang:type=Threading",
    "object_alias" : "Threading"
  }, {
    "object_name" : "java.lang:type=Runtime",
    "attributes" : [ "Uptime", "StartTime" ],
    "object_alias" : "Runtime"
  }]
}

Note that:

  • We used the same JMX port that we configured in CATALINA_OPTS
  • We can provide as many configuration files as we want, but we need them to be in the same directory (in our case, we saved jmx_config.json in ‘/monitor/jmx/’ – see the listing below)

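As a quick illustration of the layout we’re assuming, the directory simply holds our JSON file (plus any other JMX configuration files we add later):

/monitor/jmx/
└── jmx_config.json
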
4.2. JMX Input Plugin

Next, let’s install the JMX input plugin by running the following command in the Logstash installation directory:

bin/logstash-plugin install logstash-input-jmx
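
To double-check that the plugin is in place, we can list the installed plugins and look for it – the exact version shown will depend on the Logstash installation:

bin/logstash-plugin list | grep jmx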

Then, we need to create a Logstash configuration file (jmx.conf), where the input is JMX metrics and the output is directed to Elasticsearch:

input {
  jmx {
    path => "/monitor/jmx"
    polling_frequency => 60
    type => "jmx"
    nb_thread => 3
  }
}

output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}

Finally, we need to run Logstash and specify our configuration file:

bin/logstash -f jmx.conf

Note that our Logstash configuration file jmx.conf is saved in the Logstash home directory (in our case /opt/logstash).

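Once Logstash has been polling for a minute or so, we can verify that JMX documents are actually arriving in Elasticsearch – for example with a simple query like this (the type field matches the one we set in jmx.conf):

curl -X GET 'http://localhost:9200/logstash-*/_search?q=type:jmx&size=1&pretty'
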
5. Visualize JMX Metrics

Finally, let’s create a simple visualization of our JMX metrics data, over on Kibana. We’ll create a simple chart – to monitor the heap memory usage.

5.1. Create New Search

First, we’ll create a new search to get metrics related to heap memory usage:

  • Click on “New Search” icon in search bar
  • Type the following query
    metric_path:reddit.jmx.elasticsearch.Memory.HeapMemoryUsage.used
  • Press Enter
  • Make sure to add ‘metric_path‘ and ‘metric_value_number‘ fields from sidebar
  • Click on ‘Save Search’ icon in search bar
  • Name the search ‘used memory’

In case any fields from the sidebar are marked as unindexed, go to the ‘Settings’ tab and refresh the field list in the ‘logstash-*’ index.

5.2. Create Line Chart

Next, we’ll create a simple line chart to monitor our heap memory usage over time:

  • Go to ‘Visualize’ tab
  • Choose ‘Line Chart’
  • Choose ‘From saved search’
  • Choose ‘used memory’ search that we created earlier

For the Y-Axis, make sure to choose:

  • Aggregation: Average
  • Field: metric_value_number

For the X-Axis, choose ‘Date Histogram’ – then save the visualization.

5.3. Use Scripted Field

As the memory usage is in bytes, it’s not very readable. We can convert the metric type and value by adding a scripted field in Kibana:

  • From ‘Settings’, go to indices and choose ‘logstash-*‘ index
  • Go to ‘Scripted fields’ tab and click ‘Add Scripted Field’
  • Name: metric_value_formatted
  • Format: Bytes
  • For Script, we will simply use the value of ‘metric_value_number‘:
    doc['metric_value_number'].value

Now, you can change your search and visualization to use field ‘metric_value_formatted‘ instead of ‘metric_value_number‘ – and the data is going to be properly displayed.

Here’s what this very simple dashboard looks like:

[Screenshot: a simple Kibana dashboard showing the Tomcat heap memory usage over time]

6. Conclusion

And we’re done. As you can see, the configuration isn’t particularly difficult, and getting the JMX data to be visible in Kibana allows us to do a lot of interesting visualization work to create a fantastic production monitoring dashboard.
