Elasticsearch aggregation query that returns all fields


I have a 20 GB csv file in the following format:

date,ip,dev_type,env,time,cpu_usage 
2015-11-09,10.241.121.172,M2,production,11:01,8 
2015-11-09,10.241.121.172,M2,production,11:02,9 
2015-11-09,10.241.121.243,C1,preproduction,11:01,4 
2015-11-09,10.241.121.243,C1,preproduction,11:02,8
2015-11-10,10.241.121.172,M2,production,11:01,3 
2015-11-10,10.241.121.172,M2,production,11:02,9 
2015-11-10,10.241.121.243,C1,preproduction,11:01,4 
2015-11-10,10.241.121.243,C1,preproduction,11:02,8

and I imported it into Elasticsearch in the following format:
{
  "_index": "cpuusage",
  "_type": "logs",
  "_id": "AVFOkMS7Q4jUWMFNfSrZ",
  "_score": 1,
  "_source": {
    "date": "2015-11-10",
    "ip": "10.241.121.172",
    "dev_type": "M2",
    "env": "production",
    "time": "11:02",
    "cpu_usage": "9"
  },
  "fields": {
    "date": [
      1447113600000
    ]
  }
}
...
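For context, here is a minimal sketch of how rows like these could be indexed with the bulk API (the index name and type are taken from the question; the actual import tool used is not stated, so this is only an assumption, and cpu_usage is written as a number here on the assumption of a numeric mapping, even though the question's _source shows it as a string):

curl -XPOST 'localhost:9200/cpuusage/logs/_bulk?pretty' -d '
{"index":{}}
{"date":"2015-11-09","ip":"10.241.121.172","dev_type":"M2","env":"production","time":"11:01","cpu_usage":8}
{"index":{}}
{"date":"2015-11-09","ip":"10.241.121.243","dev_type":"C1","env":"preproduction","time":"11:01","cpu_usage":4}
'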

When I look up the maximum cpu_usage for each IP on each day, how can I get all of the fields (date, ip, dev_type, env, cpu_usage) in the output?

curl -XGET localhost:9200/cpuusage/_search?pretty -d '{
  "size": 0,
  "aggs": {
    "by_date": {
      "date_histogram": {
        "field": "date",
        "interval": "day"
      },
      "aggs": {
        "genders": {
          "terms": {
            "field": "ip",
            "size": 100000,
            "order": { "_count": "asc" }
          },
          "aggs": {
            "cpu_usage": { "max": { "field": "cpu_usage" } }
          }
        }
      }
    }
  }
}'

---cut---

---- output ----
 "aggregations" : {
        "events_by_date" : {
          "buckets" : [ {
            "key_as_string" : "2015-11-09T00:00:00.000Z",
            "key" : 1447027200000,
            "doc_count" : 4,
            "genders" : {
              "doc_count_error_upper_bound" : 0,
              "sum_other_doc_count" : 0,
              "buckets" : [ {
                "key" : "10.241.121.172",
                "doc_count" : 2,
                "cpu_usage" : {
                  "value" : 9.0
                }
              }, {
                "key" : "10.241.121.243",
                "doc_count" : 2,
                "cpu_usage" : {
                  "value" : 8.0
                }
              } ]
            }
          },
1 Answer

You can do that with the top_hits aggregation. Give it a try:
{
  "size": 0,
  "aggs": {
    "by_date": {
      "date_histogram": {
        "field": "date",
        "interval": "day"
      },
      "aggs": {
        "genders": {
          "terms": {
            "field": "ip",
            "size": 100000,
            "order": {
              "_count": "asc"
            }
          },
          "aggs": {
            "cpu_usage": {
              "max": {
                "field": "cpu_usage"
              }
            },
            "include_source": {
              "top_hits": {
                "size": 1,
                "_source": {
                  "include": [
                    "date", "ip", "dev_type", "env", "cpu_usage"
                  ]
                }
              }
            }
          }
        }
      }
    }
  }
}

Does this help?


Thanks @ChintanShah25 for your help. This is very close to what I need, but there is still one small issue: the hit fetched by include_source is not the one with the maximum cpu_usage, and the cpu_usage field is left out of it while the other fields are correct, so I would like to know whether I can specify which fields to return (dev_type and env are string types). - mk_
I am not sure I fully understand your comment. If you only want to include specific fields, you can use source filtering. I have updated my answer, does that work for you? - ChintanShah25
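
As a possible follow-up to the max-value issue raised in the first comment (a sketch, not part of the original answer): top_hits accepts a sort option, so sorting each bucket's hits by cpu_usage in descending order should make the single returned hit the document that actually holds the maximum:

{
  "size": 0,
  "aggs": {
    "by_date": {
      "date_histogram": { "field": "date", "interval": "day" },
      "aggs": {
        "genders": {
          "terms": { "field": "ip", "size": 100000 },
          "aggs": {
            "cpu_usage": { "max": { "field": "cpu_usage" } },
            "include_source": {
              "top_hits": {
                "size": 1,
                "sort": [ { "cpu_usage": { "order": "desc" } } ],
                "_source": {
                  "include": [ "date", "ip", "dev_type", "env", "cpu_usage" ]
                }
              }
            }
          }
        }
      }
    }
  }
}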
