Elasticsearch high CPU usage

I have a 5-node cluster with 1 replica. The total document size is 216 MB across 853,000 documents. I am seeing very high CPU usage, around 60%~80%, at the top of every hour and every morning between 05:00 and 09:00. Only Elasticsearch runs on this server. I suspect something is wrong with the es process, but there are only a few requests to the server during the CPU spikes, and there are no cron jobs either. I have no idea what Elasticsearch is doing at the top of every hour and every morning between 05:00 and 09:00! Can someone help me and tell me what is going on there? Thanks.
$ ./elasticsearch -v 
Version: 1.1.1, Build: f1585f0/2014-04-16T14:27:12Z, JVM: 1.7.0_55 

$ java -version 
java version "1.7.0_55" 
Java(TM) SE Runtime Environment (build 1.7.0_55-b13) 
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode) 

I have installed the following plugins on elasticsearch: HQ, bigdesk, head, kopf, and sense.

During the CPU spikes, the es log looks like this:

[2014-07-03 08:01:00,045][DEBUG][action.search.type       ] [node1] [search][4], node[GJjzCrLvQQ-ZRRoqL13MrQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@451f9e7c] lastShard [true] 
org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 300) on org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$4@68ab486b 
    at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:62) 
    at java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source) 
    at java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source) 
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:293) 
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:300) 
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.start(TransportSearchTypeAction.java:190) 
    at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:59) 
    at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:49) 
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63) 
    at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:108) 
    at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:43) 
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63) 
    at org.elasticsearch.client.node.NodeClient.execute(NodeClient.java:92) 
    at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:212) 
    at org.elasticsearch.rest.action.search.RestSearchAction.handleRequest(RestSearchAction.java:98) 
    at org.elasticsearch.rest.RestController.executeHandler(RestController.java:159) 
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:142) 
    at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:121) 
    at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:83) 
    at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:291) 
    at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:43) 
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) 
    at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145) 
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) 
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296) 
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459) 
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536) 
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) 
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) 
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74) 
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) 
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268) 
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255) 
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) 
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) 
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) 
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) 
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) 
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
    at java.lang.Thread.run(Unknown Source)
1 Answer


Are you sure there are only a few requests in this case? The log shows that so many queries are running that new queries are being rejected, and I would expect bigdesk to show that flood of queries.
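To verify this, you can look at the search thread pool's rejection counters. A minimal sketch, assuming the node is reachable on localhost:9200:

$ curl 'localhost:9200/_cat/thread_pool?v'
$ curl 'localhost:9200/_nodes/stats/thread_pool?pretty'

If the search rejected count keeps climbing during the hourly and 05:00~09:00 windows, the cluster really is receiving more queries than the search thread pool (queue capacity 300, as in your log) can absorb.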

There must be some kind of batch/automated process flooding your system with a large number of queries. I have run into this situation a few times myself.
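If you cannot find the source on the client side, the hot threads API is a quick way to see what the nodes are actually busy with while the CPU is high (again assuming the default port):

$ curl 'localhost:9200/_nodes/hot_threads'

Heavy search activity will show up as search threads dominating the output.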

You should check the search slow log, and possibly lower the thresholds for a short period so that most queries get logged. See here for more details: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-slowlog.html
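For example, a minimal sketch of enabling the search slowlog on an index (the index name my_index is a placeholder, and the thresholds are deliberately low so that most queries are captured; raise them again once you have what you need):

$ curl -XPUT 'localhost:9200/my_index/_settings' -d '{
    "index.search.slowlog.threshold.query.warn":  "1s",
    "index.search.slowlog.threshold.query.info":  "200ms",
    "index.search.slowlog.threshold.fetch.warn":  "500ms",
    "index.search.slowlog.threshold.fetch.info":  "100ms"
  }'

Queries that cross a threshold are written to the search slowlog file next to the regular node log, which should tell you exactly what is hitting the cluster every hour and between 05:00 and 09:00.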

