PostgreSQL: the smaller the LIMIT, the slower the query


I have the following query:

SELECT translation.id
FROM "TRANSLATION" translation
   INNER JOIN "UNIT" unit
     ON translation.fk_id_translation_unit = unit.id
   INNER JOIN "DOCUMENT" document
     ON unit.fk_id_document = document.id
WHERE document.fk_id_job = 3665
ORDER BY translation.id asc
LIMIT 50

It runs for a horrendous 110 seconds.

Table sizes:

+----------------+-------------+
| Table          | Records     |
+----------------+-------------+
| TRANSLATION    |  6,906,679  |
| UNIT           |  6,906,679  |
| DOCUMENT       |     42,321  |
+----------------+-------------+

However, when I change the LIMIT parameter from 50 to 1000, the query finishes in 2 seconds.
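The plans below were captured with EXPLAIN ANALYZE (the actual time figures only appear in its output), along these lines, once per LIMIT value:

EXPLAIN ANALYZE
SELECT translation.id
FROM "TRANSLATION" translation
   INNER JOIN "UNIT" unit
     ON translation.fk_id_translation_unit = unit.id
   INNER JOIN "DOCUMENT" document
     ON unit.fk_id_document = document.id
WHERE document.fk_id_job = 3665
ORDER BY translation.id asc
LIMIT 50;  -- and again with LIMIT 1000 for the second plan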
Here is the query plan for the slow query:
Limit  (cost=0.00..146071.52 rows=50 width=8) (actual time=111916.180..111917.626 rows=50 loops=1)
  ->  Nested Loop  (cost=0.00..50748166.14 rows=17371 width=8) (actual time=111916.179..111917.624 rows=50 loops=1)
        Join Filter: (unit.fk_id_document = document.id)
        ->  Nested Loop  (cost=0.00..39720545.91 rows=5655119 width=16) (actual time=0.051..15292.943 rows=5624514 loops=1)
              ->  Index Scan using "TRANSLATION_pkey" on "TRANSLATION" translation  (cost=0.00..7052806.78 rows=5655119 width=16) (actual time=0.039..1887.757 rows=5624514 loops=1)
              ->  Index Scan using "UNIT_pkey" on "UNIT" unit  (cost=0.00..5.76 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=5624514)
                    Index Cond: (unit.id = translation.fk_id_translation_unit)
        ->  Materialize  (cost=0.00..138.51 rows=130 width=8) (actual time=0.000..0.006 rows=119 loops=5624514)
              ->  Index Scan using "DOCUMENT_idx_job" on "DOCUMENT" document  (cost=0.00..137.86 rows=130 width=8) (actual time=0.025..0.184 rows=119 loops=1)
                    Index Cond: (fk_id_job = 3665)

And for the fast one:

Limit  (cost=523198.17..523200.67 rows=1000 width=8) (actual time=2274.830..2274.988 rows=1000 loops=1)
  ->  Sort  (cost=523198.17..523241.60 rows=17371 width=8) (actual time=2274.829..2274.895 rows=1000 loops=1)
        Sort Key: translation.id
        Sort Method:  top-N heapsort  Memory: 95kB
        ->  Nested Loop  (cost=139.48..522245.74 rows=17371 width=8) (actual time=0.095..2252.710 rows=97915 loops=1)
              ->  Hash Join  (cost=139.48..420861.93 rows=17551 width=8) (actual time=0.079..2005.238 rows=97915 loops=1)
                    Hash Cond: (unit.fk_id_document = document.id)
                    ->  Seq Scan on "UNIT" unit  (cost=0.00..399120.41 rows=5713741 width=16) (actual time=0.008..1200.547 rows=6908070 loops=1)
                    ->  Hash  (cost=137.86..137.86 rows=130 width=8) (actual time=0.065..0.065 rows=119 loops=1)
                          Buckets: 1024  Batches: 1  Memory Usage: 5kB
                          ->  Index Scan using "DOCUMENT_idx_job" on "DOCUMENT" document  (cost=0.00..137.86 rows=130 width=8) (actual time=0.009..0.041 rows=119 loops=1)
                                Index Cond: (fk_id_job = 3665)
              ->  Index Scan using "TRANSLATION_idx_unit" on "TRANSLATION" translation  (cost=0.00..5.76 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=97915)
                    Index Cond: (translation.fk_id_translation_unit = unit.id)

Obviously, the execution plans are very different, and the second one makes for a query roughly 50 times faster.

I have indexes on all the fields involved, and I ran ANALYZE on all the tables right before running the queries.
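As a minimal sketch (assuming plain per-table ANALYZE), that statistics refresh was simply:

ANALYZE "TRANSLATION";
ANALYZE "UNIT";
ANALYZE "DOCUMENT";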

Can anybody see what is wrong with the first query?

UPDATE: Table definitions

CREATE TABLE "public"."TRANSLATION" (
  "id" BIGINT NOT NULL, 
  "fk_id_translation_unit" BIGINT NOT NULL, 
  "translation" TEXT NOT NULL, 
  "fk_id_language" INTEGER NOT NULL, 
  "relevance" INTEGER, 
  CONSTRAINT "TRANSLATION_pkey" PRIMARY KEY("id"), 
  CONSTRAINT "TRANSLATION_fk" FOREIGN KEY ("fk_id_translation_unit")
    REFERENCES "public"."UNIT"("id")
    ON DELETE CASCADE
    ON UPDATE NO ACTION
    DEFERRABLE
    INITIALLY DEFERRED, 
  CONSTRAINT "TRANSLATION_fk1" FOREIGN KEY ("fk_id_language")
    REFERENCES "public"."LANGUAGE"("id")
    ON DELETE NO ACTION
    ON UPDATE NO ACTION
    NOT DEFERRABLE
) WITHOUT OIDS;

CREATE INDEX "TRANSLATION_idx_unit" ON "public"."TRANSLATION"
  USING btree ("fk_id_translation_unit");

CREATE INDEX "TRANSLATION_language_idx" ON "public"."TRANSLATION"
  USING hash ("translation");

CREATE TABLE "public"."UNIT" (
  "id" BIGINT NOT NULL, 
  "text" TEXT NOT NULL, 
  "fk_id_language" INTEGER NOT NULL, 
  "fk_id_document" BIGINT NOT NULL, 
  "word_count" INTEGER DEFAULT 0, 
  CONSTRAINT "UNIT_pkey" PRIMARY KEY("id"), 
  CONSTRAINT "UNIT_fk" FOREIGN KEY ("fk_id_document")
    REFERENCES "public"."DOCUMENT"("id")
    ON DELETE CASCADE
    ON UPDATE NO ACTION
    NOT DEFERRABLE, 
  CONSTRAINT "UNIT_fk1" FOREIGN KEY ("fk_id_language")
    REFERENCES "public"."LANGUAGE"("id")
    ON DELETE NO ACTION
    ON UPDATE NO ACTION
    NOT DEFERRABLE
) WITHOUT OIDS;

CREATE INDEX "UNIT_idx_document" ON "public"."UNIT"
  USING btree ("fk_id_document");

CREATE INDEX "UNIT_text_idx" ON "public"."UNIT"
  USING hash ("text");

CREATE TABLE "public"."DOCUMENT" (
  "id" BIGINT NOT NULL, 
  "fk_id_job" BIGINT, 
  CONSTRAINT "DOCUMENT_pkey" PRIMARY KEY("id"), 
  CONSTRAINT "DOCUMENT_fk" FOREIGN KEY ("fk_id_job")
    REFERENCES "public"."JOB"("id")
    ON DELETE SET NULL
    ON UPDATE NO ACTION
    NOT DEFERRABLE   
) WITHOUT OIDS;

UPDATE: Database parameters

shared_buffers = 2048MB
effective_cache_size = 4096MB
work_mem = 32MB

Total memory: 32GB
CPU: Intel Xeon X3470 @ 2.93 GHz, 8MB cache

Can you post the table definitions? - John Woo
Is your installation tuned? What are your settings for shared buffers, effective cache size, work mem, and what are the system specs? - eevar
@eevar, we tuned some of the parameters, but didn't spend too much time tuning the system. I've updated my post with the basic parameters. - twoflower
Maybe lowering random_page_cost to 1...2 would help (it can be done with a SET statement inside the session). (Perhaps lower shared_buffers and increase effective_cache_size; but that is a pet obsession of mine ;-)) Increase the statistics_target and run VACUUM ANALYZE? (A sketch of these steps follows the comments.) - wildplasser
Off topic, but I'm curious why shared_buffers is so low (2GB) relative to the system memory (32GB)? My understanding is that for a dedicated database server this should be set to 50% of system memory. Just curious. - n8gard
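A session-local sketch of wildplasser's suggestions above (the concrete values are illustrative, not from the comment):

SET random_page_cost = 1.5;           -- default is 4.0; lower values make index scans look cheaper
SET default_statistics_target = 500;  -- default is 100; affects ANALYZE runs in this session
VACUUM ANALYZE "TRANSLATION";
VACUUM ANALYZE "UNIT";
VACUUM ANALYZE "DOCUMENT";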
2 Answers


Here is an interesting bit from the official documentation for ANALYZE:

For large tables, ANALYZE takes a random sample of the table contents, rather than examining every row. [...] The extent of analysis can be controlled by adjusting the default_statistics_target configuration variable, or on a column-by-column basis by setting the per-column statistics target with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS.

Apparently this is a common way to improve a bad query plan. ANALYZE will be a little slower, but the query plans may be better.

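A hedged sketch of that per-column approach, applied to the join columns from this question (the target value 1000 is purely illustrative; the default is 100):

ALTER TABLE "UNIT" ALTER COLUMN "fk_id_document" SET STATISTICS 1000;
ALTER TABLE "TRANSLATION" ALTER COLUMN "fk_id_translation_unit" SET STATISTICS 1000;

-- Re-collect statistics so the new targets take effect:
ANALYZE "UNIT";
ANALYZE "TRANSLATION";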


In the first query, the planner adopts a strategy of avoiding the sort by scanning through the primary key index, which returns rows already ordered by translation.id. The problem is that very few rows match the document.fk_id_job condition, so the index scan and the nested-loop join have to walk a long way through the table before the result bucket of 50 rows fills up.
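A well-known workaround for this planner behavior (not part of this answer, just a common trick) is to turn the ORDER BY column into an expression, so the primary-key index can no longer supply the ordering and the planner has to produce all matching rows before sorting:

SELECT translation.id
FROM "TRANSLATION" translation
   INNER JOIN "UNIT" unit
     ON translation.fk_id_translation_unit = unit.id
   INNER JOIN "DOCUMENT" document
     ON unit.fk_id_document = document.id
WHERE document.fk_id_job = 3665
ORDER BY translation.id + 0  -- expression defeats the index-order scan
LIMIT 50;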
