Postgres slow query (slow index scan)

I have a table with 3 million rows, 1.3 GB in size, running on Postgres 9.3 on my laptop with 4 GB of RAM.

explain analyze
select act_owner_id from cnt_contacts where act_owner_id = 2

I created a B-tree index on cnt_contacts.act_owner_id, defined as follows:

CREATE INDEX cnt_contacts_idx_act_owner_id 
   ON public.cnt_contacts USING btree (act_owner_id, status_id);

The query takes about 5 seconds:

Bitmap Heap Scan on cnt_contacts  (cost=2598.79..86290.73 rows=6208 width=4) (actual time=5865.617..5875.302 rows=5444 loops=1)
  Recheck Cond: (act_owner_id = 2)
  ->  Bitmap Index Scan on cnt_contacts_idx_act_owner_id  (cost=0.00..2597.24 rows=6208 width=0) (actual time=5865.407..5865.407 rows=5444 loops=1)
        Index Cond: (act_owner_id = 2)
Total runtime: 5875.684 ms

Why does it take so long?

work_mem = 1024MB; 
shared_buffers = 128MB;
effective_cache_size = 1024MB
seq_page_cost = 1.0         # measured on an arbitrary scale
random_page_cost = 15.0         # same scale as above
cpu_tuple_cost = 3.0

The index cnt_contacts_idx_act_owner_id is created on public.cnt_contacts using btree, with act_owner_id and status_id as the indexed columns. - Vasil Atanasov
You should create another index containing only act_owner_id. - frlan
Tried with only act_owner_id as the key, but the query behaves exactly the same; it didn't help. - Vasil Atanasov
Why did you increase random_page_cost so drastically? (The default is 4.0 if I remember correctly.) Doing that tells Postgres you have a very slow hard disk with very high latency. And cpu_tuple_cost looks odd as well (the default is 0.01). Even on my old, slow desktop, lowering random_page_cost to 2.5 improved the execution plans Postgres created. - user330315
And work_mem = 1GB is ludicrous. - wildplasser
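
For reference, a minimal sketch of moving these settings back toward what the commenters suggest (shown as session-level SETs; the exact values are judgment calls, not figures from this thread):

    -- Illustrative values only; put the equivalents in postgresql.conf to make them permanent.
    SET random_page_cost = 2.5;   -- default is 4.0; 15.0 tells the planner random I/O is extremely expensive
    SET cpu_tuple_cost = 0.01;    -- the default; 3.0 grossly inflates the per-row CPU cost
    SET work_mem = '16MB';        -- work_mem is per sort/hash node; 1GB is far too much on a 4GB laptop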
2 Answers

OK, you have a big table, an index on it, and a long execution time that you want to reduce. As you write and delete rows, PG writes and removes tuples, so both the table and its indexes can become bloated. For searching, PG loads the index into the shared cache, so you want to keep the index as lean as possible so it stays cached. For a SELECT, PG first reads pages into the shared cache and then searches them. Try tuning your buffer memory, reducing the bloat in the table and its indexes, and keeping the database clean.
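
As a quick caching sanity check (a sketch using the standard pg_statio_user_tables statistics view; not part of the original answer), low buffer hit ratios here suggest the working set does not fit in shared_buffers:

    -- Per-table shared-buffer hit ratios; values near 1.00 mean the data is served from cache.
    SELECT relname,
           round(heap_blks_hit::numeric / NULLIF(heap_blks_hit + heap_blks_read, 0), 2) AS heap_hit_ratio,
           round(idx_blks_hit::numeric  / NULLIF(idx_blks_hit  + idx_blks_read,  0), 2) AS idx_hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read + idx_blks_read DESC
    LIMIT 10;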

For this problem, you should consider the following:

1) Check for duplicate and unused indexes, and make sure your indexes have good selectivity:

 WITH table_scans as (
    SELECT relid,
        tables.idx_scan + tables.seq_scan as all_scans,
        ( tables.n_tup_ins + tables.n_tup_upd + tables.n_tup_del ) as writes,
                pg_relation_size(relid) as table_size
        FROM pg_stat_user_tables as tables
),
all_writes as (
    SELECT sum(writes) as total_writes
    FROM table_scans
),
indexes as (
    SELECT idx_stat.relid, idx_stat.indexrelid,
        idx_stat.schemaname, idx_stat.relname as tablename,
        idx_stat.indexrelname as indexname,
        idx_stat.idx_scan,
        pg_relation_size(idx_stat.indexrelid) as index_bytes,
        indexdef ~* 'USING btree' AS idx_is_btree
    FROM pg_stat_user_indexes as idx_stat
        JOIN pg_index
            USING (indexrelid)
        JOIN pg_indexes as indexes
            ON idx_stat.schemaname = indexes.schemaname
                AND idx_stat.relname = indexes.tablename
                AND idx_stat.indexrelname = indexes.indexname
    WHERE pg_index.indisunique = FALSE
),
index_ratios AS (
SELECT schemaname, tablename, indexname,
    idx_scan, all_scans,
    round(( CASE WHEN all_scans = 0 THEN 0.0::NUMERIC
        ELSE idx_scan::NUMERIC/all_scans * 100 END),2) as index_scan_pct,
    writes,
    round((CASE WHEN writes = 0 THEN idx_scan::NUMERIC ELSE idx_scan::NUMERIC/writes END),2)
        as scans_per_write,
    pg_size_pretty(index_bytes) as index_size,
    pg_size_pretty(table_size) as table_size,
    idx_is_btree, index_bytes
    FROM indexes
    JOIN table_scans
    USING (relid)
),
index_groups AS (
SELECT 'Never Used Indexes' as reason, *, 1 as grp
FROM index_ratios
WHERE
    idx_scan = 0
    and idx_is_btree
UNION ALL
SELECT 'Low Scans, High Writes' as reason, *, 2 as grp
FROM index_ratios
WHERE
    scans_per_write <= 1
    and index_scan_pct < 10
    and idx_scan > 0
    and writes > 100
    and idx_is_btree
UNION ALL
SELECT 'Seldom Used Large Indexes' as reason, *, 3 as grp
FROM index_ratios
WHERE
    index_scan_pct < 5
    and scans_per_write > 1
    and idx_scan > 0
    and idx_is_btree
    and index_bytes > 100000000
UNION ALL
SELECT 'High-Write Large Non-Btree' as reason, index_ratios.*, 4 as grp 
FROM index_ratios, all_writes
WHERE
    ( writes::NUMERIC / ( total_writes + 1 ) ) > 0.02
    AND NOT idx_is_btree
    AND index_bytes > 100000000
ORDER BY grp, index_bytes DESC )
SELECT reason, schemaname, tablename, indexname,
    index_scan_pct, scans_per_write, index_size, table_size
FROM index_groups;
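
If the report above flags an index under "Never Used Indexes", dropping it reduces write overhead and frees cache space (the index name below is hypothetical; verify before dropping anything):

    DROP INDEX IF EXISTS public.some_never_used_idx;  -- hypothetical name taken from the report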

2) Check whether you have table and index bloat:

     SELECT
        current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/
        ROUND((CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages::FLOAT/otta END)::NUMERIC,1) AS tbloat,
        CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::BIGINT END AS wastedbytes,
      iname, /*ituples::bigint, ipages::bigint, iotta,*/
      ROUND((CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages::FLOAT/iotta END)::NUMERIC,1) AS ibloat,
      CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes
    FROM (
      SELECT
        schemaname, tablename, cc.reltuples, cc.relpages, bs,
        CEIL((cc.reltuples*((datahdr+ma-
          (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::FLOAT)) AS otta,
        COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
        COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::FLOAT)),0) AS iotta -- very rough approximation, assumes all cols
      FROM (
        SELECT
          ma,bs,schemaname,tablename,
          (datawidth+(hdr+ma-(CASE WHEN hdr%ma=0 THEN ma ELSE hdr%ma END)))::NUMERIC AS datahdr,
          (maxfracsum*(nullhdr+ma-(CASE WHEN nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
        FROM (
          SELECT
            schemaname, tablename, hdr, ma, bs,
            SUM((1-null_frac)*avg_width) AS datawidth,
            MAX(null_frac) AS maxfracsum,
            hdr+(
              SELECT 1+COUNT(*)/8
              FROM pg_stats s2
              WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename
            ) AS nullhdr
          FROM pg_stats s, (
            SELECT
              (SELECT current_setting('block_size')::NUMERIC) AS bs,
              CASE WHEN SUBSTRING(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,
              CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma
            FROM (SELECT version() AS v) AS foo
          ) AS constants
          GROUP BY 1,2,3,4,5
        ) AS foo
      ) AS rs
      JOIN pg_class cc ON cc.relname = rs.tablename
      JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'
      LEFT JOIN pg_index i ON indrelid = cc.oid
      LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
    ) AS sml
    ORDER BY wastedbytes DESC
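
If a table or index shows heavy bloat, rebuilding reclaims the wasted space; note that on 9.3 both commands take exclusive locks, so run them in a maintenance window (table name taken from the question):

    -- Either rewrite the whole table (this also rebuilds its indexes) ...
    VACUUM FULL cnt_contacts;
    -- ... or rebuild only the indexes if the table itself is fine:
    REINDEX TABLE cnt_contacts;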

3) Do you have dead tuples taking up space on disk? Is it time to run VACUUM?

SELECT 
    relname AS TableName
    ,n_live_tup AS LiveTuples
    ,n_dead_tup AS DeadTuples
FROM pg_stat_user_tables;
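
If DeadTuples is large relative to LiveTuples, a vacuum plus fresh planner statistics is overdue, e.g.:

    VACUUM ANALYZE cnt_contacts;  -- reclaims dead tuples for reuse and updates the statistics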

4) Think about selectivity. If your table has 10 records and 8 of them have id = 2, the index has poor selectivity for that value and PG will end up visiting all 8 rows anyway; a condition like id != 2, by contrast, is served well by the index. Aim for indexes with good selectivity.
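
One hedged way to restore selectivity when a single value dominates is a partial index that excludes it (the index name and the excluded value below are illustrative):

    -- Only the rare values are indexed, so the index stays small and selective;
    -- the planner uses it only for queries whose predicate excludes the common value.
    CREATE INDEX cnt_contacts_idx_act_owner_rare
        ON cnt_contacts (act_owner_id)
        WHERE act_owner_id <> 2;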

5) Use appropriate column types to save space. If a column fits in a smaller type, convert it.
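
For example (assuming act_owner_id actually fits in the smaller range; verify before converting, and note the statement rewrites the table):

    ALTER TABLE cnt_contacts
        ALTER COLUMN act_owner_id TYPE smallint;  -- 4 bytes -> 2 bytes per value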

6) Simply review your database and your query conditions: check the data in unused tables page by page, clean up indexes, and verify index selectivity. Try an alternative BRIN index for the data, and try recreating your indexes.
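
Note that BRIN indexes require PostgreSQL 9.5 or later (the question is on 9.3). On a newer server the sketch would be, assuming the rows are physically clustered by act_owner_id:

    -- A BRIN index is tiny, but it is only effective when the column's values
    -- correlate with the physical row order on disk.
    CREATE INDEX cnt_contacts_brin_act_owner
        ON cnt_contacts USING brin (act_owner_id);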


You are selecting 5444 records scattered over a 1.3 GB table, on a laptop. How long do you expect that to take?

It looks like your index is not cached, either because it can't be sustained in the cache, or because this is the first time you've used that part of it. What happens if you run the exact same query repeatedly? The same query but with a different constant?

Running the query under explain (analyze, buffers) would help you get additional information, especially if you turn on track_io_timing first.
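
Concretely, for the query from the question that would be:

    SET track_io_timing = on;  -- setting this per-session requires superuser
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT act_owner_id FROM cnt_contacts WHERE act_owner_id = 2;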

Good tip about EXPLAIN (ANALYZE, BUFFERS) ..., which outputs shared-cache hits. It helped me realize that increasing the shared cache can improve performance if that is really where the bottleneck is, which it was in my case. - Stevan
