This article explains the impact of excessive indexes on Postgres performance, detailing how they slow down writes and reads, waste disk space, and increase maintenance overhead. It emphasizes the importance of regularly dropping unused and redundant indexes to optimize database efficiency.
Too many indexes on a database table can seriously degrade performance. The article outlines the pitfalls of excessive indexing: write amplification, slower SELECT queries, wasted disk space, and increased autovacuum overhead. Every time you insert a row, or update one in a way that touches an indexed column, Postgres must add an entry to each index on the table, so the per-write cost grows with the number of indexes. On heavily indexed tables this can show up as a noticeable drop in API performance.
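As an illustrative sketch (the table and index names are hypothetical, not from the article), each secondary index adds work to every write:

```sql
-- Hypothetical table with several secondary indexes.
CREATE TABLE orders (
    id          bigserial PRIMARY KEY,
    customer_id bigint,
    status      text,
    created_at  timestamptz DEFAULT now()
);
CREATE INDEX orders_customer_idx ON orders (customer_id);
CREATE INDEX orders_status_idx   ON orders (status);
CREATE INDEX orders_created_idx  ON orders (created_at);

-- A single INSERT now writes to the heap plus all four indexes
-- (the primary key index and the three secondary indexes).
INSERT INTO orders (customer_id, status) VALUES (42, 'pending');
```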
The article emphasizes that extra indexes also complicate query planning. The PostgreSQL planner costs each candidate index to pick an execution path, so planning overhead scales poorly with the number of indexes; the author's testing found it can reach O(N²) complexity, especially when indexes are combined in bitmap scans. It's not just writes, then: SELECT performance can also suffer as the planner spends more time evaluating indexes it will never use.
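One way to see which index paths the planner actually chose, and where several indexes are being combined, is EXPLAIN (a sketch against the hypothetical table above; the article's own test queries are not reproduced here):

```sql
-- EXPLAIN (ANALYZE, BUFFERS) prints the chosen plan with real
-- timings. With many overlapping indexes, look for BitmapAnd /
-- BitmapOr nodes, which combine several index scans into one
-- bitmap heap scan.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders
WHERE customer_id = 42
  AND status = 'pending';
```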
Disk space management is another concern. Indexes consume storage, and their combined size can exceed that of the table data itself; the author mentions a case where an engineer freed up 20 GiB by dropping unused indexes. The article provides queries to identify unused indexes and candidates for index bloat, which develops as index pages become fragmented over time. Keeping these indexes in check calls for regular monitoring and maintenance.
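The article's exact queries are not reproduced here, but a common pattern for finding never-scanned indexes uses Postgres's statistics views:

```sql
-- List indexes that have not been scanned since statistics were
-- last reset, largest first. Unique indexes are excluded because
-- they may enforce constraints even when never scanned.
SELECT s.schemaname,
       s.relname       AS table_name,
       s.indexrelname  AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
ORDER BY pg_relation_size(s.indexrelid) DESC;
```

Note that idx_scan counts accumulate from the last stats reset, so a zero count is only meaningful after the database has seen representative traffic.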