8 MySQL Indexing Tips That Actually Improve Performance
Your API endpoint takes 3 seconds to respond. The orders table has 2 million rows. You added indexes months ago, but performance is still bad. The problem is almost never missing indexes -- it is the wrong indexes. Here is how to diagnose and fix MySQL query performance using EXPLAIN output as your guide.
The examples below use these tables:
CREATE TABLE users (
  id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(100),
  email VARCHAR(255),
  status ENUM('active', 'inactive', 'banned'),
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE orders (
  id INT PRIMARY KEY AUTO_INCREMENT,
  user_id INT,
  product_id INT,
  status ENUM('pending', 'shipped', 'delivered', 'cancelled'),
  total DECIMAL(10,2),
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (user_id) REFERENCES users(id)
);

CREATE TABLE products (
  id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(200),
  category VARCHAR(50),
  price DECIMAL(10,2),
  stock INT
);
1. How to Read EXPLAIN Output
Run EXPLAIN before any optimization attempt. The three columns that matter most:
- type: How MySQL accesses the table, from worst to best: ALL (full table scan) > index (full index scan) > range (index range scan) > ref (index lookup on a non-unique key) > eq_ref (unique index lookup) > const (single row by primary key).
- rows: MySQL's estimate of how many rows it must examine. Lower is better.
- Extra: Watch for Using filesort (sorting without an index), Using temporary (temporary table created), and Using index (covering index -- good).
BEFORE -- no index on user_id + status:
EXPLAIN SELECT * FROM orders WHERE user_id = 42 AND status = 'pending';
+----+------+---------+------+-------------+
| id | type | rows    | key  | Extra       |
+----+------+---------+------+-------------+
|  1 | ALL  | 2100000 | NULL | Using where |
+----+------+---------+------+-------------+
AFTER -- add CREATE INDEX idx_user_status ON orders (user_id, status):
+----+------+------+-----------------+-------+
| id | type | rows | key             | Extra |
+----+------+------+-----------------+-------+
|  1 | ref  |   12 | idx_user_status | NULL  |
+----+------+------+-----------------+-------+
The scan went from 2.1 million rows to 12. That is the difference between a 3-second response and a 2-millisecond one.
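Before adding anything, check what indexes the table already has. SHOW INDEX also reports MySQL's per-column cardinality estimates, which feed the row counts EXPLAIN shows you:

```sql
-- List existing indexes on orders, including cardinality estimates
SHOW INDEX FROM orders;

-- If estimates look stale (e.g. after a bulk load), refresh the statistics
ANALYZE TABLE orders;
```

Stale statistics are a common reason EXPLAIN picks a surprising plan, so ANALYZE TABLE is worth running before you conclude an index is broken.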
2. EXPLAIN ANALYZE for Actual Execution Times
EXPLAIN gives estimates. EXPLAIN ANALYZE (MySQL 8.0.18+) runs the query and reports actual execution metrics:
EXPLAIN ANALYZE
SELECT * FROM orders WHERE user_id = 42 AND status = 'pending';
BEFORE (no index):
-> Filter: ((orders.user_id = 42) AND (orders.`status` = 'pending'))
(cost=213744 rows=2100000) (actual time=1823..3241 rows=12 loops=1)
-> Table scan on orders
(cost=213744 rows=2100000) (actual time=0.087..2914 rows=2100000 loops=1)
AFTER (with idx_user_status):
-> Index lookup on orders using idx_user_status (user_id=42, status='pending')
(cost=4.3 rows=12) (actual time=0.042..0.061 rows=12 loops=1)
Actual time dropped from 3241ms to 0.061ms. Use EXPLAIN ANALYZE when you need proof, not estimates.
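Because EXPLAIN ANALYZE actually executes the statement, it can be too expensive to run against a slow query on a busy production server. EXPLAIN FORMAT=TREE (MySQL 8.0.16+) prints the same tree-shaped plan with cost estimates, but without executing anything:

```sql
-- Tree-format plan: estimates only, the query is not executed
EXPLAIN FORMAT=TREE
SELECT * FROM orders WHERE user_id = 42 AND status = 'pending';
```

Use FORMAT=TREE for a safe first look, then EXPLAIN ANALYZE once you need actual timings.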
3. Composite Indexes -- Column Order Matters
The order of columns in a composite index determines which queries it can serve. The rule: equality columns first, range columns second, sort columns third.
Query: find recent pending orders for a user, sorted by date.
SELECT * FROM orders
WHERE user_id = 42 AND status = 'pending' AND created_at > '2026-01-01'
ORDER BY created_at DESC;
BEFORE -- wrong column order (created_at, user_id, status):
+----+-------+--------+-------------------------+------------------------------------+
| id | type  | rows   | key                     | Extra                              |
+----+-------+--------+-------------------------+------------------------------------+
|  1 | range | 840000 | idx_created_user_status | Using index condition; Using where |
+----+-------+--------+-------------------------+------------------------------------+
MySQL uses the range on created_at but cannot efficiently filter user_id and status through the index.
AFTER -- correct order (user_id, status, created_at):
CREATE INDEX idx_user_status_created ON orders (user_id, status, created_at);
+----+-------+------+-------------------------+-----------------------+
| id | type  | rows | key                     | Extra                 |
+----+-------+------+-------------------------+-----------------------+
|  1 | range |    8 | idx_user_status_created | Using index condition |
+----+-------+------+-------------------------+-----------------------+
Equality on user_id and status narrows the B-tree walk. The range on created_at works within that subset. The ORDER BY created_at DESC is free because the index is already ordered. No filesort.
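The flip side of column order is the leftmost-prefix rule: a composite index is only usable when the query constrains its leading columns. A sketch against idx_user_status_created (the row counts and key output are illustrative, not measured):

```sql
-- Usable: user_id is the leading column, so the index prefix matches.
-- Only the user_id part narrows the B-tree walk, because skipping status
-- breaks the prefix before created_at.
EXPLAIN SELECT * FROM orders
WHERE user_id = 42 AND created_at > '2026-01-01';
-- key: idx_user_status_created

-- Not usable: the leading column user_id is absent from the WHERE clause
EXPLAIN SELECT * FROM orders
WHERE status = 'pending' AND created_at > '2026-01-01';
-- key: NULL, type: ALL
```

This is why one wide index rarely serves every query: each query must hit the index from its leftmost column.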
4. Covering Indexes -- Avoid Table Lookups
When an index contains every column the query needs, MySQL reads the index only and skips the table data entirely. This appears as Using index in the Extra column.
SELECT id, total, created_at FROM orders
WHERE user_id = 42 AND status = 'shipped';
BEFORE -- index on (user_id, status) only:
+----+------+------+-----------------+-------+
| id | type | rows | key             | Extra |
+----+------+------+-----------------+-------+
|  1 | ref  |   30 | idx_user_status | NULL  |
+----+------+------+-----------------+-------+
MySQL finds 30 index entries, then does 30 random I/O lookups into the table to fetch total and created_at.
AFTER -- covering index (user_id, status, total, created_at):
CREATE INDEX idx_user_status_covering ON orders (user_id, status, total, created_at);
+----+------+------+--------------------------+-------------+
| id | type | rows | key | Extra |
+----+------+------+--------------------------+-------------+
| 1 | ref | 30 | idx_user_status_covering | Using index |
+----+------+------+--------------------------+-------------+
Same 30 rows, but zero table lookups. On spinning disks or cold buffer pools, covering indexes can be 10x faster. Use them for your most frequent queries.
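Covering indexes are not free: they duplicate column data into the index B-tree and add write overhead. Before adding wide ones, check how much index space the table already carries:

```sql
-- Compare table data size vs. total index size (both in bytes)
SELECT table_name, data_length, index_length
FROM information_schema.tables
WHERE table_schema = DATABASE() AND table_name = 'orders';
```

If index_length already rivals data_length, be selective about which queries earn a covering index.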
5. Index Killers
These patterns silently prevent MySQL from using your indexes.
Functions on columns:
-- BEFORE: full table scan
EXPLAIN SELECT * FROM orders WHERE YEAR(created_at) = 2026;
-- type: ALL, rows: 2100000
-- AFTER: rewrite as range
EXPLAIN SELECT * FROM orders
WHERE created_at >= '2026-01-01' AND created_at < '2027-01-01';
-- type: range, rows: 410000
Type mismatches:
-- BEFORE: user_id is INT, but passed as string
EXPLAIN SELECT * FROM orders WHERE user_id = '42';
-- MySQL silently casts. Works on INT columns but kills indexes on VARCHAR columns:
EXPLAIN SELECT * FROM users WHERE email = 42;
-- type: ALL (index on email is ignored due to implicit cast)
Leading wildcards:
-- BEFORE: can never use an index
WHERE name LIKE '%khan'
-- type: ALL
-- AFTER: if possible, restructure
WHERE name LIKE 'khan%'
-- type: range
OR conditions across different columns:
-- BEFORE: no single index covers both conditions; MySQL often falls back
-- to a full scan (an index_merge union is possible but not always chosen)
WHERE user_id = 42 OR product_id = 99;
-- type: ALL
-- AFTER: use UNION instead
SELECT * FROM orders WHERE user_id = 42
UNION ALL
SELECT * FROM orders WHERE product_id = 99 AND user_id != 42;
-- Both queries use their respective indexes
6. When NOT to Index
Not every column benefits from an index:
- Low-cardinality columns: A status column with 4 possible values on a 2M-row table. MySQL may choose a full table scan anyway because the index would return too many rows. The exception: composite indexes where status is combined with high-cardinality columns.
- Tiny tables: A settings table with 20 rows. Full scans are faster than index lookups at this scale.
- Write-heavy tables: Every index adds overhead to INSERT, UPDATE, and DELETE. A logging table with 10,000 inserts/second and rare reads should have minimal indexes. Benchmark the write penalty before adding indexes to hot write paths.
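A quick cardinality check tells you which case a column falls into; as a rough rule, a standalone index only pays off when it filters down to a small fraction of the table:

```sql
-- Distinct values vs. total rows: a low ratio means a poor standalone index
SELECT COUNT(DISTINCT status) AS distinct_vals,
       COUNT(*)               AS total_rows
FROM orders;
-- 4 distinct values across 2M rows means each status matches ~500K rows on
-- average -- a full scan is often cheaper than that many index lookups
```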
7. Invisible Indexes -- Test Before You Drop
MySQL 8.0+ lets you make an index invisible without dropping it. The optimizer ignores it, but you can instantly restore it:
-- Make the index invisible
ALTER TABLE orders ALTER INDEX idx_user_status INVISIBLE;
-- Run your workload, check for regressions
-- ...
-- If something breaks, restore instantly
ALTER TABLE orders ALTER INDEX idx_user_status VISIBLE;
-- If nothing broke after a week, safe to drop
DROP INDEX idx_user_status ON orders;
BEFORE (index visible):
EXPLAIN SELECT * FROM orders WHERE user_id = 42;
-- type: ref, rows: 150, key: idx_user_status
AFTER (index invisible):
EXPLAIN SELECT * FROM orders WHERE user_id = 42;
-- type: ALL, rows: 2100000, key: NULL
If making it invisible causes slow queries, you know it is still needed. This is far safer than dropping and recreating indexes on large production tables, which can take hours and lock the table.
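To find candidates for this test, the sys schema (installed by default in MySQL 8.0) lists indexes with no recorded reads since the server last started:

```sql
-- Indexes with no recorded use since server startup -- candidates for the
-- invisible-index test (counters reset on restart, so judge accordingly)
SELECT * FROM sys.schema_unused_indexes
WHERE object_schema = DATABASE();
```

An index that stays on this list across a full business cycle (including month-end jobs and reports) is a strong candidate for the INVISIBLE treatment.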
8. Practical Workflow
This is the repeatable process for every slow query:
- Identify: Check the slow query log (long_query_time = 1) or your APM tool. Get the exact SQL.
- EXPLAIN: Run EXPLAIN on the query. Note type, rows, key, and Extra.
- Diagnose: Is it type: ALL? Missing index. Is it Using filesort? Wrong index column order. Is it scanning too many rows despite an index? The index is not selective enough.
- Fix: Add or adjust the index. Follow the equality-range-sort rule for column order.
- Verify: Run EXPLAIN again. Confirm type improved and rows dropped. Use EXPLAIN ANALYZE to compare actual execution times.
- Monitor: Watch query performance for the next few days. Check that write performance has not degraded.
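The Identify step assumes the slow query log is enabled; it is off by default. A minimal runtime setup (the log file path is an example, adjust it for your system):

```sql
-- Enable the slow query log at runtime; add the same settings to my.cnf
-- so they survive a restart
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;   -- log queries slower than 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- example path
```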
Checklist
- Run EXPLAIN on every slow query before changing anything
- Use EXPLAIN ANALYZE to get actual (not estimated) execution times
- Order composite index columns: equality, range, sort
- Add covering indexes for your top 5 most frequent queries
- Check for index killers: functions on columns, type mismatches, leading wildcards, OR across columns
- Skip indexes on low-cardinality columns, tiny tables, and write-heavy tables
- Use invisible indexes to safely test index removal in production
- Review the slow query log weekly
- Drop unused indexes -- they cost write performance for zero benefit