MongoDB Performance
Query Optimization with explain()
The first step in performance tuning is understanding how MongoDB executes your queries. The explain() method reveals the query plan, index usage, and execution statistics.
// Get execution stats for a query
db.users.find({ email: "alice@example.com" }).explain("executionStats")
// Key metrics to check:
// stage: "IXSCAN" = index used (good), "COLLSCAN" = full scan (bad)
// totalDocsExamined: ideally equals nReturned (0 for a covered query); a large gap means wasted scanning
// executionTimeMillis: total query time
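The checks above can be automated. This is a sketch of a helper that flags the two red flags named here (a COLLSCAN stage anywhere in the winning plan, and a high docs-examined-to-returned ratio); the `sample` document is hand-made for illustration, not real server output, though `queryPlanner`, `winningPlan`, and `executionStats` are the real field names.

```javascript
// Recursively look for a COLLSCAN stage in a winning plan tree.
function usesCollectionScan(plan) {
  if (!plan) return false;
  if (plan.stage === "COLLSCAN") return true;
  const children = plan.inputStage ? [plan.inputStage] : (plan.inputStages || []);
  return children.some(usesCollectionScan);
}

// Summarize the metrics worth checking from explain("executionStats") output.
function summarize(explain) {
  const s = explain.executionStats;
  return {
    collscan: usesCollectionScan(explain.queryPlanner.winningPlan),
    docsPerResult: s.nReturned ? s.totalDocsExamined / s.nReturned : s.totalDocsExamined,
    millis: s.executionTimeMillis,
  };
}

// Hand-made sample resembling output for an unindexed query
const sample = {
  queryPlanner: { winningPlan: { stage: "COLLSCAN" } },
  executionStats: { nReturned: 1, totalDocsExamined: 5000, executionTimeMillis: 12 },
};
console.log(summarize(sample)); // → { collscan: true, docsPerResult: 5000, millis: 12 }
```

A `docsPerResult` far above 1 usually means the query needs a better index, even when an index scan is already in use.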
// COVERED QUERY — all fields in query and projection are in the index
// No document fetch needed — fastest possible query
db.users.createIndex({ email: 1, name: 1, role: 1 })
// This query is "covered" — all fields are in the index
db.users.find(
{ email: "alice@example.com" },
{ name: 1, role: 1, _id: 0 } // _id: 0 is required unless _id is itself part of the index
).explain("executionStats")
// stage: "PROJECTION_COVERED" — no FETCH stage needed
// Use projection to limit returned fields (reduces network transfer)
db.users.find({ active: true }, { name: 1, email: 1, _id: 0 })
MongoDB Profiler
// Profiling levels:
// 0 = off (default)
// 1 = log only slow operations (above slowms threshold)
// 2 = log all operations
// Enable profiling for operations slower than 100ms
db.setProfilingLevel(1, { slowms: 100 })
// Enable profiling for all operations
db.setProfilingLevel(2)
// Check current profiling level
db.getProfilingStatus()
// Query the system.profile collection (a capped collection in each profiled database) for recent operations
db.system.profile.find().sort({ ts: -1 }).limit(10).pretty()
// Find the slowest queries
db.system.profile.find(
{ millis: { $gt: 100 } },
{ op: 1, ns: 1, millis: 1, command: 1, ts: 1 } // "command" in MongoDB 3.6+; older versions used "query"
).sort({ millis: -1 }).limit(5)
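Once slow entries have been pulled out of system.profile, it can help to total them per namespace to see which collection is costing the most. A minimal sketch; the entries below are hand-made samples with the real `ns` and `millis` field names, and `slowestByNamespace` is a hypothetical helper, not a MongoDB API.

```javascript
// Sum slow-query time per namespace and sort worst-first.
function slowestByNamespace(entries) {
  const totals = {};
  for (const e of entries) totals[e.ns] = (totals[e.ns] || 0) + e.millis;
  return Object.entries(totals).sort((a, b) => b[1] - a[1]);
}

// Hand-made sample entries, shaped like system.profile documents
const entries = [
  { ns: "shop.orders",   millis: 180 },
  { ns: "shop.products", millis: 40  },
  { ns: "shop.orders",   millis: 120 },
];
console.log(slowestByNamespace(entries)); // → [["shop.orders", 300], ["shop.products", 40]]
```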
// Disable profiling
db.setProfilingLevel(0)
Bulk Operations and Connection Pooling
// bulkWrite() — batch multiple operations in a single round trip
db.products.bulkWrite([
{ insertOne: { document: { sku: "NEW-001", name: "Widget", price: 9.99, stock: 100 } } },
{ updateOne: {
filter: { sku: "LAPTOP-001" },
update: { $inc: { stock: -1 }, $set: { updatedAt: new Date() } }
}},
{ updateMany: {
filter: { price: { $lt: 5 } },
update: { $set: { clearance: true } }
}},
{ deleteOne: { filter: { sku: "OLD-999" } } }
], { ordered: false }) // ordered: false = continue on error, faster
// Connection pool settings (in connection string or MongoClient options)
const client = new MongoClient(uri, {
maxPoolSize: 50, // max connections in pool (default: 100)
minPoolSize: 5, // min connections to maintain
maxIdleTimeMS: 30000, // close idle connections after 30s
waitQueueTimeoutMS: 5000 // timeout waiting for a connection
})
Performance Best Practices
// Check WiredTiger cache statistics
db.serverStatus().wiredTiger.cache
// Key cache metrics:
// "bytes currently in the cache" — current cache usage
// "maximum bytes configured" — cache size limit
// "pages read into cache" — disk reads (want this low)
// "unmodified pages evicted" — pages evicted from cache
// Configure WiredTiger cache size in mongod.conf
// storage:
// wiredTiger:
// engineConfig:
// cacheSizeGB: 4 // default is the larger of 50% of (RAM - 1 GB) or 256 MB
// Performance checklist:
// 1. Create indexes for all frequently queried fields
// 2. Use compound indexes that match your query patterns
// 3. Use projection to return only needed fields
// 4. Avoid $where (uses JavaScript, very slow)
// 5. Use $regex with anchors (^pattern) to leverage indexes
// 6. Avoid negation operators ($ne, $nin, $not) on large collections
// 7. Use bulkWrite() for batch operations
// 8. Keep working set (hot data) in RAM
// 9. Use SSDs for storage
// 10. Monitor with db.currentOp() for long-running operations
// Kill a long-running operation
db.currentOp() // note the "opid" field of the offending operation
db.killOp(opId) // pass that opid value
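Filtering currentOp() output by hand gets tedious on a busy server. This sketch picks out the long-running operations; `inprog`, `opid`, `ns`, and `secs_running` are the real field names, but the sample document is hand-made and `longRunning` is a hypothetical helper.

```javascript
// Return operations that have been running at least thresholdSecs seconds.
function longRunning(currentOp, thresholdSecs) {
  return (currentOp.inprog || [])
    .filter((op) => (op.secs_running || 0) >= thresholdSecs)
    .map((op) => ({ opid: op.opid, ns: op.ns, secs: op.secs_running }));
}

// Hand-made sample shaped like db.currentOp() output
const sampleOps = {
  inprog: [
    { opid: 12345, ns: "shop.products", secs_running: 42 },
    { opid: 12346, ns: "shop.users",    secs_running: 1  },
  ],
};
console.log(longRunning(sampleOps, 30)); // → [{ opid: 12345, ns: "shop.products", secs: 42 }]
```

Each returned opid can then be handed to db.killOp().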