IndexedDB: Structured Storage at Scale
The Browser's Built-In Database
IndexedDB is the most powerful storage API the browser gives you. It is a transactional, indexed, object-oriented database that stores structured clones of JavaScript objects — not just strings. You can query by index, iterate with cursors, store blobs and ArrayBuffers, and run all of it inside a Web Worker.
And yet, most developers avoid it because the API looks like it was designed in 2011. Because it was. The callback-based, event-driven API feels ancient compared to modern async/await code. But here is the truth: once you understand the mental model, IndexedDB is straightforward. And for client-side data at scale, nothing else comes close (except SQLite WASM, which often uses IndexedDB or OPFS as its backing store anyway).
IndexedDB is a key-value store with superpowers. Think of it as a filing cabinet. Each database is a cabinet. Each object store is a drawer in that cabinet. Each drawer has a key path (how items are labeled — like filing by name or ID). You can attach indexes to a drawer — sticky tabs on the side that let you find items by different properties without pulling everything out. Transactions are the rule that you must open a drawer before reading or writing — and while a transaction is open, the data is consistent (no one else can modify what you are reading). Cursors are your finger sliding through the files one by one.
Opening a Database
Every IndexedDB operation starts with opening a database. You specify a name and a version number. If the database does not exist, or the version is higher than the current one, the upgradeneeded event fires — this is the only place where you can create or modify object stores and indexes.
function openDB(name, version, onUpgrade) {
return new Promise((resolve, reject) => {
const request = indexedDB.open(name, version);
request.onupgradeneeded = (event) => onUpgrade(event.target.result, event.oldVersion, event.target.transaction);
request.onsuccess = (event) => resolve(event.target.result);
request.onerror = (event) => reject(event.target.error);
});
}
const db = await openDB("shop", 3, (db, oldVersion, tx) => {
if (oldVersion < 1) {
const products = db.createObjectStore("products", { keyPath: "id" });
products.createIndex("category", "category");
products.createIndex("price", "price");
}
if (oldVersion < 2) {
const orders = db.createObjectStore("orders", { keyPath: "orderId" });
orders.createIndex("userId", "userId");
orders.createIndex("date", "createdAt");
}
if (oldVersion < 3) {
const products = tx.objectStore("products");
products.createIndex("name", "name", { unique: false });
}
});
The upgradeneeded handler is the only place you can create or delete object stores and indexes. If you try to call db.createObjectStore() outside of an upgrade transaction, it throws InvalidStateError. This is why the version number matters — bumping it triggers the upgrade, and you use oldVersion to run only the migrations that have not run yet. Get this wrong and you corrupt your schema.
Transactions: The Core Concept
Every read and write in IndexedDB happens inside a transaction. Transactions guarantee consistency — you will never read partially written data. They also auto-commit: once every request in the transaction has completed and control returns to the event loop without new requests having been queued, the transaction commits on its own.
function addProduct(db, product) {
return new Promise((resolve, reject) => {
const tx = db.transaction("products", "readwrite");
tx.objectStore("products").put(product);
tx.oncomplete = () => resolve();
tx.onerror = (event) => reject(event.target.error);
});
}
function getProduct(db, id) {
return new Promise((resolve, reject) => {
const tx = db.transaction("products", "readonly");
const request = tx.objectStore("products").get(id);
request.onsuccess = () => resolve(request.result);
request.onerror = (event) => reject(event.target.error);
});
}
Transaction Modes
| Mode | Can Read | Can Write | Concurrent |
|---|---|---|---|
| "readonly" | Yes | No | Multiple readonly transactions can run simultaneously |
| "readwrite" | Yes | Yes | Only one readwrite per object store at a time |
This is critical for performance: if you only need to read data, always use "readonly". Multiple readonly transactions can run in parallel, but a "readwrite" transaction locks the object store.
Transaction Lifetime
Transactions auto-commit when their event loop task completes and there are no outstanding requests. This means you cannot keep a transaction alive across an await for a network request or a setTimeout.
const tx = db.transaction("products", "readwrite");
const store = tx.objectStore("products");
store.put({ id: 1, name: "Widget" }); // OK — request is in the transaction
await fetch("/api/products"); // Transaction commits during this await!
store.put({ id: 2, name: "Gadget" }); // TransactionInactiveError
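The safe pattern is to finish the async work first, then open a short-lived transaction for the writes. A sketch (the function name and the /api/products endpoint are illustrative, not from the original):

```javascript
// Sketch: do the async work with no transaction open, then open the
// transaction only once all data is in hand. Endpoint is illustrative.
async function syncProducts(db) {
  const response = await fetch("/api/products");
  const products = await response.json();

  // The transaction lives only for this synchronous burst of puts.
  return new Promise((resolve, reject) => {
    const tx = db.transaction("products", "readwrite");
    const store = tx.objectStore("products");
    for (const product of products) store.put(product);
    tx.oncomplete = () => resolve();
    tx.onerror = (event) => reject(event.target.error);
  });
}
```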
Object Stores and Keys
Object stores are like tables in a relational database, except they store JavaScript objects (anything that can be structured-cloned). Each object has a key — either an inline key (a property on the object itself) or an out-of-line key (generated or provided separately).
// Inline key — the object's "id" property IS the key
db.createObjectStore("products", { keyPath: "id" });
store.put({ id: 1, name: "Widget" }); // key is 1
// Auto-increment with inline key
db.createObjectStore("logs", { keyPath: "logId", autoIncrement: true });
store.put({ message: "User logged in" }); // logId auto-assigned
// Out-of-line key — you provide the key separately
db.createObjectStore("blobs");
store.put(binaryData, "avatar-123"); // key is "avatar-123"
Supported Key Types
IndexedDB keys can be: numbers, strings, Dates, ArrayBuffers, or arrays of these types. Arrays as keys enable compound keys for complex sorting.
// Compound key using key path array
db.createObjectStore("events", { keyPath: ["userId", "timestamp"] });
store.put({ userId: "u1", timestamp: Date.now(), type: "click" });
// Query: all events for user "u1"
const range = IDBKeyRange.bound(["u1"], ["u1", []]);
// An empty array as the second element of the upper bound works because an
// array sorts after any number, string, or Date in IndexedDB key order — so
// the range covers ["u1", <any timestamp>] but excludes users like "u1x".
store.openCursor(range); // iterates every event whose userId is exactly "u1"
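This works because of IndexedDB's key ordering rules: array keys compare element by element, a shorter array with an equal prefix sorts first, and an array key sorts after every number, string, or Date. That is why bound(["u1"], ["u1", []]) matches exactly one userId. A simplified, pure-JavaScript model of the comparison (a sketch covering only numbers, strings, and arrays; the full spec also orders Dates and binary keys) makes this concrete:

```javascript
// Sketch: simplified IndexedDB key comparison for numbers, strings, arrays.
// Cross-type order (subset of the spec): number < string < array.
function typeRank(key) {
  if (Array.isArray(key)) return 2;
  if (typeof key === "string") return 1;
  return 0; // number
}

function compareKeys(a, b) {
  const ra = typeRank(a), rb = typeRank(b);
  if (ra !== rb) return ra - rb;
  if (ra === 2) {
    // Arrays compare element by element; an equal shorter prefix sorts first.
    const len = Math.min(a.length, b.length);
    for (let i = 0; i < len; i++) {
      const c = compareKeys(a[i], b[i]);
      if (c !== 0) return c;
    }
    return a.length - b.length;
  }
  return a < b ? -1 : a > b ? 1 : 0;
}

console.log(compareKeys(["u1"], ["u1", 0]) < 0);        // true — prefix sorts first
console.log(compareKeys(["u1", 9999], ["u1", []]) < 0); // true — array sorts after numbers
console.log(compareKeys(["u1x", 0], ["u1", []]) > 0);   // true — other users fall outside the bound
```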
Indexes: Fast Lookups Without Full Scans
Without an index, finding all products in the "electronics" category means scanning every single record. Indexes give you O(log n) lookups on any indexed property.
// During upgradeneeded:
const store = db.createObjectStore("products", { keyPath: "id" });
store.createIndex("by_category", "category"); // single field
store.createIndex("by_price", "price");
store.createIndex("by_category_price", ["category", "price"]); // compound index
// Querying by index:
const tx = db.transaction("products", "readonly");
const index = tx.objectStore("products").index("by_category");
const request = index.getAll("electronics"); // all electronics products
Compound Indexes
Compound indexes are where IndexedDB gets powerful. A compound index on ["category", "price"] lets you efficiently query "all electronics products under $50" with a single key range:
const index = tx.objectStore("products").index("by_category_price");
const range = IDBKeyRange.bound(
["electronics", 0], // lower bound: category "electronics", price 0
["electronics", 50] // upper bound: category "electronics", price 50
);
const results = await promisifyRequest(index.getAll(range)); // promisifyRequest is defined in the wrapper section below
Cursors: Iterating at Scale
When you need to process records one by one — or in batches — cursors are the tool. They are more memory-efficient than getAll() for large datasets because they do not load everything into memory at once.
function iterateWithCursor(db, storeName, indexName, range, callback) {
return new Promise((resolve, reject) => {
const tx = db.transaction(storeName, "readonly");
const source = indexName
? tx.objectStore(storeName).index(indexName)
: tx.objectStore(storeName);
const request = source.openCursor(range);
request.onsuccess = (event) => {
const cursor = event.target.result;
if (cursor) {
callback(cursor.value);
cursor.continue(); // advance to next record
} else {
resolve(); // no more records
}
};
request.onerror = (event) => reject(event.target.error);
});
}
// Process all orders from the last 7 days
const weekAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
const range = IDBKeyRange.lowerBound(weekAgo);
await iterateWithCursor(db, "orders", "date", range, (order) => {
processOrder(order);
});
Key Ranges
Key ranges define the subset of records a cursor iterates over:
IDBKeyRange.only("electronics") // exactly "electronics"
IDBKeyRange.lowerBound(10) // key >= 10
IDBKeyRange.lowerBound(10, true) // key > 10 (open bound)
IDBKeyRange.upperBound(100) // key <= 100
IDBKeyRange.bound(10, 100) // 10 <= key <= 100
IDBKeyRange.bound(10, 100, true, false) // 10 < key <= 100
Cursor Direction
// Forward (default): ascending order
store.openCursor(range, "next");
// Backward: descending order
store.openCursor(range, "prev");
// Skip duplicates (index cursors only)
index.openCursor(range, "nextunique");
Batch Operations for Performance
Individual put() calls inside separate transactions are slow. Each transaction has overhead — the browser must flush to disk. Batching writes into a single transaction is dramatically faster.
// Slow: one transaction per item (N transactions)
for (const product of products) {
const tx = db.transaction("products", "readwrite");
tx.objectStore("products").put(product);
await new Promise(r => { tx.oncomplete = r; });
}
// Fast: one transaction for all items (1 transaction)
function batchPut(db, storeName, items) {
return new Promise((resolve, reject) => {
const tx = db.transaction(storeName, "readwrite");
const store = tx.objectStore(storeName);
for (const item of items) {
store.put(item);
}
tx.oncomplete = () => resolve();
tx.onerror = (event) => reject(event.target.error);
});
}
await batchPut(db, "products", products); // 10-100x faster for large batches
Building a Promise-Based Wrapper
The raw IndexedDB API is verbose. Here is a minimal wrapper that makes it pleasant to use:
function promisifyRequest(request) {
return new Promise((resolve, reject) => {
request.onsuccess = () => resolve(request.result);
request.onerror = () => reject(request.error);
});
}
function promisifyTransaction(tx) {
return new Promise((resolve, reject) => {
tx.oncomplete = () => resolve();
tx.onerror = () => reject(tx.error);
tx.onabort = () => reject(tx.error || new DOMException("Transaction aborted", "AbortError"));
});
}
class SimpleDB {
constructor(db) {
this.db = db;
}
async get(storeName, key) {
const tx = this.db.transaction(storeName, "readonly");
return promisifyRequest(tx.objectStore(storeName).get(key));
}
async getAll(storeName, query, count) {
const tx = this.db.transaction(storeName, "readonly");
return promisifyRequest(tx.objectStore(storeName).getAll(query, count));
}
async put(storeName, value) {
const tx = this.db.transaction(storeName, "readwrite");
tx.objectStore(storeName).put(value);
return promisifyTransaction(tx);
}
async delete(storeName, key) {
const tx = this.db.transaction(storeName, "readwrite");
tx.objectStore(storeName).delete(key);
return promisifyTransaction(tx);
}
async batchPut(storeName, items) {
const tx = this.db.transaction(storeName, "readwrite");
const store = tx.objectStore(storeName);
for (const item of items) store.put(item);
return promisifyTransaction(tx);
}
async queryIndex(storeName, indexName, range) {
const tx = this.db.transaction(storeName, "readonly");
const index = tx.objectStore(storeName).index(indexName);
return promisifyRequest(index.getAll(range));
}
}
idb by Jake Archibald wraps IndexedDB with a clean Promise-based API in about 1.2KB. Dexie.js adds query building, live queries, and a more expressive API. For most production apps, using a lightweight wrapper is better than writing raw IndexedDB — but understanding the underlying API matters for debugging and performance tuning.
Production Scenario: Full-Text Search in IndexedDB
Building search over thousands of records in IndexedDB requires a different approach than SQL LIKE queries. IndexedDB has no built-in full-text search — you build it yourself with inverted indexes.
function tokenize(text) {
return text.toLowerCase().split(/\W+/).filter(w => w.length > 2);
}
async function indexDocument(db, docId, text) {
const tokens = tokenize(text);
const tx = db.transaction("searchIndex", "readwrite");
const store = tx.objectStore("searchIndex");
for (const token of new Set(tokens)) {
const existing = await promisifyRequest(store.get(token));
const docIds = existing ? existing.docIds : [];
if (!docIds.includes(docId)) {
docIds.push(docId);
store.put({ token, docIds });
}
}
return promisifyTransaction(tx);
}
async function search(db, query) {
const tokens = tokenize(query);
const tx = db.transaction("searchIndex", "readonly");
const store = tx.objectStore("searchIndex");
const results = await Promise.all(
tokens.map(t => promisifyRequest(store.get(t)))
);
const docIdSets = results.filter(Boolean).map(r => new Set(r.docIds));
if (docIdSets.length === 0) return [];
return [...docIdSets.reduce((a, b) => new Set([...a].filter(x => b.has(x))))]; // intersect: only docs containing every token
}
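The inverted-index technique itself is independent of IndexedDB. A minimal in-memory version (a sketch using a Map in place of the searchIndex object store) shows the core mechanics of tokenizing, indexing, and intersecting:

```javascript
// Sketch: the same inverted-index idea with a Map standing in for the
// "searchIndex" object store: token -> Set of document IDs.
function tokenize(text) {
  return text.toLowerCase().split(/\W+/).filter(w => w.length > 2);
}

function indexDocument(index, docId, text) {
  for (const token of new Set(tokenize(text))) {
    if (!index.has(token)) index.set(token, new Set());
    index.get(token).add(docId);
  }
}

function search(index, query) {
  const sets = tokenize(query).map(t => index.get(t)).filter(Boolean);
  if (sets.length === 0) return [];
  // Intersect: keep only doc IDs present in every token's set.
  return [...sets.reduce((a, b) => new Set([...a].filter(x => b.has(x))))];
}

const index = new Map();
indexDocument(index, "doc1", "Wireless noise-cancelling headphones");
indexDocument(index, "doc2", "Wireless charging pad");
indexDocument(index, "doc3", "Wired headphones");
console.log(search(index, "wireless headphones")); // ["doc1"]
```

Swapping the Map for an object store (as in the functions above) changes the storage, not the algorithm.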
For production apps with large datasets, consider moving full-text search to SQLite WASM (which has FTS5) or using a dedicated library like MiniSearch.
| What developers do | What they should do |
|---|---|
| Opening a new database connection for every operation. Opening a database is expensive — it may trigger version checks, upgrade handlers, and disk I/O. Keep one IDBDatabase reference and reuse it. Only reopen if the connection is closed (e.g., due to a versionchange event from another tab). | Open the database once on app start and reuse the connection throughout the app's lifetime |
| Using readwrite transactions for read-only operations. readwrite transactions lock the object store — only one can run at a time. readonly transactions can run concurrently. Using readwrite for reads serializes all your data access unnecessarily, creating a bottleneck. | Always use readonly transactions when you only need to read data |
| Calling getAll() on a store with millions of records. getAll() loads every matching record into memory at once. With a million records averaging 1KB each, that is 1GB of RAM. Cursors process records one at a time, and getAll(range, 100) limits to 100 results. | Use cursors with key ranges to paginate, or limit results with the count parameter to getAll() |
| Trying to keep a transaction alive across an await for a network call. Transactions auto-commit when the event loop task completes. An await for a fetch or setTimeout returns control to the event loop, committing the transaction. Any subsequent operations on that transaction throw TransactionInactiveError. | Gather all your data in one transaction, close it, do the async work, then open a new transaction for writes |
| Not handling the versionchange event on open connections. When another tab opens the same database with a higher version, your connection receives a versionchange event. If you do not close the connection, the other tab's upgradeneeded is blocked indefinitely, potentially freezing the app. | Listen for versionchange on your database connection and close it gracefully |
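Handling versionchange takes only a few lines. A sketch (the function name and callback are illustrative) that closes the connection so another tab's upgrade can proceed:

```javascript
// Sketch: release the connection when another tab requests a version
// upgrade. Without this, the other tab's open() blocks indefinitely.
function watchForUpgrades(db, onClosed) {
  db.onversionchange = () => {
    db.close();                // let the other tab's upgrade proceed
    if (onClosed) onClosed();  // e.g. prompt the user to reload the page
  };
}
```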
Challenge: Pagination with Cursors
Try to solve it before peeking at the answer.
// Implement a paginated query for an IndexedDB object store.
// Requirements:
// - Return 'pageSize' items starting after 'lastKey' (for cursor-based pagination)
// - Sort by an index (not by primary key)
// - Return both the items and a 'nextKey' for the next page
// - Handle the case where there are no more results
//
// Function signature:
// async function getPage(db, storeName, indexName, pageSize, lastKey)
// Returns: { items: Array, nextKey: IDBValidKey | null }
Key Rules
1. Schema changes (createObjectStore, createIndex) can only happen inside the upgradeneeded handler. Always use version-guarded migrations with oldVersion checks.
2. Transactions auto-commit when their event loop task ends. Never await a network call or setTimeout inside a transaction — batch your store operations first.
3. Use readonly transactions for reads — they run concurrently. readwrite transactions lock the object store and serialize access.
4. Batch writes into a single transaction. 10,000 puts in one transaction is 10-100x faster than 10,000 individual transactions.
5. Create indexes for every field you query frequently. Without an index, every query is a full scan — O(n) instead of O(log n).
6. Use cursors with key ranges for large datasets instead of getAll(). Cursors stream records without loading everything into memory.
7. Handle the versionchange event on your database connection — close it gracefully so other tabs can upgrade the schema.