IndexedDB and Client Storage
The Storage Landscape Is a Mess
The browser has too many ways to store data: localStorage, sessionStorage, cookies, Cache API, IndexedDB, and now the Storage Buckets API. Each has different size limits, different APIs, different persistence guarantees. Most developers default to localStorage for everything because the API is simple. Then they hit the 5MB wall, or block the main thread with a synchronous read, or lose data because the browser evicted it.
IndexedDB is the answer for serious client-side storage. It's a full transactional database in the browser — async, indexed, queryable, and capable of storing hundreds of megabytes. The API is ugly (callback-based from the 2012 era), but with the idb wrapper library, it becomes pleasant.
Think of IndexedDB as a filing cabinet in your office. Each drawer is an object store (like a database table). Inside each drawer, files are organized by a key (like a primary key). You can add labels to the sides of files (indexes) so you can find them without opening every file. To modify anything, you open a transaction (like checking out files) — if something goes wrong mid-transaction, everything rolls back and the cabinet is unchanged.
IndexedDB Fundamentals
Opening a Database
```javascript
const request = indexedDB.open('myApp', 1);

request.onupgradeneeded = (event) => {
  const db = event.target.result;
  const store = db.createObjectStore('users', { keyPath: 'id' });
  store.createIndex('email', 'email', { unique: true });
  store.createIndex('role', 'role', { unique: false });
};

request.onsuccess = (event) => {
  const db = event.target.result;
  // db is ready to use
};

request.onerror = (event) => {
  console.error('Failed to open database:', event.target.error);
};
```
The version number (1) triggers onupgradeneeded when the database is first created or when you increment the version. This is the only place you can modify the schema (create/delete stores and indexes).
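If you do need the raw API, its request objects can be wrapped in Promises by hand. A minimal sketch of the idea (this is not the idb library, just the pattern it is built on; promisifyRequest is a hypothetical name):

```javascript
// Wrap any IDBRequest-style object (onsuccess/onerror callbacks)
// in a Promise that resolves with the request's result.
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Usage: const db = await promisifyRequest(indexedDB.open('myApp', 1));
```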
With the idb Library
The raw IndexedDB API is callback-based and verbose. The idb library (by Jake Archibald) wraps it with Promises:
```javascript
import { openDB } from 'idb';

const db = await openDB('myApp', 1, {
  upgrade(db) {
    const store = db.createObjectStore('users', { keyPath: 'id' });
    store.createIndex('email', 'email', { unique: true });
    store.createIndex('role', 'role');
  },
});
```
Every example from here on uses idb. There's no good reason to use the raw API in 2026.
CRUD Operations
Create / Update
```javascript
await db.put('users', {
  id: 'user-1',
  name: 'Alice',
  email: 'alice@example.com',
  role: 'admin',
});
```
put inserts or updates (upsert). add inserts only — it throws if the key already exists.
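The difference can be modeled with a plain Map. This is a toy sketch of the semantics only — putRecord and addRecord are hypothetical names, not the real API:

```javascript
// Toy model of put (upsert) vs add (insert-only), using a Map as the store.
function putRecord(store, record, keyPath = 'id') {
  store.set(record[keyPath], record); // overwrites silently, like put()
  return record[keyPath];
}

function addRecord(store, record, keyPath = 'id') {
  const key = record[keyPath];
  if (store.has(key)) {
    // The real API rejects with a DOMException named 'ConstraintError'
    throw new Error('ConstraintError');
  }
  store.set(key, record);
  return key;
}
```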
Read
```javascript
const user = await db.get('users', 'user-1');
const allUsers = await db.getAll('users');
const admins = await db.getAllFromIndex('users', 'role', 'admin');
```
Delete
```javascript
await db.delete('users', 'user-1');
await db.clear('users');
```
Count
```javascript
const total = await db.count('users');
const adminCount = await db.countFromIndex('users', 'role', 'admin');
```
Transactions
Every IndexedDB operation runs inside a transaction. With idb, simple operations create implicit transactions. For multiple operations that must succeed or fail together, use explicit transactions:
```javascript
const tx = db.transaction(['users', 'logs'], 'readwrite');
await tx.objectStore('users').put({ id: 'user-1', name: 'Bob' });
await tx.objectStore('logs').put({ id: Date.now(), action: 'renamed user-1' });
await tx.done;
```
If any operation fails, the entire transaction rolls back. tx.done is a Promise that resolves when the transaction commits.
Transaction Types
| Type | Can Read | Can Write | Can Modify Schema |
|---|---|---|---|
| readonly | Yes | No | No |
| readwrite | Yes | Yes | No |
| versionchange | Yes | Yes | Yes |
Use readonly when you only need to read — it allows multiple concurrent transactions and is faster. readwrite transactions are exclusive per object store.
Transactions auto-commit when there are no pending requests. If you do something async between requests (like a fetch), the transaction may commit before you're done with it. Keep transactions short and synchronous. Do your async work before opening the transaction, or open a new transaction after.
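A sketch of the safe ordering, with a hypothetical refreshUsers helper (loadUsers stands in for any async data source, e.g. a fetch wrapper):

```javascript
// Do the async work FIRST, then open the transaction and issue only
// synchronous requests inside it. Opening the transaction before the
// await would risk it auto-committing while the fetch is in flight.
async function refreshUsers(db, loadUsers) {
  const users = await loadUsers();           // async work happens first
  const tx = db.transaction('users', 'readwrite');
  const store = tx.objectStore('users');
  for (const user of users) store.put(user); // no awaits between requests
  await tx.done;                             // resolves when the tx commits
  return users.length;
}
```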
Cursors: Iterating Large Datasets
For large datasets where getAll() would use too much memory, use cursors:
```javascript
const tx = db.transaction('logs', 'readonly');
const store = tx.objectStore('logs');

let cursor = await store.openCursor();
while (cursor) {
  processLog(cursor.value);
  cursor = await cursor.continue();
}
```
Cursors iterate one record at a time, keeping memory usage constant. You can also use key cursors (openKeyCursor) to iterate keys without loading values.
Range Queries
```javascript
const range = IDBKeyRange.bound('2025-01-01', '2025-12-31');

// All logs from 2025, via the 'date' index:
const logs = await db.getAllFromIndex('logs', 'date', range);

// Or iterate newest-first with a cursor:
const tx = db.transaction('logs', 'readonly');
let cursor = await tx.store.index('date').openCursor(range, 'prev');
```
IDBKeyRange supports:
- IDBKeyRange.only(value) — exact match
- IDBKeyRange.lowerBound(value, open?) — greater than (or equal)
- IDBKeyRange.upperBound(value, open?) — less than (or equal)
- IDBKeyRange.bound(lower, upper, lowerOpen?, upperOpen?) — range
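All four factories reduce to a lower/upper bound plus open flags. A toy membership check for simple string or number keys — a model of the semantics, not the real IDBKeyRange class:

```javascript
// Toy model of IDBKeyRange membership. An "open" bound excludes the
// boundary value itself; a missing bound is unbounded on that side.
function inRange(key, { lower, upper, lowerOpen = false, upperOpen = false }) {
  if (lower !== undefined) {
    if (lowerOpen ? key <= lower : key < lower) return false;
  }
  if (upper !== undefined) {
    if (upperOpen ? key >= upper : key > upper) return false;
  }
  return true;
}

// IDBKeyRange.only(v)            ~ { lower: v, upper: v }
// IDBKeyRange.lowerBound(v, o)   ~ { lower: v, lowerOpen: o }
// IDBKeyRange.bound(a, b, x, y)  ~ { lower: a, upper: b, lowerOpen: x, upperOpen: y }
```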
Versioning and Migrations
When your schema needs to change, increment the version number:
```javascript
const db = await openDB('myApp', 3, {
  upgrade(db, oldVersion, newVersion, tx) {
    if (oldVersion < 1) {
      const store = db.createObjectStore('users', { keyPath: 'id' });
      store.createIndex('email', 'email', { unique: true });
    }
    if (oldVersion < 2) {
      const store = tx.objectStore('users');
      store.createIndex('role', 'role');
    }
    if (oldVersion < 3) {
      db.createObjectStore('settings', { keyPath: 'key' });
    }
  },
});
```
Each migration step checks oldVersion and applies incremental changes. A user upgrading from v1 to v3 runs the v2 and v3 migrations. A new user (v0 to v3) runs all three.
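The version checks above can also be driven by data. A hypothetical migration table plus a helper that computes which steps a given upgrade must run (migrations and pendingVersions are illustrative names, not part of idb):

```javascript
// Hypothetical migration table: target version -> migration function.
const migrations = {
  1: (db) => { /* create 'users' store + email index */ },
  2: (db, tx) => { /* add 'role' index to 'users' */ },
  3: (db) => { /* create 'settings' store */ },
};

// Returns the ordered list of migration versions a given upgrade must apply.
function pendingVersions(oldVersion, newVersion, table) {
  return Object.keys(table)
    .map(Number)
    .filter((v) => v > oldVersion && v <= newVersion)
    .sort((a, b) => a - b);
}

// In upgrade(db, oldVersion, newVersion, tx), you would then loop:
//   for (const v of pendingVersions(oldVersion, newVersion, migrations)) {
//     migrations[v](db, tx);
//   }
```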
If other tabs have the database open when you try to upgrade, the open request is blocked. Listen for the blocked event and prompt the user to close other tabs. Or, in those other tabs, listen for versionchange and close the connection: db.addEventListener('versionchange', () => db.close()).
Performance Patterns
Batch Writes
Writing 1000 records one at a time means 1000 transactions (each with disk sync overhead). Batch them:
```javascript
const tx = db.transaction('items', 'readwrite');
const store = tx.objectStore('items');

for (const item of items) {
  store.put(item);
}
await tx.done;
```
One transaction, one disk sync. For 10,000 records, this is 10-100x faster than individual puts.
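For very large imports, one giant readwrite transaction also holds an exclusive lock on the store for its whole duration. A hypothetical chunked writer trades a few extra commits for shorter locks (chunk and importInChunks are illustrative names):

```javascript
// Split items into fixed-size chunks.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Each chunk gets its own transaction: one commit (one disk sync) per
// chunk, and readers can interleave between chunks.
async function importInChunks(db, storeName, items, size = 1000) {
  for (const batch of chunk(items, size)) {
    const tx = db.transaction(storeName, 'readwrite');
    const store = tx.objectStore(storeName);
    for (const item of batch) store.put(item);
    await tx.done;
  }
}
```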
Index Design
Indexes speed up reads but slow down writes (every write updates all indexes). Design them like database indexes:
- Index fields you query frequently
- Don't index fields you never filter or sort by
- Compound keys work for multi-field queries: use arrays as keys
```javascript
store.createIndex('role-name', ['role', 'name']);

// All admins, ordered by name. [] sorts after every string in IndexedDB
// key order, so ['admin', []] works as an inclusive upper bound:
const admins = await db.getAllFromIndex('users', 'role-name',
  IDBKeyRange.bound(['admin'], ['admin', []])
);
```
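Why does ['admin', []] work as an upper bound? IndexedDB orders keys by type — numbers sort before strings, and strings before arrays — so an empty array sorts after every possible name string. A simplified model of that comparison, covering only numbers, strings, and arrays (the full spec also orders dates and binary keys):

```javascript
// Simplified IndexedDB key ordering: number < string < array.
function keyType(k) {
  if (typeof k === 'number') return 0;
  if (typeof k === 'string') return 1;
  if (Array.isArray(k)) return 2;
  throw new TypeError('key type not covered by this sketch');
}

function compareKeys(a, b) {
  const ta = keyType(a), tb = keyType(b);
  if (ta !== tb) return ta - tb;        // different types: order by type
  if (ta === 2) {                       // arrays: element-wise, then length
    for (let i = 0; i < Math.min(a.length, b.length); i++) {
      const c = compareKeys(a[i], b[i]);
      if (c) return c;
    }
    return a.length - b.length;
  }
  return a < b ? -1 : a > b ? 1 : 0;    // numbers/strings: natural order
}
```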
Storage Quota
Browsers limit how much data each origin can store. Check your quota:
```javascript
const estimate = await navigator.storage.estimate();
console.log(`Used: ${(estimate.usage / 1e6).toFixed(1)} MB`);
console.log(`Quota: ${(estimate.quota / 1e6).toFixed(1)} MB`);
console.log(`Available: ${((estimate.quota - estimate.usage) / 1e6).toFixed(1)} MB`);
```
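A hypothetical guard built on the estimate, e.g. before caching a large file. Note that quota figures are estimates, not guarantees, hence the safety margin:

```javascript
// Returns true if the origin likely has room for `bytes` more data,
// keeping a safety margin because usage/quota are only estimates.
function hasRoomFor(estimate, bytes, margin = 0.1) {
  const available = estimate.quota - estimate.usage;
  return bytes <= available * (1 - margin);
}

// Usage: if (hasRoomFor(await navigator.storage.estimate(), blob.size)) { ... }
```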
Typical quotas:
- Chrome: up to 80% of total disk space per origin
- Firefox: up to 50% of free disk space per group of origins
- Safari: ~1GB, with prompts for more
Persisted Storage
By default, browser storage is best-effort — the browser can evict it under storage pressure (low disk space). Request persistent storage to prevent eviction:
```javascript
const persisted = await navigator.storage.persist();
if (persisted) {
  console.log('Storage will not be evicted');
}
```
Chrome grants persistence automatically for installed PWAs and sites with high engagement. Firefox and Safari prompt the user.
Storage Buckets API
The Storage Buckets API (Chrome 122+) lets you create independent storage buckets with different persistence and quota policies:
```javascript
const bucket = await navigator.storageBuckets.open('important-data', {
  persisted: true,
});

// bucket.indexedDB is a standard IDBFactory, so open() returns a
// request (callback-based), not a Promise:
const request = bucket.indexedDB.open('myDB', 1);

// bucket.caches mirrors the global Cache API and is Promise-based:
const cache = await bucket.caches.open('myCache');
```
Each bucket can be independently persisted or evicted. This lets you protect critical data while letting the browser evict less important caches.
Comparing Client Storage Options
| Feature | localStorage | sessionStorage | IndexedDB | Cache API | Cookies |
|---|---|---|---|---|---|
| Async | No (blocks main thread) | No (blocks main thread) | Yes | Yes | No |
| Size limit | ~5 MB | ~5 MB | Hundreds of MB+ | Hundreds of MB+ | ~4 KB per cookie |
| Data types | Strings only | Strings only | Structured data, blobs, files | Request/Response pairs | Strings only |
| Indexed/queryable | No | No | Yes (indexes, cursors, ranges) | By request URL only | No |
| Transactions | No | No | Yes (ACID) | No | No |
| Persistence | Until cleared | Until tab closes | Until cleared/evicted | Until cleared/evicted | Expiry date |
| Best for | Small key-value config | Tab-scoped temp data | App data, offline storage | HTTP response caching | Server-sent state, auth |
When to Use What
- IndexedDB: Structured app data, offline-first storage, large datasets, anything that needs queries or transactions
- Cache API: HTTP responses, precached assets, service worker caching strategies
- localStorage: Tiny preferences (theme, language). Never for app data.
- sessionStorage: Tab-scoped temporary state (form drafts, wizard progress)
- Cookies: Authentication tokens, server-readable state. Not for client-side storage.
| What developers do | What they should do |
|---|---|
| Using localStorage for app data in a production app. localStorage is synchronous (blocks the main thread for every read/write), limited to ~5MB, and stores only strings (requiring JSON.parse/stringify for objects). IndexedDB is async, stores structured data natively, and handles hundreds of megabytes. | Use IndexedDB for any structured data, especially if the dataset can grow |
| Not handling the blocked event during database upgrades. If another tab has the database open, your upgrade request blocks until it closes. Without handling this, your app hangs silently. The other tab should listen for versionchange and call db.close() to allow the upgrade to proceed. | Listen for versionchange in all database connections and close them promptly |
| Assuming client-side storage is permanent. Browser storage is best-effort by default. Under storage pressure, the browser can silently delete all data for your origin. Your app must gracefully handle missing data: re-fetch from the server, show an empty state, or prompt re-sync. | Always handle the case where data has been evicted. Use navigator.storage.persist() for critical data. |
1. IndexedDB is the only viable option for serious client-side storage — async, transactional, indexed, and stores hundreds of MB
2. Use the idb library instead of the raw callback-based API
3. Schema changes (createObjectStore, createIndex) can only happen in onupgradeneeded
4. Batch writes into a single transaction for 10-100x better performance
5. Request navigator.storage.persist() for data you cannot afford to lose to eviction