Code Splitting Strategies
Why Code Splitting Matters
Picture this: a user visits your home page and their browser quietly downloads code for the settings page, the admin dashboard, and the profile editor — pages they may never visit. A single-page React app with 50 routes can easily produce a 2MB bundle, and the user pays for all of it upfront.
Code splitting fixes this. You break the monolithic bundle into smaller chunks loaded on demand — the user downloads only the code for the current route. Navigate to the dashboard? That chunk loads then. The result: faster initial load, faster Time to Interactive, and way less wasted bandwidth.
Think of code splitting like a restaurant menu vs. a buffet. A buffet (monolithic bundle) puts every dish on the table — you carry the cost of preparing everything even if you eat three items. An à la carte menu (code splitting) prepares each dish when ordered. You pay only for what you consume. The kitchen (browser) isn't overwhelmed cooking everything at once, and the table (main thread) isn't cluttered with unused dishes.
The Dynamic import() Mechanism
So how does this actually work under the hood? Code splitting is powered by dynamic import() — an ES2020 feature that returns a Promise resolving to the module's exports.
// Static import — bundled into the main chunk
import { HeavyChart } from './components/HeavyChart';
// Dynamic import — creates a separate chunk loaded on demand
const HeavyChart = React.lazy(() => import('./components/HeavyChart'));
When the bundler (Webpack, Turbopack, Rollup) encounters a dynamic import(), it creates a split point: everything reachable from that import is placed into a separate chunk file. At runtime, calling import() triggers an HTTP request for that chunk.
Build output:
main-abc123.js → 85KB (shell, routing, shared components)
chunk-dashboard-def.js → 62KB (dashboard page + dependencies)
chunk-settings-ghi.js → 28KB (settings page)
chunk-chart-jkl.js → 95KB (chart library + wrapper)
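Before wiring this into React, it helps to see the raw contract in plain Node: import() hands back a promise of the module's namespace object. Here we load a built-in module rather than a bundled chunk, but the shape of the API is identical.

```javascript
// In a bundled app this same call would fetch a chunk over HTTP;
// in Node it loads a built-in module. Either way, the contract is the
// same: a promise resolving to the module's exports.
async function loadPathModule() {
  const path = await import('node:path');
  return path.join('pages', 'dashboard');
}

loadPathModule().then((p) => console.log(p)); // 'pages/dashboard' on POSIX
```

A second call to import() with the same specifier resolves from the module cache, so it doesn't trigger another network request.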
Route-Based Splitting
Route-based splitting is the highest-impact, lowest-effort strategy: each route gets its own chunk, which makes it the best place to start.
Next.js (Automatic)
Next.js automatically code-splits at the page/route level. Every file in app/ or pages/ becomes its own chunk. You get route-based splitting for free.
app/
page.tsx → chunk for /
dashboard/
page.tsx → chunk for /dashboard
settings/
page.tsx → chunk for /settings
admin/
page.tsx → chunk for /admin
When the user navigates to /dashboard, Next.js loads only the dashboard chunk. The settings and admin chunks are never downloaded unless the user visits those routes.
Manual Route Splitting (React Router)
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';
// Each route creates a split point
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Settings = lazy(() => import('./pages/Settings'));
const Admin = lazy(() => import('./pages/Admin'));
function App() {
return (
<Suspense fallback={<PageSkeleton />}>
<Routes>
<Route path="/dashboard" element={<Dashboard />} />
<Route path="/settings" element={<Settings />} />
<Route path="/admin" element={<Admin />} />
</Routes>
</Suspense>
);
}
Component-Based Splitting
Route splitting covers the broad strokes, but what about that 95KB chart library sitting inside your dashboard route? Component splitting handles the heavy stuff within a route — modals, charts, rich text editors, code editors — things the user doesn't need on initial render.
import { lazy, Suspense, useState } from 'react';
// Heavy chart library (95KB) loaded only when the tab is active
const AnalyticsChart = lazy(() => import('./AnalyticsChart'));
function DashboardPage() {
const [activeTab, setActiveTab] = useState('overview');
return (
<div>
<TabBar active={activeTab} onChange={setActiveTab} />
{activeTab === 'overview' && <OverviewPanel />}
{activeTab === 'analytics' && (
<Suspense fallback={<ChartSkeleton />}>
<AnalyticsChart />
</Suspense>
)}
</div>
);
}
The chart library's 95KB chunk is only downloaded when the user clicks the analytics tab. If they never click it, they never pay the cost.
When to Component-Split
Here's the thing — not every component should be lazy-loaded. Each split point adds an extra network round trip plus a bit of bundler runtime overhead, so tiny components are actually faster bundled inline.
Split when:
- The component is > 30KB (compressed) on its own
- The component is below the fold or behind an interaction (tab, modal, accordion)
- The component requires a heavy dependency (chart library, code editor, PDF renderer)
Don't split when:
- The component is small (< 10KB)
- The component is visible on initial render (above the fold)
- The split would create a noticeable loading delay for a critical UI element
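As a rough review-time checklist, those heuristics could be codified like this. `shouldLazyLoad` is a hypothetical helper, and the 10KB/30KB thresholds are the rules of thumb above, not hard limits.

```javascript
// Hypothetical helper encoding the split/don't-split heuristics above.
// Thresholds (10KB / 30KB compressed) are rules of thumb, not hard limits.
function shouldLazyLoad({ compressedKB, aboveFold, behindInteraction }) {
  if (aboveFold) return false;         // critical initial UI: keep it inline
  if (compressedKB < 10) return false; // request overhead outweighs the savings
  return compressedKB > 30 || behindInteraction;
}

shouldLazyLoad({ compressedKB: 95, aboveFold: false, behindInteraction: true }); // true
shouldLazyLoad({ compressedKB: 5, aboveFold: false, behindInteraction: true });  // false
```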
Library Splitting
You'd be surprised how large some npm packages are. Loading them on every page when only one route needs them is pure waste.
// Static — these named imports land in the main chunk,
// shipped to every page even if only one route uses them
import { format, parseISO, differenceInDays } from 'date-fns';
// Better — dynamic import when needed
async function formatDate(dateString: string) {
const { format, parseISO } = await import('date-fns');
return format(parseISO(dateString), 'MMM d, yyyy');
}
// Best for components — lazy wrapper
const DatePicker = lazy(() => import('./DatePicker'));
// DatePicker.tsx internally imports date-fns
// The entire date-fns dependency goes into DatePicker's chunk
Check bundlephobia.com or an editor tool like Import Cost before adding any dependency. Libraries that seem small often aren't: moment and the full lodash build each weigh in around 70KB minified, and pulling in all of date-fns is in the same ballpark. Even "lightweight" libraries like uuid ship a few KB for something achievable with crypto.randomUUID() in 0KB.
Prefetching Strategies
There's a catch with code splitting: you've traded initial load time for navigation latency. The user clicks a link and... waits for the chunk to download. Not great. Prefetching eliminates this by loading chunks before the user needs them.
Prefetch on Hover
function NavLink({ to, children }: { to: string; children: React.ReactNode }) {
  const prefetchRoute = () => {
    // Next.js: router.prefetch(to). Elsewhere, a manual import() works as long
    // as the bundler can statically see the './pages' prefix. Repeated hovers
    // are cheap: import() caches the module after the first load.
    import(`./pages${to}`);
  };
return (
<a
href={to}
onMouseEnter={prefetchRoute}
onFocus={prefetchRoute}
>
{children}
</a>
);
}
Typical hover-to-click time is 200-300ms. A small chunk downloads in that window, so the navigation feels instant.
Prefetch on Viewport Entry
import { lazy, Suspense, useEffect, useMemo, useRef, useState } from 'react';

function LazySection({ importFn, fallback }) {
  const ref = useRef<HTMLDivElement>(null);
  const [visible, setVisible] = useState(false);
  // Memoize so the lazy component isn't recreated (and re-suspended) on every render
  const Component = useMemo(() => lazy(importFn), [importFn]);

  useEffect(() => {
    const observer = new IntersectionObserver(
      ([entry]) => {
        if (entry.isIntersecting) {
          setVisible(true); // mounting <Component /> triggers the chunk load
          observer.disconnect();
        }
      },
      { rootMargin: '200px' } // start 200px before it enters the viewport
    );
    if (ref.current) observer.observe(ref.current);
    return () => observer.disconnect();
  }, []);

  return (
    <div ref={ref}>
      {visible && (
        <Suspense fallback={fallback}>
          <Component />
        </Suspense>
      )}
    </div>
  );
}
Prefetch on Idle
import { useEffect } from 'react';

// After initial load completes, prefetch likely next routes
function usePrefetchOnIdle(routes: string[]) {
  useEffect(() => {
    const load = () => routes.forEach(route => import(`./pages${route}`));
    // requestIdleCallback isn't available in Safari, so fall back to a timeout
    if ('requestIdleCallback' in window) {
      const id = requestIdleCallback(load);
      return () => cancelIdleCallback(id);
    }
    const id = setTimeout(load, 2000);
    return () => clearTimeout(id);
  }, [routes]);
}
// In the shell
usePrefetchOnIdle(['/dashboard', '/settings']);
Named Chunks and Magic Comments
Webpack has a neat trick: magic comments that let you control chunk naming and loading behavior right in your import statements:
// Named chunk — appears as 'chart' in build output and network tab
const Chart = lazy(() =>
import(/* webpackChunkName: "chart" */ './components/Chart')
);
// Prefetch hint — browser downloads this chunk during idle time
const Settings = lazy(() =>
import(/* webpackPrefetch: true */ './pages/Settings')
);
// Preload hint — browser downloads this chunk immediately (high priority)
const Dashboard = lazy(() =>
import(/* webpackPreload: true */ './pages/Dashboard')
);
webpackPrefetch adds a <link rel="prefetch"> to the document after the parent chunk loads. The browser fetches it at lowest priority during idle time. webpackPreload adds <link rel="preload"> and fetches alongside the parent chunk at high priority.
Don't overuse webpackPreload. Preloading downloads chunks in parallel with the current chunk at high priority. If you preload 10 chunks, you're competing for bandwidth with the chunk the user actually needs right now. Use preload only for chunks needed immediately after the parent loads (e.g., a component that renders right after the route mounts). Use prefetch for everything else — it downloads at low priority and doesn't compete for bandwidth.
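Outside webpack, the same two hints can be emitted by hand. This sketch (a hypothetical `addResourceHint` helper) builds the `<link>` element webpack would generate, skipping the DOM work when no `document` exists:

```javascript
// Builds the same <link> hint webpack's magic comments emit:
// prefetch = low priority, during idle time; preload = high priority, now.
function addResourceHint(href, { immediate = false } = {}) {
  const hint = { rel: immediate ? 'preload' : 'prefetch', href, as: 'script' };
  if (typeof document !== 'undefined') {
    const link = document.createElement('link');
    link.rel = hint.rel;
    link.href = hint.href;
    link.as = hint.as;
    document.head.appendChild(link);
  }
  return hint;
}

addResourceHint('/chunks/settings-ghi.js');                       // rel: 'prefetch'
addResourceHint('/chunks/dashboard-def.js', { immediate: true }); // rel: 'preload'
```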
Common Chunks vs Granular Chunks
When multiple routes import the same dependency, the bundler can extract it into a shared "common" chunk. But this is where it gets interesting — there's a real trade-off between simplicity and cache efficiency:
Vendor Chunk (Traditional)
// webpack.config.js
optimization: {
splitChunks: {
cacheGroups: {
vendor: {
test: /[\\/]node_modules[\\/]/,
name: 'vendor',
chunks: 'all',
},
},
},
}
All node_modules code goes into one vendor.js. It's large but cached aggressively — it only changes when dependencies update.
Granular Chunks (Modern)
optimization: {
splitChunks: {
chunks: 'all',
maxInitialRequests: 25,
minSize: 20000,
cacheGroups: {
default: {
minChunks: 2,
priority: -20,
reuseExistingChunk: true,
},
},
},
}
Dependencies are split into smaller chunks based on usage. React goes into one chunk, lodash into another, your design system into a third. When only React updates, only the React chunk cache busts — everything else stays cached.
Measuring the Impact
You've done the work — now prove it paid off. After implementing code splitting, measure the results:
// Log chunk load times with the Performance API:
const observer = new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
if (entry.name.includes('.js') && entry.initiatorType === 'script') {
console.log(`Chunk loaded: ${entry.name} — ${entry.duration.toFixed(0)}ms`);
}
}
});
observer.observe({ type: 'resource', buffered: true });
Key metrics to track:
- Initial JS size: should drop significantly (target: < 200KB compressed)
- Route navigation time: chunk load + parse + render. Target: < 300ms on 4G
- Cache hit rate: with granular chunks, cache hits should increase over time
Key Takeaways
1. Code splitting breaks a monolithic bundle into chunks loaded on demand — users download only the code they need.
2. Route-based splitting is automatic in Next.js and the highest-impact starting point for any application.
3. Component-split heavy components (> 30KB) that are below-fold or behind interactions. Don't split tiny components.
4. Dynamic import() is the split point mechanism — the bundler creates a separate chunk for everything reachable from it.
5. Prefetch strategies (hover, viewport entry, idle) eliminate navigation latency by loading chunks before the user needs them.
6. Use webpackPrefetch (low priority, idle time) over webpackPreload (high priority, competes for bandwidth) for non-critical routes.
7. Granular chunk splitting improves cache hit rates over monolithic vendor chunks — a small dependency update doesn't bust the entire cache.
Q: A React SPA has a 1.8MB bundle. Time to Interactive is 8 seconds on 4G. Walk me through your code-splitting strategy.
A strong answer: Start with route-based splitting — each route gets a lazy import, reducing initial load to only the current route's code. Then audit the current route for heavy components below the fold (charts, editors, modals) and lazy-load them with Suspense. Check for oversized libraries that only one route needs and move them into that route's chunk. Add prefetching: on-hover for navigation links, on-idle for the most likely next routes. Configure granular splitChunks to improve caching. Measure the result: target < 200KB initial JS, < 300ms route navigation on 4G. The 1.8MB should drop to ~150-200KB initial, with the rest loaded on demand.