Overview
Code splitting divides an application's codebase into separate bundles or chunks that load independently based on user interaction, route changes, or specific conditions. Rather than serving a single large JavaScript bundle containing all application code, code splitting creates multiple smaller bundles that download only when needed.
This approach addresses a fundamental problem in modern web applications: as codebases grow, the initial payload increases, leading to slower load times and delayed interactivity. A typical single-page application might include hundreds of thousands of lines of code, but users only need a fraction of that code for any given page or feature.
Code splitting operates at the module level, identifying logical boundaries where code can separate. These boundaries might correspond to application routes, feature modules, third-party libraries, or conditional functionality. The bundler analyzes dependency graphs and creates separate chunks based on these boundaries.
The browser loads the critical path first—the minimum code required for initial render and interactivity. Additional code chunks download asynchronously as the user navigates the application or triggers features requiring that code. This creates a progressive loading pattern where functionality becomes available incrementally rather than requiring everything upfront.
// Without code splitting - everything loads immediately
import { UserDashboard } from './dashboard';
import { AdminPanel } from './admin';
import { Reports } from './reports';
import { Analytics } from './analytics';
// With code splitting - loads on demand
const dashboard = () => import('./dashboard');
const admin = () => import('./admin');
const reports = () => import('./reports');
const analytics = () => import('./analytics');
Modern web applications typically split code at route boundaries, where each page or major section loads its specific code only when accessed. A user visiting the login page doesn't need the admin panel code, and someone viewing their dashboard doesn't need the reporting module until they navigate to reports.
Key Principles
Dynamic Import Mechanism: Code splitting relies on dynamic imports, which differ from static imports by returning promises that resolve to modules. Static imports execute at parse time and bundle together, while dynamic imports create split points where bundlers generate separate chunks. The import function acts as a split point marker during the build process.
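The promise-returning behavior can be seen without a bundler. This sketch uses a Node built-in module purely so it runs standalone; in an application the specifier would be a local file the bundler turns into a separate chunk:

```javascript
// Dynamic import() returns a promise resolving to the module's
// namespace object. 'node:path' stands in for an app module here.
async function loadModule() {
  const pathModule = await import('node:path');
  // The resolved namespace exposes the module's exports
  return pathModule.posix.join('chunks', 'analytics.js');
}

loadModule().then(p => console.log(p)); // 'chunks/analytics.js'
```

Because the call site is a function invocation rather than a top-of-file declaration, the bundler can treat it as a split point and defer fetching the module until the call actually runs.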
# In Ruby, a conceptual parallel: memoize an expensive require
class Application
  def admin_panel
    @admin_panel ||= begin
      require "admin_panel"  # loaded on first call only
      AdminPanel.new
    end
  end

  def reports
    @reports ||= begin
      require "reports"
      Reports.new
    end
  end
end
Bundle Graph Generation: The bundler constructs a dependency graph by analyzing import statements throughout the codebase. Each dynamic import creates a potential split point. The bundler then determines optimal chunk sizes and which modules should group together, considering factors like shared dependencies and bundle size constraints. Modules imported by multiple chunks might extract into a shared chunk to avoid duplication.
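A toy sketch of that analysis, with a hypothetical module graph (illustrative only; real bundlers also weigh chunk sizes and extract shared modules): statically imported modules join their importer's chunk, and each dynamically imported module roots a new chunk.

```javascript
// Hypothetical module graph: each module lists its static and
// dynamic imports.
const graph = {
  main:      { static: ['router'], dynamic: ['dashboard', 'admin'] },
  router:    { static: [], dynamic: [] },
  dashboard: { static: ['charts'], dynamic: [] },
  admin:     { static: ['charts'], dynamic: [] },
  charts:    { static: [], dynamic: [] }
};

function buildChunks(entry) {
  const chunks = {};
  const roots = [entry];          // every dynamic import adds a root
  const seenRoots = new Set();
  while (roots.length > 0) {
    const root = roots.shift();
    if (seenRoots.has(root)) continue;
    seenRoots.add(root);
    const members = new Set();
    const stack = [root];
    while (stack.length > 0) {
      const mod = stack.pop();
      if (members.has(mod)) continue;
      members.add(mod);
      stack.push(...graph[mod].static);  // stays in this chunk
      roots.push(...graph[mod].dynamic); // becomes a new split point
    }
    chunks[root] = [...members].sort();
  }
  return chunks;
}
```

Running `buildChunks('main')` puts `charts` in both the dashboard and admin chunks, which is exactly the duplication that shared-chunk extraction exists to remove.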
Async Loading Orchestration: When code encounters a dynamic import at runtime, the browser initiates an HTTP request for the corresponding chunk. The import promise resolves once the chunk downloads and parses. Application code must handle the asynchronous nature of these operations, typically through promise chains or async/await patterns.
Cache Optimization: Code splitting improves caching efficiency. When application code changes, only modified chunks require re-download. If the user dashboard code changes but admin code remains static, returning visitors with cached admin chunks only download the updated dashboard chunk. This minimizes bandwidth usage and speeds up subsequent loads.
Critical Path Prioritization: The initial bundle contains only code necessary for first paint and time-to-interactive. This critical path includes the framework runtime, initial render code, and immediate interactivity handlers. Everything else defers to separate chunks that load based on priority and likelihood of use.
# Rails lazy loading example
class ReportsController < ApplicationController
  def index
    # Load the heavy analytics library only when this action runs;
    # require is idempotent, so repeated calls are cheap
    require "analytics_engine"
    @analytics = AnalyticsEngine.new
  end
end
Chunk Dependency Management: Chunks often depend on other chunks. The bundler maintains a manifest that tracks chunk relationships. When loading a chunk that depends on others, the runtime loads dependencies first, maintaining correct execution order. This dependency tree ensures modules initialize in the proper sequence.
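A sketch of what a manifest-driven loader does (chunk names are hypothetical, and where a real runtime would fetch and evaluate a script, this just records the load order):

```javascript
// Manifest mapping each chunk to the chunks it depends on
const manifest = {
  'admin-users': ['admin-shell'],
  'admin-shell': ['vendors'],
  'vendors': []
};

// Load dependencies depth-first so each chunk's prerequisites are
// evaluated before it; each chunk loads at most once.
async function loadChunk(name, loaded = []) {
  if (loaded.includes(name)) return loaded;
  for (const dep of manifest[name]) {
    await loadChunk(dep, loaded);
  }
  loaded.push(name); // real runtimes fetch + execute the script here
  return loaded;
}
```

Requesting `admin-users` yields the order `vendors`, `admin-shell`, `admin-users`, matching the initialization sequence the bundler's manifest guarantees.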
Preloading and Prefetching: Beyond basic lazy loading, code splitting supports preloading (fetching chunks needed for the current navigation at high priority) and prefetching (fetching chunks likely needed soon during browser idle time). These hints let the browser balance performance against resource usage.
// Prefetch during idle time
const analyticsChunk = () => import(
  /* webpackPrefetch: true */ './analytics'
);

// Preload immediately but non-blocking
const criticalChunk = () => import(
  /* webpackPreload: true */ './critical'
);
Design Considerations
Granularity Trade-offs: Determining split granularity involves balancing numerous competing concerns. Too many small chunks increase HTTP overhead, as each chunk requires a separate request with its own latency. Too few large chunks diminish the benefits of splitting by requiring larger downloads. The optimal approach typically splits at major feature boundaries while keeping shared dependencies in common chunks.
Route-based splitting provides natural boundaries but may create imbalanced chunks. An admin section with extensive functionality generates a large chunk, while simpler pages produce small ones. Consider splitting large features further based on sub-features or tabs within a section.
# Rails route-based code organization
Rails.application.routes.draw do
  # Each namespace might map to separate JavaScript chunks
  namespace :admin do
    resources :users     # admin/users bundle
    resources :reports   # admin/reports bundle
    resources :settings  # admin/settings bundle
  end

  namespace :customer do
    resources :dashboard # customer/dashboard bundle
    resources :orders    # customer/orders bundle
  end
end
User Flow Analysis: Split points should align with user behavior patterns. Analytics data reveals which features users access frequently and which remain unused by most visitors. High-traffic paths warrant optimization and should load quickly, while rarely-used admin features can defer.
First-time visitors follow different paths than returning users. New users might explore marketing pages and sign-up flows, while established users go directly to their dashboard. Tailor splitting strategies to these patterns, ensuring critical paths for each user type load efficiently.
Bundle Size Targets: Aim for initial bundles under 200KB gzipped for reasonable load times on standard connections. Subsequent chunks can vary based on feature complexity but should generally stay under 500KB. Chunks exceeding 1MB indicate opportunities for further splitting. These guidelines adjust based on target audience connection speeds and device capabilities.
Monitor the dependency graph to identify unexpectedly large chunks. A single large library imported in multiple places might need extraction to a shared chunk. Analyze which modules contribute most to bundle size and whether they're truly necessary for the features they support.
Shared Dependency Strategy: When multiple chunks import the same module, bundlers can extract it to a shared chunk loaded once and cached. However, too many shared chunks increase request overhead. Balance extraction against request efficiency by setting minimum chunk size thresholds and limiting the number of shared chunks.
// Webpack configuration for shared chunks
optimization: {
  splitChunks: {
    cacheGroups: {
      vendor: {
        test: /[\\/]node_modules[\\/]/,
        name: 'vendors',
        chunks: 'all',
        minSize: 30000
      },
      common: {
        minChunks: 2,
        priority: -20,
        reuseExistingChunk: true
      }
    }
  }
}
Loading State Management: Code splitting introduces asynchronous boundaries requiring loading states. Design decisions include whether to show loading spinners, skeleton screens, or progressive enhancement where basic functionality appears immediately with enhancements loading afterward. Each approach affects perceived performance differently.
Consider error states when chunks fail to load due to network issues or deployment changes. Provide retry mechanisms and fallback behavior. Avoid leaving users in broken states when dynamic imports fail.
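One such retry mechanism, sketched as a wrapper around any dynamic import (the attempt count and backoff values here are arbitrary):

```javascript
// Retry a failing dynamic import with linear backoff.
// `loader` is any promise-returning function, e.g. () => import('./admin')
async function importWithRetry(loader, attempts = 3, delayMs = 200) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await loader();
    } catch (error) {
      lastError = error;
      // Wait a little longer before each retry
      await new Promise(resolve => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
  throw lastError; // surface the failure so the UI can show a fallback
}
```

Usage: `importWithRetry(() => import('./reports'))` transparently survives transient network failures, and the final throw lets the caller render an error state instead of leaving the page broken.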
Ruby Implementation
Rails Asset Pipeline Splitting: Rails applications traditionally served concatenated JavaScript files through Sprockets. While Sprockets supports some splitting through multiple manifest files, modern Rails applications typically integrate Webpacker (now deprecated) or jsbundling-rails with esbuild, webpack, or rollup for sophisticated code splitting.
// app/javascript/application.js - Rails with jsbundling
import "@hotwired/turbo-rails"

// Dynamic imports for heavy components
document.addEventListener('turbo:load', () => {
  const chartElements = document.querySelectorAll('[data-chart]')
  if (chartElements.length > 0) {
    import('./charts').then(module => {
      module.initializeCharts(chartElements)
    })
  }
})
Controller-Based Code Loading: Rails controllers can trigger specific JavaScript chunks through data attributes or meta tags. This approach ties code loading to server-side routing, ensuring each page loads only its required JavaScript.
# app/controllers/admin/reports_controller.rb
class Admin::ReportsController < ApplicationController
  def index
    # Expose the page-specific chunk name to the layout
    @javascript_pack = 'admin/reports'
  end
end

# app/views/layouts/application.html.erb
<%= javascript_pack_tag 'application' %>
<%= javascript_pack_tag @javascript_pack if @javascript_pack %>
Lazy Module Loading in Ruby: While Ruby doesn't use bundlers like JavaScript, it supports lazy loading through autoloading and explicit requires. Rails autoloading defers class loading until first use, providing similar benefits to code splitting by not loading unused code into memory.
# config/application.rb
config.autoload_paths += %W(#{config.root}/lib)

# Lazy-loaded service objects
class ReportGenerator
  def generate_pdf
    # Only loads the PDF library when this method executes
    require 'prawn' unless defined?(Prawn)
    Prawn::Document.new do |pdf|
      # PDF generation
    end
  end

  def generate_excel
    require 'axlsx' unless defined?(Axlsx)
    Axlsx::Package.new do |p|
      # Excel generation
    end
  end
end
Turbo Frame Integration: Hotwire Turbo Frames provide a Ruby-centric approach to lazy loading interface sections. Each frame loads independently, fetching HTML from the server only when needed. This shifts complexity from JavaScript bundles to server-side rendering with on-demand loading.
# app/views/dashboards/show.html.erb
<%= turbo_frame_tag "statistics", src: statistics_path, loading: :lazy do %>
  <p>Loading statistics...</p>
<% end %>

<%= turbo_frame_tag "recent_activity", src: activity_path, loading: :lazy do %>
  <p>Loading activity...</p>
<% end %>

# app/controllers/dashboards_controller.rb
class DashboardsController < ApplicationController
  def statistics
    @stats = calculate_statistics
    # A lazy frame fetches plain HTML; the response must contain a
    # <turbo-frame id="statistics"> for Turbo to swap in
    render partial: "statistics"
  end
end
Gem-Based Lazy Loading: Some Ruby gems support lazy loading of components. ActiveRecord associations use lazy loading by default, not querying related records until accessed. This pattern extends to other areas where expensive operations defer until necessary.
class User < ApplicationRecord
  has_many :orders
  has_many :notifications

  # Associations load lazily
  def recent_activity
    # Only queries when method called
    orders.recent.includes(:items)
  end
end

# Usage
user = User.find(params[:id])
# No queries yet for orders or notifications
if show_activity?
  user.recent_activity # Now queries orders
end
Stimulus Controller Lazy Loading: Stimulus controllers can load dynamically based on DOM presence, similar to JavaScript code splitting. This defers controller registration until elements with matching data attributes appear.
// app/javascript/controllers/index.js
import { application } from "./application"

// Eager-load essential controllers
import HelloController from "./hello_controller"
application.register("hello", HelloController)

// Lazy-load heavy controllers
const lazyControllers = {
  chart: () => import("./chart_controller"),
  map: () => import("./map_controller"),
  editor: () => import("./editor_controller")
}

const registered = new Set()

// Register a lazy controller once its first matching element appears.
// The ~= selector matches whole tokens, so "chart" won't match "charts".
async function registerIfPresent(root) {
  for (const name of Object.keys(lazyControllers)) {
    if (registered.has(name)) continue
    const selector = `[data-controller~="${name}"]`
    if ((root.matches && root.matches(selector)) || root.querySelector(selector)) {
      registered.add(name)
      const module = await lazyControllers[name]()
      application.register(name, module.default)
    }
  }
}

// Handle elements already on the page, then watch for additions
registerIfPresent(document.body)
const observer = new MutationObserver((mutations) => {
  mutations.forEach((mutation) => {
    mutation.addedNodes.forEach((node) => {
      if (node.nodeType === 1) registerIfPresent(node)
    })
  })
})
observer.observe(document.body, { childList: true, subtree: true })
Practical Examples
Route-Based Splitting in Single-Page Application: A dashboard application splits code by major sections, loading each only when users navigate to that route. The initial bundle contains the router, authentication, and shell UI. Each route's code loads on demand.
// router.js
import { createRouter } from './routing'

const routes = [
  {
    path: '/',
    component: () => import('./views/Home.vue')
  },
  {
    path: '/dashboard',
    component: () => import('./views/Dashboard.vue')
  },
  {
    path: '/admin',
    component: () => import('./views/Admin.vue'),
    children: [
      {
        path: 'users',
        component: () => import('./views/admin/Users.vue')
      },
      {
        path: 'reports',
        component: () => import('./views/admin/Reports.vue')
      }
    ]
  },
  {
    path: '/analytics',
    component: () => import('./views/Analytics.vue')
  }
]

export default createRouter(routes)
This configuration creates separate chunks for each route. When a user visits /dashboard, only the dashboard chunk downloads. Navigation to /admin/users loads both the admin shell and users components. The browser caches these chunks, so subsequent visits to cached routes load instantly.
Modal Dialog Lazy Loading: Modal dialogs often contain complex forms or data visualizations that most users never see. Loading modal code only when users click to open the dialog reduces the initial bundle significantly.
// app/javascript/controllers/modal_controller.js
import { Controller } from "@hotwired/stimulus"

export default class extends Controller {
  static targets = ["container"]

  async open() {
    // Load modal content dynamically
    const modalType = this.element.dataset.modalType
    try {
      const module = await import(`./modals/${modalType}_modal`)
      const modal = new module.default(this.containerTarget)
      modal.render()
      modal.show()
    } catch (error) {
      console.error(`Failed to load ${modalType} modal:`, error)
      this.showError("Unable to load dialog")
    }
  }

  showError(message) {
    this.containerTarget.innerHTML = `
      <div class="error-message">${message}</div>
    `
  }
}

# app/views/users/show.html.erb
<button
  data-controller="modal"
  data-modal-type="user-edit"
  data-action="click->modal#open">
  Edit User
</button>
<div data-modal-target="container"></div>
Third-Party Library Splitting: Large third-party libraries split into separate chunks, especially when used conditionally. A charting library might be several hundred kilobytes but only needed on analytics pages.
// charts.js - loaded only on analytics page
let Chart = null

export async function initializeCharts(elements) {
  if (!Chart) {
    const module = await import('chart.js/auto')
    Chart = module.default
  }
  elements.forEach(element => {
    const config = JSON.parse(element.dataset.chartConfig)
    new Chart(element, config)
  })
}

// analytics.js - main analytics page code
document.addEventListener('DOMContentLoaded', async () => {
  const chartElements = document.querySelectorAll('[data-chart]')
  if (chartElements.length > 0) {
    const { initializeCharts } = await import('./charts')
    await initializeCharts(chartElements)
  }
})
Feature Flag-Based Loading: Applications with feature flags load feature code only for users with access. This keeps experimental features from bloating the bundle for users who can't access them.
// app/javascript/feature_loader.js
export class FeatureLoader {
  constructor(enabledFeatures) {
    this.enabledFeatures = new Set(enabledFeatures)
    this.loadedFeatures = new Map()
  }

  async loadFeature(featureName) {
    if (!this.enabledFeatures.has(featureName)) {
      console.warn(`Feature ${featureName} not enabled`)
      return null
    }
    if (this.loadedFeatures.has(featureName)) {
      return this.loadedFeatures.get(featureName)
    }
    try {
      const module = await import(`./features/${featureName}`)
      this.loadedFeatures.set(featureName, module.default)
      return module.default
    } catch (error) {
      console.error(`Failed to load feature ${featureName}:`, error)
      return null
    }
  }
}

// Usage in a Rails view
const features = <%= raw current_user.enabled_features.to_json %>
const loader = new FeatureLoader(features)

document.querySelectorAll('[data-feature]').forEach(async element => {
  const featureName = element.dataset.feature
  const feature = await loader.loadFeature(featureName)
  if (feature) {
    feature.initialize(element)
  }
})
Progressive Enhancement Pattern: Load basic functionality immediately while enhancing with advanced features through lazy-loaded chunks. This ensures core functionality works quickly while advanced features load in the background.
// app/javascript/components/data_table.js
export class DataTable {
  constructor(element) {
    this.element = element
    this.enhancementsLoaded = false
    this.initializeBasicTable()
  }

  initializeBasicTable() {
    // Basic sorting and filtering - included in main bundle
    this.addBasicSorting()
    this.addBasicFiltering()
    // Load enhancements in background
    this.loadEnhancements()
  }

  async loadEnhancements() {
    try {
      const { AdvancedFilters, Export, VirtualScrolling } =
        await import('./data_table_enhancements')
      this.advancedFilters = new AdvancedFilters(this.element)
      this.exporter = new Export(this.element)
      this.virtualScroll = new VirtualScrolling(this.element)
      this.enhancementsLoaded = true
      this.element.classList.add('enhanced')
    } catch (error) {
      console.warn('Advanced table features unavailable:', error)
      // Table remains functional with basic features
    }
  }
}
Performance Considerations
Initial Load Time Impact: Code splitting reduces initial JavaScript bundle size, which directly correlates with parse and execution time. Parsing 500KB of JavaScript on a mid-range mobile device can take 1-2 seconds. Splitting this into a 150KB initial bundle and several lazy-loaded chunks reduces initial parse time to 300-600ms, improving time-to-interactive significantly.
The browser must download, parse, and execute JavaScript before the page becomes interactive. Larger bundles delay this process. Code splitting prioritizes critical code, getting users to an interactive state faster even if total download time increases slightly due to multiple requests.
# Performance monitoring in Rails
class PerformanceMonitor
  def self.track_chunk_load(chunk_name, load_time)
    Rails.logger.info(
      "Chunk loaded: #{chunk_name}, Time: #{load_time}ms"
    )
    # Track to metrics service
    Metrics.timing("javascript.chunk.#{chunk_name}", load_time)
  end
end

// In JavaScript
const startTime = performance.now()
await import('./analytics')
const loadTime = performance.now() - startTime

// Send to Rails backend
fetch('/performance/chunk_load', {
  method: 'POST',
  body: JSON.stringify({
    chunk: 'analytics',
    load_time: loadTime
  })
})
Cache Efficiency: Splitting code by update frequency maximizes cache efficiency. Application code changes frequently during active development, while third-party libraries remain stable for months. Separating these into distinct chunks means library chunks stay cached even when application code updates.
Hash-based filenames enable aggressive caching. When chunk content changes, the filename changes, automatically invalidating caches. Unchanged chunks retain their filenames and cached versions, reducing bandwidth on subsequent visits.
// Webpack output configuration for optimal caching
output: {
  filename: '[name].[contenthash:8].js',
  chunkFilename: '[name].[contenthash:8].chunk.js'
}

// Results in files like:
// main.a8b2c9d1.js
// vendor.f3e4a6b2.js
// analytics.d9c4b7e8.chunk.js
Network Request Overhead: Each chunk requires an HTTP request with associated latency. On high-latency connections, multiple round trips accumulate. HTTP/2 multiplexing mitigates this by allowing parallel requests over a single connection, but latency still affects total load time.
Balance chunk count against chunk size. Ten 50KB chunks might load slower than three 150KB chunks on high-latency connections despite smaller total size. Consider typical user connection characteristics when determining granularity.
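A back-of-envelope model makes the trade-off concrete. This sketch assumes fully sequential requests, the worst case; HTTP/2 multiplexing lands somewhere between these bounds:

```javascript
// Rough cost of loading N chunks sequentially: one round trip of
// latency per chunk, plus transfer time for the total bytes.
function sequentialLoadMs({ chunkCount, chunkKB, latencyMs, kbPerMs }) {
  return chunkCount * latencyMs + (chunkCount * chunkKB) / kbPerMs;
}

// 100ms latency, ~1 MB/s effective bandwidth (1 KB per ms):
const many = sequentialLoadMs({ chunkCount: 10, chunkKB: 50, latencyMs: 100, kbPerMs: 1 });
const few  = sequentialLoadMs({ chunkCount: 3, chunkKB: 150, latencyMs: 100, kbPerMs: 1 });
// many = 1500ms for 500KB total; few = 750ms for 450KB total
```

Even with a barely smaller payload, the three-chunk layout wins because per-request latency dominates at this connection profile.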
Waterfall Loading: Dynamic imports create request waterfalls where one chunk's code triggers loading of another chunk. If chunk A imports chunk B dynamically, the browser can't request chunk B until chunk A downloads and executes. Deep import chains extend load times significantly.
// Problematic waterfall - each request starts only after the previous
// chunk has downloaded and executed:
// main.js awaits dashboard.js, which awaits widgets.js, which awaits charts.js
const dashboard = await import('./dashboard')
const widgets = await import('./widgets')
const charts = await import('./charts')

// Better approach - parallel loading where possible
const [dashboardMod, widgetsMod, chartsMod] = await Promise.all([
  import('./dashboard'),
  import('./widgets'),
  import('./charts')
])
Prefetching and Preloading: Modern browsers support resource hints that optimize chunk loading. Prefetch downloads chunks during idle time, warming the cache for likely-needed code. Preload downloads chunks with higher priority for imminent use.
# Rails helper for generating preload links
module AssetHelper
  def preload_javascript_chunk(chunk_name)
    asset_path = asset_pack_path("#{chunk_name}.js")
    tag.link(rel: 'preload', href: asset_path, as: 'script')
  end

  def prefetch_javascript_chunk(chunk_name)
    asset_path = asset_pack_path("#{chunk_name}.js")
    tag.link(rel: 'prefetch', href: asset_path, as: 'script')
  end
end

# In view
<%= preload_javascript_chunk('critical-feature') %>
<%= prefetch_javascript_chunk('likely-next-page') %>
Bundle Analysis: Regular analysis of bundle composition identifies optimization opportunities. Tools visualize chunk sizes, module inclusion, and dependency relationships. This reveals unexpectedly large dependencies or opportunities for better splitting.
// webpack-bundle-analyzer configuration
const BundleAnalyzerPlugin =
  require('webpack-bundle-analyzer').BundleAnalyzerPlugin

module.exports = {
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: 'static',
      reportFilename: 'bundle-report.html',
      openAnalyzer: false
    })
  ]
}
Analyze which modules contribute most to bundle size. A date formatting library might be 50KB but only used in one small feature. This suggests either finding a lighter alternative or ensuring aggressive code splitting around that feature.
Compression and Minification: Code splitting works synergistically with compression. Gzip and Brotli compress more effectively when code contains similar patterns. Splitting by feature or library creates chunks with more homogeneous content, improving compression ratios. A chunk containing only chart-related code compresses better than a mixed bundle.
Tools & Ecosystem
Webpack: The most widely-adopted bundler supporting sophisticated code splitting through dynamic imports, chunk optimization, and extensive configuration. Webpack's SplitChunksPlugin automatically extracts shared dependencies into common chunks based on configurable rules.
// webpack.config.js
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          priority: -10,
          name(module) {
            const packageName = module.context.match(
              /[\\/]node_modules[\\/](.*?)([\\/]|$)/
            )[1]
            return `vendor.${packageName.replace('@', '')}`
          }
        },
        common: {
          minChunks: 2,
          priority: -20,
          reuseExistingChunk: true
        }
      }
    },
    runtimeChunk: 'single'
  }
}
esbuild: A fast bundler written in Go, significantly faster than JavaScript-based alternatives. esbuild supports code splitting through format settings and entry points but offers less sophisticated optimization than Webpack.
// esbuild configuration
require('esbuild').build({
  entryPoints: [
    'src/main.js',
    'src/admin.js',
    'src/analytics.js'
  ],
  bundle: true,
  splitting: true,
  format: 'esm',
  outdir: 'dist',
  chunkNames: '[name]-[hash]'
})
Rollup: Specialized for library bundling but supports application code splitting. Rollup excels at tree-shaking and produces cleaner output than Webpack. Often chosen for libraries and smaller applications.
# Rails integration with jsbundling-rails
# package.json
{
  "scripts": {
    "build": "rollup -c",
    "build:css": "tailwindcss -i ./app/assets/stylesheets/application.css -o ./app/assets/builds/application.css"
  }
}

// rollup.config.js
export default {
  input: {
    application: 'app/javascript/application.js',
    admin: 'app/javascript/admin.js'
  },
  output: {
    dir: 'app/assets/builds',
    format: 'es',
    sourcemap: true,
    manualChunks: {
      vendor: ['react', 'react-dom'],
      utilities: ['lodash', 'date-fns']
    }
  }
}
Vite: Modern build tool using native ES modules during development and Rollup for production. Vite supports code splitting through dynamic imports and provides fast development server with instant module replacement.
// vite.config.js
import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          'chart-vendors': ['chart.js', 'd3'],
          'form-utilities': ['validator', 'date-fns']
        }
      }
    },
    chunkSizeWarningLimit: 600
  }
})
Rails jsbundling-rails: Rails gem providing integration with multiple JavaScript bundlers. Supports esbuild, webpack, and rollup, handling build process integration with Rails asset pipeline.
# Gemfile
gem 'jsbundling-rails'

# Procfile.dev - bin/dev runs both the Rails server and the JS build
web: bin/rails server
js: yarn build --watch

# package.json build script
{
  "scripts": {
    "build": "esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds --public-path=assets --splitting --format=esm"
  }
}
Import Maps: Alternative to bundling that serves modules directly to the browser, which handles module resolution. Import maps reduce build complexity but provide less optimization than bundlers. Code splitting occurs through dynamic imports but without bundle optimization.
# config/importmap.rb
pin "application", preload: true
pin "dashboard", to: "dashboard.js"
pin "admin", to: "admin.js"

// app/javascript/application.js
// Dynamic import with import maps
async function loadDashboard() {
  const { Dashboard } = await import('dashboard')
  return new Dashboard()
}
Turbo and Stimulus: Hotwire's Turbo provides HTML-over-the-wire approach where the server handles routing and rendering. Stimulus controllers load dynamically based on DOM presence. This shifts code splitting concerns from JavaScript bundles to HTML fragments and Stimulus controllers.
// Stimulus with dynamic loading (webpack require.context)
import { Application } from "@hotwired/stimulus"
import { definitionsFromContext } from "@hotwired/stimulus-webpack-helpers"

const application = Application.start()
const context = require.context("./controllers", true, /\.js$/)
const lazyContext = require.context(
  "./controllers/lazy",
  true,
  /\.js$/
)

// Load eager controllers immediately
application.load(definitionsFromContext(context))

// Load lazy controllers on demand
export function loadLazyController(name) {
  const definitions = definitionsFromContext(lazyContext)
  const definition = definitions.find(d => d.identifier === name)
  if (definition) {
    application.load([definition])
  }
}
Reference
Code Splitting Strategies
| Strategy | Use Case | Benefits | Trade-offs |
|---|---|---|---|
| Route-based | Multi-page applications | Natural split points, clear boundaries | May create unbalanced chunks |
| Component-based | Large UI libraries | Fine-grained control | Requires careful planning |
| Feature-based | Feature flags, A/B tests | Load only enabled features | Complex dependency management |
| Vendor splitting | Third-party libraries | Stable chunks, good caching | Shared dependencies complexity |
| Entry points | Multiple applications | Complete separation | Potential duplication |
Dynamic Import Syntax
| Pattern | Description | Example Use Case |
|---|---|---|
| Basic import | Returns promise resolving to module | Loading any module dynamically |
| Named imports | Destructure specific exports | Importing specific functions |
| Default import | Import default export | Loading component classes |
| Error handling | Try-catch around import | Graceful degradation |
| Conditional import | Import based on runtime condition | Feature detection |
| Parallel imports | Promise.all with multiple imports | Loading related modules |
Webpack Configuration Options
| Option | Purpose | Typical Value |
|---|---|---|
| chunks | Which chunks to split | all, async, initial |
| minSize | Minimum size for chunk creation | 20000-30000 bytes |
| maxSize | Maximum chunk size hint | 244000-500000 bytes |
| minChunks | Minimum sharing for extraction | 1-2 |
| maxAsyncRequests | Parallel request limit | 5-30 |
| maxInitialRequests | Initial page request limit | 3-30 |
| automaticNameDelimiter | Chunk name separator | ~ or - |
| cacheGroups | Chunk grouping rules | vendor, common, etc |
Performance Metrics
| Metric | Target | Measurement |
|---|---|---|
| Initial bundle size | Under 200KB gzipped | Network panel, bundle analyzer |
| Time to interactive | Under 3 seconds | Lighthouse, Web Vitals |
| First contentful paint | Under 1.5 seconds | Performance API |
| Chunk load time | Under 500ms | Performance marks |
| Cache hit rate | Above 80% | Server logs, analytics |
| Number of requests | 5-15 for initial load | Network panel |
Rails Integration Patterns
| Pattern | Implementation | When to Use |
|---|---|---|
| Pack per page | Separate pack for each controller | Clear page boundaries |
| Pack per section | One pack per major section | Shared functionality within sections |
| Shared common pack | Extract shared dependencies | Multiple packs with overlap |
| Lazy Stimulus | Load controllers on demand | Heavy controllers, rare features |
| Turbo Frames | Server-rendered lazy sections | Complex server logic |
| View-triggered loads | Data attributes trigger imports | Conditional feature loading |
Common Chunk Patterns
| Pattern | Configuration | Result |
|---|---|---|
| Vendor chunk | Split node_modules | Stable third-party code |
| Common chunk | Split shared application code | Reused utilities |
| Runtime chunk | Extract webpack runtime | Enables long-term caching |
| Per-route chunks | Import at route level | Route-specific code |
| Async chunks | Dynamic imports | On-demand features |
Browser Support Considerations
| Feature | Support | Fallback |
|---|---|---|
| Dynamic import | Modern browsers | Static imports with polyfill |
| ES modules | ES6+ browsers | Transpile to CommonJS |
| HTTP/2 | Most browsers | Works but slower with HTTP/1.1 |
| Preload/Prefetch | Modern browsers | Degrade gracefully |
| Top-level await | Newer browsers | Use async functions |
Troubleshooting Checklist
| Issue | Possible Cause | Solution |
|---|---|---|
| Chunks not splitting | Missing dynamic imports | Add dynamic import syntax |
| Large initial bundle | Too much eager loading | Move imports to dynamic |
| Many small chunks | Over-splitting | Increase minSize threshold |
| Slow subsequent loads | No prefetching | Add prefetch hints |
| Cache not working | No content hashing | Enable contenthash in filenames |
| Dependencies duplicated | Misconfigured cache groups | Review splitChunks config |
| Waterfall loading | Nested dynamic imports | Parallel loading with Promise.all |
Code Splitting Commands
| Tool | Command | Purpose |
|---|---|---|
| Webpack | webpack --analyze | Generate bundle analysis |
| esbuild | esbuild --splitting | Enable code splitting |
| Rollup | rollup -c --watch | Watch mode with splitting |
| Rails | rails assets:precompile | Build production assets |
| Vite | vite build (with rollup-plugin-visualizer) | Build with size report |
| npm | npm run build -- --profile | Profile build performance |