Building Modular Synths in the Browser with Web Audio
Published: November 27, 2025
🌐 What is Web Audio?
The Web Audio API is a high-level JavaScript interface for processing and synthesizing audio in web browsers. Unlike simple HTML5 <audio> playback, Web Audio provides a powerful graph-based architecture where you connect nodes—oscillators, filters, gain controls, analyzers—just like patching a modular synthesizer.
First introduced in 2011 and now supported across all modern browsers, Web Audio runs audio processing in a dedicated high-priority thread, separate from JavaScript's main event loop. This means your synth can maintain stable, low-latency performance even while the UI updates or network requests run. It's the foundation for everything from browser-based DAWs like Soundtrap to creative coding environments like Tone.js and educational tools like Chrome Music Lab.
The API's modular philosophy makes it particularly well-suited for synthesizer development. You're not constrained by prebuilt instruments—you have direct access to oscillators, biquad filters, delay lines, dynamics processors, and the ability to write custom DSP in AudioWorklets. This article shows you how to harness that flexibility to build expressive, performant modular-style patches entirely in the browser.
🔌 Graph architecture: explicit connections
Model your synth as a directed graph of AudioNodes: oscillators, filters, envelopes, mixers, and effects. Treat each module as an object that exposes input/output properties. Avoid hidden state—connections should be explicit, and parameter ownership centralized. This makes the system testable, debuggable, and easy to serialize for preset management.
For example, a simple voice might be: oscillator.connect(filter).connect(vca).connect(output). Wrap native nodes in lightweight classes that manage lifecycle (start/stop) and expose parameter getters/setters. When patching becomes dynamic (user-driven modular routing), maintain a connection registry that tracks source → destination mappings and checks for feedback cycles before making a connection.
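Here is a minimal sketch of both ideas, assuming a plain JavaScript setup; Voice and PatchBay are illustrative names, not part of the Web Audio API:

```js
// Wrap native nodes in a small class with an explicit, linear signal path
// and a single exposed output.
class Voice {
  constructor(ctx) {
    this.osc = new OscillatorNode(ctx, { type: 'sawtooth', frequency: 110 });
    this.filter = new BiquadFilterNode(ctx, { type: 'lowpass', frequency: 1200 });
    this.vca = new GainNode(ctx, { gain: 0 });
    this.osc.connect(this.filter).connect(this.vca); // osc -> filter -> vca
    this.output = this.vca;                          // patch point for mixer/effects
  }
  start(time) { this.osc.start(time); }
  stop(time) { this.osc.stop(time); }
}

// A connection registry that records every patch so presets can be
// serialized and feedback cycles rejected before they are made.
class PatchBay {
  constructor() { this.connections = []; }
  connect(source, destination) {
    if (this.createsCycle(source, destination)) {
      throw new Error('Connection would create a feedback loop');
    }
    source.connect(destination);
    this.connections.push({ source, destination });
  }
  createsCycle(source, destination) {
    // Depth-first walk from the destination; reaching the source again means a cycle.
    const stack = [destination];
    while (stack.length) {
      const node = stack.pop();
      if (node === source) return true;
      for (const c of this.connections) {
        if (c.source === node) stack.push(c.destination);
      }
    }
    return false;
  }
}
```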
⏱️ Musical scheduling: AudioContext time is king
Never schedule events based on JavaScript Date.now() or setTimeout—they drift relative to audio playback and jitter under load. Instead, use audioContext.currentTime as your clock. Schedule parameter changes and node start/stop calls with absolute timestamps: osc.start(ctx.currentTime + 0.1). Chris Wilson's classic article "A Tale of Two Clocks" explains this timing model in depth.
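A small example of scheduling against the audio clock; the frequency and envelope times are arbitrary:

```js
// Everything is scheduled at absolute AudioContext times, never with setTimeout.
const ctx = new AudioContext();
const osc = new OscillatorNode(ctx, { frequency: 220 });
const amp = new GainNode(ctx, { gain: 0 });
osc.connect(amp).connect(ctx.destination);

const t = ctx.currentTime + 0.1;                   // absolute start, 100 ms from now
osc.start(t);
amp.gain.setValueAtTime(0, t);
amp.gain.linearRampToValueAtTime(0.8, t + 0.01);   // 10 ms attack
amp.gain.setTargetAtTime(0, t + 0.5, 0.05);        // release beginning 500 ms in
osc.stop(t + 1);
```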
For sequencers and groove-based systems, quantize events to beats but compute those beat times in AudioContext seconds. Maintain a tempo clock that translates beats to absolute time, then schedule ahead by a small buffer (50–100 ms). This look-ahead scheduling keeps timing rock-solid even if the main thread blocks temporarily. Batch parameter updates: compute all related values first, then schedule them with setValueAtTime at the same timestamp, so changes land together sample-accurately instead of drifting across render quanta.
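One possible shape for that look-ahead loop, assuming a 16-step sequencer; ctx is the AudioContext from above and playNote stands in for your own (hypothetical) voice-trigger function:

```js
const lookAhead = 0.1;      // seconds of audio scheduled in advance
const tickInterval = 25;    // ms between scheduler wake-ups; timer precision doesn't matter
const bpm = 120;
let nextNoteTime = 0;       // AudioContext time of the next 16th note
let step = 0;

function tick() {
  // Schedule every event that falls inside the look-ahead window.
  while (nextNoteTime < ctx.currentTime + lookAhead) {
    playNote(step, nextNoteTime);     // hypothetical: triggers a voice at an absolute time
    nextNoteTime += (60 / bpm) / 4;   // advance by one 16th note
    step = (step + 1) % 16;
  }
}

function startSequencer() {
  nextNoteTime = ctx.currentTime;
  setInterval(tick, tickInterval);
}
```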
🎛️ Parameter smoothing: no clicks allowed
UI controls (sliders, knobs) can change hundreds of times per second during a sweep. Setting an AudioParam directly on every change causes zippering: audible stair-steps as the value jumps instantly instead of gliding. Instead, use param.setTargetAtTime(value, ctx.currentTime, timeConstant) for exponential smoothing, or linearRampToValueAtTime for linear interpolation over a short duration (5–20 ms).
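A de-zippering helper along those lines; the 20 ms default ramp is a starting point, not a magic number:

```js
// Cancel pending automation, anchor at the current value, then glide to the
// new value over a short ramp instead of jumping.
function setSmooth(param, value, ctx, rampTime = 0.02) {
  const now = ctx.currentTime;
  param.cancelScheduledValues(now);
  param.setValueAtTime(param.value, now);
  param.linearRampToValueAtTime(value, now + rampTime);
}

// Usage: wire it to a slider's input event.
// slider.addEventListener('input', e => setSmooth(filter.frequency, +e.target.value, ctx));
```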
For parameters that modulate at audio rate (FM index, filter resonance), apply smoothing inside an AudioWorklet processor or via a chained one-pole filter. A time constant of 1–5 ms removes zippering without audibly smearing fast modulation. Test by rapidly scrubbing controls—if you hear crackling, increase smoothing; if response feels sluggish, reduce it. The goal is transparent de-clicking that doesn't compromise musical expression.
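Here is a sketch of that one-pole approach inside an AudioWorkletProcessor, applied to a gain; the 'smoothed-gain' name and the roughly 2 ms time constant are illustrative choices:

```js
// smoother-processor.js: runs in the AudioWorkletGlobalScope.
class SmoothedGainProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors() {
    return [{ name: 'target', defaultValue: 1, automationRate: 'k-rate' }];
  }
  constructor() {
    super();
    this.current = 1;
    // coeff = exp(-1 / (timeConstant * sampleRate)), timeConstant ≈ 2 ms
    this.coeff = Math.exp(-1 / (0.002 * sampleRate));
  }
  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    const target = parameters.target[0];
    const blockLength = output[0].length;
    for (let i = 0; i < blockLength; i++) {
      // One-pole smoother: move a little closer to the target every sample.
      this.current = target + this.coeff * (this.current - target);
      for (let ch = 0; ch < output.length; ch++) {
        const inSample = input[ch] ? input[ch][i] : 0;
        output[ch][i] = inSample * this.current;
      }
    }
    return true;
  }
}
registerProcessor('smoothed-gain', SmoothedGainProcessor);
```

Load it with ctx.audioWorklet.addModule('smoother-processor.js'), then instantiate new AudioWorkletNode(ctx, 'smoothed-gain') and automate its target parameter from the main thread.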
🚀 Performance: profile before optimizing
Use AudioWorklets for custom DSP: they run on the dedicated audio rendering thread, unlike the deprecated ScriptProcessorNode, which processes audio on the main thread and glitches whenever that thread is busy. Pre-allocate AudioNodes where possible: creating and destroying nodes mid-performance can trigger garbage collection pauses. If you need many voices, implement a voice pool that keeps a fixed set of oscillators running and gates each one with its own gain node; source nodes like OscillatorNode are single-use and cannot be restarted after stop().
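A pool sketch along those lines; pool size, waveform, and envelope times are placeholders:

```js
// Voices are allocated once; triggering a note just retunes an oscillator and
// opens its gate, so nothing is created or collected mid-performance.
class VoicePool {
  constructor(ctx, size = 8) {
    this.voices = Array.from({ length: size }, () => {
      const osc = new OscillatorNode(ctx, { type: 'sawtooth' });
      const gate = new GainNode(ctx, { gain: 0 });  // silent until triggered
      osc.connect(gate).connect(ctx.destination);
      osc.start();                                  // runs for the pool's lifetime
      return { osc, gate, busyUntil: 0 };
    });
  }
  noteOn(freq, time, duration = 0.5) {
    // Take a voice that is free at the requested time, or steal the first one.
    const voice = this.voices.find(v => v.busyUntil <= time) ?? this.voices[0];
    voice.busyUntil = time + duration;
    voice.osc.frequency.setValueAtTime(freq, time);
    voice.gate.gain.cancelScheduledValues(time);
    voice.gate.gain.setValueAtTime(0, time);
    voice.gate.gain.linearRampToValueAtTime(0.3, time + 0.005); // fast attack
    voice.gate.gain.setTargetAtTime(0, time + duration, 0.05);  // release
  }
}
```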
Profile with Chrome DevTools (Performance tab) or Firefox's profiler. Look for main-thread jank—long frames that block the event loop and starve the audio thread. Avoid layout thrash: batch DOM reads and writes, and minimize style recalculations. If your UI updates cause audio glitches, debounce or throttle render cycles, or move updates to requestAnimationFrame callbacks. Watch GC pauses; if you see frequent collections, reduce allocation rate by reusing objects and typed arrays.
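A small throttling sketch; render stands in for whatever function draws your UI:

```js
// Collapse rapid parameter changes into one DOM update per animation frame.
let rafPending = false;
let latestState = null;

function queueRender(state) {
  latestState = state;
  if (!rafPending) {
    rafPending = true;
    requestAnimationFrame(() => {
      rafPending = false;
      render(latestState);  // hypothetical draw function: one write per frame
    });
  }
}
```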
🧪 Testing and validation
Build small harnesses that render known signals offline using OfflineAudioContext, then analyze the output with FFT. Generate a 440 Hz sine and verify the spectrum shows a clean peak with harmonics below -80 dB. Render an envelope and check attack/decay timing against expected sample counts. These tests catch regressions and ensure cross-browser consistency.
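One way such a harness might look, assuming a one-second mono render and illustrative pass/fail thresholds:

```js
// Render a 440 Hz sine offline, then probe the buffer at the fundamental and
// the second harmonic with single-bin DFTs.
async function testSinePurity() {
  const sampleRate = 44100;
  const offline = new OfflineAudioContext(1, sampleRate, sampleRate);
  const osc = new OscillatorNode(offline, { type: 'sine', frequency: 440 });
  osc.connect(offline.destination);
  osc.start(0);
  osc.stop(1);

  const samples = (await offline.startRendering()).getChannelData(0);

  // Magnitude of a single DFT bin at frequency f, normalized so a full-scale sine reads 0 dB.
  const binMagnitude = f => {
    let re = 0, im = 0;
    for (let n = 0; n < samples.length; n++) {
      const phase = (2 * Math.PI * f * n) / sampleRate;
      re += samples[n] * Math.cos(phase);
      im -= samples[n] * Math.sin(phase);
    }
    return (2 * Math.hypot(re, im)) / samples.length;
  };

  const fundamentalDb = 20 * Math.log10(binMagnitude(440));
  const harmonicDb = 20 * Math.log10(binMagnitude(880) + 1e-12);

  console.assert(fundamentalDb > -1, 'fundamental should be near full scale');
  console.assert(harmonicDb < -80, 'second harmonic should sit below -80 dB');
}
```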
Automate timing assertions for sequencers: schedule a series of events, render offline, and verify they land sample-accurate. Use continuous integration to run these tests on Chrome, Firefox, and Safari—Web Audio implementations vary subtly, and what works on one may glitch on another. Listen critically across browsers and devices; quantitative tests catch obvious bugs, but your ears catch the musical ones.
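A timing check in the same spirit, using a gated ConstantSourceNode so onsets are trivial to detect; the event times and one-sample tolerance are arbitrary:

```js
// Schedule three pulses at known times, render offline, and assert that the
// first nonzero sample of each pulse lands on the expected frame.
async function testEventTiming() {
  const sampleRate = 48000;
  const offline = new OfflineAudioContext(1, sampleRate, sampleRate);
  const source = new ConstantSourceNode(offline, { offset: 1 });
  const gate = new GainNode(offline, { gain: 0 });
  source.connect(gate).connect(offline.destination);
  source.start(0);

  const eventTimes = [0.25, 0.5, 0.75];        // expected onsets in seconds
  for (const t of eventTimes) {
    gate.gain.setValueAtTime(1, t);            // pulse on
    gate.gain.setValueAtTime(0, t + 0.01);     // pulse off 10 ms later
  }

  const samples = (await offline.startRendering()).getChannelData(0);
  let cursor = 0;
  for (const t of eventTimes) {
    while (cursor < samples.length && samples[cursor] === 0) cursor++;
    const expected = Math.round(t * sampleRate);
    console.assert(Math.abs(cursor - expected) <= 1,
      `event expected at sample ${expected}, found at ${cursor}`);
    cursor = expected + Math.round(0.01 * sampleRate) + 1;  // skip past this pulse
  }
}
```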