BoothJS (or booth.js) is a zero-dependency wrapper around the Web Audio API to make working with audio on the web super easy.
Instead of following super long tutorials on how to record on the web, just use BoothJS:
```javascript
import { Recorder, getMediaStream } from "booth.js";

const stream = await getMediaStream({ audio: true });
const recorder = new Recorder(stream);

recorder.start();

// A little while later...

const data = await recorder.stop();

// Yay, we have our recorded data as a Blob object!
console.log(data);
```
Wasn't that easy?
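The blob's `type` field tells you which container the browser chose (commonly `audio/webm` or `audio/ogg`, depending on the browser). As a quick illustration, here's a small, hypothetical helper (not part of Booth) for turning that MIME type into a file extension when saving a recording:

```javascript
// Hypothetical helper: pick a file extension from the recorded blob's MIME type.
// MediaRecorder typically reports something like "audio/webm;codecs=opus".
function extensionForMimeType(mimeType) {
  const base = mimeType.split(";")[0].trim(); // strip codec parameters
  const map = {
    "audio/webm": "webm",
    "audio/ogg": "ogg",
    "audio/mp4": "m4a",
    "audio/wav": "wav",
  };
  return map[base] ?? "bin"; // fall back for unknown containers
}

// e.g. building a download name for the blob returned by recorder.stop():
// const filename = `recording.${extensionForMimeType(data.type)}`;
```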
Getting up and running with Booth is as simple as:
```shell
$ npm install booth.js
$ yarn add booth.js
$ pnpm add booth.js
$ bun add booth.js
```
If you have multiple devices plugged in, Booth makes it super easy to choose a single device to record:
```javascript
import { Recorder, getMediaStream, getDevices } from "booth.js";

const devices = await getDevices({ kind: "audioinput" });
const stream = await getMediaStream({ deviceId: devices[0].id });
const recorder = new Recorder(stream);

recorder.start();
```
Booth also provides wrappers around the `AnalyserNode` API to make monitoring frequency data and volume easier. The `Monitor` and `Recorder` classes both have built-in analysers; as `Recorder` is itself a subclass of `Monitor`, the same API works on both classes:
```javascript
import { getMediaStream, Monitor } from "booth.js";

const stream = await getMediaStream();
const monitor = new Monitor(stream); // Monitor begins listening to the stream as soon as it's created

function showVolume() {
  const volume = monitor.volume;
  // Some kind of drawing code...
  requestAnimationFrame(showVolume);
}

function showWaveform() {
  const waveformData = monitor.byteTimeDomainData;
  // Some kind of drawing code...
  requestAnimationFrame(showWaveform);
}

showVolume();
showWaveform();
```
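Under the hood, a volume reading like this is typically derived from the analyser's time-domain samples. Purely as an illustration (this is not Booth's actual implementation), an RMS volume in the 0–1 range could be computed from byte time-domain data like so:

```javascript
// Illustration only: derive an RMS volume (0..1) from byte time-domain
// samples, where 128 represents the zero line of the waveform.
function rmsFromByteTimeDomain(samples) {
  let sumOfSquares = 0;
  for (const sample of samples) {
    const centered = (sample - 128) / 128; // map 0..255 to roughly -1..1
    sumOfSquares += centered * centered;
  }
  return samples.length ? Math.sqrt(sumOfSquares / samples.length) : 0;
}

// Silence sits at the 128 midline and yields 0:
rmsFromByteTimeDomain(new Uint8Array([128, 128, 128, 128])); // → 0
```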
You can also use the `Analyser` wrapper directly, which makes it easy to monitor frequency data without managing an external array. Take, for example, this StackOverflow answer rewritten with Booth:
```javascript
import { Analyser } from "booth.js";

const audioContext = new AudioContext(); // browser built-in

const samplebutton = document.createElement("button");
samplebutton.innerText = "sample";

samplebutton.addEventListener("click", async () => {
  const response = await fetch("testsong.wav");
  const soundBuffer = await response.arrayBuffer();
  const sampleBuffer = await audioContext.decodeAudioData(soundBuffer);

  const sampleSource = audioContext.createBufferSource();
  const analyser = new Analyser(audioContext, { fftSize: 2048 });

  sampleSource.buffer = sampleBuffer;
  sampleSource.connect(analyser);
  analyser.connectToDestination();
  sampleSource.start();

  function calculateVolume() {
    console.log(analyser.volume);
    requestAnimationFrame(calculateVolume);
  }

  calculateVolume();
});
```
`Analyser` gives you access to the underlying `AnalyserNode` instance, allowing you to connect it to other sources or worklets. Take, for example, this snippet from MDN's Web Audio voice changer example:
```javascript
import { Analyser } from "booth.js";

const audioCtx = new AudioContext(); // browser built-in

const analyser = new Analyser(audioCtx, {
  minDecibels: -90,
  maxDecibels: -10,
  smoothingTimeConstant: 0.85,
});

const distortion = audioCtx.createWaveShaper();
const gainNode = audioCtx.createGain();
const biquadFilter = audioCtx.createBiquadFilter();
const convolver = audioCtx.createConvolver();

// `stream` and `echoDelay` are defined elsewhere in the MDN example
const source = audioCtx.createMediaStreamSource(stream);

source.connect(distortion);
distortion.connect(biquadFilter);
biquadFilter.connect(gainNode);
convolver.connect(gainNode);
echoDelay.placeBetween(gainNode, analyser.node);
analyser.connectToDestination();
```
To access the same analyser on a `Monitor` or `Recorder`, use the `analyser` property. By default, the built-in analyser is connected between the source and the destination of the audio stream. If you need to customize this wiring entirely, as above, you can pass a `setupAnalyser` callback to connect the analyser to your own nodes manually. Just remember to connect it to the source and destination as well!
```javascript
import { Recorder, getMediaStream } from "booth.js";

const stream = await getMediaStream();
const recorder = new Recorder(stream, {
  setupAnalyser: ({ analyser, source, context }) => {
    const distortion = context.createWaveShaper();
    const gainNode = context.createGain();
    const biquadFilter = context.createBiquadFilter();
    const convolver = context.createConvolver();

    source.connect(distortion);
    distortion.connect(biquadFilter);
    biquadFilter.connect(gainNode);
    convolver.connect(gainNode);
    // `echoDelay` comes from the MDN example above
    echoDelay.placeBetween(gainNode, analyser.node);
    analyser.connectToDestination();
  },
});

// Some time later...

recorder.analyser; // Access the Analyser
```
Booth also supports custom worklets in case it doesn't do everything you need out of the box. Let's take a look at registering a custom worklet that prints its data whenever it dispatches a new message:
```javascript
import { getMediaStream, Monitor } from "booth.js";

const stream = await getMediaStream();
const monitor = new Monitor(stream);

const node = await monitor.installWorklet(
  "my-custom-worklet",
  "/worklets/my-custom-worklet.js",
);

node.port.addEventListener("message", ({ data }) => {
  console.log("Received new worklet data: " + JSON.stringify(data));
});

monitor.source.connect(node).connect(monitor.destination);
```
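The worklet file itself is plain Web Audio: an `AudioWorkletProcessor` subclass registered under the name you pass to `installWorklet`. Here's a hedged sketch of what a hypothetical `/worklets/my-custom-worklet.js` might contain — the `meanAmplitude` helper and the message shape are invented for this example, and `AudioWorkletProcessor`/`registerProcessor` are globals the browser provides inside a worklet scope:

```javascript
// Pure helper: average absolute amplitude of one channel of Float32 samples.
function meanAmplitude(channel) {
  let sum = 0;
  for (const sample of channel) sum += Math.abs(sample);
  return channel.length ? sum / channel.length : 0;
}

// Fallback base class so this sketch also parses outside a real worklet scope.
const Base = globalThis.AudioWorkletProcessor ?? class {};

class MyCustomProcessor extends Base {
  process(inputs) {
    const channel = inputs[0]?.[0] ?? [];
    // Post a message each render quantum; the main thread receives it via node.port.
    this.port.postMessage({ level: meanAmplitude(channel) });
    return true; // keep the processor alive
  }
}

if (globalThis.registerProcessor) {
  registerProcessor("my-custom-worklet", MyCustomProcessor);
}
```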
The short answer: it's 2025, and recording audio still requires a lot more code than it should. Notably, having to collect data chunks as recording data comes in is a pain (at least to me). I wanted to build a more intuitive API over the existing one to make my life a little easier.
Yes, it's true this is not the first library to tackle this problem. It is, however, the newest and most up-to-date (see "Similar Projects" below).
I originally wanted to name this project record.js, as I thought it sounded much cooler, but apparently NPM won't let you create packages that are too similar to other packages. Seeing record-js and recordjs beat me out, I settled for booth, as in isolation booth.
This library came out of my own needs for web audio, so it will definitely be maintained for the time being. I'd like to see it eventually grow to encapsulate other kinds of web media management, such as video and screen recording as well.
Unlike Booth's sister library PushJS, Booth was created to provide a more intuitive way to use the Web Audio API, not to provide backwards compatibility for it.
There is no guarantee BoothJS will work on older browsers, but if you need to fill the gap in some way, I encourage you to use Google's audio worklet polyfill for the time being.
- RecordRTC (last published 2+ years ago)
- recorderjs (last published 8+ years ago)