Why Structured Audio Can Feel Different From Streaming Tracks
A technical look at why stereo separation, real-time generation, and session structure can create a different experience from generic focus playlists.
00. THE HISTORICAL VECTOR
In 1839, physicist Heinrich Wilhelm Dove discovers that playing two different frequencies into each ear creates a perceived "beat" inside the head. He identifies it as a neurological curiosity, not yet understanding the mechanism.
In 1930, Gestalt psychologist Wolfgang Metzger discovers that when the visual field is flooded with uniform, static noise (a "Ganzfeld"), the brain starts hallucinating patterns to find structure. This lays the groundwork for Visual Entrainment.
In 1973, Dr. Gerald Oster publishes "Auditory Beats in the Brain" in Scientific American, a widely cited paper that discusses binaural beats, brainstem processing, and possible diagnostic uses in auditory research.
In the 1970s, the Monroe Institute develops and popularizes "Hemi-Sync" protocols. Related government-era documents and consciousness research are part of the historical background around these ideas, but they should be read as context rather than validation.
Phantas uses the Web Audio API to generate session audio directly in the browser instead of relying on pre-rendered streaming files. The practical idea is to reduce another layer of file handling between the session design and playback.
01. THE MECHANISM (FFR)
To understand why compression may matter here, it helps to look at the Frequency Following Response (FFR).
The brain is an electrochemical machine that operates on rhythm. These rhythms (Alpha, Beta, Theta) are not random; they are the "clock speed" of the cortex.
Sensory input travels to the Thalamus (the brain's router). In entrainment research, strong rhythmic input is often discussed as one way cortical timing may become more synchronized to an external signal.
The superior olivary complex, a structure in the brainstem, helps process tiny timing differences between the ears. Binaural beat literature often relates that timing sensitivity to the perception of an internal beat when each ear receives a slightly different frequency.
If phase relationships are blurred by compression, the perceived internal beat may become less distinct. Phantas is not attempting to validate that chain directly. It is a practical session you can try and evaluate yourself.
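The arithmetic behind that internal beat is plain trigonometry: summing two sines at nearby frequencies yields a tone at their average frequency whose amplitude envelope repeats at their difference. In binaural presentation the tones never mix acoustically, and the literature attributes the beat to brainstem processing rather than to this sum; the identity below is only a numeric illustration of where the 10 Hz figure comes from (frequencies chosen to match the article's example):

```javascript
// sin(2π·f1·t) + sin(2π·f2·t) === 2·cos(2π·((f2-f1)/2)·t)·sin(2π·((f1+f2)/2)·t)
// With f1 = 200 Hz and f2 = 210 Hz, that is a 205 Hz tone whose
// envelope repeats at the 10 Hz difference frequency.
const f1 = 200, f2 = 210;

function summed(t) {
  return Math.sin(2 * Math.PI * f1 * t) + Math.sin(2 * Math.PI * f2 * t);
}

function envelopeForm(t) {
  const halfDiff = (f2 - f1) / 2; // 5 Hz
  const center = (f1 + f2) / 2;   // 205 Hz
  return 2 * Math.cos(2 * Math.PI * halfDiff * t) * Math.sin(2 * Math.PI * center * t);
}

// The two expressions agree at every instant (up to floating-point error).
for (let t = 0; t < 0.1; t += 0.001) {
  if (Math.abs(summed(t) - envelopeForm(t)) > 1e-9) {
    throw new Error("identity violated at t=" + t);
  }
}
console.log("difference frequency:", f2 - f1, "Hz");
```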
02. THE COMPRESSION ARTIFACT
This is one reason streaming platforms may behave differently. Codecs like MP3, AAC (used by YouTube), and Ogg Vorbis (used by Spotify) commonly apply joint-stereo techniques such as Mid/Side Coding to save bandwidth.
Instead of storing L and R separately, they store:
- MID CHANNEL = (Left + Right) / 2
- SIDE CHANNEL = (Left - Right) / 2
Because binaural beats use two very similar frequencies (200Hz vs 210Hz), some listeners and researchers treat stereo handling as important. Compression may reduce or blur some of the phase detail those setups depend on.
Result: The "Hum" remains, but the "Driver" is flattened.
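The mid/side transform itself is perfectly reversible; the concern described above is what a lossy codec then does to the side channel. A toy sketch (my own illustration, not how any specific codec works internally) showing that attenuating SIDE leaves MID intact while shrinking the left/right difference:

```javascript
// Mid/side encode and decode for one stereo sample pair.
function msEncode(L, R) {
  return { mid: (L + R) / 2, side: (L - R) / 2 };
}
function msDecode(mid, side) {
  return { L: mid + side, R: mid - side };
}

// Two slightly different sine tones, one per channel.
const t = 0.0125;
const L = Math.sin(2 * Math.PI * 200 * t);
const R = Math.sin(2 * Math.PI * 210 * t);

// Exact round-trip: decoding recovers L and R unchanged.
const { mid, side } = msEncode(L, R);
const exact = msDecode(mid, side);

// A lossy codec may allocate few bits to SIDE; model that crudely
// by scaling the side channel down before decoding.
const degraded = msDecode(mid, side * 0.25);

// The "hum" (mid) survives, but the interaural difference shrinks.
console.log("round-trip difference:", exact.L - exact.R);
console.log("degraded difference:", degraded.L - degraded.R);
```

In this toy model the degraded L/R difference is exactly the attenuation factor times the original, which is the "Driver is flattened" effect the section describes.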
03. CLIENT-SIDE SYNTHESIS
Phantas takes a different route and treats the browser as a real-time synthesizer.
Using the Web Audio API, we instantiate raw OscillatorNodes directly on your device's CPU. The audio is generated at playback time rather than delivered as a pre-rendered streaming file.
const ctx = new AudioContext();
// 1. INSTANTIATE OSCILLATORS (PURE SINE)
const oscL = ctx.createOscillator();
const oscR = ctx.createOscillator();
// 2. DEFINE CARRIER & OFFSET (ALPHA 10HZ)
const carrier = 200; // Hz, shared base tone
const target = 10;   // Hz, desired beat frequency
oscL.frequency.value = carrier;
oscR.frequency.value = carrier + target;
// 3. HARD-PAN FOR CHANNEL ISOLATION
const panL = ctx.createStereoPanner();
const panR = ctx.createStereoPanner();
panL.pan.value = -1; // Full Left
panR.pan.value = 1;  // Full Right
// 4. CONNECT GRAPH AND START
oscL.connect(panL).connect(ctx.destination);
oscR.connect(panR).connect(ctx.destination);
oscL.start();
oscR.start();
04. THE VISUAL CO-PROCESSOR
Phantas optionally pairs the audio with visual rhythm: a strobe driver (Canvas API) flashes the screen at the same frequency as the audio beat. This draws on Ganzfeld-style and rhythmic visual stimulation ideas, but it is best treated as part of the session design, to be judged by direct experience, rather than as a validated mechanism claim.
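The strobe timing reduces to a pure function of elapsed time, which keeps the Canvas draw loop trivial. A minimal sketch of that idea (the function and variable names here are illustrative, not Phantas's actual implementation):

```javascript
// Square-wave gate: the flash is on for the first half of each cycle.
function strobeOn(elapsedSeconds, freqHz) {
  const phase = (elapsedSeconds * freqHz) % 1; // position within cycle, 0..1
  return phase < 0.5;
}

// Illustrative draw loop: paint white while the gate is open.
function drawLoop(canvas, freqHz) {
  const ctx2d = canvas.getContext("2d");
  const start = performance.now();
  function frame(now) {
    const elapsed = (now - start) / 1000;
    ctx2d.fillStyle = strobeOn(elapsed, freqHz) ? "#fff" : "#000";
    ctx2d.fillRect(0, 0, canvas.width, canvas.height);
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```

Deriving the gate from elapsed time rather than from a frame counter keeps the flash frequency stable even when the display's refresh rate varies or frames are dropped.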