Friendly Fourier: Hands-On Exercise Analyzing the Frequency Content of a Pop Single

2026-03-10

Hands-on lab: record a short clip, compute an FFT, identify harmonics, and link them to timbre — inspired by Mitski's single.

Hook: Stop Guessing What You Hear — Measure It

Struggling to connect the sound in your head with the math on the page? If you feel lost when instructors say “look at the Fourier transform” or can’t tell why two singers sound different even at the same pitch, this hands-on lab is for you. In this guided exercise you will record a short clip, compute a Fourier transform, identify harmonics, and tie them to timbre — all using free tools or a few lines of Python. Inspired by the 2026 buzz around Mitski’s new single, we’ll use the idea of a modern pop vocal as a motivating example while respecting copyright by working with your own recorded clip.

The Big Picture (Most Important Things First)

This lab teaches you three transferable skills:

  • How to capture a clean audio sample that’s suitable for spectral analysis (gain staging, sampling rate, avoiding clipping).
  • How to compute and interpret an FFT and spectrogram (windowing, zero-padding, resolution trade-offs).
  • How to identify harmonics and connect them to timbre using peak detection and simple spectral descriptors (spectral centroid, harmonic-to-noise ratio).

By the end you’ll be able to say, with measurements, why a voice sounds “bright” or “warm” and how harmonics contribute to a pop single’s character — a timely skill for students and creators in 2026’s audio-forward digital landscape.

Why This Matters in 2026

Late 2025 and early 2026 brought more accessible audio analysis tools into classrooms: browser-based WebAudio DSP playgrounds, lightweight ML pitch models running directly in the browser, and improved open-source toolchains (librosa 0.10+, widespread Colab examples). Musicians like Mitski sparked renewed interest in production details — students want to understand not just lyrics and melody, but the sonic fingerprints that make a voice unique. This lab gives you practical, modern workflows to analyze those fingerprints.

Quick Tools You Can Use (Pick One)

  • Audacity (free) — easy recording and built-in spectrogram.
  • Web browser + WebAudio analyzer — real-time FFT without installing software.
  • Python (Colab/Jupyter) with numpy, scipy, matplotlib, librosa — programmable, reproducible analysis.
  • MATLAB / Octave — academic workflows common in labs and courses.

Before You Start: Materials & Settings

Gather these items and settings to make your measurements reliable:

  • A microphone (USB mic or smartphone will do).
  • Quiet room, soft furnishings to reduce early reflections.
  • Recording software (Audacity recommended for beginners).
  • Sample rate: 44100 Hz or 48000 Hz; bit depth: 16 or 24 bit.
  • Recording format: uncompressed WAV.
  • Target clip length: 3–8 seconds (a short sung phrase or hummed melody inspired by the single).

Step 1 — Record a Clean Clip

Practical checklist:

  1. Set mic about 15–25 cm from mouth. Avoid plosives; use a pop filter if possible.
  2. Keep input gain low enough to avoid clipping (aim for peaks around -12 dBFS to -6 dBFS, which leaves safe headroom).
  3. Record a short sustained vowel or a one-line phrase in a single take — aim for consistent loudness.
  4. Save as WAV, 44100 Hz, 16 or 24 bit.

Troubleshooting: If your recording contains too much noise, try a closer mic position and lower the room noise. If it clips, reduce gain and re-record.

Step 2 — Quick Listening & Preprocessing

Open your clip in Audacity or Python and do minimal preprocessing:

  • Trim leading/trailing silence.
  • Apply a light fade-in/out to remove clicks at boundaries.
  • Normalize gain to a consistent RMS or peak level (don’t over-compress).
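If you prefer to script these steps, here is a minimal numpy sketch. The function name `preprocess` is ours, and it assumes a mono signal `y` already loaded at sample rate `sr` (e.g., via `librosa.load`):

```python
import numpy as np

def preprocess(y, sr, fade_ms=10, peak_dbfs=-3.0):
    """Apply short fades and peak-normalize a mono signal (simple sketch)."""
    n_fade = int(sr * fade_ms / 1000)
    ramp = np.linspace(0.0, 1.0, n_fade)
    y = y.copy()
    y[:n_fade] *= ramp          # fade-in removes the click at the start
    y[-n_fade:] *= ramp[::-1]   # fade-out removes the click at the end
    target = 10 ** (peak_dbfs / 20)     # e.g. -3 dBFS -> ~0.708 linear
    y *= target / np.max(np.abs(y))     # peak-normalize
    return y
```

In Audacity the equivalent steps live under the Effect menu (Fade In, Fade Out, Normalize).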

Step 3 — Compute the FFT & Spectrogram

Key parameters you’ll choose and why:

  • FFT size (N): 2048–8192. Larger N gives finer frequency resolution but coarser time resolution.
  • Window: Hann (good default for audio).
  • Hop length: 25–50% of the window length for a smooth spectrogram.
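The trade-off above is concrete arithmetic: bin spacing is sr/N Hz and each analysis frame spans N/sr seconds. A quick sanity check at 44.1 kHz:

```python
# Frequency vs. time resolution for common FFT sizes at 44.1 kHz
sr = 44100
for n_fft in (1024, 2048, 4096, 8192):
    print(f"N={n_fft}: bin spacing = {sr/n_fft:.1f} Hz, frame = {1000*n_fft/sr:.0f} ms")
```

At N=4096 you resolve harmonics about 11 Hz apart but smear events shorter than ~93 ms, which is why steady vowels analyze well and fast transients don't.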

Here’s a reproducible Python snippet you can run in Google Colab or locally. It uses librosa + matplotlib and works with your WAV file.

import librosa
import librosa.display  # needed for specshow
import numpy as np
import matplotlib.pyplot as plt

# Load
y, sr = librosa.load('your_clip.wav', sr=44100)

# Parameters
n_fft = 4096
hop_length = 1024
window = 'hann'

# STFT & Spectrogram (magnitude)
S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length, window=window))
S_db = librosa.amplitude_to_db(S, ref=np.max)

# Plot
plt.figure(figsize=(10,4))
librosa.display.specshow(S_db, sr=sr, hop_length=hop_length, x_axis='time', y_axis='hz', cmap='magma')
plt.colorbar(format='%+2.0f dB')
plt.title('Spectrogram (dB)')
plt.show()

What to look for in the spectrogram

  • Horizontal lines = stable harmonics (common in voiced singing).
  • Vertical bursts = transients (plosives, strummed guitar attacks).
  • Energy concentrated low = warm/dark timbre; energy spread upward = bright timbre.

Step 4 — Identify the Fundamental and Harmonics

Harmonics are integer multiples of the fundamental frequency (f0). If the sung note has f0 = 220 Hz, harmonics appear at 440 Hz, 660 Hz, 880 Hz, etc.

Peak picking (programmatic)

Use a peak finder on the magnitude spectrum of a steady region. Example in Python:

import scipy.signal as sps

# Uses S and sr from the previous snippet
# Choose a steady frame (e.g., middle of the clip)
frame = S[:, S.shape[1]//2]
freqs = np.linspace(0, sr/2, len(frame))  # bin centre frequencies, 0 Hz to Nyquist

peaks, _ = sps.find_peaks(frame, height=np.max(frame)*0.05, distance=5)
peak_freqs = freqs[peaks]
peak_mags = frame[peaks]

for f, m in zip(peak_freqs[:10], peak_mags[:10]):
    print(f"Peak at {f:.1f} Hz, magnitude {m:.3f}")

Look for a series of peaks spaced at multiples. The lowest strong peak is often the fundamental; if the fundamental is weak, you may see partials where the first visible peak is actually the 2nd harmonic.

Manual verification

  • Measure the frequency spacing between consecutive harmonic peaks: it should be ~f0.
  • If spacing is inconsistent, re-check window size and use a longer steady vowel segment.
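The spacing check is easy to automate. A self-contained sketch, using a synthetic five-harmonic tone as a stand-in for your steady frame (swap in your own `peak_freqs` from the peak-picking step):

```python
import numpy as np
import scipy.signal as sps

# Synthetic steady tone: f0 = 220 Hz with five decaying harmonics
sr, f0 = 44100, 220.0
t = np.arange(sr) / sr
y = sum((0.8 ** k) * np.sin(2 * np.pi * f0 * k * t) for k in range(1, 6))

# Magnitude spectrum of one long Hann-windowed frame (1 Hz resolution)
spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), 1 / sr)

peaks, _ = sps.find_peaks(spec, height=spec.max() * 0.05)
peak_freqs = freqs[peaks]

# Median spacing between consecutive harmonic peaks is a robust f0 estimate
f0_est = np.median(np.diff(peak_freqs))
print(f"Estimated f0 from peak spacing: {f0_est:.1f} Hz")
```

The median makes the estimate robust to one spurious or missing peak, which a plain mean would not be.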

Step 5 — Relate Harmonics to Timbre

Timbre is what allows you to tell two instruments or singers apart at the same pitch. It’s determined by the relative amplitudes of harmonics, the spectral envelope, and time-varying features like attack and vibrato.

Simple metrics to compute

  • Spectral centroid: the “center of mass” of the spectrum — higher centroid = brighter sound.
  • Spectral rolloff: frequency below which X% (e.g., 85%) of spectral energy lies.
  • Harmonic-to-noise ratio (HNR): a measure of periodic vs. noisy energy. Higher HNR = cleaner harmonic structure.

Compute these with librosa:

centroid = librosa.feature.spectral_centroid(y=y, sr=sr, n_fft=n_fft, hop_length=hop_length)
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.85, n_fft=n_fft, hop_length=hop_length)

print('Mean spectral centroid:', centroid.mean())
print('Mean rolloff (Hz):', rolloff.mean())
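librosa does not ship an HNR feature (tools like Praat compute it properly), so here is one crude stand-in of our own: the share of spectral energy lying within a small tolerance of the first few harmonics of a known f0. The helper name and parameters are ours, not a library API:

```python
import numpy as np

def harmonic_energy_ratio(y, sr, f0, n_harm=10, tol_hz=10.0):
    """Crude harmonicity proxy: share of spectral energy near harmonics of f0."""
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y)))) ** 2
    freqs = np.fft.rfftfreq(len(y), 1 / sr)
    mask = np.zeros(len(spec), dtype=bool)
    for k in range(1, n_harm + 1):
        mask |= np.abs(freqs - k * f0) < tol_hz   # bins near the k-th harmonic
    return spec[mask].sum() / spec.sum()
```

For a clean sung vowel this ratio approaches 1; added breathiness or room noise pulls it down, mirroring what a proper HNR would report.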

Worked Example: A Simple Vocal Hum

Imagine you recorded a 4-second hummed A3 at about 220 Hz. After computing the spectrogram and performing peak picking, you find strong peaks at ~220 Hz, 440 Hz, 660 Hz, 880 Hz, and smaller energy at 1100–1760 Hz.

Interpretation:

  • The first five harmonics indicate a strongly periodic voice with clear harmonic series.
  • The relative amplitudes show more energy in the lower harmonics — this suggests a warm timbre.
  • A higher spectral centroid or extra energy above 2 kHz would indicate a brighter or more “edgy” vocal quality.

Common Pitfalls & How to Fix Them

  • Aliasing: If your sample rate is too low, high harmonics fold back. Fix: use 44100 or 48000 Hz.
  • Window leakage: Sidelobes from rectangular windows can hide peaks. Fix: use Hann/Blackman windows and longer frames.
  • Insufficient frequency resolution: Short FFT gives smeared harmonics. Fix: increase FFT size (but note time resolution trade-off).
  • Noisy background: makes HNR low and masks harmonics. Fix: record in quieter environment or use a noise gate carefully.
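The window-leakage pitfall above is easy to see numerically. This sketch compares the leakage floor of a rectangular (i.e., no) window against a Hann window for a worst-case tone sitting exactly between two FFT bins:

```python
import numpy as np

# A tone exactly half a bin off-grid: worst case for spectral leakage
sr, n = 8000, 1024
t = np.arange(n) / sr
y = np.sin(2 * np.pi * (sr / n) * 128.5 * t)   # ~1004 Hz, between bins 128 and 129

rect_db = 20 * np.log10(np.abs(np.fft.rfft(y)) + 1e-12)
hann_db = 20 * np.log10(np.abs(np.fft.rfft(y * np.hanning(n))) + 1e-12)

# Compare the leakage floor well away from the tone (above 2 kHz)
freqs = np.fft.rfftfreq(n, 1 / sr)
far = freqs > 2000
print(f"rect floor: {rect_db[far].max():.1f} dB, hann floor: {hann_db[far].max():.1f} dB")
```

The rectangular floor sits tens of dB above the Hann floor; that raised floor is exactly the leakage that can bury weak upper harmonics of a voice.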

Extension Activities (Advanced)

  1. Compare a sung phrase, a guitar note, and a synth pad at the same nominal pitch. Compute spectral centroids and plot harmonic amplitudes. Discuss differences in spectral slope and formants.
  2. Use machine learning pitch estimators (CREPE-style models) to track f0 through vibrato and compare with your peak-detection results. Note: recent 2025–2026 browser ML models now run locally in real-time.
  3. Re-synthesize the clip by keeping only the first N harmonics (additive synthesis) to hear how the timbre changes as you remove harmonics.

Re-synthesis Example (Python)

Create a simple additive resynthesis using detected peak frequencies and amplitudes. This isolates the harmonic contribution to timbre.

import numpy as np

# Example: synthesize first 6 harmonics of f0
sr = 44100        # or reuse sr from the earlier librosa.load
f0 = 220.0
dur = 3.0
t = np.linspace(0, dur, int(sr*dur), endpoint=False)

synth = np.zeros_like(t)
amps = [1.0, 0.6, 0.4, 0.3, 0.15, 0.1]
for k, a in enumerate(amps, start=1):
    synth += a * np.sin(2*np.pi*f0*k*t)

# Normalize
synth /= np.max(np.abs(synth))

# Save with soundfile (pip install soundfile) or play in a notebook:
# import soundfile as sf; sf.write('synth.wav', synth, sr)

Classroom & Assessment Ideas

Turn the lab into a graded assignment or group project:

  • Students upload a 4–8 s recorded clip plus a short report with spectrogram screenshots, measured f0, list of harmonic magnitudes, and computed spectral descriptors.
  • Rubric (example): recording quality (20%), accuracy of peak identification (30%), interpretation linking harmonics to timbre (30%), clarity and reproducibility of code/steps (20%).

Inspired by contemporary singles like Mitski’s, you may be tempted to analyze a commercial recording. Copyright rules vary — do not upload copyrighted audio to public platforms. Instead, record your own short vocal lines or use royalty-free samples. Citing public articles about songs is fine; for example, the January 16, 2026 Rolling Stone piece on Mitski’s single inspired this lab’s angle, but we analyze original student recordings here.

"Even larks and katydids are supposed, by some, to dream." — spoken word inspiration from Mitski’s album tease (Rolling Stone, Jan 16, 2026)

As of 2026, expect a few continuing trends that will change how you do this lab:

  • Edge ML models that run in the browser for real-time pitch and timbre analysis — no Python install required.
  • Improved open-source datasets for timbre modeling, making classroom comparisons more robust.
  • AI-assisted mixing tools that highlight spectral issues and recommend equalization based on measured harmonics — useful for artists and audio-aware physics labs.

Actionable Takeaways (Quick List)

  • Always record WAV at 44.1/48 kHz and avoid clipping.
  • Use longer FFTs (4096) for better frequency resolution when analyzing steady pitched sounds.
  • Identify harmonics by spacing; compute spectral centroid to quantify brightness.
  • Re-synthesize from harmonics to directly hear how they shape timbre.
  • Be mindful of copyright — analyze your own recordings or public-domain samples.

Lab Checklist (Before You Turn In)

  • Original WAV file uploaded (3–8 s).
  • Spectrogram screenshot with axis labels.
  • List of detected peak frequencies and identified harmonics.
  • Spectral centroid and rolloff values with short interpretation.
  • Optional: short additive resynthesis file or audio sample showing harmonic isolation.

Final Notes — How This Helps You Learn Physics & Audio

This lab builds intuition for the frequency-domain representations central to physics, signal processing, and audio engineering. You move from abstract Fourier theory to measurable evidence: you can see harmonics, quantify their strength, and hear how they affect timbre. That close loop — measure, interpret, synthesize — is how learners gain confidence and creators make informed decisions in the studio.

Call to Action

Ready to try it? Record a short vocal or instrument clip and run the Python notebook above in Google Colab. Share your spectrogram and a one-paragraph interpretation in our studyphysics.online lab forum; we’ll feature thoughtful examples and give feedback. If you want a complete instructor-ready lab package (slides, rubric, Colab notebook), sign up for the audio-lab bundle on our site and get a free template tailored to your course level.


Related Topics

#signal-processing #lab #audio