Rust Audio

Best way to implement 3-band graphic eq

I’m trying to work out the best way to implement a traditional low-mid-high graphic EQ for my mixer (now called mixjack). I’ve experimented with FIR and IIR filters in the frequency domain, but the results weren’t great. Ideally, I’d like the three filters to preserve energy across frequencies when they’re all turned on and their outputs mixed together.

What’s the best way to do this? Am I better off working in the frequency domain? If so, how should I design the filter for the best audible result, and what windowing should I use? Or is it more efficient to apply the filter in the time domain? If so, how should I design it?

Hi @dodj.

You can definitely use the FFT to implement a 3-band EQ. I don’t know if it’s better, however.

For an FFT-based EQ, I would go for a Hann window with 75% overlap before applying the FFT. This has the property that the overlapped windows sum to a constant, so if you window the input and then overlap-add the frames again, you get the original signal back (up to a fixed gain). Concerning the window size: it determines how many frequency bins you get (a longer window means more bins), but a longer window also means more CPU and more latency. For instance, with a window of 8 samples (which is really short) and a sampling rate of 40 kHz, the bins are 5 kHz wide: 0–5 kHz, 5–10 kHz, 10–15 kHz and 15–20 kHz, and you have a latency of 8 samples.
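A quick sketch of the constant-overlap-add property of the periodic Hann window at 75% overlap (function names are mine, nothing from mixjack). Every input sample gets the same total weight (2.0 for this window/hop combination) across the overlapping frames, so overlap-add reconstructs the signal up to that fixed gain:

```rust
// Verify COLA (constant overlap-add) for a periodic Hann window at
// hop = N/4 (75% overlap). Illustrative sketch, not production code.

fn hann(n: usize) -> Vec<f64> {
    // Periodic Hann window: w[i] = 0.5 - 0.5 * cos(2*pi*i/n)
    (0..n)
        .map(|i| 0.5 - 0.5 * (2.0 * std::f64::consts::PI * i as f64 / n as f64).cos())
        .collect()
}

/// Total window weight a sample at `offset` receives across all
/// overlapping frames at the given hop size.
fn cola_sum(window: &[f64], hop: usize, offset: usize) -> f64 {
    (0..window.len() / hop)
        .map(|k| window[(offset + k * hop) % window.len()])
        .sum()
}

fn main() {
    let n = 8; // the short window from the example above
    let w = hann(n);
    let hop = n / 4; // 75% overlap
    for offset in 0..hop {
        // Every sample position sees the same total weight (2.0).
        println!("offset {}: sum = {}", offset, cola_sum(&w, hop, offset));
    }
}
```

With 50% overlap the same window sums to 1.0 instead; either way the gain is constant, which is what matters for reconstruction.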

FIR and IIR filters are an entirely different beast, and I must admit I’m not very familiar with them. I’ve started reading Julius O. Smith III’s book “Introduction to Digital Filters”, but I haven’t finished it. I think one usually prefers FIR or IIR filters because they tend to be computationally efficient. You can also use Faust to try out various designs and export them to Rust code, but I haven’t tried it yet.

Good luck!


I can fill in some of the gaps. There is a theoretical one-to-one correspondence between FIR filters and spectral filters: to get the convolution kernel for the FIR filter, you apply the IDFT to the spectral kernel. The Fourier transform turns convolution (a weighted moving average) in the time domain into multiplication in the frequency domain. (The convolution kernel is just the set of weights to use in your weighted average, or a function in the continuous case, where the weighted average becomes an integral.) I don’t know how issues like windowing, numerical stability and discretization/quantization affect this process in practice.
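The convolution theorem can be checked numerically on a tiny example: the DFT of a circular convolution equals the element-wise product of the two DFTs. This is a naive O(n²) DFT with complex numbers as `(re, im)` tuples, just to illustrate the correspondence:

```rust
// Check: DFT(a ⊛ b) == DFT(a) · DFT(b), element-wise, for circular
// convolution ⊛. Naive DFT, tiny inputs; illustrative only.

use std::f64::consts::PI;

fn dft(x: &[f64]) -> Vec<(f64, f64)> {
    let n = x.len();
    (0..n)
        .map(|k| {
            let mut acc = (0.0, 0.0);
            for (i, &v) in x.iter().enumerate() {
                let ang = -2.0 * PI * (k * i) as f64 / n as f64;
                acc.0 += v * ang.cos();
                acc.1 += v * ang.sin();
            }
            acc
        })
        .collect()
}

fn circular_conv(a: &[f64], b: &[f64]) -> Vec<f64> {
    let n = a.len();
    (0..n)
        .map(|k| (0..n).map(|i| a[i] * b[(n + k - i) % n]).sum::<f64>())
        .collect()
}

fn main() {
    let a = [1.0, 2.0, 0.0, -1.0];
    let b = [0.5, 0.25, 0.0, 0.0]; // a short "kernel", zero-padded
    let lhs = dft(&circular_conv(&a, &b));
    let (fa, fb) = (dft(&a), dft(&b));
    for k in 0..a.len() {
        // Complex product of the two spectra.
        let prod = (
            fa[k].0 * fb[k].0 - fa[k].1 * fb[k].1,
            fa[k].0 * fb[k].1 + fa[k].1 * fb[k].0,
        );
        assert!((lhs[k].0 - prod.0).abs() < 1e-9 && (lhs[k].1 - prod.1).abs() < 1e-9);
    }
    println!("convolution theorem holds on this example");
}
```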

Ah, I see. Yes, ignoring discretization, an FIR filter corresponds to a spectral filter with an infinitely long window, where the spectral kernel is the FFT of the convolution kernel, zero-padded to infinite length as well. The relationship with an STFFT (short-time FFT) with overlapping windows is still a little unclear to me, unfortunately.

However, we can already see why the windowing is there. If you take an infinite sinc convolution kernel and chop off the ends to make it finite, chopping off the ends is the same as multiplying by a rectangular window. So the corresponding spectral kernel is the FFT of the sinc function convolved with the FFT of the rectangular window. Since the rectangular window has quite a lot of high-frequency content, the resulting spectral kernel is not perfect.
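This effect is easy to measure. The sketch below builds two windowed-sinc low-pass kernels, one truncated with a rectangular window and one shaped with a Hann window, and compares their worst-case stopband leakage (cutoff, length, and the frequency range are arbitrary illustrative choices):

```rust
// Compare stopband leakage of a rectangular- vs Hann-windowed sinc
// low-pass kernel. Frequencies are normalized (cycles/sample).
// Illustrative sketch; parameters are arbitrary.

use std::f64::consts::PI;

fn sinc(x: f64) -> f64 {
    if x.abs() < 1e-12 { 1.0 } else { (PI * x).sin() / (PI * x) }
}

/// Low-pass kernel: ideal sinc at normalized cutoff `fc`, shaped by `window`.
fn lowpass(len: usize, fc: f64, window: impl Fn(usize, usize) -> f64) -> Vec<f64> {
    let m = (len - 1) as f64 / 2.0;
    (0..len)
        .map(|i| 2.0 * fc * sinc(2.0 * fc * (i as f64 - m)) * window(i, len))
        .collect()
}

/// Magnitude of the frequency response at normalized frequency `f`.
fn mag_at(h: &[f64], f: f64) -> f64 {
    let (re, im) = h.iter().enumerate().fold((0.0, 0.0), |(re, im), (n, &c)| {
        let ang = -2.0 * PI * f * n as f64;
        (re + c * ang.cos(), im + c * ang.sin())
    });
    (re * re + im * im).sqrt()
}

/// Peak magnitude over a sampled frequency band.
fn peak(h: &[f64], f_lo: f64, f_hi: f64) -> f64 {
    (0..=60)
        .map(|i| mag_at(h, f_lo + (f_hi - f_lo) * i as f64 / 60.0))
        .fold(0.0_f64, f64::max)
}

fn main() {
    let rect = lowpass(41, 0.1, |_, _| 1.0);
    let hann = lowpass(41, 0.1, |i, n| {
        0.5 - 0.5 * (2.0 * PI * i as f64 / (n - 1) as f64).cos()
    });
    // Deep in the stopband the Hann-windowed kernel leaks far less.
    println!("rect stopband peak: {:e}", peak(&rect, 0.2, 0.5));
    println!("hann stopband peak: {:e}", peak(&hann, 0.2, 0.5));
}
```

The trade-off: the Hann-windowed kernel pays for its cleaner stopband with a wider transition band around the cutoff.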

In the case of an STFFT, you can also explain it differently. For an STFFT, the finite input signal does not really have a start or an end: it wraps around. This means that if there’s a jump from the last sample to the first one, the STFFT perceives it as high-frequency content, even though it isn’t really there. The windowing is there to mitigate that.
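The wrap-around argument can also be demonstrated numerically: a plain ramp jumps from nearly 1 back to 0 when treated as periodic, which shows up as energy in the high bins, while the Hann-windowed ramp has no jump and almost no high-frequency energy (naive DFT, tiny N, all names mine):

```rust
// Compare high-bin energy of a ramp (discontinuous when wrapped) with
// the same ramp after Hann windowing. Illustrative sketch only.

use std::f64::consts::PI;

fn dft_mag(x: &[f64]) -> Vec<f64> {
    let n = x.len();
    (0..n)
        .map(|k| {
            let (re, im) = x.iter().enumerate().fold((0.0, 0.0), |(re, im), (i, &v)| {
                let ang = -2.0 * PI * (k * i) as f64 / n as f64;
                (re + v * ang.cos(), im + v * ang.sin())
            });
            (re * re + im * im).sqrt()
        })
        .collect()
}

/// Energy in the upper half of the positive-frequency bins.
fn high_band_energy(x: &[f64]) -> f64 {
    let mags = dft_mag(x);
    (mags.len() / 4..=mags.len() / 2).map(|k| mags[k] * mags[k]).sum()
}

fn main() {
    let n = 32;
    let ramp: Vec<f64> = (0..n).map(|i| i as f64 / n as f64).collect();
    let windowed: Vec<f64> = ramp
        .iter()
        .enumerate()
        .map(|(i, &v)| v * (0.5 - 0.5 * (2.0 * PI * i as f64 / n as f64).cos()))
        .collect();
    println!("high-band energy, plain ramp:    {:.4}", high_band_energy(&ramp));
    println!("high-band energy, windowed ramp: {:.4}", high_band_energy(&windowed));
}
```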

Thinking about it some more: if you want a low-pass filter with a cutoff frequency of 200 Hz at a 40 kHz sampling rate, the FFT bin spacing fs/N has to be at most about 200 Hz, so you need a window of at least fs/200 = 200 samples, which in practice means 256 (a 128-sample window only gives you 312.5 Hz bins). At that length it’s probably already more computationally efficient to do it in the frequency domain than in the time domain.
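The back-of-the-envelope estimate above fits in a few lines (the helper name is my own):

```rust
// Smallest power-of-two FFT window whose bin spacing fs/N is no coarser
// than the requested cutoff frequency. Illustrative sketch.

fn min_fft_size(sample_rate_hz: f64, cutoff_hz: f64) -> usize {
    let n = (sample_rate_hz / cutoff_hz).ceil() as usize;
    n.next_power_of_two()
}

fn main() {
    let n = min_fft_size(40_000.0, 200.0);
    println!("window: {} samples, bin spacing: {} Hz", n, 40_000.0 / n as f64);
    // → window: 256 samples, bin spacing: 156.25 Hz
}
```

The same formula reproduces the earlier toy example: resolving 5 kHz bands at 40 kHz needs only 8 samples.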

Also an interesting link:
