Help please, concatenating FFTs for a composite spectrum

I am writing a program for Windows called SDR_Xcvr. The name is misleading because it suggests a communications transceiver, but the program is really a test bench for experimentation, sort of a “LimeSuite for Dummies”.

One of its capabilities is performing a series of frame captures from a receive channel, stepping the center frequency for each capture, and creating a composite spectrum by concatenating the magnitude spectra from all of the captures. I’ve attached a screenshot below.

The signal source is a broadband noise generator, and there were 25 captures 10 MHz apart. The top plot is the IQ time-domain data from the last capture. The second plot is the magnitude of the FFT from the last capture. If you look closely, you can see that it droops slightly at both ends, despite using a 60 MHz LPF. The third plot is the composite spectrum, where you can easily see the drop-off at each transition. I'm not using a window function for the FFT, but I don't think this should make a difference. I'd like to fix this, but don't know how. Any suggestions would be appreciated.
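
Roughly what the concatenation step looks like, as a numpy sketch (not the actual SDR_Xcvr code; function and variable names are just illustrative):

```python
import numpy as np

def composite_spectrum(iq_frames):
    """Concatenate the magnitude spectra of a list of IQ captures.

    Each frame is an array of complex baseband samples taken at a different
    center frequency; fftshift puts DC in the middle of each segment so the
    segments line up in frequency order when placed side by side.
    """
    segments = []
    for frame in iq_frames:
        spectrum = np.fft.fftshift(np.fft.fft(frame))
        segments.append(np.abs(spectrum))
    return np.concatenate(segments)
```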

P.S. I noticed that in the screenshot I was using a negative IF of -30 MHz, but the spectrum is exactly the same with zero IF.


@allan This looks like a good idea as a spectrum analyzer for tuning antennas and duplexers. Keep up the good work. Most of us don't know how to program, and this has potential to be a very useful tool for radio amateur operators and hobbyists.

You are always going to have a bit of droop on a capture, because the frequency response of the system is not flat. You can try to calibrate that out by applying a predistortion filter to offset that response, but that gets to be very tricky.

A better approach is to overscan and blend - take sweeps of e.g. 10 MHz bandwidth, and step the frequency 8 MHz, so you have 2 MHz of overlap. In the overlap region, do a linear blend of the magnitude: for each point, use Pblend[i] = A[i]*Pfirst[i] + (1-A[i])*Psecond[i], where Pblend is the blended point to be displayed, Pfirst is the data from the first sweep, Psecond the data from the second sweep, and A[i] goes from 1 to 0 as you step across the overlap. That will get rid of the droop and make the spectrum look nicer at the cost of more samples to process.
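
A rough numpy sketch of that blend, assuming both sweeps have already been put on the same frequency grid in the overlap region (function and variable names are mine, just for illustration):

```python
import numpy as np

def blend_overlap(p_first, p_second):
    """Linearly cross-fade two overlapping magnitude segments.

    p_first and p_second are the overlap-region magnitudes from consecutive
    sweeps, sampled on the same frequency grid. The weight A ramps from 1
    down to 0 across the overlap, so the result favors the first sweep at
    the start of the overlap and the second sweep at the end.
    """
    n = len(p_first)
    a = np.linspace(1.0, 0.0, n)  # A[i]: 1 -> 0 across the overlap
    return a * p_first + (1.0 - a) * p_second
```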

Thanks for your suggestion. If I do as you say, it will reduce the droop by about half but not eliminate it completely, so I was thinking of just oversampling and throwing away the data at the edges.
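
Something like this is what I had in mind - capture wider than the frequency step and keep only the middle bins (the 80% keep fraction is just a placeholder):

```python
import numpy as np

def cropped_magnitude(frame, keep_fraction=0.8):
    """FFT one capture and keep only the central bins.

    The capture is taken with a wider bandwidth than the frequency step,
    so the drooping bins at both edges can simply be thrown away before
    the segments are concatenated.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(frame)))
    n = len(spectrum)
    keep = int(n * keep_fraction)
    start = (n - keep) // 2
    return spectrum[start:start + keep]
```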

I like the idea of a predistortion filter if it could just be a simple multiplication by a known curve in the frequency domain, although I guess that would be more like an equalizer or a calibration rather than a predistortion filter. What do you think?
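
What I'm picturing is a per-bin correction after the FFT, along these lines (cal_magnitude would come from a calibration run with a nominally flat source; names are just illustrative):

```python
import numpy as np

def equalize_spectrum(magnitude, cal_magnitude, floor=1e-12):
    """Flatten a measured magnitude spectrum with a stored calibration curve.

    cal_magnitude is the magnitude measured from a (nominally flat) reference
    source at the same settings; dividing by it removes the filter droop.
    The floor guards against division by zero in bins where the calibration
    value is essentially zero.
    """
    return magnitude / np.maximum(cal_magnitude, floor)
```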

I do this sort of thing professionally - we'd do the compensation in the time domain before the FFT, rather than afterwards, because doing the compensation after the FFT can lead to some odd behavior when the signal is low (the noise floor won't have the roll-off, and you'll be fooling yourself).
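
As a rough scipy sketch of the time-domain approach, assuming the droop is more or less symmetric about DC so a real, symmetric FIR is good enough (meas_freqs and meas_gain would come from a calibration measurement, running from 0 up to half the sample rate):

```python
import numpy as np
from scipy.signal import firwin2, lfilter

def design_droop_corrector(meas_freqs, meas_gain, sample_rate, numtaps=129):
    """Design a real FIR whose response approximates the inverse of the droop.

    meas_freqs are baseband offsets that must run from 0 up to sample_rate/2,
    and meas_gain is the magnitude response measured at those offsets. The
    resulting filter boosts the band edges in the time domain, before the FFT,
    instead of correcting bins afterwards.
    """
    inverse_gain = np.max(meas_gain) / np.asarray(meas_gain, dtype=float)
    return firwin2(numtaps, meas_freqs, inverse_gain, fs=sample_rate)

def compensate(iq, taps):
    """Apply the correction filter to the complex IQ samples before the FFT."""
    return lfilter(taps, 1.0, iq)
```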

The linear blend approach does cut the droop, so long as the overlap region is larger than the region of the droop. If you don’t overlap beyond the droop you will get a bit of it left behind.

Since the detailed spectral response will vary slightly across the tuning range, you’ll likely need a lot of calibration curves, taken with a good-quality termination resistor on the input of the device.

The overlap approach is probably better.

Thanks to both of you for your help. I'll probably use the overlap approach, but since most of my measurements will be made by comparing spectra against a calibration or reference run anyway, I can do that as well.

I'll also be adding a mode where spectra can be obtained point-by-point, just using the magnitude at the center frequency of the frame. This will obviously take a LOT longer, but the advantage is that I can set a transmit channel to track the receiver frequency and use it as a tracking signal generator. Then I should be able to do VNA-style measurements.
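
The per-point measurement then reduces to grabbing a single bin per capture, roughly like this (assuming zero IF, so the tuned center frequency lands in the DC bin; names are illustrative):

```python
import numpy as np

def center_bin_magnitude(frame):
    """Return the FFT magnitude at the tuned center frequency of one capture.

    With zero IF the center frequency falls in bin 0 (DC) of the FFT, so one
    bin per capture is all the point-by-point sweep needs.
    """
    spectrum = np.fft.fft(frame)
    return np.abs(spectrum[0])  # bin 0 = DC = the tuned center frequency
```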