Fast tuning requests leading to some sample flow interruptions

Hello, I posted a topic this morning about some issues I ran into when trying to Doppler-correct satellites, but I have since deleted it, as the issues can be mitigated in software and the details are irrelevant.

Nevertheless, I have a question for the Lime developers: on Lime hardware I see some very short sample flow interruptions, with artefacts manifesting as audio pops, whenever a device center frequency tuning request is made. This only happens on my Lime hardware (LimeSDR Mini and LimeNET Micro).
Is this a known software issue? I don’t think it’s caused by something in hardware. I’m using the latest liblimesuite and the SoapyLMS7 plugin.

(I’m not a Lime developer, but) this happens with all SDR hardware when you retune the frequency; it is caused by invalid input and discontinuities in the samples. For simplicity, imagine the samples from a single real ADC are 12, 13, 15, 17, 18, and then you retune the frequency. There is a delay while the PLL(s) lock from the old frequency to the new one, during which the ADC is not sampling a valid input; then, due to the change in frequency, the subsequent samples could be 128, 130, 131, 133, … The invalid samples and that discontinuity in the time domain may manifest as a temporary rise in the noise floor across all frequencies in the frequency domain, resulting in a loud click or pop.

The sine wave vector is just the values of sin(x), where x steps every 30 degrees until it repeats, i.e. (0.0, 0.5, 0.866, 1, 0.866, 0.5, 0.0, -0.5, -0.866, -1, -0.866, -0.5). In the one-bad-sample vector I changed one value in the valid sine wave and then padded the vector out with 15 more valid sine wave periods (I should have used more of them). It is not perfect for showing the effect, but it is good enough (you are seeing far too much of the actual signal, and not enough noise).
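The one-bad-sample experiment described above can be reproduced in a few lines of NumPy (the record length and the choice of which sample to corrupt are mine, just to illustrate the effect):

```python
import numpy as np

# One period of sin(x) sampled every 30 degrees, as described above.
period = np.sin(np.deg2rad(np.arange(0, 360, 30)))

clean = np.tile(period, 16)     # 16 valid periods
glitched = clean.copy()
glitched[3] = 0.0               # corrupt a single sample (was 1.0)

# Compare spectra: a single time-domain discontinuity leaks into every bin.
clean_spec = np.abs(np.fft.rfft(clean))
glitch_spec = np.abs(np.fft.rfft(glitched))

tone_bin = 16                   # 16 periods over the record -> tone lands in bin 16
floor_clean = np.delete(clean_spec, tone_bin).max()
floor_glitch = np.delete(glitch_spec, tone_bin).max()
```

The clean record puts all its energy into the single tone bin, while the one bad sample raises the floor across every other bin at once, which is exactly the broadband click heard on retune.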

What some software does when you tell it to retune: it slowly attenuates the incoming samples to zero, then zero-pads during the retune, and then, after it confirms that all the PLLs (Phase-Locked Loops) have locked to the new frequency and that the ADC is returning valid samples, it slowly ramps the signal level back up from zero to avoid any discontinuity.
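A minimal NumPy sketch of that ramp-down / zero-pad / ramp-up idea (the window and gap lengths here are arbitrary, not what any particular software uses):

```python
import numpy as np

def soften_retune(pre, post, ramp_len=64, gap_len=256):
    """Fade 'pre' out to zero, insert zeros for the retune gap,
    then fade 'post' back in, avoiding a hard discontinuity."""
    pre = pre.copy()
    post = post.copy()
    pre[-ramp_len:] *= np.linspace(1.0, 0.0, ramp_len)   # fade out before the PLLs unlock
    post[:ramp_len] *= np.linspace(0.0, 1.0, ramp_len)   # fade back in once samples are valid
    return np.concatenate([pre, np.zeros(gap_len), post])

t = np.arange(4096) / 48e3
out = soften_retune(np.sin(2 * np.pi * 1e3 * t),   # signal before the retune
                    np.sin(2 * np.pi * 3e3 * t))   # signal after the retune
```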

What you may be able to do with the LimeSDR is retune digitally on the TSP and avoid some of the delay and discontinuity, provided the analogue RX mixer does not need to be retuned to track the signal.
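Digital retuning on the TSP amounts to mixing the samples with an NCO, which involves no PLL and therefore no lock delay. A NumPy sketch of the principle (the sample rate and frequencies here are arbitrary round numbers chosen so the tones land exactly on FFT bins, not anything LMS7-specific):

```python
import numpy as np

fs = 8192.0                              # sample rate (arbitrary, for the sketch)
n = 8192
t = np.arange(n) / fs
iq = np.exp(2j * np.pi * 1000.0 * t)     # a tone at +1000 Hz in the passband

shift = -250.0                           # "retune" by mixing with an NCO: no PLL, no gap
shifted = iq * np.exp(2j * np.pi * shift * t)

peak_bin = np.argmax(np.abs(np.fft.fft(shifted)))
peak_freq = np.fft.fftfreq(n, 1 / fs)[peak_bin]   # tone now sits at +750 Hz
```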


Thanks, this is a very good explanation. However, just checking the numbers and doing some calculations, it seems to me that this effect is rather more pronounced than it should be?
According to the datasheet, the PLL settling time of the LMS7002 is between 50 and 150 microseconds, with 50 being the more typical value. At a sample rate of 48 Msps, that should lead to a window of roughly 2,400 to 7,200 samples with abnormal values. I would have expected these samples to be spread out quite a bit in the frequency domain once the FFT has been performed. Furthermore, I would have expected that downsampling with filtering to a final sample rate of 8 ksps would greatly reduce the number of erroneous audio samples to something not very observable.
Instead, the audio artefacts suggest a longer interruption in valid samples, on the order of 1 or 2 milliseconds, enough to be discernible. This looks like maybe an order of magnitude more than expected?
If my calculations are wrong and this is within a reasonable operation range, it might be nice to have official confirmation.

This does not impact my application scenario much, since I can tune in software and issue tune requests only once every 20-50 seconds, but it is still puzzling to me why the PlutoSDR, for example, exhibits no such discernible demodulation artefacts in the same scenario (observed with a real signal being demodulated while tuning every 100 ms). I’d also add that while there are software mitigations, in some cases tuning requests might not be avoidable, and I can’t see this working very well with digital signals. Hopefully this is just an easily fixed software issue.

Is it possible that what I am seeing is the result of the calibration procedure running at every tune request, irrespective of the frequency delta? And if yes, would there be a way to avoid it with a high-level API like GNU Radio or SoapySDR?

From what I understand, the calibration is not run at every tune request.

Still, that doesn’t explain why tuning requests take such a long time to complete. With the gr-limesdr blocks, every time I issue a tuning request the flowgraph basically stops for N milliseconds, and fast sequential tuning is almost impossible. With the ADALM-Pluto, on the other hand, this is almost instantaneous, barely noticeable.

Well, one of the reasons for slow tuning is that the software and firmware are not fully optimized. For example, on the LimeSDR-USB, one SPI register write or read through USB 3.0 takes around 160 µs. From a brief look at the code there seem to be around ~100 register transactions when changing center frequency and tuning, so that alone already adds up to ~16 ms, plus there is some waiting to allow values to settle during tuning, which might not be necessary given the communication speed.
I’ve been tinkering with the code and firmware in my free time, and there is definitely room for improvement. Basically, by batching multiple register writes/reads into a single USB transaction I’ve got it down to 90 µs + 16 µs × registerCount for reads or writes. There are lots of places where just a couple of bits need to be changed inside a register, but a full register read/modify/write is performed by the software (that’s 320 µs each); this can be cut down to 90 µs + 32 µs × registerCount. In particular, the PLL tuning does a lot of change-register, wait, read-lock-result operations. These are done from the software through USB, so that adds quite a bit of time; potentially this operation could be offloaded to the firmware to avoid the USB communication overhead (I haven’t tested that yet).
So by optimizing out redundant register reads/writes in the software and optimizing the data transfers, it can be sped up drastically. But it’s just a side project for me, so I’m not sure when/if I’ll submit the changes.
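A back-of-the-envelope model of those timings (using the 160 µs per single transaction and 90 µs + 16 µs × N batched figures quoted above) shows the scale of the potential saving:

```python
# Rough cost model for SPI register transactions over USB on the LimeSDR-USB,
# using the timings measured in the post above.
def one_by_one(n_regs, per_xact_us=160):
    """Each register access is its own USB transaction."""
    return n_regs * per_xact_us

def batched(n_regs, setup_us=90, per_reg_us=16):
    """All register accesses packed into a single USB transaction."""
    return setup_us + n_regs * per_reg_us

tune_regs = 100                      # ~100 transactions per retune, as estimated above
before_us = one_by_one(tune_regs)    # 16000 us, matching the ~16 ms figure
after_us = batched(tune_regs)        # 1690 us, roughly a 10x improvement
```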

Thanks a lot for looking into this and finding the cause. I’m under no pressure to do anything about it either, but this could enable a lot of interesting applications, like frequency hopping with the gr-gsm TRX when the ARFCNs are not within the sampling bandwidth. I’ll try to dig into the code too when I have some time.