LimeSDR Mini 2.0 Changing Frequency Spikes

Hello, I have recently been trying out the LimeSDR Mini 2.0 and have a couple of questions.

I have noticed that when using the Mini 2.0 with tools like SoapySDR or GNU Radio, a large spike appears in the center whenever I set a new frequency, along with several more at each end of the spectrum. DC offset doesn't seem to be the cause. Is there something that can be done to mitigate this?

Please let me know if this is something I should raise with SoapySDR instead, but tuning seems to be faster on other SDRs I've used. Is there any way to increase the speed of setting a frequency on the LimeSDR Mini 2.0?

Lastly, what are the downsides of using it over USB 2.0?

Thank you!

Hi, are you changing the center frequency while sample streaming is active?
Please tell us more about your setup: what are you trying to do, and what frequencies are you using?
There are ways to speed it up, but it depends on use case.
USB 2.0 is half duplex and limits the data bandwidth to a theoretical 480 Mbit/s, which works out to roughly 20 MHz of samples per second combined from all channels. So if you were using both Rx and Tx, you would be limited to a 10 MHz sampling rate, though in practice the limit would be closer to 5 MHz. In short, the maximum usable sampling rate would be limited, and device controls would be slower.
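For reference, the arithmetic behind those numbers can be sketched as below. This is a back-of-envelope estimate assuming 12-bit I and 12-bit Q per complex sample and ignoring USB protocol overhead, which only lowers the real-world figure further:

```python
# Rough USB 2.0 throughput estimate for an SDR stream.
# Assumes 12-bit I + 12-bit Q per complex sample (24 bits total) and
# ignores USB framing/protocol overhead, which reduces the real rate.
USB2_BITS_PER_SEC = 480e6      # theoretical USB 2.0 signaling rate
BITS_PER_SAMPLE = 12 + 12      # 12-bit I + 12-bit Q

max_total_sps = USB2_BITS_PER_SEC / BITS_PER_SAMPLE
print(f"All channels combined: {max_total_sps / 1e6:.0f} MS/s")      # 20 MS/s
print(f"Rx + Tx simultaneously: {max_total_sps / 2 / 1e6:.0f} MS/s") # 10 MS/s
```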

Thank you for the quick response,

I am trying to do a sweep across the device's full frequency range (10 MHz to 3.5 GHz, with a sample rate of 30 MHz on USB 3.0), similar to HackRF's "hackrf_sweep". Deactivating and reactivating streaming when changing frequencies slows things down compared to keeping it active, but keeping it active introduces even more spikes. I haven't been able to find any similar tools that can do this reliably with the current version of the device.

Thanks

I assume you’re using legacy LimeSuite.
The stream activate/deactivate is causing the slowdown, as it creates and destroys processing threads, which is a relatively long operation.
If the stream is active while the center frequency is changed, the "even more spikes" you're seeing come from the local oscillator performing its tuning procedure: for a brief moment the RF samples are undefined, until the procedure completes.
The other spikes you're seeing are most likely DC offset and IQ imbalance. Compensating for them requires a calibration, which is a relatively long operation; if I remember correctly it might take ~100 ms. Calibration has to be done for each center frequency.
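To put that per-frequency cost in perspective for a full sweep, here is a rough estimate. Both the ~100 ms calibration time and the 30 MHz step size are taken from this thread as ballpark figures, not measurements:

```python
# Rough estimate of total DC/IQ calibration time for one full sweep,
# assuming ~100 ms per calibration and retune steps equal to the
# 30 MHz sample rate. Both numbers are approximations from the thread.
f_start, f_stop, step = 10e6, 3.5e9, 30e6

n_steps = int((f_stop - f_start) / step) + 1   # number of retunes
total_cal_s = n_steps * 0.100                  # seconds spent calibrating

print(f"{n_steps} retunes -> ~{total_cal_s:.1f} s of calibration alone")
# 117 retunes -> ~11.7 s of calibration alone
```

This is why precomputing the calibration results once, rather than recalibrating on every pass, matters for sweep speed.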

The SoapySDR and GNU Radio plugins perform the DC/IQ calibration only during stream start, so while the stream is running, the DC/IQ will not be recalibrated after a center frequency change and can produce the spikes you're seeing. The calibration is not intended to be performed while the stream is active, as the sampling rate is modified during it, so the samples are undefined while it is ongoing.

So for continuous sweeping, the LO tune and DC/IQ calibration values should ideally be precalculated for each frequency, and their results reused during the sweep.
Ideally the stream should still be stopped during each frequency change, because the LO values and DC values are written to separate registers, so there would otherwise be a moment where spikes could appear. Alternatively, instead of stopping the stream, the radio frontend could be disabled temporarily so those spikes are not captured.
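The precompute-then-replay idea above can be sketched like this. Note that `measure_calibration` and `apply_registers` are hypothetical placeholders standing in for real LimeSuite/SoapySDR device calls, and the register fields are invented; only the caching structure is the point:

```python
# Sketch of the precompute-then-sweep approach described above.
# measure_calibration() and apply_registers() are HYPOTHETICAL
# placeholders for real device calls; the field names are invented.
# The point is caching per-frequency results and replaying them.

def measure_calibration(freq_hz):
    """Placeholder: run the LO tune + DC/IQ calibration once and
    return whatever values the device needs at this frequency."""
    return {"lo": freq_hz, "dc_i": 0, "dc_q": 0}  # dummy values

def apply_registers(regs):
    """Placeholder: write cached LO/DC/IQ values to the device."""
    pass

# 10 MHz to ~3.49 GHz in 30 MHz steps (117 retunes).
freqs = [int(10e6 + i * 30e6) for i in range(117)]

# One-time slow pass: calibrate every step and cache the results.
cal_cache = {f: measure_calibration(f) for f in freqs}

# Fast sweep passes: replay cached values instead of recalibrating.
for f in freqs:
    # stop the stream (or temporarily disable the RF frontend) here
    apply_registers(cal_cache[f])
    # restart the stream and capture samples for this step here
```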

So what you're trying to do is achievable, but it needs a custom implementation to achieve the best possible timing.

After seeing you mention "legacy" LimeSuite, I realized LimeSuiteNG is now available. Does LimeSuiteNG differ significantly in capabilities from the legacy version, possibly having more appropriate tools to achieve my goal?

The main reason for LimeSuiteNG's development was the introduction of PCIe-based devices that have multiple RF chips and are oriented toward high bandwidths, so optimizations had to be made, and a new API was created to handle multiple chips with better performance.

For your goal, the main benefit would be the improved stream handling.
Once a stream has been set up, repeated start/stop is a lot faster.
The Rx/Tx LPF dynamic tuning algorithms have been replaced with equations, which is a lot faster.
I have a work-in-progress improvement of the LO change algorithm that avoids the full tuning procedure, or greatly reduces its time with an initial approximation.

Unfortunately, the DC and IQ imbalance calibration is now done from the host PC, so it's going to be slower due to USB communication overhead. I do have ideas for speeding it up, but currently that's low priority.