How to minimize LO leakage on the LimeNET-micro?

I’m looking for some advice on minimizing the LO leakage of the LimeNET-micro. The device is supposed to transmit a multicarrier DMR system, and the LO leakage exceeds the allowed spurious level in adjacent channels. Further info: the sample rate is 1.2 MS/s (1200000), the NCO is not used, and the API is gr-limesdr with GNU Radio 3.8, if it matters. Is there any combination of settings which can bring it down?
Attaching a picture (the LO can be seen on the left side).
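For context, my configuration is roughly equivalent to the following at the LimeSuite level (just a sketch for reference - the real flowgraph uses gr-limesdr blocks, and the 435 MHz carrier, 5 MHz calibration bandwidth and antenna index here are placeholders for my actual values):

    #include <lime/LimeSuite.h>
    #include <cstdio>

    int main()
    {
        lms_device_t* dev = nullptr;
        lms_info_str_t list[8];

        // Open the first Lime device found and load the default configuration.
        if (LMS_GetDeviceList(list) < 1 || LMS_Open(&dev, list[0], nullptr) != 0)
            return 1;
        LMS_Init(dev);

        // TX channel 0 only (the LimeNET-micro exposes a single RF channel).
        LMS_EnableChannel(dev, LMS_CH_TX, 0, true);

        // 1.2 MS/s host sample rate; oversampling left to the driver default.
        LMS_SetSampleRate(dev, 1.2e6, 0);

        // Placeholder carrier frequency; the real one is the DMR channel centre.
        LMS_SetLOFrequency(dev, LMS_CH_TX, 0, 435e6);

        // Antenna index assumed to map to BAND2 on this board.
        LMS_SetAntenna(dev, LMS_CH_TX, 0, 2);

        LMS_SetGaindB(dev, LMS_CH_TX, 0, 70);

        // TX calibration; the bandwidth value is picked arbitrarily for the sketch.
        if (LMS_Calibrate(dev, LMS_CH_TX, 0, 5e6, 0) != 0)
            std::printf("calibration failed: %s\n", LMS_GetLastErrorMessage());

        LMS_Close(dev);
        return 0;
    }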

After checking the log more carefully, it seems calibration failed when setting gain after the initial calibration with this error:

INFO: device_handler::set_gain(): set gain [TX] CH0: 70 dB.
Tx Calibration: MCU error 7 (Invalid Tx band)

Can anyone explain why this error occurs? The TX band is automatically selected when the sink is initialized and is BAND2.
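If it helps, I can read back which TX path is actually active right after a failure, roughly like this at the LimeSuite level (sketch only - I’m assuming index 2 corresponds to BAND2 on this board):

    #include <lime/LimeSuite.h>
    #include <cstdio>

    // Re-run the TX calibration and, if it fails, report which TX path is active.
    // 'dev' is an already opened and configured device handle.
    void recalibrate_and_report(lms_device_t* dev)
    {
        if (LMS_Calibrate(dev, LMS_CH_TX, 0, 5e6, 0) != 0)
        {
            // LMS_GetAntenna() returns the index of the currently selected path
            // (I expect 2 here, i.e. BAND2).
            int path = LMS_GetAntenna(dev, LMS_CH_TX, 0);
            std::printf("TX cal failed: %s (active TX path index: %d)\n",
                        LMS_GetLastErrorMessage(), path);
        }
    }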

Hello Adim,

An .ini file dump taken after the calibration fails would be useful to check.
Are you using a single channel or both, and is the receive part enabled?
You could also disable calibration procedures and try again.
Alternatively, we could have a call and a remote session to debug this.

Karolis

Hi Karolis, and thanks for the reply. This is not very urgent, so apologies in advance if a more detailed answer has to wait a while.

Since this is a LimeNET-micro, I think there can be only one channel, and I’m using channel 0 in gr-limesdr (more precisely, using the calls exposed by the gr-limesdr C++ API).
The receiver is also enabled at this point and samples are flowing. No samples are flowing on the TX side during the failed calibration attempt (the TX flowgraph is stopped).
Thinking about it, this is the main difference between the initial calibration, which succeeds, and the second one, which fails: the RX flowgraph is running and samples are flowing. Is it a problem if this happens during calibration?
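To make the sequence concrete, this is roughly what happens when reduced to LimeSuite-level calls (sketch only - in reality the RX stream is driven by the GNU Radio source block, and the gain and bandwidth values are just examples):

    #include <lime/LimeSuite.h>

    // Rough equivalent of the failing sequence: RX samples are already flowing
    // when the TX gain change and the second calibration are attempted.
    void change_tx_gain_while_rx_running(lms_device_t* dev)
    {
        // RX stream set up and started earlier in the application.
        lms_stream_t rx;
        rx.channel = 0;
        rx.fifoSize = 1024 * 1024;
        rx.throughputVsLatency = 0.5;
        rx.isTx = false;
        rx.dataFmt = lms_stream_t::LMS_FMT_F32;
        LMS_SetupStream(dev, &rx);
        LMS_StartStream(&rx);

        // ... RX samples flowing, no TX samples being sent ...

        // Runtime gain change followed by a new TX calibration attempt;
        // this second calibration is the one that fails with MCU error 7.
        LMS_SetGaindB(dev, LMS_CH_TX, 0, 70);
        LMS_Calibrate(dev, LMS_CH_TX, 0, 5e6, 0);

        // (stream teardown omitted in this sketch)
    }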

From a usability perspective, I’d like to be able to change the TX gain at runtime, so I hope there is a way to do this while using gr-limesdr, even if I have to take additional measures to avoid this error.
It is my understanding, from observation, that the device needs TX calibration every time the TX gain is changed, regardless of the size of the gain change. Is that right?
With the device properly calibrated I observe 50 dBc or more of LO suppression; after changing the gain without recalibrating, the suppression degrades to less than 40 dBc (I don’t have an exact figure, sorry).

Adrian

Hello Adim,

The LimeNET-micro still uses the LMS7002M - hence it is possible to enable the second channel, which in turn can leak into the first one (if for some reason you have a very large LO on the second channel). Based on your description, this is most likely not the case.

So do you see a large LO only when calibration fails and no samples are being sent? If so, then this is expected - resetting the digital part of the LMS would solve it (most likely this function is not implemented in the release you are using).

“It is my understanding, from observation, that the device needs TX calibration every time the TX gain is changed, regardless of the size of the gain change. Is that right?”
Not necessarily - it depends on which gain stage is controlled. The PA controls should have minimal impact on the LO over the entire control range, while IAMP can cause substantial LO changes. Most likely the function you use adjusts both of them.
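As a rough illustration only (not how gr-limesdr currently distributes the gain, and assuming the LMS7_* parameter names declared in LimeSuite.h), backing off the TXPAD loss while leaving the calibrated IAMP value alone would look something like this:

    #include <lime/LimeSuite.h>
    #include <cstdint>
    #include <cstdio>

    // Sketch: reduce TX power using only the TXPAD (PA) loss control, so the
    // IAMP setting found during calibration (which affects LO leakage) stays put.
    // 'loss' - larger values mean more attenuation in the PA stage.
    int set_txpad_loss_only(lms_device_t* dev, uint16_t loss)
    {
        // Read back the calibrated IAMP gain, for logging only - it is not changed.
        uint16_t iamp = 0;
        LMS_ReadParam(dev, LMS7_CG_IAMP_TBB, &iamp);
        std::printf("keeping CG_IAMP_TBB = %u, setting TXPAD loss = %u\n",
                    (unsigned)iamp, (unsigned)loss);

        // Change only the TXPAD loss; CG_IAMP_TBB is left untouched.
        return LMS_WriteParam(dev, LMS7_LOSS_MAIN_TXPAD_TRF, loss);
    }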

Karolis

Hi Karolis,
When calibration fails, I see a large LO appear - an almost instant jump in its amplitude to what is probably the maximum.

Regarding whether calibration is needed when changing gain: I’m using the set_gain method in gr-limesdr: gr-limesdr/sink_impl.cc at 88f3b8c9c48ae972da681df5ff819f8c41cb6cc9 · myriadrf/gr-limesdr · GitHub

which in turn calls LMS_SetGaindB here: gr-limesdr/device_handler.cc at 88f3b8c9c48ae972da681df5ff819f8c41cb6cc9 · myriadrf/gr-limesdr · GitHub

It is my understanding that the gain stages are distributed automatically by this method, i.e. I don’t have any manual control over them (and don’t really need it). But what I see in practice is that if the device is calibrated at maximum gain (73 dB) and I then reduce the gain at runtime to 70 dB without recalibrating, the LO leakage increases quite a bit, which led me to assume I need to run calibration each time I adjust the gain. I will get some real numbers for you later while doing this procedure (LO leakage value vs. gain setting).

I’m not sure why changing the gain via this method of the gr-limesdr API leads to such drastic variation in the leakage. One other thing: this is all configured at runtime, so I have no .ini file.
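What I can do for now is recalibrate after every runtime gain change and, if that calibration fails, dump the state to an .ini so you have something to look at - roughly like this (sketch only; the file path and calibration bandwidth are just examples):

    #include <lime/LimeSuite.h>
    #include <cstdio>

    // Sketch of the intended workaround: every runtime TX gain change is followed
    // by a recalibration, and a failed calibration dumps the chip state to an .ini.
    void set_tx_gain_with_recal(lms_device_t* dev, unsigned gain_db)
    {
        LMS_SetGaindB(dev, LMS_CH_TX, 0, gain_db);

        if (LMS_Calibrate(dev, LMS_CH_TX, 0, 5e6, 0) != 0)
        {
            std::printf("TX cal failed at %u dB: %s\n",
                        gain_db, LMS_GetLastErrorMessage());
            // Dump the current register state for debugging.
            LMS_SaveConfig(dev, "/tmp/failed_tx_cal.ini");
        }
    }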

Hi Karolis, I now have some actual numbers:

With the initial gain set to 68 dB and calibration successful at this gain value, the LO is at -49 dBc.
If I change the gain at runtime to 72 dB with the method exposed by gr-limesdr and don’t run calibration, the LO increases to -25 dBc.
Set the gain back to 68 dB, and the LO is back at -49 dBc.
Decrease the gain from 68 to 64 dB, and the LO spikes back up to -24 dBc.

No other changes (sample rate, bandwidth, etc.) were made at the same time, just the TX gain adjustment.
So it seems obvious to me that rather small changes in gain relative to the calibrated value influence the LO leakage quite a lot.

I think I found the explanation for the anomaly when changing gain: due to a bug in my code, the transmitter was calibrated at a frequency offset by 2 MHz from the actual operating frequency. Which leads to my next question: how far can I tune the transmitter away from the calibration frequency before having to recalibrate? Apologies if this info is somewhere in the wiki and I missed it.
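Until I know the safe tuning range, my plan is simply to recalibrate right after every retune, roughly like this (sketch only - the calibration bandwidth is an example value):

    #include <lime/LimeSuite.h>

    // Sketch: retune and immediately recalibrate at the new (actual) operating
    // frequency, to avoid the 2 MHz offset mistake described above.
    int retune_and_recalibrate(lms_device_t* dev, double freq_hz)
    {
        if (LMS_SetLOFrequency(dev, LMS_CH_TX, 0, freq_hz) != 0)
            return -1;
        return LMS_Calibrate(dev, LMS_CH_TX, 0, 5e6, 0);
    }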