How to Reduce TX/RX DC Offset in LimeSDR Mini 2.4 (Compared to USRP)?

Hello everyone,

I would like to understand how to reduce the TX and RX DC offset (LO leakage) in the LimeSDR Mini 2.4.

I am running a simple cosine wave loopback experiment using GNU Radio (gr-limesdr 3.10 plugin). From the received spectrum in GNU Radio:

DC level at received frequency: –72.36 dBm

Noise floor: approximately –90 dBm

When I perform the same experiment using a USRP under identical conditions, the DC level is much closer to the noise floor (around –87 dBm), which is significantly better.
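For reference, this is roughly how I read the DC bin and noise floor off the spectrum; a minimal numpy sketch in which synthetic samples stand in for the real capture (the sample rate, FFT size, and signal levels are illustrative assumptions, not measured values):

```python
import numpy as np

fs = 5e6                      # assumed sample rate
n = 16384                     # FFT size

# Synthetic stand-in for a real capture: a small DC offset plus complex noise.
rng = np.random.default_rng(0)
x = 3e-4 * np.ones(n) + 1e-4 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Windowed FFT, normalized so a full-scale DC offset reads 0 dBFS.
win = np.hanning(n)
spec = np.fft.fftshift(np.fft.fft(x * win) / win.sum())
power_db = 20 * np.log10(np.abs(spec) + 1e-12)

dc_level = power_db[n // 2]          # the 0 Hz bin after fftshift
noise_floor = np.median(power_db)    # robust noise-floor estimate

print(f"DC bin: {dc_level:.1f} dBFS, noise floor: {noise_floor:.1f} dBFS")
```

With a real capture, `x` would come from the received sample stream instead of the synthetic line.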

Current Configuration:

→ GNU Radio with gr-limesdr 3.10

In LimeSuiteGUI I enabled:

  1. “Enable DC offset”

  2. “Automatic DC calibration mode”

  3. Set appropriate TX/RX gains, other parameters

  4. Set calibration bandwidth: 5 MHz

  5. Ran “Calibrate All”

Generated an .ini file from LimeSuiteGUI (please find the ini file here: https://drive.google.com/file/d/1ZjyYUaZ5mS33n9nq0HO0UcXnVgs63UjV/view?usp=sharing)

Loaded the .ini file into the gr-limesdr blocks in GNU Radio

Despite these steps, the DC spike remains significantly above the noise floor.

What additional configuration changes should I make in LimeSuiteGUI to bring the DC offset close to the noise floor (like USRP)?

Context (Why This Matters):

Our team is experimenting with running the OAI 5G stack using the LimeSDR Mini 2.4.

When comparing spectrum plots between USRP and LimeSDR:

USRP → DC/LO leakage is close to noise floor

LimeSDR → DC/LO leakage is significantly higher

We suspect that this higher LO leakage may be one reason why the gNB is not successfully connecting to the UE.

Any guidance on reducing DC offset to USRP-like performance would be greatly appreciated.

Thank you!

I’ll give you the same advice I’ve given many other people: Offset the actual tuned center frequency out of the channel, and then shift the carrier back in the digital domain. It’s very hard to totally remove the offset in hardware - most “solutions” end up acting as a very tight notch filter around the center, so if there is signal information at that point it will be lost, even if the spectrum “looks” OK at that point.
Many more expensive SDR front-ends do the offset and shift automatically in the hardware+drivers, so you cannot see the actual carrier spike on RX, and on TX the spike is outside the operating bandwidth and can be further filtered by a passband filter.
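As a rough numpy illustration of the digital shift I mean (the offset, sample rate, and amplitudes here are arbitrary, not from any particular radio):

```python
import numpy as np

fs = 5e6          # sample rate (illustrative)
offset = 1e6      # how far the LO was tuned away from the wanted carrier
n = 4096
t = np.arange(n) / fs

# Received baseband: the wanted tone sits at -offset (the LO was tuned
# +offset above the wanted frequency); the LO-leakage spike sits at 0 Hz.
wanted = np.exp(2j * np.pi * (-offset) * t)
x = wanted + 0.01             # 0.01 models the DC/LO leakage

# Shift the spectrum up by +offset in the digital domain: the wanted tone
# moves back to 0 Hz and the leakage spike moves out to +offset.
y = x * np.exp(2j * np.pi * offset * t)

peak_bin = int(np.argmax(np.abs(np.fft.fft(y))))
print("strongest component is now at FFT bin", peak_bin)
```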

Understood — thank you, N0YKG, for the insightful suggestion.

Hello N0YKG,

Based on your suggestion, I am trying to implement DC offset avoidance by shifting the DC out of the wanted signal band using the NCO.

Idea is:
I first set the TX NCO to –4 MHz, which shifts the baseband signal down by 4 MHz. Then, I increase the RF LO to 2400 MHz + 4 MHz (2404 MHz) so that after upconversion, the wanted signal is still centered around 2400 MHz over the air. In this way, the transmitted DC component is translated to 2404 MHz, away from the actual information band at 2400 MHz.

On the RX side, with the LO set to 2404 MHz, both the TX and RX DC components appear at 0 Hz, while the wanted signal appears at –4 MHz after downconversion. I then apply a +4 MHz NCO shift to move the wanted signal back to 0 Hz and shift the DC spur to +4 MHz, where it can be removed using a low-pass filter.
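To sanity-check the arithmetic of this plan offline, I modeled it in numpy (the sample rate, leakage amplitude, and the windowed-sinc low-pass are my own stand-ins, not the actual hardware path):

```python
import numpy as np

fs = 30.72e6      # assumed sample rate
shift = 4e6       # NCO shift amount
n = 8192
t = np.arange(n) / fs

# After downconversion with the LO at 2404 MHz: wanted signal at -4 MHz,
# DC/LO leakage at 0 Hz (0.05 is an arbitrary leakage amplitude).
wanted = np.exp(2j * np.pi * (-shift) * t)
y = (wanted + 0.05) * np.exp(2j * np.pi * shift * t)   # +4 MHz NCO shift

# Low-pass FIR (windowed sinc, ~2 MHz cutoff) standing in for the hardware
# filter: keeps the wanted signal at 0 Hz, removes the spur now at +4 MHz.
taps = 129
fc = 2e6 / fs
k = np.arange(taps) - (taps - 1) / 2
h = 2 * fc * np.sinc(2 * fc * k) * np.hamming(taps)
z = np.convolve(y, h, mode="same")

spur_amp = abs(np.mean(z * np.exp(-2j * np.pi * shift * t)))  # residue at +4 MHz
pass_amp = abs(np.mean(z))                                    # wanted signal at 0 Hz
print(f"wanted: {pass_amp:.3f}, residual spur: {spur_amp:.5f}")
```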

On the TX side, the approach works as expected. I set the TX NCO to –4 MHz (a 4 MHz downward shift) and increase the LO accordingly, which successfully moves the TX DC component out of the information band.

I then started experimenting with the RX NCO to implement a shift of +4 MHz.
However, on the RX side I am encountering an unexpected issue.

When I apply a +4 MHz RX NCO shift, I am able to move the main DC spur away from 0 Hz as expected. But an additional DC spike appears at 0 Hz, which I did not anticipate. So even after shifting, there is still a DC component present at baseband center.

Note: In this experiment, I am not transmitting anything. The attached screenshots are the result of enabling the RX NCO and observing the FFT viewer to understand the receiver’s behavior.

I am unable to determine the source of this new DC component. This essentially defeats the purpose of shifting the spectrum using the NCO, as it results in the creation of a new DC spike at 0 Hz.

please find the ini file here: https://drive.google.com/file/d/1oWnTejkSycQDbsKwvxWyBHn_0wqw-hRN/view?usp=sharing

What could be the cause of this new DC component, and how can it be removed?

tagging @N0YKG @ricardas

Thank you

Well, that is a puzzle. I can suggest a few things to try to gather information:
Adjust the NCO (but not the LO) and see if the spur stays at the middle of the signal.
Vary the RX gains by 3dB, and see if the spur changes magnitude, and if so by how much.
Make sure you’ve done any IQ imbalance calibrations - an offset in I or Q could show up as a spur.

I just had a brief look into this. It seems that the new DC spike you're seeing is created by the NCO itself.

The DC you're seeing is –70 dBFS, which is the least significant bit of a 12-bit system's dynamic range.
Mixing any NCO frequency with a DC signal of (0; 0) produces a new DC of (–1; –1).
Mixing an NCO of 0 MHz with a DC signal of (0; 0) keeps the DC at (0; 0).
Mixing an NCO of any frequency with a DC signal of (1; 1) produces a new DC of (0; 0).

So it is not an analog issue, but some kind of numerical result of digital signal mixing, producing a single bit offset.

Yeah, so this is definitely a digital processing artefact.
Not sure about the internals of the NCO itself, but in addition, the FFTviewer displays and calculates the FFT using floating-point values, while the hardware operates with 12-bit integer values.
Therefore the software conversion between numerical formats introduces a “rounding” error:
the floating-point range is [-1.0; 1.0], while the integer system is [-2048; 2047], so the current conversion is lossy.

i.e.
-2048 * (1.0/2048) = -1.0
2047 * (1.0/2048) = 0.99951171875, not 1.0
This can introduce that 1-bit rounding error.
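That asymmetry is easy to reproduce in plain Python (the scale factor is the 1/2048 conversion described above):

```python
SCALE = 1.0 / 2048   # conversion factor between 12-bit integers and floats

lo = -2048 * SCALE   # -1.0 exactly
hi = 2047 * SCALE    # 0.99951171875, never reaches +1.0

print("range after conversion:", lo, "to", hi)
print("full-scale error:", 1.0 - hi)   # exactly one LSB, 1/2048
```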

Thank you @ricardas and @N0YKG for the detailed explanation.

After going through your analysis and the LMS7002M datasheet, I have two follow-up questions:


1. Is there a chance that the DC spike is only a display artifact and it does not physically exist in the transmitted/received signal?

From post #7, you mentioned that the FFT viewer operates in floating point [-1.0, +1.0] while the hardware operates in 12-bit integers [-2048, +2047]. The scaling factor of 1/2048 is inherently asymmetric — -2048 maps exactly to -1.0, but +2047 only reaches 0.9995. This introduces a 1-bit rounding bias.

So my question is: is it possible that this -70 dBFS DC spike exists only in the FFT viewer as a result of this lossy integer-to-float conversion, and does not actually exist in the real RF signal? Or does the NCO arithmetic inside the LMS7002M chip itself produce a genuine (-1, -1) offset in the hardware samples before any software conversion even happens?

Understanding this distinction matters because:

  • If it is only a viewer artifact → it can be ignored for actual transmission/reception

  • If it genuinely exists in the hardware samples → it will appear in the real RF output and needs to be corrected


2. If it truly exists in the hardware samples, what is the recommended solution?

From the datasheet (Figure 11), the RX DC Correction block runs before the CMIX/NCO block in the RXTSP chain. This means the on-chip DC correction loop cannot see or correct any DC that the NCO itself introduces downstream.

Given this architectural constraint, is there a recommended approach from Lime Microsystems for using NCO frequency shifting while avoiding this residual DC artifact?

I ask because this is becoming a practical concern for me in the context of 5G OAI gNB-to-UE operation — a DC offset at this level could interfere with an effective gNB↔UE link. This concern originally arose when I compared spectral and time-domain plots between the USRP and LimeSDR Mini 2.4, where the LimeSDR's DC offset level appeared very close in magnitude to the 5G RF signal itself (this is related to the experiments discussed in the “Question about LimeSuiteNG-integration fork” thread).


Thank you again for your time and insight.

Ok, so I checked with raw 12-bit values, without the floating-point conversions; the results are the same.
I'm not a DSP guy, but I suppose that outcome is expected, as digital NCOs have inherent quantization errors that produce spurs (see the “Spurious products” section of the Wikipedia article on numerically controlled oscillators), so “that DC” is a byproduct of it.

It’s not LMS7002M specific, it’s in general how digital NCO mixing works.

I think you're focusing on the wrong thing here if you think this NCO-introduced “DC” of –70 dBFS is of any concern. It's digital and insignificant in this context.

This could be an issue. But that’s the analog DC, and it has to be corrected first, before any digital DC correction is considered. Without analog DC correction, you’re losing dynamic range.
It might not be possible to completely eliminate the analog DC offset. That's when digital DC correction, or NCO shifting to the side, can help to eliminate the remaining DC.
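As a host-side illustration only (this is not the RxTSP implementation), a digital DC tracking loop behaves much like a one-pole DC blocker, removing a residual offset while passing the rest of the band:

```python
import numpy as np

def dc_blocker(x, alpha=0.999):
    """One-pole DC blocker: y[n] = x[n] - x[n-1] + alpha * y[n-1]."""
    y = np.zeros_like(x)
    prev_x = prev_y = 0.0
    for i, v in enumerate(x):
        prev_y = v - prev_x + alpha * prev_y
        prev_x = v
        y[i] = prev_y
    return y

fs = 1e6
t = np.arange(20000) / fs
x = np.cos(2 * np.pi * 50e3 * t) + 0.2    # 50 kHz tone plus residual DC of 0.2

y = dc_blocker(x)
# After the loop settles, the DC is gone but the tone survives.
print("mean before:", round(float(np.mean(x[5000:])), 4),
      "mean after:", round(float(np.mean(y[5000:])), 4))
```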

Thank you @ricardas, that clarifies things.

Regarding the analog DC — my original approach was to shift it out of the information band using the NCO (shifting the LO by +N MHz and applying a corresponding –N MHz NCO offset), and then filter it out with a low-pass filter implemented via the GP FIR filters in the RXTSP. However, this approach has two problems:

  1. RF bandwidth consumption — shifting the signal away from DC by N MHz effectively wastes N MHz of usable RF bandwidth, which may not be acceptable in a 5G NR context where channel bandwidth and center frequency are tightly defined by the standard.
  2. As you explained, the NCO shifting itself introduces a new –70 dBFS digital DC at 0 Hz, which somewhat defeats the purpose when the information band level is lower than or close to –70 dBFS.

So analog DC correction does seem like the right approach. However, I have a concern — does the analog DC correction essentially act as a notch filter around 0 Hz, and if so, does it risk removing or attenuating actual signal information that sits close to DC? For a zero-IF architecture like the LMS7002M, the subcarriers nearest to the LO frequency would be the most affected.

Please clarify:

1. Where exactly in the receiver chain does analog DC correction take place? Is it in the RXTIA?

2. What is the recommended strategy and steps to perform analog DC correction on the LimeSDR Mini 2.4 via LimeSuite GUI?

Thank you again.

To do the shifting you need to run at a much higher sampling rate, that’s the only waste I see. Other than that I don’t understand what you mean by wasting RF bandwidth, and how it would not be acceptable for 5G NR standards.
As long as you provide the expected spectrum frequencies, 5G doesn't care what combination of LO and NCO shifting you're using to produce them; it's not aware of that. The only real issue would be on the transmitter side: if you apply the same technique, your TX DC could actually appear in someone else's band.

Don’t mistake dBFS, and dBm, those are entirely different units and different contexts.

No

Based on the DC offset correction Control Diagram, it seems yes.

In GUI, open LMS7002M Calibrations tab, there is a button “Calibrate Rx”, FFTviewer should not be streaming data when doing calibration.

On the analog DC correction behavior - my experience is different. I had a signal that had important information at the carrier, and when I activated the DC correction ALL signal energy was removed, including the signal energy that was actually present in the signal.

Thank you,

I disabled the CMIX in the RX NCO, and then ran “Calibrate Rx”.

Note that running “Calibrate Rx” automatically enables the RxNCO DC corrector and DC tracking loop. I observed that the spike at 0 Hz sits at approximately –70 dBFS, the same level as the “digital processing artifact” created earlier when the mixing with the NCO occurred.

This was a little surprising, as I expected the spike at 0 Hz to drop to around –95 dBFS, like the shifted DC did when I used the RX NCO with “Calibrate Rx” performed and the DC corrector and DC tracking loop of the RxTSP not bypassed.

Please see the image below:

This got me thinking that what you described as a “digital processing artifact” occurring due to mixing with the NCO might also be happening when the DC corrector and DC tracking loop are enabled. Digital DC corrections may also be causing the single bit offset.

The DC spike at 0 Hz appears to be at the same level whether the NCO is enabled or disabled (in both cases, “Calibrate Rx” was performed and the DC corrector and DC tracking loop were kept enabled). Please let me know your opinion on this.


Now, if I bypass the DC corrector and DC tracking loop, to remove any digital correction and observe only the analog correction, the spike at 0 Hz rises to approximately –62 dBFS.


I was also going through “TX DC calibration not working - #2 by ricardas”,
and that thread gave me the idea to tweak the 0x05C8 (DC_RXAQ) and 0x05C7 (DC_RXAI) registers. Through that, I was able to reduce the spike at DC to about –80 dBFS,

with the 0x05C8 (DC_RXAQ) and 0x05C7 (DC_RXAI) values as shown below:

(screenshot of the register values)

However, I observed that these DC correction values vary across different LimeSDR Mini 2.4 boards.

I tested the same procedure on another board, and the maximum reduction of the 0 Hz spike that I was able to obtain, after trying all combinations of the 0x05C8 (DC_RXAQ) and 0x05C7 (DC_RXAI) register values, was to approximately –70 to –75 dBFS.

Please clarify:
1. How can I reduce the DC spike even closer to the noise floor (like the USRP) using analog DC correction methods? Is there any register-tweaking technique?

RxNCO and DC correctors are separate things; they're not the same module. Anyway, the calibration procedure basically does analog DC calibration and digital IQ imbalance calibration, and also enables the digital DC corrector.
Just to be clear in the Rx chain there is:

  1. analog DC corrector
  2. digital IQ (gain/phase) corrector
  3. RxTSP digital DC corrector

That’s digital quantization error, you cannot avoid it. -70dBFS is essentially 1bit error in 12bit system. It’s going to appear in any digital processing. Let me give you an example:
A digital system uses integers; say you want to calculate the average value of some samples.
The samples are [1, 2, 1]. What do you think their average is? Ideally it would be 4/3 = 1.333…, but you're using integers, so you can't represent that; it's either 1 or 2. So there you have it: a quantization error of 0.333. The result is 1 instead of the perfect 1.333, so you end up with a digital “DC” offset of 1 bit from the perfect value. That's essentially what the RxTSP digital corrector is doing: it calculates the average value of some samples and offsets them all by that average.
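The same averaging example in plain Python:

```python
samples = [1, 2, 1]

true_avg = sum(samples) / len(samples)    # 4/3 = 1.333... in exact arithmetic
int_avg = sum(samples) // len(samples)    # 1 -- all an integer system can store

# Subtracting the integer average (what a digital DC corrector does)
# still leaves a residual offset of up to one LSB:
corrected = [s - int_avg for s in samples]
residual_dc = sum(corrected) / len(corrected)

print("true average:", true_avg, "integer average:", int_avg)
print("residual DC after correction:", residual_dc)
```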

It won't happen. The LMS7002M is a 12-bit system, which gives you a dynamic range of ~72 dB. To achieve –95 dBFS, the ADC would have to be 16-bit. There is a caveat: if it's only a 12-bit system, how can you see the shifted DC at –95? Because the additional precision is achieved by sampling over a long period of time. I'm not going to go into details about oversampling; I'm not an expert in that and could be wrong. But in essence the ADC provides only 12-bit instantaneous raw values, while the actual FFT calculations are done on the PC using floating-point values over 16384 samples, which provides extra precision bits in the spectrum display. Those calculations are completely independent of the hardware; the hardware only provides the raw 12-bit input values, and what extra precision you can achieve with them is up to your algorithms.
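Both numbers are easy to sanity-check:

```python
import math

bits = 12
dynamic_range_db = 20 * math.log10(2 ** bits)   # ~72.2 dB for 12-bit samples
print(f"{bits}-bit dynamic range: {dynamic_range_db:.1f} dB")

# FFT processing gain: averaging over N samples lowers the per-bin noise
# floor by 10*log10(N), which is how a spur below -72 dBFS becomes visible.
n_fft = 16384
processing_gain_db = 10 * math.log10(n_fft)     # ~42.1 dB for 16384 points
print(f"{n_fft}-point FFT processing gain: {processing_gain_db:.1f} dB")
```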

That is expected; it's very unlikely that analog DC can be perfectly corrected. The correctors have pretty coarse steps, so one step can be too little and the next already too much. That's why you correct the analog DC as closely as possible, and only then use digital correction to eliminate the rest.
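A small numerical sketch of that two-stage idea (the step size and offset values are made up for illustration):

```python
import numpy as np

analog_dc = 0.0137   # hypothetical uncorrected analog DC (full-scale units)
step = 0.004         # hypothetical coarse corrector step size

# Stage 1, analog corrector: only coarse steps are available, so the best
# it can do is land within half a step of zero.
coarse = round(analog_dc / step) * step
residual = analog_dc - coarse

# Stage 2, digital corrector: subtract the measured mean from the samples.
rng = np.random.default_rng(1)
samples = residual + 0.001 * rng.standard_normal(1000)
cleaned = samples - samples.mean()

print(f"analog residual: {residual:+.4f}, after digital correction: {cleaned.mean():+.2e}")
```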

DC offset is affected by many variables, LO frequency, gain, temperature… so it’s not consistent and won’t be the same for every device.

The calibration algorithm uses only the LMS7002M's internal processing; it finds the best values that it can detect using its own measurements. When doing the calibration manually you are looking at the FFT, which displays more precise measurements due to oversampling, so you can fine-tune the final values to a better result.

I’m not familiar with how USRP works, or how precise their correctors are. But I can definitely tell you that DC is not your problem if you think that -70dBFS DC is going to affect anything in 5G communications.
I believe you can have other problems in the analog side:
Gain selection, so as not to saturate the receiver or transmitter.
Reference clock drift: even though the software shows you the expected LO frequency, it does not know about analog discrepancies in the oscillator crystal. The LO could be a couple of kHz off from the expected value, so that needs to be calibrated.
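To put a number on the clock-drift point (the 2 ppm figure is just an example, not a LimeSDR Mini spec):

```python
lo_hz = 2.4e9       # LO frequency used in this thread
ppm_error = 2.0     # hypothetical reference-oscillator error in ppm

# A fractional reference error scales directly with the LO frequency.
offset_hz = lo_hz * ppm_error / 1e6
print(f"{ppm_error} ppm at {lo_hz / 1e9:.1f} GHz -> {offset_hz / 1e3:.1f} kHz carrier offset")
```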

Thank you for the detailed and quick response! It's going to take me some time to thoroughly understand everything you've explained.

In the meantime, I’ll proceed with the 5G work by following your advice and focusing on the issues you pointed out.