TX DC calibration not working

I haven’t checked RX DC calibration.

Yes, basically it should be the same.

Source code is currently not available, but in general the procedure is:

  1. backup parameters
  2. tune RX away from TX by an offset of sampleRate/4
  3. write zeros to TX (continuously)
  4. enable RX RSSI
  5. calibrate TX DC AI (channel A, I branch):
  6. measure RSSI with min. gain
  7. measure RSSI with max. gain
  8. if RSSI with min gain is better, shift “max” gain to 3/4 of current spacing between min and max
  9. repeat 6…8 until min=max
  10. jitter gain a little bit with RSSI averaging
  11. repeat 5…10 for other gains

As you can see, this is a kind of brute-force approach.
It would be better to use a golden-section search and to enable all GFIRs tuned to the specific band.
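
To make steps 6…8 concrete, here is a minimal sketch of that interval-shrinking search for a single corrector. readRSSI() and setCorrector() are hypothetical stand-ins for the chip access, not actual LimeSuite calls:

#include <cstdint>

uint32_t readRSSI();              // stand-in: averaged RSSI readout (step 4 enables it)
void setCorrector(int16_t value); // stand-in: writes the corrector value under test

// Shrink [lo, hi] toward the endpoint with lower leakage, moving the worse
// endpoint to 3/4 of the current spacing (step 8) until min = max (step 9).
int16_t searchMinimum(int16_t lo, int16_t hi)
{
    while (lo < hi)
    {
        setCorrector(lo);
        const uint32_t rssiAtLo = readRSSI();
        setCorrector(hi);
        const uint32_t rssiAtHi = readRSSI();
        if (rssiAtLo < rssiAtHi)          // "min" side is better:
            hi = lo + (hi - lo) * 3 / 4;  //   pull "max" to 3/4 of the spacing
        else                              // "max" side is better:
            lo = hi - (hi - lo) * 3 / 4;  //   pull "min" to 3/4 of the spacing
    }
    return lo; // min = max: best value found for this corrector
}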

RX DC will be basically the same, just with different gains. Whether to turn TX off or not, I haven’t decided yet; it needs more testing to see which one would be better.

Just a side note: dBm should not be used to judge calibration effectiveness; what you should be using is dBc.
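
For example, a carrier at 0 dBm with a DC spur at -60 dBm is -60 dBc, while the same -60 dBm spur next to a -30 dBm carrier is only -30 dBc, even though the spur reads the same in dBm.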

Thank you for the algorithm! I’m just not sure I completely understood the whole idea. Sorry, I’m not so deep in SDR internals yet. Can you please elaborate a bit on what its output is (the result of steps 5-10) and on all those gains that are mentioned:

  1. By what means do you do step 5 - calling the MCU calibration routine?
  2. Are all the gains in steps 6, 7, 8, 9, 10 TX gains? If yes, then if I get it right, you search for a TX gain level which gives you the best DC level. But how can I eliminate DC at the gain level that I need?
  3. What “other gains” do you mean in step 11? Maybe you meant other frequencies?

By what means do you do step 5 - calling the MCU calibration routine?

Step 5 is just a title for steps 6…10. Maybe I should have used different notation.

Are all the gains in steps 6, 7, 8, 9, 10 TX gains?

Yes, they are TX DC I, DC Q, Gain I, Gain Q and Phase.

If yes, then if I get it right, you search for a TX gain level which gives you the best DC level.

Yes, just a plain old binary-search-like algorithm.

But how can I eliminate DC at the gain level that I need?

If you mean at different PAD gains, then you need to re-calibrate (or cache) the calibration values for each PAD gain setting, as the offset drifts slightly with different PAD and frontend gain settings.
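
As a rough illustration of the caching idea (the names here are hypothetical, not LimeSuite API):

#include <map>

struct TxDcOffsets { int dcI; int dcQ; };

TxDcOffsets calibrateTxDc();          // stand-in: runs the RSSI search at the current gain
std::map<int, TxDcOffsets> txDcCache; // calibration results keyed by PAD gain setting

TxDcOffsets txDcForPadGain(int padGain)
{
    auto it = txDcCache.find(padGain);
    if (it != txDcCache.end())
        return it->second;                       // reuse cached offsets
    return txDcCache[padGain] = calibrateTxDc(); // calibrate once per PAD gain
}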

What “other gains” do you mean in step 11? Maybe you meant other frequencies?

I meant that the procedure in steps 6…10 is repeated, but with respect to DC I, DC Q, Gain I, Gain Q and Phase.
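
In sketch form, reusing the searchMinimum() routine from the earlier sketch (the enum, selectCorrector() and the value range are illustrative):

#include <cstdint>
#include <initializer_list>

enum class TxCorrector { DcI, DcQ, GainI, GainQ, Phase };

void selectCorrector(TxCorrector c);           // stand-in: routes writes to this knob
int16_t searchMinimum(int16_t lo, int16_t hi); // the search sketched earlier

void calibrateAllTxCorrectors()
{
    // One pass of steps 6...10 per corrector, coordinate-descent style.
    for (TxCorrector c : { TxCorrector::DcI, TxCorrector::DcQ,
                           TxCorrector::GainI, TxCorrector::GainQ,
                           TxCorrector::Phase })
    {
        selectCorrector(c);
        searchMinimum(-1024, 1023); // illustrative range; real ranges differ per corrector
    }
}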

I’ve noticed that the latest version of the stock calibration routine does a better job.
Still, it’s about 10 dB worse than what I’m using.
Maybe I’ll try to implement a fine calibration routine which will tune the gains and offsets obtained from stock calibration to their best. By the way, my current implementation runs for about 5 seconds, which is very long compared to stock calibration. A two-step calibration seems a good compromise: coarse calibration (stock), then an optional fine-tune calibration.

Stock calibration is essentially doing the same, except it does a true binary search and makes sure that the RX gain is not saturating the receiver. Are you running your algorithm with internal or external loopback?

I’m updating the code for an external loopback option, and it gives ~10 dB better results. It varies by frequency, but achieves a DC level around -65 dBc.
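
For reference, a saturation guard like the one mentioned above could look roughly like this (the threshold and helpers are assumptions, not the stock implementation):

#include <cstdint>

uint32_t readRSSI();      // stand-in: RSSI readout
void setRxGain(int gain); // stand-in: RX gain control

// Back the RX gain off until the RSSI reading has headroom, so the
// search compares unclipped measurements.
void avoidSaturation(int maxGain, uint32_t headroomThreshold)
{
    int gain = maxGain;
    setRxGain(gain);
    while (readRSSI() > headroomThreshold && gain > 0)
        setRxGain(--gain);
}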

@DreamNik:

Thank you, it’s pretty clear now! Also glad to hear from @ricardas that he’s working on external loopback, hopefully there will be no need to do it myself…

Yes, mine is basically the same as stock. The reason I made one is that the previous stock calibration didn’t work at all; every run gave completely different results. The new stock calibration does a very good job.
I’ve run the algorithm using internal, on-board, and external (U.FL-U.FL) loopbacks.
All of them were pretty close.

-65 dBc?
My LimeSDR now calibrates to about -32 dBc with stock firmware using internal loopback, as measured by a spectrum analyzer.
Adjusting DC by hand gives -62 dBc. At this level the DC spur pulses 20 dB up and down at a rate of 1-2 Hz.

Sideband rejection is -35 dBc after stock calibration.
After manual tuning it’s about -70 dBc.

Another problem is that the actual DC minimum (as measured by a spectrum analyzer) is not the same as the minimum measured by the LimeSDR.
For example, the LimeSDR thinks the DC minimum is at I=-398, Q=171 (changing these values makes both RSSI and the FFT bin at DC rise), while the real DC minimum is at offsets I=-450, Q=105.

SXT and SXR frequencies are separated by 1 MHz.

What frequency and output band are you using? Are you sure the gain you set is not overloading the transmitter?

Another problem is that the actual DC minimum (as measured by a spectrum analyzer) is not the same as the minimum measured by the LimeSDR.

That’s what I said at the very beginning; I should’ve chosen better words.

The calibration is done using the chip’s internal loopback. As the frequency goes toward the high end of the spectrum, at some point the characteristics of the chip’s internal loopback and the board’s external path start to diverge. So the calibration works fine, but the conditions are slightly different during calibration and after it’s finished.

I’m working with the 2.4 GHz band, sometimes tuning away to 2.2, 2.8 or 2.9 GHz just to test that interference isn’t causing any problems. TX is on BAND2.

Frontend gain is set to the bare minimum, e.g. gain=1 or gain=2, as I mentioned this problem in the topic Heavy distortions on TX.

PAD gain varies. Typically, higher gain gives better results.

Is your board modified in any way? Share your chip configuration file.

No, stock HF unmodified LimeSDR v1.4.

Configuration file:

Here is what I get:

No calibration:

Stock calibration:

Manual calibration, using LimeSDR receiver and FFT:

Manual calibration, using external spectrum analyzer:

Your manual calibration results look much better than the stock calibration ones. Also note that the signal level stays the same. Can you say what gain and antenna ports you used?

I’ve used TX BAND2 and RX LNAH (for on-board loopback) and LB2 (for internal loopback).
I tried many gains, with the same difference in calibration.
With lower gains (TBB) the signal itself is much cleaner.

I’ve also noticed that LMS_Calibrate doesn’t seem to do a good job of suppressing TX LO leakage, and I have started working on doing a calibration myself.

Are the analog DC corrections documented anywhere? In the Programming and Calibration document I find no references to DCWR_TXAI and friends, or PD_DCDAC_TXA or DCMODE. All of these registers seem to be accessed by the calibration routines in LMS7002M_RxTxCalibrations.cpp.

Is there a more complete document somewhere?

Is there some black magic to use those analog corrections? In my hands, they don’t seem to have any effect at all, which may explain why LMS_Calibrate doesn’t take care of the leakage?

On the other hand, the DCCORRI and DCCORRQ seem to do exactly as advertised. Is there a downside to using them?

Try the CalibrationUpdate branch.

I don’t know about documentation.
To use the TX DC analog correctors: set DCMODE=1 and power up the DC DACs with PD_DCDAC_TXA=0; now the correctors are active.
To change a corrector value: set DCWR_TXAI=0, write the desired value DC_TXAI=value, then set DCWR_TXAI=1 (the same applies to all other channels).
Corrector values are not two’s complement; I suggest reusing the Read/WriteAnalogDC functions:
https://github.com/myriadrf/LimeSuite/blob/CalibrationUpdate/mcu_program/common_src/lms7002m_calibrations.c#L307

Analog correctors have a smaller step and a bigger range than the digital DCCORRI and DCCORRQ correctors.
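
In Modify_SPI_Reg_bits() terms (the style used in LimeSuite), the sequence described above would look like this sketch, assuming value is already encoded in the register’s non-two’s-complement format, as the Read/WriteAnalogDC functions handle:

Modify_SPI_Reg_bits(LMS7param(DCMODE), 1);        // select analog DC correctors
Modify_SPI_Reg_bits(LMS7param(PD_DCDAC_TXA), 0);  // power up the TX channel A DC DACs
Modify_SPI_Reg_bits(LMS7param(DCWR_TXAI), 0);     // prepare the write strobe
Modify_SPI_Reg_bits(LMS7param(DC_TXAI), value);   // desired corrector value
Modify_SPI_Reg_bits(LMS7param(DCWR_TXAI), 1);     // strobe high: commit the value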

@ricardas Thanks for the explanation. Those DC corrections now seem to work for me! A couple of questions, though:

  1. I had found this:

Modify_SPI_Reg_bits(LMS7param(PD_DCDAC_TXA), 0);
Modify_SPI_Reg_bits(LMS7param(DCWR_TXAI), 0);
Modify_SPI_Reg_bits(LMS7param(DCWR_TXAI), 1);
Modify_SPI_Reg_bits(LMS7param(DC_TXAI), int2txdcreg(dcI));
Modify_SPI_Reg_bits(LMS7param(DCWR_TXAQ), 0);
Modify_SPI_Reg_bits(LMS7param(DCWR_TXAQ), 1);
Modify_SPI_Reg_bits(LMS7param(DC_TXAQ), int2txdcreg(dcQ));

in CalibrateTx, but couldn’t quite figure out what the DCWR_TXAI toggle was doing; it seemed weird for it to come before setting the register. Your instructions make it seem like that code is buggy? Shouldn’t the DCWR toggle come after setting the value in DC_TXAI?

Question 2: Is there by any chance supposed to be a +1 inside the abs on line 314 above? Without it there’s a weird discontinuity going from -1 (which maps to 2046) to 0 (which maps to 0), and another weirdness: -1024 maps to 2047, which seems horribly wrong.