A change in the LP filter setting triggers calibration, and the calibration uses the LP bandwidth value as its own bandwidth parameter. That last point may not be correct; I don't know what "bandwidth" really means in the context of calibration. Perhaps it must not exceed the full bandwidth and is therefore capped by the sample rate.
Anyway, 30 MHz and above would mean 30 MS/s and above (I think the LP bandwidth here is a double-sided bandwidth), and this is quite fast. Depending on the hardware you may expect some instability.
I suppose you're trying to work on HF and therefore need an LP bandwidth of 30 MHz or above. You don't need a host sample rate as fast as 30 MS/s; what matters is that the ADC or DAC sample rate is above 30 MS/s. You can then use the hardware decimator/interpolator to achieve this at much smaller host rates. But this makes it critical to know what the calibration bandwidth actually does, because if it has to be capped by the host sample rate, then it needs to be much smaller than the LP bandwidth. I might try to see what capping the calibration bandwidth really does, as there is not much literature about it.
Best regards, Edouard.
Edit: things look much more complex than I thought when I wrote the code: https://wiki.myriadrf.org/LMS7002Mr3_Calibration_Using_MCU There are actually two APIs for calibration:
- LMS_Calibrate uses the channel bandwidth and covers calibration routines 1 and 2, i.e. DC offset and LO leakage (Tx), and phase/gain imbalance
- LMS_SetLPFBW uses the LP bandwidth and covers calibration routines 3 and 4, i.e. analog filter bandwidth tuning
The code calls LMS_Calibrate after LMS_SetLPFBW but passes it the LP bandwidth instead of the channel bandwidth, which seems quite wrong. Moreover, since LMS_SetLPFBW itself invokes calibration routines, all streams should be suspended while it runs.
Issue https://github.com/f4exb/sdrangel/issues/92 has been opened with these findings.