I’m new to the LimeSDR platform, but is it possible to remove the TX carrier when not transmitting?
I’m trying to transmit in bursts, and whenever I’m not actively transmitting the carrier pops up above the noise floor. Just activating the stream is enough to make the carrier appear.
I am running the calibration, but it doesn’t seem to be doing a good enough job.
I’ve seen posts that talk about playing with the DC Corrector sliders in Lime Suite, rolling your own calibration procedure, and simply enabling/disabling the front end manually. The first two approaches have not been successful for me. The last one does work, but the time between bursts is short enough that switching the front end off in time wasn’t viable.
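For clarity, by manually enabling/disabling the front end I mean gating the TX path around each burst, roughly like this with the LimeSuite C API (just a sketch; send_burst() is a placeholder for the actual streaming code):

```cpp
#include "lime/LimeSuite.h"

// Sketch: gate the TX front end around each burst so the leaked carrier is
// only present while actually transmitting. send_burst() is a placeholder.
void transmit_gated(lms_device_t *dev, void (*send_burst)(lms_device_t *))
{
    LMS_EnableChannel(dev, LMS_CH_TX, 0, true);   // bring the TX path up just before the burst
    send_burst(dev);                              // stream the burst samples
    LMS_EnableChannel(dev, LMS_CH_TX, 0, false);  // drop the TX path so the carrier disappears
}
```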
All these posts were from several years ago, so I assumed that any issues would have been addressed by now.
So is it possible to get rid of the carrier or is this something that is just to be expected with the LimeSDR?
Hello,
I am bumping this because I am now experiencing the same problem.
In my first implementation I was always streaming; if I had no relevant data I streamed zeros, so I assumed the carrier I was seeing was due to that.
Lately I have been working on a SoapySDR implementation and using a USRP B210, because my Limes were hijacked for another project. During this time I implemented timestamps on the bursts, and the carrier has been staying off between bursts. Yesterday I got to borrow the Limes for a few hours and tried running with them instead. That was when I noticed that the carrier was still present between bursts.
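For reference, my burst path looks roughly like this (a trimmed-down sketch; the driver string, frequency, sample rate and buffer contents are just placeholders):

```cpp
#include <SoapySDR/Device.hpp>
#include <SoapySDR/Formats.hpp>
#include <SoapySDR/Constants.h>
#include <complex>
#include <vector>

int main()
{
    // the same code runs against either radio by swapping the driver string
    SoapySDR::Device *dev = SoapySDR::Device::make("driver=lime"); // or "driver=uhd"
    dev->setSampleRate(SOAPY_SDR_TX, 0, 2e6);
    dev->setFrequency(SOAPY_SDR_TX, 0, 868e6);

    SoapySDR::Stream *tx = dev->setupStream(SOAPY_SDR_TX, SOAPY_SDR_CF32);
    dev->activateStream(tx);

    // placeholder burst payload
    std::vector<std::complex<float>> burst(4096, std::complex<float>(0.5f, 0.0f));
    const void *buffs[] = {burst.data()};

    // schedule the burst 10 ms ahead and mark its end, so the radio knows
    // nothing follows until the next timed write
    int flags = SOAPY_SDR_HAS_TIME | SOAPY_SDR_END_BURST;
    const long long txTime = dev->getHardwareTime() + 10000000; // ns
    dev->writeStream(tx, buffs, burst.size(), flags, txTime);

    dev->deactivateStream(tx);
    dev->closeStream(tx);
    SoapySDR::Device::unmake(dev);
    return 0;
}
```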
Therefore I am bumping this hoping for some guidance.
So is it possible to get rid of the carrier or is this something that is just to be expected with the LimeSDR?
I don’t think you can truly calibrate it out. I’ve spent more time than I’d like to admit on this issue. The stock calibration routines just don’t work at reducing the carrier. I have about 20 Lime Minis and it’s across the board… I’ve manually adjusted the values for DC_TXAI and DC_TXAQ and can achieve results far superior to what the automatic calibration can do (the carrier is almost gone). However, whether due to time or temperature, the result doesn’t always hold; eventually the carrier becomes more pronounced again.
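If you want to script that kind of manual trim rather than doing it by hand, one option is SoapySDR’s generic setDCOffset() call. This is only a sketch: I’m assuming the Lime Soapy driver routes it to the same Tx DC corrector, and the grid range and step size are just placeholders.

```cpp
#include <SoapySDR/Device.hpp>
#include <SoapySDR/Constants.h>
#include <complex>
#include <cstdio>

int main()
{
    SoapySDR::Device *dev = SoapySDR::Device::make("driver=lime");

    // step the TX DC offset over a coarse grid while watching the carrier on a
    // spectrum analyzer; keep whichever I/Q pair nulls it best
    for (int i = -5; i <= 5; ++i)
    {
        for (int q = -5; q <= 5; ++q)
        {
            const std::complex<double> offset(i * 0.01, q * 0.01);
            dev->setDCOffset(SOAPY_SDR_TX, 0, offset);
            std::printf("I=%+.2f Q=%+.2f  (check carrier level, press Enter for next)\n",
                        offset.real(), offset.imag());
            std::getchar();
        }
    }

    SoapySDR::Device::unmake(dev);
    return 0;
}
```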
When using waveforms with a high peak-to-average ratio (OFDM), it’s pretty obvious where the carrier is, even while you are transmitting.
I contemplated re-implementing the calibration procedure, but if memory serves correctly it’s done in the MCU, and at the time it was more effort than I wanted to spend, especially since it wasn’t guaranteed to remain somewhat stable.
I still wonder if there is something fundamental I am doing wrong, or whether other people are using these for something else, because I would have thought more people would have brought this up or dealt with it by now.
Correct, in some situations the DC corrector step size might be too coarse, so it wouldn’t be able to completely eliminate the DC offset without overshooting.
Automatic calibration is performed using the chip’s internal loopback. The signal characteristics are not completely identical when going through the loopback and when going through the whole RF path to the antennas, so even though the algorithm finds the best values for the loopback, they are not always the best when applied to the full RF path.
Yes, the calibration used to be performed by the MCU to guarantee measurement timing and to reduce the time overhead of register read/write operations, as the USB interface is quite slow. In LimeSuiteNG the calibration code runs on the PC, so you can easily modify it if you want: LimeSuiteNG/embedded/lms7002m/calibrations.c at develop · rjonaitis/LimeSuiteNG · GitHub
Okay, thanks for the replies. Sounds like a calibration error then. Some questions:
When you calibrate, do you store the values in the .ini file, or do you have to do this before every run because it comes out different every time?
How do you notice the “drifting” (is there a better word?), and how long does that usually take?
It seems like I need to read up on the calibration procedure. So far I only saw that it ran some calibration routine and left it at that… Can someone point me to where to start learning about this? That would be super helpful. I tried looking into the LMS7002M docs, but I’m coming from software, not RF, so it takes some time to understand. If I get somewhere to start, I can probably get help from local people once I’m at a level where I can understand the answers…
Calibration values are stored in the chip’s registers, so if the .ini file is saved after calibration it will contain those values. Calibration values are affected by LO frequency, gains, and temperature, so if any of these change, calibration should be redone.
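For example, with the legacy LimeSuite API that workflow would look roughly like this (a sketch; frequency, gain, bandwidth and file name are placeholders):

```cpp
#include "lime/LimeSuite.h"

int main()
{
    lms_device_t *dev = nullptr;
    LMS_Open(&dev, nullptr, nullptr);
    LMS_Init(dev);
    LMS_EnableChannel(dev, LMS_CH_TX, 0, true);

    // set the operating point first, then calibrate for it
    LMS_SetLOFrequency(dev, LMS_CH_TX, 0, 868e6);
    LMS_SetNormalizedGain(dev, LMS_CH_TX, 0, 0.7);
    LMS_Calibrate(dev, LMS_CH_TX, 0, 5e6, 0);   // redo after any LO/gain change

    // the corrector values now sit in the chip registers; saving the
    // configuration keeps them for reuse at the same operating point
    LMS_SaveConfig(dev, "tx_868MHz_cal.ini");

    LMS_Close(dev);
    return 0;
}
```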
External tools like a spectrum analyzer need to be used to measure the change. The drifting is usually related to temperature changes.
Tx DC calibration in a nutshell: the RF is configured in loopback mode so that Rx sees only the Tx DC component, and then a couple of iterations of 2D binary searches are performed to find the DC corrector values for the I and Q channels that result in the lowest signal amplitude.
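A rough stand-alone sketch of that search idea (not the actual LimeSuiteNG code; measure_dc_amplitude() here is a made-up stand-in for the RSSI reading taken through the loopback):

```cpp
#include <cmath>
#include <cstdio>

// Stand-in for the real measurement: on the chip this would be the Rx RSSI of
// the looped-back Tx DC tone; here it is just modelled as a V-shaped amplitude
// with its minimum at some unknown residual offset.
static double measure_dc_amplitude(int dcI, int dcQ)
{
    const int trueI = 37, trueQ = -12; // pretend residual DC offset
    return std::hypot(dcI - trueI, dcQ - trueQ);
}

int main()
{
    int bestI = 0, bestQ = 0;
    for (int pass = 0; pass < 3; ++pass)        // alternate I/Q refinement passes
    {
        for (int axis = 0; axis < 2; ++axis)    // axis 0 trims I, axis 1 trims Q
        {
            int lo = -1024, hi = 1023;          // example signed corrector code range
            while (lo < hi)
            {
                const int mid = lo + (hi - lo) / 2;
                const int i0 = (axis == 0) ? mid     : bestI;
                const int q0 = (axis == 1) ? mid     : bestQ;
                const int i1 = (axis == 0) ? mid + 1 : bestI;
                const int q1 = (axis == 1) ? mid + 1 : bestQ;
                // the amplitude is roughly V-shaped along each axis, so comparing
                // neighbouring codes tells us which half contains the minimum
                if (measure_dc_amplitude(i0, q0) < measure_dc_amplitude(i1, q1))
                    hi = mid;
                else
                    lo = mid + 1;
            }
            (axis == 0 ? bestI : bestQ) = lo;
        }
    }
    std::printf("best DC corrector codes: I=%d Q=%d\n", bestI, bestQ);
    return 0;
}
```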