Heavy distortions on TX

Hi!

The LimeSDR heavily distorts signals on TX. For example, I created this graph, which outputs a single-sideband BPSK random signal:

The output of the FFT plot is:

But the real spectrum is heavily distorted on the left side:

Adding to the confusion, running the TxTSP BIST (built-in self-test) (“Start BIST” on the “TxTSP” tab in LimeSuite) helps a little:

For reference, this is the output of a HackRF One (blue) with the same graph:

A much cleaner and better signal.

Can someone give a hint as to why the LimeSDR distorts the signal so much? And why does starting the BIST change anything?

(LimeSDR-USB FW:3 HW:4 Protocol:1 GW:2 GW_rev:10 Ref Clk: 30.72 MHz)

@DreamNik,

Generally, for transmit, you need a sampling rate of roughly 3 MS/s to no more than 5 MS/s, and I see no decimation in your setup (there will be a ratio, generally 4:1 of your interpolation; I’ve found that 64/16 works best). Try a higher sampling rate and introduce decimation in your GNU Radio setup that is 4 times less than your interpolation.

This is based on what I’ve been using with SDRAngel, so give it a try.

73 de Marty, KN0CK

OK, I’ve tried:

  1. sample rate: 1M , 2M , 3M , 4M , 5M, 10M, 20M
  2. decimation (parameter in the Band Pass Filter)
  3. different fractions of the sample rate around 4.5M for different interpolation factors, since SoapySDR doesn’t have any option to control the interpolation ratio

Everything led to the same, or even worse, results.

Maybe I’ve misunderstood your hint?

Is this the chain you’re suggesting?

  1. Signal waveform at rate X
  2. 64× decimation in the GNURadio graph, output at rate X/64
  3. 16× interpolation in the TxTSP, output at rate X/64*16 = X/4
  4. DAC at X/4

@DreamNik,

This evening, when I’m back in front of my setup, I’ll send you what I have for transmit in my SDRAngel setup. In practice, the transmit sampling rate should never be higher than 4 MS/s, and the ratio is 64/16 (interpolation/decimation), but I need to confirm that later - stay tuned.

73 de Marty, KN0CK

I’ve checked these modulations:

  1. AM, xFM, xPSK: mostly not affected.
  2. OFDM signals (like DVB-T) look OK at a glance, but are heavily affected too.
  3. The modulation in the screenshots above is SSB BPSK (or whatever it’s called): very affected.

For me it simply looks like the reason is that the sampling rate is below the minimum advised value of 5 MHz (a TX filter-related limitation). Check this topic. People have reported that using an LO offset can help a little, but personally I would try to avoid all these decimations etc. and just try a higher sampling rate in the traditional approach. If you specify the bitrate (if I see it right, you want 32 kb/s, which yields a double-sided BPSK main lobe width of 64 kHz), I could try to achieve the same effect with my way of generating this kind of signal.

//edit: this is what I got after playing with GRC for a moment. It clearly shows that these distortions can be avoided.
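
Roughly, what I tried boils down to something like this in Python (a sketch from memory, not my exact GRC file; the block choices, rates and the “driver=lime” device string are assumptions you may need to adjust):

```python
from gnuradio import gr, analog, digital, filter
from gnuradio.filter import firdes
import osmosdr

class bpsk_tx(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)

        samp_rate = 8e6   # keep the hardware rate above the ~5 MS/s TX filter limit
        sym_rate = 32e3   # 32 kb/s BPSK -> ~64 kHz wide main lobe
        sps = int(samp_rate / sym_rate)

        bits = analog.random_uniform_source_b(0, 2, 0)             # random 0/1 bits
        symbols = digital.chunks_to_symbols_bc([-1 + 0j, 1 + 0j])  # BPSK mapping
        # Pulse shaping and interpolation up to the hardware rate in one FIR filter
        rrc_taps = firdes.root_raised_cosine(sps, samp_rate, sym_rate, 0.35, 11 * sps)
        shaper = filter.interp_fir_filter_ccf(sps, rrc_taps)

        sink = osmosdr.sink(args="driver=lime")  # device string is a guess - use the
        sink.set_sample_rate(samp_rate)          # args from your own Osmocom Sink
        sink.set_center_freq(433e6)              # arbitrary test frequency
        sink.set_gain(30)

        self.connect(bits, symbols, shaper, sink)

if __name__ == "__main__":
    tb = bpsk_tx()
    tb.start()
    input("Transmitting, press Enter to stop...")
    tb.stop()
    tb.wait()
```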

Sorry, but this is not true at all. In theory, the board supports sample rates up to 61.44 MS/s, and though it will probably start to drop samples at such a high rate, there is no reason why you can’t successfully use a sampling rate of, say, 45+ MS/s on the TX side.

Forget about SSB BPSK.

This happens even with the simplest graph:

Result:

I’ve tried many sample rates (1M, 3.072M, 4M, 5M, 10M, 20M, 30.72M, 40M, 50M, 61.44M) with the same results.

Ignore the DC spike - it’s a calibration problem, I think (TX DC calibration not working).

With some gain tuning I’m able to lower some of the spikes, but I haven’t succeeded in cleaning up the whole signal.

PothosSDR - same results.

Does the same happen when you:

  • use USRP Sink instead of Osmocom Sink,
  • set TX gain to 0,
  • add ‘cacheCalibrations=0’ to the device args,
  • and set the bandwidth to 0?

If so, I suppose your board is faulty.
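
To take GNU Radio out of the equation completely, you could also try the same settings from a bare SoapySDR script (a sketch, assuming the Python bindings; apart from cacheCalibrations=0 the values below are just placeholders):

```python
import numpy as np
import SoapySDR
from SoapySDR import SOAPY_SDR_TX, SOAPY_SDR_CF32

# Open the LimeSDR with calibration caching disabled
sdr = SoapySDR.Device(dict(driver="lime", cacheCalibrations="0"))

samp_rate = 8e6
sdr.setSampleRate(SOAPY_SDR_TX, 0, samp_rate)
sdr.setFrequency(SOAPY_SDR_TX, 0, 433e6)   # arbitrary test frequency
sdr.setGain(SOAPY_SDR_TX, 0, 0.0)          # TX gain 0, as suggested
sdr.setBandwidth(SOAPY_SDR_TX, 0, 0.0)     # bandwidth 0, as suggested

# Phase-continuous 1 MHz complex tone (1024 full cycles per 8192-sample buffer)
n = 8192
t = np.arange(n) / samp_rate
tone = (0.3 * np.exp(2j * np.pi * 1e6 * t)).astype(np.complex64)

stream = sdr.setupStream(SOAPY_SDR_TX, SOAPY_SDR_CF32, [0])
sdr.activateStream(stream)
try:
    while True:
        sdr.writeStream(stream, [tone], n)
except KeyboardInterrupt:
    pass
finally:
    sdr.deactivateStream(stream)
    sdr.closeStream(stream)
```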

@ccsh,

When I made the statement about the transmit sampling rate not being higher than 4 MS/s, I was referring to voice transmit operation. I’ve checked this on my Lime, and anything higher than 4 MS/s in voice transmit mode puts distortion on the signal. Keeping it at or under 4 MS/s, voice transmit sounds better, with hardly any distortion. Other modes will absolutely require a higher sampling rate on transmit and thus weren’t part of my comments, since the bulk of the experimentation I’m doing is for HF voice transmit.

73 de Marty, KN0CK

@martywittrock,

Okay, got it, but I believe it depends on the application used. Generally, using a higher sampling rate should not cause such negative effects if it is correctly supported by all the blocks in the signal path (actually, you should expect even better voice quality, since you are sampling your signal more accurately). So I think the 4 MS/s limitation is introduced by the application you are using, not by the LimeSDR board itself.

@DreamNik,

I have tested an equivalent flowgraph with my LimeSDR board (though I have been using the USRP Sink block instead of the Osmocom Sink) and got far better results - at least 20 dB of attenuation between the actual 1 MHz signal and the highest intermod product:


@ccsh,

And that’s true - the application I’m using is SDRAngel, and I, too, believe that the issue is application-specific.

73 de Marty, KN0CK

If “IF Gain” = TIA, then 0 dB is the maximum. This has always been a problem for me. It seems to act like a compressor, amplifying noise, distortion, intermod, and harmonics while keeping the desired signal about the same. I think it works in three steps: -12, -3, and 0 dB. My SpecAn only goes up to 1000 MHz, but I think I will try this and see what happens at 950 MHz.

Good point, that may be it. Actually, this strange gain mapping is one of the reasons why I suggested using the USRP Sink instead of the Osmocom Sink - with the USRP Sink you are supposed to provide a single gain value, which is distributed among all the amplification stages automatically by some clever logic hidden in the SoapyLMS7 plugin.
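
For reference, the same distinction is visible directly in the SoapySDR API (Python sketch; the stage names are whatever the driver reports, the “PAD” below is only an example):

```python
import SoapySDR
from SoapySDR import SOAPY_SDR_TX

sdr = SoapySDR.Device(dict(driver="lime"))  # device args are an assumption

# Single overall gain: the driver distributes it over the individual stages itself
sdr.setGain(SOAPY_SDR_TX, 0, 40.0)

# Or address the stages by name; list what the driver actually exposes first
for name in sdr.listGains(SOAPY_SDR_TX, 0):
    print(name, sdr.getGainRange(SOAPY_SDR_TX, 0, name))
sdr.setGain(SOAPY_SDR_TX, 0, "PAD", 30.0)   # example stage name, check the list above
```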

If I cut the bandwidth down, I get a clean-ish signal. Opened up, I get something very similar to that Agilent output above.
What really makes a difference is cutting the amplitude. Below 0.5 it gets nice. At 0.35 it looks great. I had to tighten up the range on my SpecAn to see it.
I have no idea what those three gains are doing. They all make some difference. I will have to put sliders on them and experiment.

OK, with sliders:
I found I have the best results with the IF below -30.
The RF has the least effect; it’s mostly just overall gain.
The BB has a large and inconsistent effect. I can turn it down and get a clean signal. Then I can turn it back up and suddenly get distortion. I can turn it back down and not get a clean signal until it’s below where it was before. Then I can turn it up higher than where it distorted and still get a clean signal, etc. This is the same for all the controls, but worse for BB.
You SHOULD be able to have the same three numbers give the same results every time.
This is all with the generator amplitude at 0.35.
Edit: I just tried pushing the amplitude and got a good signal with RF: 10, IF: -20, BB: 0.
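
For reference, the three sliders just drive the three gain setters on the Osmocom Sink (a sketch with the values that worked for me above; how gr-osmosdr maps them onto the LMS7 stages is not obvious to me):

```python
import osmosdr

sink = osmosdr.sink(args="driver=lime")  # device string is a guess - use your own
sink.set_gain(10)      # "RF Gain" slider
sink.set_if_gain(-20)  # "IF Gain" slider
sink.set_bb_gain(0)    # "BB Gain" slider
```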

I’ve narrowed things down:

  1. TX calibration is a mess. It won’t properly calibrate DC, phase and gains.

This is what I get right after the graph starts running:

After manual adjustments, the spectrum is just spot-on:
I’m able to almost completely remove the DC (below the noise floor with 20 MHz bandwidth).
I’m able to get a 60 dB difference between the signal and its mirror.

Zoomed peak:


(note the noise around the peak - what causes it?)

  2. “Frontend gain” in LimeSuite has an ENORMOUS effect on all the other unwanted signals.

This is the spectrum with “Frontend gain” set to 4 (in LimeSuite, while the GNURadio graph is still running):

This is with FG=7:

This is with FG=20:

This is with FG=36:

The frontend gain causes the peaks to jump up after it reaches some small value, around 4-5.

Has anyone experienced this behavior with the frontend gain?
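
By the way, regarding point 1: the manual adjustments can presumably also be applied programmatically, since SoapySDR exposes TX correction calls (a sketch, assuming SoapyLMS7 wires them through to the DC and phase/gain correctors; the values are placeholders that have to be found experimentally, just like in LimeSuite):

```python
import SoapySDR
from SoapySDR import SOAPY_SDR_TX

sdr = SoapySDR.Device(dict(driver="lime"))

# Manual TX DC offset correction (complex I/Q offset, placeholder value)
sdr.setDCOffset(SOAPY_SDR_TX, 0, complex(0.01, -0.005))

# Manual IQ balance correction to push down the mirror image (placeholder value)
sdr.setIQBalance(SOAPY_SDR_TX, 0, complex(1.0, 0.02))
```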

Hi,
setting too high a frontend gain causes saturation, so harmonics start to appear. There is a ‘Tune gain’ button in the LimeSuite TBB tab. It selects the maximum frontend gain setting that does not cause saturation while utilizing the full DAC range.

@IgnasJ, is there any equivalent to this magic ‘Tune gain’ button in SoapySDR, so it can also be used in a custom C++ application to automatically pick the optimal gain that does not cause saturation?

Good question!