Some tech questions about the LimeSDR XTRX ecosystem

Dear All,
I have some technical questions about the LimeSDR XTRX HW:

  1. The data sheet says that the maximum RF bandwidth is more than 100 MHz. However, the user gasparka asked about filtering that limits the bandwidth to 75 MHz. Can I get 100 MHz of bandwidth out of the device, or is there filtering that limits it?

  2. What are the physical dimensions of the LimeFEA mPCIe Full carrier card?

  3. The document [LimeSDR XTRX v1.2 Board — LimeSDR XTRX Boards 23.01 documentation] shows that the system does not have continuous RF coverage. There are gaps in the RF band due to the matching networks. Also, the frequency ranges for the Rx channels in Fig. 6 and Table 3 do not match. Which one is correct?

Thanks for your help.

Regards,

Darek

@ricardas could you clarify please.

Both versions use the same PCB, which measures 100x65mm excluding the PCIe connector.

There are no gaps; this will just be where the matching networks are optimised for.

You can get 100MHz bandwidth; it’s not limited by filtering. The limit depends on the channel count, so in SISO you can have a sampling rate of up to 122.88MHz, but in MIMO only 61.44MHz.
That’s a dependency on the PCIe connection. The XTRX uses PCIe Gen2 x2 lanes, which means the maximum data rate could be 1 GB/s, but most mPCIe adapters provide only an x1 lane, so the maximum data rate drops to 500 MB/s.

SISO: 122.88MHz * 3 bytes (IQ pair) = 368.64 MB/s
MIMO: 122.88MHz * 6 bytes (2xIQ pair) = 737.28 MB/s : too much for x1 PCIe lane.

Hi @andrewback and @ricardas ,
first of all thank you very much for your answers. I got LimeSDR XTRX with the PCIe carrier board. After some mechanical challenges, I have added the board to a MINI-PC MS-01-S1390.

I got all the required SW built from the sources, and the system seems to be operational.
This is what limeGUI is showing me (nothing is connected to the input):

At 100MSPS, we have some overruns reported:

[14:38:33] WARNING: /dev/limepcie0/trx0 Rx0: 306.249 MB/s | TS:499988736 pkt:1953081 o:10432(+2120) l:0(+0) dma:50079/50080(+1) swFIFO:511
[14:38:34] WARNING: /dev/limepcie0/trx0 Rx0: 306.249 MB/s | TS:599998464 pkt:2343744 o:12629(+2197) l:0(+0) dma:60096/60096(+0) swFIFO:512
[14:38:35] WARNING: /dev/limepcie0/trx0 Rx0: 306.372 MB/s | TS:699998208 pkt:2734368 o:14794(+2165) l:0(+0) dma:70112/70116(+4) swFIFO:509
[14:38:36] WARNING: /dev/limepcie0/trx0 Rx0: 306.249 MB/s | TS:799997952 pkt:3124992 o:17043(+2249) l:0(+0) dma:80128/80132(+4) swFIFO:509
[14:38:37] WARNING: /dev/limepcie0/trx0 Rx0: 306.249 MB/s | TS:900007680 pkt:3515655 o:19184(+2141) l:0(+0) dma:90145/90148(+3) swFIFO:510

Some questions:

  1. What are the recommended APIs? On one side, the LimeSuite APIs look more C-ish and seem to have a somewhat simpler way of configuring devices; on the other side, LimeSuiteNG should be used with the newer HW (according to the doc). What is your recommendation? Is there any way to use a C-like wrapper with the LimeSuiteNG C++ APIs?
  2. How fast can we retune the RF front-end, and what is the best way to do it?
  3. Based on one of the examples provided, I have tried to implement a simple data dumper (store samples for further analysis with Matlab). However, I ran into some problems. The system provides a few hundred samples with non-zero values, then zeros for a very long period of time, followed by non-zero samples again. I am not sure if I have configured the device properly, or if my implementation is missing some crucial steps. The system always provides the required number of samples (from the StreamRx call), reports overruns, no losses, a data rate of 0 bytes/s, and no late packets.

In the included code I call:

stream->StreamStatus(&rxStreamStat, nullptr);

after reading samples from the device. See included code.

I could not find any information on what happens to the data vector when we have:

  • overruns
  • losses
  • late packets

How can I check the integrity of the samples?

Your comments are highly appreciated!

With the best regards,
Darek

The data dumper code:
#include "limesuiteng/limesuiteng.hpp"
#include <iostream>
#include <chrono>
#include <csignal>
#include <memory>
#include "common.h"
#include <stdio.h>

using namespace lime;
using namespace std::literals::string_view_literals;

double frequencyLO = 1000.0e6;
float msr_duration_in_sec = 0.5;
float sampleRate = 100.0e6;
static uint8_t chipIndex = 0; // device might have several RF chips

bool stopProgram (false);
void intHandler(int dummy)
{
std::cout << "Stopping\n"sv;
stopProgram = true;
}

static LogLevel logVerbosity = LogLevel::Verbose;
static void LogCallback(LogLevel lvl, const std::string& msg)
{
if (lvl > logVerbosity)
return;
std::cout << msg << std::endl;
}

int main(int argc, char** argv)
{

lime::registerLogHandler(LogCallback);

auto handles = DeviceRegistry::enumerate();

if (handles.size() == 0)
{
    std::cout << "No devices found\n"sv;
    return -1;
}

std::cout << "Devices found :"sv << std::endl;
for (size_t i = 0; i < handles.size(); i++)
  std::cout << i << ": "sv << handles[i].Serialize() << std::endl;

std::cout << std::endl;

// Use first available device

SDRDevice* device = DeviceRegistry::makeDevice(handles.at(0));

if (!device)
{
    std::cout << "Failed to connect to device"sv << std::endl;
    return -1;
}

device->SetMessageLogCallback(LogCallback);

device->Init ();

// RF parameters
SDRConfig config;
config.channel[0].rx.enabled = true;
config.channel[0].rx.centerFrequency = frequencyLO;
config.channel[0].rx.sampleRate = sampleRate;
config.channel[0].rx.oversample = 64;
config.channel[0].rx.lpf = 0;
config.channel[0].rx.path = 2; //rxPath;
config.channel[0].rx.calibrate = false;
config.channel[0].rx.testSignal.enabled = false;


// TX is inactive
config.channel[0].tx.enabled = false;
config.channel[0].tx.sampleRate = sampleRate;
config.channel[0].tx.oversample = 2;
config.channel[0].tx.path = 0;
config.channel[0].tx.centerFrequency = frequencyLO - 1e6;
config.channel[0].tx.testSignal.enabled = false;

std::cout << "Configuring device ...\n"sv;

auto t1 = std::chrono::high_resolution_clock::now();
device->Configure(config, chipIndex);
auto t2 = std::chrono::high_resolution_clock::now ();

std::cout << "SDR configured in "sv << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << "ms\n"sv;

FILE *file_id;

// Samples data streaming configuration

StreamConfig streamCfg;
streamCfg.channels[TRXDir::Rx] = { 0 };
streamCfg.format = DataFormat::F32;
streamCfg.linkFormat = DataFormat::I16;

std::unique_ptr<lime::RFStream> stream = device->StreamCreate (streamCfg, chipIndex);

stream->Start();


std::cout << "Stream started ...\n"sv;
signal(SIGINT, intHandler);

const unsigned int NumSamplesToRead = 1024;

complex32f_t **rxSamples = new complex32f_t *[2]; // allocate two channels for simplicity

for (int i = 0; i < 2; ++i)
    rxSamples[i] = new complex32f_t[NumSamplesToRead];


auto startTime = std::chrono::high_resolution_clock::now();
t1 = startTime;
t2 = t1;

uint64_t totalSamplesReceived = 0;
StreamMeta rxMeta{};
StreamStats rxStreamStat;


file_id = fopen("/dev/shm/lime_sdr_samples_1000MHz_100MSPS.bin", "wb");
if (file_id == NULL)
  {
     printf("Cannot create the samples file");
     return -1;
  }

unsigned long max_num_samples_to_collect = (unsigned long) (msr_duration_in_sec * sampleRate);


float re_arr[NumSamplesToRead];
float im_arr[NumSamplesToRead];
float cplx_num[2];
size_t written_to_file;

printf ("max num samples to collect:%lu\n", max_num_samples_to_collect);


int print_debug = 1;


double freq = device->GetFrequency (0, TRXDir::Rx, 0);
printf("Freq:%f [MHz]\n", freq/1.0e6);

while (totalSamplesReceived < max_num_samples_to_collect)
  {

     // read samples
     uint32_t samplesRead = stream->StreamRx (rxSamples, NumSamplesToRead, &rxMeta);

     if (samplesRead == 0)
       {
         printf("--ZERO SAMPLES --\n");
         continue;
       }

    // Check status. Is this correct way and place to call it?

    stream->StreamStatus(&rxStreamStat, nullptr);

    if (print_debug)
       {

          printf (" pkg overrun: %d, lost: %d, data rate:%f, late:%d, "
                  "packets:%ld \n ",
                  rxStreamStat.overrun, rxStreamStat.loss, rxStreamStat.dataRate_Bps, rxStreamStat.late, rxStreamStat.packets);

            //printf("Samples read %d total sample:%ld\n", samplesRead, totalSamplesReceived);
        };



    for (uint32_t i = 0; i < samplesRead; i++)
     {
         cplx_num[0] = rxSamples[0][i].real();
         cplx_num[1] = rxSamples[0][i].imag ();
         written_to_file = fwrite (cplx_num, sizeof (float), 2,  file_id);
         //printf(" re:%f im: %f\n", cplx_num[0] , cplx_num[1]);
      }
    totalSamplesReceived += samplesRead;
}



// clean up
stream.reset ();
fclose (file_id);

DeviceRegistry::freeDevice(device);

for (int i = 0; i < 2; ++i)
    delete[] rxSamples[i];
delete[] rxSamples;

return(0);

}

limeGUI is not particularly efficient: it calculates the FFT using only a single CPU thread while also updating the GUI, so it might not sustain the highest sampling rates the SDR hardware is capable of if the host CPU is weak.

LimeSuiteNG has a ‘limesuiteng-legacyapi’ library that provides the same C API functions as LimeSuite, but that’s only for backwards compatibility, so that existing software doesn’t need to reimplement support.

Right now it’s quite slow, a couple of milliseconds, but that’s just because the software does extra redundant register writes. With some optimizations, the XTRX can retune the LO in ~500us; it can be less if the frequencies are known ahead of time and dynamic values are precomputed.

There are issues with your implementation if you expect it to work at a 100MHz sampling rate:

  1. Reading in chunks of only 1024 samples. That creates a lot of CPU overhead in queue locking, so you should read bigger chunks, at least 16384 samples.
  2. Reading StreamStatus after each StreamRx. That also creates a lot of CPU overhead, as the counters need to be synchronized between threads. Use it periodically if you need to see what the state is; if you are interested in bad events like packet loss, overrun, or underrun, you can provide a stream status callback function in StreamConfig, and it will get called when they happen.
  3. Using printf after each read to output the status. Printing to screen is really slow, so your program will spend more time printing than actually processing the samples, causing Rx overruns.
  4. Writing the sample data to file one sample at a time, which is also inefficient. You should just write the entire buffer in one go; the floating point samples are already interleaved IQIQIQ…

Rx overrun: the PC has received the data, but the FIFOs are full and cannot accept more, so the received data is dropped.
Rx loss: the PC has received data, but samples were missing since the last reception.
Tx underrun: the PC is sending a Tx packet with a timestamp that is already past, so the data is dropped and not sent to hardware.
Tx late: the PC has sent timestamped data to the hardware, and the hardware has received it, but the samples were not transmitted, as the timestamp was already late.

Just to add that new applications should not use the legacy API, as it provides only a subset of NG functionality and, perhaps more importantly, may be deprecated at some point in the future.

Hi @andrewback @ricardas,
thanks for your answers. I did some experiments with the HW and a modified version of the code from my previous post.

Input:
I fed the LimeSDR XTRX from the VSG60A vector signal generator: a single sinusoid at 1025MHz with Ps = -40dBm. The LimeSDR was tuned to fc = 1GHz, and the sampling rate was set to 100MSPS.

Samples collection:
See the attached code. The system collects 2^16 samples (or any other number) and saves the whole buffer to a file in one go. I guess this is the bare minimum to get the samples stored to a file.
The data was imported into Matlab, and these are the results:

The real part (and also the imaginary part) of the data vector is zeros most of the time (this is within one single 65k-sample buffer received from the device). Zooming into the first part of the samples shows:


and two “bursts”:

I did experiments with 1024^2 samples, and the signal looked very similar to the 65k data vector.

I did not dive into the limeGUI source code (I am not a C++ specialist), but it seems to be working well.

Maybe there is a “magic” configuration item that I did not set correctly, and the lack of a proper setup is the main source of this behaviour.

Could anybody try to run the code below on the LimeSDR XTRX and see what happens?

Cheers,
Darek

The code:
#include "limesuiteng/limesuiteng.hpp"
#include <iostream>
#include <chrono>
#include <csignal>
#include <memory>
#include "common.h"
#include <stdio.h>

using namespace lime;
using namespace std::literals::string_view_literals;

double frequencyLO = 1000.0e6;
float msr_duration_in_sec = 0.5;
float sampleRate = 100.0e6;
static uint8_t chipIndex = 0; // device might have several RF chips

bool stopProgram (false);
void intHandler(int dummy)
{
std::cout << "Stopping\n"sv;
stopProgram = true;
}

static LogLevel logVerbosity = LogLevel::Verbose;
static void LogCallback(LogLevel lvl, const std::string& msg)
{
if (lvl > logVerbosity)
return;
std::cout << msg << std::endl;
}

int main(int argc, char** argv)
{

lime::registerLogHandler(LogCallback);

auto handles = DeviceRegistry::enumerate();

if (handles.size() == 0)
{
    std::cout << "No devices found\n"sv;
    return -1;
}

std::cout << "Devices found :"sv << std::endl;
for (size_t i = 0; i < handles.size(); i++)
  std::cout << i << ": "sv << handles[i].Serialize() << std::endl;

std::cout << std::endl;

// Use first available device

SDRDevice* device = DeviceRegistry::makeDevice(handles.at(0));

if (!device)
{
    std::cout << "Failed to connect to device"sv << std::endl;
    return -1;
}

device->SetMessageLogCallback(LogCallback);

device->Init ();

// RF parameters
SDRConfig config;
config.channel[0].rx.enabled = true;
config.channel[0].rx.centerFrequency = frequencyLO;
config.channel[0].rx.sampleRate = sampleRate;
config.channel[0].rx.oversample = 2;
config.channel[0].rx.lpf = 0;
config.channel[0].rx.path = 2; //rxPath;
config.channel[0].rx.calibrate = false;
config.channel[0].rx.testSignal.enabled = false;


// TX is inactive
config.channel[0].tx.enabled = false;
config.channel[0].tx.sampleRate = sampleRate;
config.channel[0].tx.oversample = 2;
config.channel[0].tx.path = 0;
config.channel[0].tx.centerFrequency = frequencyLO - 1e6;
config.channel[0].tx.testSignal.enabled = false;

std::cout << "Configuring device ...\n"sv;

auto t1 = std::chrono::high_resolution_clock::now();
device->Configure(config, chipIndex);
auto t2 = std::chrono::high_resolution_clock::now ();

std::cout << "SDR configured in "sv << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << "ms\n"sv;

FILE *file_id;

// Samples data streaming configuration

StreamConfig streamCfg;
streamCfg.channels[TRXDir::Rx] = { 0 };
streamCfg.format = DataFormat::F32;
streamCfg.linkFormat = DataFormat::I16;

std::unique_ptr<lime::RFStream> stream = device->StreamCreate (streamCfg, chipIndex);

stream->Start();


std::cout << "Stream started ...\n"sv;
signal(SIGINT, intHandler);

const unsigned int num_samples_to_read =  64 * 1024;
const unsigned int num_blocks_to_read    = 10;

complex32f_t **rxSamples = new complex32f_t *[2]; // allocate two channels for simplicity

for (int i = 0; i < 2; ++i)
    rxSamples[i] = new complex32f_t[num_samples_to_read];


auto startTime = std::chrono::high_resolution_clock::now();
t1 = startTime;
t2 = t1;

uint64_t totalSamplesReceived = 0;
StreamMeta rxMeta{};
StreamStats rxStreamStat;


file_id = fopen ("/dev/shm/lime_sdr_samples_fs1025MHz_fc1000MHz_100MSPS_pt_m40dBm.bin", "wb");   
if (file_id == NULL)
  {
     printf("Cannot create the samples file");
     return -1;
  }


size_t written_to_file;
size_t total_written_to_file = 0;

printf ("max num samples per block:%d\n", num_samples_to_read);
printf ("max num samples to collect:%d\n", num_blocks_to_read * num_samples_to_read);


int print_debug = 0;


double freq = device->GetFrequency (chipIndex, TRXDir::Rx, 0);

printf("Freq:%f [MHz]\n", freq/1.0e6);    

uint32_t blk_cnt = 0;
// write the number of samples per block
written_to_file
  = fwrite (&num_samples_to_read, sizeof (unsigned int), 1, file_id);
            
while (blk_cnt  <  num_blocks_to_read)
  {

     // read samples       
     uint32_t samplesRead = stream->StreamRx (rxSamples, num_samples_to_read, &rxMeta);

     if (samplesRead == 0)
       {
         printf("--ZERO SAMPLES --\n");
         continue;
       }
   
    if (print_debug == 1)
       {
         if ((blk_cnt % 5) == 0)
           {
            stream->StreamStatus(&rxStreamStat, nullptr);
            printf (" pkg overrun: %d, lost: %d, data rate:%f, late:%d, "
                    "packets:%ld \n ",
                    rxStreamStat.overrun, rxStreamStat.loss,
                    rxStreamStat.dataRate_Bps, rxStreamStat.late,
                    rxStreamStat.packets);
            // note: do not reset blk_cnt here, otherwise the while loop never terminates
           }
       }
    written_to_file
      = fwrite (rxSamples[0], sizeof (float), 2 * num_samples_to_read, file_id);               
    
    total_written_to_file += (sizeof(float) * written_to_file);
            
    
    totalSamplesReceived += samplesRead;
    blk_cnt ++;
}

printf ("Bytes written to file: %ld \n", total_written_to_file);
printf ("Total samples received: %ld \n", totalSamplesReceived);

// clean up
stream.reset ();
fclose (file_id);

DeviceRegistry::freeDevice(device);

for (int i = 0; i < 2; ++i)
    delete[] rxSamples[i];
delete[] rxSamples;

return(0);

}

and the output from the program:

Devices found :
0: LimeSDR XTRX, media=PCIe, addr=/dev/limepcie0, serial=000000000e5dd88c

Configuring device …
Sampling rate set(100.000 MHz): CGEN:400.000 MHz, Decim: 2^8, Interp: 2^8
SDR configured in 190ms
/dev/limepcie0/trx0 Rx0 Setup: usePoll:1 rxSamplesInPkt:256 rxPacketsInBatch:39, DMA_ReadSize:40560, link:I16, batchSizeInTime:99.84us FS:100000000.000000

Stream started …
max num samples per block:65536
max num samples to collect:655360
Freq:999.999996 [MHz]
Bytes written to file: 5242880
Total samples received: 655360
Rx0 stop: packetsIn: 2652

I’ve run your code, and it seems to work correctly. I do get a file with the samples, as expected. It might be a performance problem or a data transfer issue. Does the problem happen at lower sampling rates (like 10MHz)?

What is your XTRX gateware version/revision? You can check by running “limeDevice -f” in a terminal.
Are there any error messages in the system log? “sudo dmesg | grep limepcie”

Once the board has been configured, you can try to dump samples using “limeTRX --log=debug --samplesCount 655360 --output iqsamples.bin”
The output file will contain interleaved int16_t data of IQ samples.

Hi @ricardas,
The output of the command:
limeDevice -f
Found 1 device(s) :
0: LimeSDR XTRX, media=PCIe, addr=/dev/limepcie0, serial=000000000e5dd88c
Expansion name : UNSUPPORTED
Firmware version : 4
Gateware version : 1
Gateware revision : 17
Gateware target board : LimeSDR XTRX
Hardware version : 18
Protocol version : 1
Serial number : 241031308
SPI slave devices :
FPGA
LMS7002M
Memory devices :
EEPROM
FPGA/FLASH
FPGA/gold-image
FPGA/user-image
GPS Lock:
GPS - Undefined
Glonass - Undefined
Galileo - Undefined
Beidou - Undefined

sudo dmesg | grep limepcie gives:

[ 3.241102] limepcie: loading out-of-tree module taints kernel.
[ 3.241133] limepcie: module verification failed: signature and/or required key missing - tainting kernel
[ 3.241491] limepcie : module init v0.1.9-gbecc3e92
[ 3.241818] limepcie 0000:01:00.0: enabling device (0000 -> 0002)
[ 3.270006] limepcie 0000:01:00.0: [device info] LimeSDR-XTRX FW:4 HW:18 PROTOCOL:1 S/N:0x000000000E5DD88C
[ 3.270019] limepcie 0000:01:00.0: 8.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x2 link)
[ 3.270024] limepcie 0000:01:00.0: dma_get_required_mask: ffffffff.
[ 3.270262] limepcie 0000:01:00.0: try_set_dma_bitmask(32): test buffer va:0000000061d9a96d pa:104c5c000 bus:fffff000
[ 3.270267] limepcie 0000:01:00.0: using 32bit DMA mask
[ 3.270319] limepcie 0000:01:00.0: 1 MSI IRQs allocated.
[ 3.270330] limepcie 0000:01:00.0: DMA buffer size(65536)
[ 3.270332] limepcie 0000:01:00.0: DMA channels: 1
[ 3.273259] limepcie 0000:01:00.0: Creating /dev/limepcie0/trx0
[ 3.273805] limepcie 0000:01:00.0: Creating /dev/limepcie0/control0
[ 3.273840] limepcie 0000:01:00.0: UART count: 0

and the third one:
limeTRX --log=debug --samplesCount 655360 --output iqsamples.bin
FPGA: StopStreaming
FPGA: StopWaveformPlayback
FPGA: ResetPacketCounters
FPGA: ResetTimestamp
/dev/limepcie0/trx0 Rx0 Setup: usePoll:1 rxSamplesInPkt:74 rxPacketsInBatch:13, DMA_ReadSize:8944, link:I12, batchSizeInTime:96.2us FS:9999999.000000

SetOSThreadPriority: Failed to set priority(6), sched_prio(2), policy(1), ret(99)
RxSetup wait for Rx worker thread.
Rx worker thread ready.
Rx data to file: iqsamples.bin
FPGA: StartStreaming
Rx receive loop start.
Samples received: 16384
TRXLooper::Stop()
TRXLooper: wait for Rx loop end.
Rx receive loop end.
Rx0 stop: packetsIn: 6032
FPGA: StopStreaming
RxTeardown wait for Rx worker shutdown.
Rx worker thread shutdown.
DeviceRegistry Removed: LimePCIe
DeviceRegistry Removed: FTDI
DeviceRegistry Removed: FX3

The samples from the int16 bin file look like this (blue → Re, red → Im):

At fs=10MSPS and f=1002 MHz, I got:

and zoomed on the real part of the data vector:

  1. I am using gcc version 12.2.0 (Debian 12.2.0-14) for building the SW.
  2. I am using only the RX part of the LimeSDR.
  3. The input signal goes via the RF front-end (LNA) on the mPCIe carrier board.

Regards,
Darek

Just double-checking: are you using exactly the LimeSDR XTRX v1.2 board? Are you using a modified gateware? The reported hardware version of the LimeSDR XTRX should be 2; your gateware is returning 18.

[ 3.270267] limepcie 0000:01:00.0: using 32bit DMA mask

The driver is currently using only 32-bit DMA addressing, so just to avoid any host system addressing issues, I suggest updating the XTRX gateware from: LimeSDR-XTRX_GW/bitstream/user_flash_programming_file.bin at master · myriadrf/LimeSDR-XTRX_GW · GitHub
Flash the gateware with: limeFLASH -t "FPGA/user-image" user_flash_programming_file.bin
The latest gateware will allow the driver to switch to 64-bit DMA addressing, avoiding any potential Linux compatibility layers.

The graph is too dense to see anything; should it contain the sine wave?
The amplitude seems low compared to the floating point version. Just to make sure the reading is right: the values in the file are raw (int16_t I, int16_t Q) in native endianness (so most likely little endian). I’m just interested in whether it has the same 0-value gaps.

Hi @ricardas,

I got the board from Crowd Supply a week ago, and I did not touch any firmware on it. The board version is 1.2.

As for the int16 data stream, I need to run the test once again.

BTW, which image shall I try to flash?
Cheers,
Darek

You can enable the internal test signal generator; it will produce a sine wave. In your code, add:

config.channel[0].rx.testSignal.enabled = true;
config.channel[0].rx.testSignal.divide = lime::ChannelConfig::Direction::TestSignal::Divide::Div8;
config.channel[0].rx.testSignal.scale = lime::ChannelConfig::Direction::TestSignal::Scale::Half;

First test the int16 data gathered with limeTRX, see if it has the same problem.
If it does, then try to update the gateware.

The one I linked, user_flash_programming_file.bin

Hi @ricardas,
I have modified the code to use the internal generator (the VSG60A is still connected to the input), sampled at 10MSPS. The result of the data collection and processing:


and zoomed in a bit on a single burst:

Cheers,
Darek