[SOLVED] Wire format and linkRate

Hi everyone!

I have been fiddling with my brand new LimeSDR and have encountered a strange situation.

I was testing whether wire formats affect performance, so I took the SoapySDRUtil tool and modified it so I could switch the wire format between CS16 and CF32. Strangely, even though CS16 should take half the bandwidth, when I run the program at a rate of 3 Msps the “usbtop” tool reports 12 MB/s of USB usage for both wire formats.
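
In case it is relevant, this is roughly how I was setting things up in the modified rate test (a simplified sketch of the SoapySDR C++ calls, not my exact code; the device args string and buffer size are just placeholders):

#include <SoapySDR/Device.hpp>
#include <SoapySDR/Formats.hpp>
#include <SoapySDR/Constants.h>
#include <complex>
#include <vector>

int main()
{
    //Open the LimeSDR (args string is just an example)
    SoapySDR::Device *dev = SoapySDR::Device::make("driver=lime");
    dev->setSampleRate(SOAPY_SDR_RX, 0, 3e6); //3 Msps

    //The stream format is the second argument; I was switching between
    //SOAPY_SDR_CS16 (complex int16) and SOAPY_SDR_CF32 (complex float32)
    SoapySDR::Stream *rx = dev->setupStream(SOAPY_SDR_RX, SOAPY_SDR_CS16, {0});
    dev->activateStream(rx);

    std::vector<std::complex<short>> buf(8192); //host buffer for CS16
    //(for CF32 the buffer would be std::complex<float> instead)
    void *buffs[] = { buf.data() };
    int flags = 0;
    long long timeNs = 0;
    for (int i = 0; i < 1000; ++i)
        dev->readStream(rx, buffs, buf.size(), flags, timeNs, 1000000);

    dev->deactivateStream(rx);
    dev->closeStream(rx);
    SoapySDR::Device::unmake(dev);
    return 0;
}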

After that I took the “singleRX” example from LimeSuite and did the same test with LMS_FMT_F32, LMS_FMT_I16 and LMS_FMT_I12 at a rate of 1 Msps, reading the results from the “linkRate” field of the stream status and from the “usbtop” command.

For F32 format -> linkRate: 4 MB/s, usbtop: 4 MB/s
For I16 format -> linkRate: 4 MB/s, usbtop: 4 MB/s
For I12 format -> linkRate: 3 MB/s, usbtop: 3 MB/s

For I12 the results make sense, since 12 bits are 1.5 bytes, giving 3 bytes per complex sample; the same goes for I16 (2 × 2 bytes = 4 bytes per complex sample). But for F32 shouldn’t it be 8 MB/s?

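Just to show the arithmetic I am doing (a quick back-of-the-envelope check, nothing taken from the library):

#include <cstdio>

int main()
{
    const double sampleRate = 1e6;      //1 Msps of complex samples
    const double bytesI12 = 2 * 1.5;    //I + Q, 12 bits = 1.5 bytes each
    const double bytesI16 = 2 * 2.0;    //I + Q, 16 bits = 2 bytes each
    const double bytesF32 = 2 * 4.0;    //I + Q, 32 bits = 4 bytes each

    std::printf("I12: %.0f MB/s\n", sampleRate * bytesI12 / 1e6); //3 MB/s, matches
    std::printf("I16: %.0f MB/s\n", sampleRate * bytesI16 / 1e6); //4 MB/s, matches
    std::printf("F32: %.0f MB/s\n", sampleRate * bytesF32 / 1e6); //8 MB/s, what I expected to see
    return 0;
}
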
I am using the following code, changing only the format and the buffer type:
//Streaming Setup

//Initialize stream
lms_stream_t streamId;
streamId.channel = 0; //channel number
streamId.fifoSize = 1024 * 1024; //fifo size in samples
streamId.throughputVsLatency = 1.0; //optimize for max throughput
streamId.isTx = false; //RX channel

//streamId.dataFmt = lms_stream_t::LMS_FMT_F32; //32-bit floats
streamId.dataFmt = lms_stream_t::LMS_FMT_I16;
//streamId.dataFmt = lms_stream_t::LMS_FMT_I12; //12-bit ints

if (LMS_SetupStream(device, &streamId) != 0)
    error();

//Data buffers
const int bufferSize = 10000; //complex samples per buffer
//float buffer[bufferSize * 2]; //for LMS_FMT_F32: interleaved I+Q floats
short buffer[bufferSize * 2]; //for LMS_FMT_I16/I12: interleaved I+Q 16-bit values
//Start streaming
LMS_StartStream(&streamId);

auto t1 = chrono::high_resolution_clock::now();
auto t2 = t1;

while (chrono::high_resolution_clock::now() - t1 < chrono::seconds(10)) //run for 10 seconds
{
    int samplesRead;
    //Receive samples
    samplesRead = LMS_RecvStream(&streamId, buffer, bufferSize, NULL, 1000);
    //I and Q samples are interleaved in buffer: IQIQIQ...
    //Print stats (once per second)
    if (chrono::high_resolution_clock::now() - t2 > chrono::seconds(1))
    {
        t2 = chrono::high_resolution_clock::now();
        lms_stream_status_t status;
        //Get stream status
        LMS_GetStreamStatus(&streamId, &status);
        cout << "RX data rate: " << status.linkRate / 1e6 << " MB/s\n"; //link data rate
        cout << "RX sample rate: " << status.sampleRate / 1e6 << " MSamples/s\n"; //link data rate
        cout << "RX fifo: " << 100 * status.fifoFilledCount / status.fifoSize << "%" << endl; //percentage of FIFO filled
    }
}
//Stop streaming
LMS_StopStream(&streamId); //stream is stopped but can be started again with LMS_StartStream()
LMS_DestroyStream(device, &streamId); //stream is deallocated and can no longer be used
//Close device
LMS_Close(device);

return 0;

Thanks for the help!

Kind regards,
Aridane

Everything is as expected. The hardware works only with samples in the I12 and I16 formats; when F32 is selected, I16 is still used as the source format for the transfer, and the values are simply converted to floating point on the PC side.

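Roughly speaking, the PC side just does a rescale like this (a simplified illustration, not the actual LimeSuite conversion code; the scale factor shown is only an assumption for 12-bit ADC data):

//Sketch of the host-side int16 -> float conversion done when F32 is requested
#include <cstdint>
#include <cstddef>

void convertToFloat(const int16_t *src, float *dst, std::size_t count)
{
    const float scale = 1.0f / 2048.0f; //assuming a 12-bit sample range [-2048, 2047]
    for (std::size_t i = 0; i < count; ++i)
        dst[i] = src[i] * scale;        //normalized output, roughly [-1, 1]
}

So the amount of data going over USB is the same for I16 and F32; only the format of the buffer on the PC side changes.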

I see, thank you very much for the clarification!

For the I and Q samples I am assuming the values range over [-2048, 2047]; should I assume the same for F32?

F32 outputs normalized values in the range [-1, 1].

Thanks! And sorry for the noobism!