Data format of samples

Using the C++ LimeSuite library.

Since the LimeSDR uses a 12-bit ADC, is there any difference in the amount of information that can be represented between the LMS_FMT_I12 and LMS_FMT_I16 data types?

I understand that LMS_FMT_I12 is used in the hardware, with 3 bytes representing one IQ sample, while LMS_FMT_I16 uses 4 bytes per IQ sample, as shown here. So what is the advantage of using LMS_FMT_I16 instead of LMS_FMT_I12?

LMS_FMT_F32 outputs normalized values in [-1:1] derived from the LMS_FMT_I16 data type. How does the software do this? Is it as simple as taking a sample in the I16 format and dividing it by 0x7fff?

One for @Zack I think.

Hello @kohlsn,

How does the software do this? Is it as simple as taking a sample from the I16 format and dividing it by 0x7fff?

Exactly. You can find more details in the Read/Write functions implementation (https://github.com/myriadrf/LimeSuite/blob/master/src/protocols/Streamer.cpp#L74):

if(config.format == StreamConfig::FMT_FLOAT32 && !config.isTx)
{
    //in place conversion
    complex16_t* ptr = (complex16_t*)samples;
    int16_t* samplesShort = (int16_t*)samples;
    float* samplesFloat = (float*)samples;
    popped = fifo->pop_samples(ptr, count, &meta->timestamp, timeout_ms);
    for(int i=2*popped-1; i>=0; --i)
        samplesFloat[i] = (float)samplesShort[i]/32767.0f;
}

So what is the advantage of using LMS_FMT_I16 instead of LMS_FMT_I12?

Some hints can be found in comments (https://github.com/myriadrf/LimeSuite/blob/master/src/protocols/Streamer.h#L61):

Choosing a compressed format can decrease link use
at the expense of additional processing on the PC.

Since the LimeSDR uses a 12 bit ADC, is there any difference in the amount of information that can be represented between the LMS_FMT_I12 and LMS_FMT_I16 data type?

It’s difficult to say for sure, but according to the comments mentioned above both formats contain the same data and LMS_FMT_I16 is just a “16-bit container” for “12-bit values”.

I’m not sure about the last statement, so it would be great to get the correct answer from @Zack.

You can send an I and Q sample pair (12 bits each) using 3 or 4 bytes:

  1. When sending using 4 bytes, each of the I and Q samples is sent using 2 bytes, so 4 bits per sample are unused, i.e. link (USB/PCIe) throughput is wasted.
  2. When sending using 3 bytes, all the bits of the 3 bytes are occupied. Hence there is no link throughput waste, but more processing is involved on the host side to unscramble the I and Q samples.

Thank you, that makes a lot of sense.

I did some testing with the data types.

I found that I can normalize the LMS_FMT_I16 data type to produce the same result as using the LMS_FMT_F32.

However, when I normalize the LMS_FMT_I12 data type, I see an increased delay between the transmitter and receiver. Is this because of the I and Q unscrambling? Data being truncated? Or some other reason?

The LMS_FMT_I16 data type is just the LMS_FMT_I12 data type shifted left by 4 bits, so that the least significant 4 bits are 0. If this is the case, wouldn’t it make more sense to normalize LMS_FMT_I16 to LMS_FMT_F32 by dividing the samples by 32752 (0x7FF0) rather than 32767 (0x7FFF)?