How does the software do this? Is it as simple as taking a sample from the I16 format and dividing it by 0x7fff?
Exactly. You can find more details in the implementation of the Read/Write functions (https://github.com/myriadrf/LimeSuite/blob/master/src/protocols/Streamer.cpp#L74):
if(config.format == StreamConfig::FMT_FLOAT32 && !config.isTx)
{
    //in place conversion: pop int16 samples, then widen them to float
    complex16_t* ptr = (complex16_t*)samples;
    int16_t* samplesShort = (int16_t*)samples;
    float* samplesFloat = (float*)samples;
    popped = fifo->pop_samples(ptr, count, &meta->timestamp, timeout_ms);
    //iterate backwards so the 4-byte floats don't overwrite unread 2-byte samples
    for(int i=2*popped-1; i>=0; --i)
        samplesFloat[i] = (float)samplesShort[i]/32767.0f;
}
So what is the advantage of using LMS_FMT_I16 instead of LMS_FMT_I12?
Some hints can be found in comments (https://github.com/myriadrf/LimeSuite/blob/master/src/protocols/Streamer.h#L61):
Choosing a compressed format can decrease link use
at the expense of additional processing on the PC.
Since the LimeSDR uses a 12-bit ADC, is there any difference in the amount of information that can be represented between the LMS_FMT_I12 and LMS_FMT_I16 data types?
It’s difficult to say for sure, but according to the comments mentioned above, both formats carry the same data and LMS_FMT_I16 is just a “16-bit container” for “12-bit values”.
I’m not sure about that last statement, so it would be great to get a definitive answer from @Zack.