Small packet loss at the beginning of the RX stream

My use case is to make occasional parallel captures of a certain duration from two RX inputs, but I found that this glitch also happens when using a single input.

The working cycle is as follows (a rough code sketch is included below the outline):

Init Lime board for 2 channels;
SetupRXStream x N, where N can be 1 or 2;

do {
  StartStream x N;
  receive TargetSampleCnt samples;
  StopStream x N;
} while (getchar() != 'q')

DestroyStream x N;
Close Lime board.
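In LimeSuite API terms the cycle is roughly the sketch below (simplified: N, Fs, FIFO size and data format are just placeholders here, and error checking is omitted):

#include "lime/LimeSuite.h"
#include <cstdio>

int main()
{
    lms_device_t* dev = nullptr;
    lms_info_str_t list[8];
    if (LMS_GetDeviceList(list) < 1 || LMS_Open(&dev, list[0], nullptr) != 0)
        return -1;
    LMS_Init(dev);                               // Init Lime board

    const int N = 2;                             // number of RX channels (1 or 2)
    for (int ch = 0; ch < N; ++ch)
        LMS_EnableChannel(dev, LMS_CH_RX, ch, true);
    LMS_SetSampleRate(dev, 40e6, 2);             // Fs per channel, oversampling

    lms_stream_t rx[N] = {};
    for (int ch = 0; ch < N; ++ch)
    {
        rx[ch].isTx = false;
        rx[ch].channel = ch;
        rx[ch].fifoSize = 1 << 20;               // host-side FIFO size, in samples
        rx[ch].throughputVsLatency = 1.0f;       // favour throughput
        rx[ch].dataFmt = lms_stream_t::LMS_FMT_I12;
        LMS_SetupStream(dev, &rx[ch]);           // SetupRXStream x N
    }

    do {
        for (int ch = 0; ch < N; ++ch)
            LMS_StartStream(&rx[ch]);            // StartStream x N
        // ... receive TargetSampleCnt samples per channel (see the receive loop below) ...
        for (int ch = 0; ch < N; ++ch)
            LMS_StopStream(&rx[ch]);             // StopStream x N
    } while (getchar() != 'q');

    for (int ch = 0; ch < N; ++ch)
        LMS_DestroyStream(dev, &rx[ch]);         // DestroyStream x N
    LMS_Close(dev);
    return 0;
}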

I receive the stream blockwise using a buffer of 50000 samples, at Fs = 5, 10, 20, 40 or 80 MSps. After each received block I call LMS_GetStreamStatus and, if it indicates drops, log them to the screen. My setup can sustain 2x40 or 1x80 MSps streams without drops for at least 5 seconds (probably longer, I just didn't try), except in the very first block of each StartStream/StopStream cycle.
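The per-block receive and status check is, in essence, this (a sketch; the function name, the 1-second timeout and the int16_t buffer layout are my assumptions):

#include "lime/LimeSuite.h"
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Receive TargetSampleCnt samples from one stream in blocks of 50000
// and log any packet drops reported after each block.
void ReceiveCapture(lms_stream_t* stream, size_t TargetSampleCnt)
{
    const size_t blockSamples = 50000;
    std::vector<int16_t> buf(2 * blockSamples);          // interleaved I/Q
    size_t received = 0;
    while (received < TargetSampleCnt)
    {
        size_t want = std::min(blockSamples, TargetSampleCnt - received);
        int got = LMS_RecvStream(stream, buf.data(), want, nullptr, 1000);
        if (got <= 0)
            break;                                       // timeout or error
        received += got;

        lms_stream_status_t st;
        LMS_GetStreamStatus(stream, &st);
        if (st.droppedPackets)
            printf("after %u samples: droppedPackets = %u\n",
                   (unsigned)received, st.droppedPackets);
    }
}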

I tried various combinations of Fs, oversampling and data format values, and built a small table which illustrates the case (oversampling seems to have no effect on this):

for FMT_I12:

MSps            Packets lost   Drop start time, µs
2x40 (or 1x80)  ~160           187
2x20 (or 1x40)  15-50          374
2x10            0-8            748
2x5             0-5            1495

for FMT_I16 I checked two variants:

MSps            Packets lost   Drop start time, µs
2x20 (or 1x40)  20-70          280

This corresponds to the I12 case, given that 280 ≈ 3/4 × 374 (the I12/I16 bit-width ratio), so I didn't try further.

So these losses occur at approximately sample 14950 from the beginning of the stream, but their count differs from run to run. The pity is that it obviously isn't caused by an overall system throughput limit, since it happens even at 10 MSps.

I encountered a glitch like this previously, when it occurred consistently at 1.5 ms after stream start, but it turned out to be caused by the MS VS debugger (I use MS VC++ 2013), so a free run without the debugger attached solved that one. In this case, however, I have tried to eliminate every cause I could think of: running the release build from the command prompt, switching off my disk-writing routines, etc. The glitch is very stable.

Of course, knowing that its position is to a high degree "fixed", I can just throw away the first samples, but I don't like such solutions and hope it can be overcome in some appropriate way…

My knowledge is quite outdated, but if I remember correctly there used to be some leftover data from the last stream buffered somewhere in the USB controller or FPGA. At the start of a stream it would return 12 stale packets of I12 data (which would be 16320 samples). It could be less if the PC is slow to initiate USB transfers and they get discarded in hardware.

@ignasj this should be checked in firmware

By the way, am I correct in assuming that after I call LMS_StopStream() there is no need to drain LimeSuite's internal buffers with LMS_RecvStream calls to read out data left over from the stopped stream (I mean, before the next LMS_StartStream call)? I thought LMS_StartStream starts filling the buffers from scratch, is that right?

That's right, the LMS_StartStream call clears the internal buffers before actually receiving data.

I do the following: capture one trace without modulation, then turn on AM on the generator and take a second capture. The signal is totally different from the first capture, so I know that the part before the glitch belongs to the new capture and is not left over from the previous one (see image).

Where can I look up how many samples one packet holds? Is it constant across different sampling rates? What does it depend on?

At the moment I don't know where the protocol description is, but it's basically this:
The whole packet is 4096 B: a 16 B header, and the rest is used for sample data.
The number of samples depends only on the selected format:
I12 fits 1360 samples
I16 fits 1020 samples

When using MIMO, the sample count per packet is the same, but it consists of interleaved data from each channel.
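If it helps, those numbers follow directly from the payload size: 4096 − 16 = 4080 bytes for sample data, 3 bytes per packed I12 complex sample (two 12-bit values) and 4 bytes per I16 one (two 16-bit values):

#include <cstdio>

int main()
{
    const int packetBytes  = 4096;
    const int headerBytes  = 16;
    const int payloadBytes = packetBytes - headerBytes;        // 4080 B of sample data

    const int bytesPerI12Sample = 3;  // 2 x 12 bit I/Q, packed
    const int bytesPerI16Sample = 4;  // 2 x 16 bit I/Q

    printf("I12: %d samples per packet\n", payloadBytes / bytesPerI12Sample); // 1360
    printf("I16: %d samples per packet\n", payloadBytes / bytesPerI16Sample); // 1020
    return 0;
}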

Thank you. As a workaround, I ended up just throwing away the first 15000 samples of each capture after LMS_StartStream(). Not so good for low Fs values though (e.g. losing the first 1.5 ms at 10 MSps); I hope this will eventually be fixed…
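For reference, the discard step is nothing more than this (a sketch; the helper name and timeout are mine, and 15000 is just the empirical figure from above):

#include "lime/LimeSuite.h"
#include <algorithm>
#include <cstdint>
#include <vector>

// After LMS_StartStream(), read and throw away the first ~15000 samples
// per channel before keeping any data.
void DiscardLeadingSamples(lms_stream_t* stream, size_t toDiscard = 15000)
{
    std::vector<int16_t> scratch(2 * 4096);                   // interleaved I/Q scratch
    while (toDiscard > 0)
    {
        size_t want = std::min<size_t>(scratch.size() / 2, toDiscard);
        int got = LMS_RecvStream(stream, scratch.data(), want, nullptr, 1000);
        if (got <= 0)
            break;                                            // timeout or error
        toDiscard -= got;
    }
}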