Burst with timestamps in LimeSuiteNG

I am struggling to understand how to properly use burst mode with the new LimeSuiteNG interface.

My sample_rate is 4e6. I want to send bursts in a 10 ms time frame, with a burst length of 5 ms, and my Tx buffer size is 1024 samples.

For this example the burst has to be broken up into 19 packets of 1024 samples plus one packet of 544 samples.
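
Worked out: 4e6 samples/s × 5 ms = 20,000 samples per burst, which is 19 × 1024 + 544; and 4e6 samples/s × 10 ms = 40,000 ticks per frame.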

So in a loop I first add 40,000 ticks to the timestamp in the StreamTxMeta struct: tx_meta.timestamp.AddTicks(40000). I set the hasTimestamp member to true and set the flags to zero.

After transmitting the first packet of the burst I set hasTimestamp to false and transmit the following 18 packets.

For the last packet of the burst consisting of 544 samples I set the flags to StreamTxMeta::EndOfBurst.

Then the loop continues with setting the next timestamp.

The code snippet:

  const size_t frame_ticks = sample_rate * 10e-3;  // 40,000 ticks per 10 ms frame

  vector<complex<float>> tx_buffer(1024);
  StreamTxMeta tx_meta;

  // Initialize the oscillator:
  complex<double> w = exp(2.0i * acos(-1) * double(tone / sample_rate));
  complex<double> y = conj(w);

  while (!stoken.stop_requested()) {

    // Schedule the start of the next burst one frame later.
    tx_meta.timestamp.AddTicks(frame_ticks);

    cout << format("{:.2f} {:.2f}", tx_meta.timestamp.GetTicks() / sample_rate,
                   tx_stream->GetHardwareTimestamp() / sample_rate)
         << endl;

    tx_meta.hasTimestamp = true;
    tx_meta.flags = 0;

    // 19 full packets of 1024 samples; only the first one carries the timestamp.
    for (size_t i = 0; i < 19; ++i) {
      for (size_t n = 0; n < tx_buffer.size(); ++n)
        tx_buffer[n] = 1.0 * (y *= w);
      complex<float> *tx_wrap[1]{tx_buffer.data()};
      tx_stream->Transmit(tx_wrap, tx_buffer.size(), &tx_meta);
      tx_meta.hasTimestamp = false;
    }

    // Final packet of 544 samples closes the 5 ms burst.
    {
      for (size_t n = 0; n < 544; ++n)
        tx_buffer[n] = 1.0 * (y *= w);

      complex<float> *tx_wrap[1]{tx_buffer.data()};
      tx_meta.flags = StreamTxMeta::EndOfBurst;
      tx_stream->Transmit(tx_wrap, 544, &tx_meta);
    }
  }

I would have expected to see a 5 ms pulsed sinusoid with a 50% duty cycle. However, this is not the case: the sine is seen as an uninterrupted stream, with no pausing at all.

I would also have expected to see some backpressure from the ->Transmit() calls, which does seem to happen, but in a strange way. I observe a build-up of the scheduled timestamps versus the current hardware time, as can be seen from the following snapshot of the console output:

...
44.35 22.64
44.36 22.64
44.37 22.64
44.38 22.65
44.39 22.65
44.40 22.66
44.41 22.67
44.42 22.67
44.43 22.68
44.44 22.68
44.45 22.68
44.46 22.69
44.47 22.70
44.48 22.70
44.49 22.71
44.50 22.71
44.51 22.72
44.52 22.72
...

Obviously I am misunderstanding how the interface is supposed to work. Hopefully someone can point me in the right direction. Any help is appreciated!

Regards, Roland

With a sample rate of 4e6 this code adds only 4000 ticks each loop, which is 1 ms, while your burst is 5 ms long. That means the bursts overlap in time and end up being transmitted one right after another, resulting in a continuous transmission.

You should transmit all your packets with specific timestamps enabled. That way each packet is checked for whether it is “too late” to be transmitted; late packets are reported as Tx underruns, dropped, and not transmitted.

Setting hasTimestamp to false means transmit as soon as possible: no timestamp checks are performed, and the data is simply appended to the FIFO buffer. (It would work like you expect in your example: the first packet, with a timestamp enabled, would create backpressure from the hardware, and the rest of the non-timestamped packets would just wait to be transmitted right after it.) But because your timestamps are incorrect, the timestamped packets that arrive late are dropped, so nothing creates backpressure for the remaining packets that you send without timestamps. The result is continuous transmission.
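
For illustration, here is a minimal sketch of how the body of the while loop could look with every packet timestamped, using only the AddTicks call from the original snippet. The 19 × 1024 + 544 = 20,000-sample split and the 40,000-tick frame come from the question; the sketch assumes tx_meta.timestamp already points at the start of the first burst and replaces the AddTicks(frame_ticks) at the top of the loop, since the frame advance is folded into the last AddTicks below.

    tx_meta.hasTimestamp = true;  // every packet carries its own timestamp
    tx_meta.flags = 0;
    for (size_t i = 0; i < 19; ++i) {
      // ... fill tx_buffer with 1024 samples ...
      tx_stream->Transmit(tx_wrap, tx_buffer.size(), &tx_meta);
      tx_meta.timestamp.AddTicks(tx_buffer.size());  // next packet starts 1024 ticks later
    }
    // ... fill tx_buffer with the remaining 544 samples ...
    tx_meta.flags = StreamTxMeta::EndOfBurst;
    tx_stream->Transmit(tx_wrap, 544, &tx_meta);
    tx_meta.timestamp.AddTicks(frame_ticks - 19 * 1024);  // jump to the next 10 ms frame start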

Software internally batches Tx packets and sends them to hardware once a specific sample count is filled, or a StreamTxMeta::EndOfBurst flag is given to flush the batch at its current size.

Transmit does create backpressure; it's just that the software internally has ~100 ms of buffering, so backpressure would be noticed only after that much data has been filled.
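
At the 4e6 sample rate in this example, that is roughly 0.1 s × 4e6 = 400,000 samples, or about 390 packets of 1024 samples, before the backpressure becomes visible.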

OK, so what am I missing then? What is a tick?

I am asking because 4e6 * 10e-3 = 40,000. So where does the additional divisor of ten (10) come from?

My bad, I looked at the exponents and missed the 0.

The issue you were experiencing was caused by the use of mixed timestamp enable flags across packets. It's fixed now: Fix batching of Tx packets with mixed timestamp enablement · myriadrf/LimeSuiteNG@23d2b5d · GitHub
But overall, if timestamps are being used, it's best to specify a timestamp for all Transmit() calls to avoid the situation I've mentioned; that way, if there are any performance fluctuations or anything else that affects the timing of the data transfers, those problems can be detected.

Thank you, with your latest changes it seems to work. :slightly_smiling_face: Some questions remain:

  1. What is the EndOfBurst flag supposed to do?
  2. I understand that you suggest putting timestamps on every packet; however, since the Timespec cannot be set directly with a tick count, tracking the ticks with AddTicks alone turns out to be cumbersome. Possibly I am not exploiting the interface to its full potential yet. Can you please give some rationale for how the timestamp (via the Timespec interface) is supposed to be used?

Software transfers packets to hardware in batches; the EndOfBurst flag flushes the batch early.

It can be set with the explicit constructor:

tx_meta.timestamp = Timespec(int64_t(tickCount));
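
Building on that, here is a minimal sketch of the original loop with every packet timestamped through a plain tick counter. Variable names such as next_burst and t are only illustrative; the StreamTxMeta, Timespec and Transmit usage is assumed to match what is shown earlier in this thread.

    complex<float> *tx_wrap[1]{tx_buffer.data()};
    int64_t next_burst = frame_ticks;  // first burst scheduled one frame in
    while (!stoken.stop_requested()) {
      int64_t t = next_burst;
      tx_meta.hasTimestamp = true;  // every packet carries its own timestamp
      for (size_t i = 0; i < 19; ++i) {
        // ... fill tx_buffer with 1024 samples ...
        tx_meta.timestamp = Timespec(t);
        tx_meta.flags = 0;
        tx_stream->Transmit(tx_wrap, tx_buffer.size(), &tx_meta);
        t += tx_buffer.size();  // next packet starts 1024 ticks later
      }
      // ... fill tx_buffer with the remaining 544 samples ...
      tx_meta.timestamp = Timespec(t);
      tx_meta.flags = StreamTxMeta::EndOfBurst;  // flush the partially filled batch
      tx_stream->Transmit(tx_wrap, 544, &tx_meta);
      next_burst += frame_ticks;  // next burst starts one 10 ms frame later
    }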