SoapyLMS pipeline latency

Hello!

So I was trying to use my LimeSDR as a simple SNA (scalar network analyzer) to measure the power responses of the filters we build, and I decided to use SoapySDR simply because it is an easy way to make a portable (across SDRs) application. I have got to grips with most of the API, but there is one question I’m still curious about: the pipeline latency. That is, how far in the future should I schedule the TX burst in order to reliably receive it?

I’m using a buffer size of 1 MTU: I schedule 1 MTU of TX ahead of time and then just start receiving. I found that if I schedule the TX bursts 20 ms in the future, they tend to come out pretty much when expected at lower sampling rates (below 20 MSPS) and centre frequencies (which is what my application needs), but when I schedule them with less lead time, or go wider in Fs, the bursts tend not to get transmitted at all.
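Roughly what I’m doing, as a sketch (the driver args, 10 MSPS rate, tone payload and 20 ms lead time are just placeholders for my actual settings):

```python
import numpy as np
import SoapySDR
from SoapySDR import (SOAPY_SDR_TX, SOAPY_SDR_RX, SOAPY_SDR_CF32,
                      SOAPY_SDR_HAS_TIME, SOAPY_SDR_END_BURST)

# Placeholder settings -- not my exact values.
args = dict(driver="lime")
fs = 10e6                 # sample rate, below the 20 MSPS where it still works
lead_ns = int(20e6)       # 20 ms lead time, expressed in nanoseconds

sdr = SoapySDR.Device(args)
for direction in (SOAPY_SDR_TX, SOAPY_SDR_RX):
    sdr.setSampleRate(direction, 0, fs)

tx = sdr.setupStream(SOAPY_SDR_TX, SOAPY_SDR_CF32, [0])
rx = sdr.setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32, [0])
sdr.activateStream(tx)

# One MTU worth of tone as the burst payload.
mtu = sdr.getStreamMTU(tx)
n = np.arange(mtu)
burst = (0.5 * np.exp(2j * np.pi * 0.05 * n)).astype(np.complex64)

# Schedule the burst 'lead_ns' into the future and mark the end of burst.
tx_time = sdr.getHardwareTime() + lead_ns
sr = sdr.writeStream(tx, [burst], len(burst),
                     SOAPY_SDR_HAS_TIME | SOAPY_SDR_END_BURST, tx_time)
print("writeStream returned", sr.ret)

# Timed, finite receive that brackets the expected burst position.
num_rx = int(2e-3 * fs) + mtu                    # ~2 ms of capture plus the burst
sdr.activateStream(rx, SOAPY_SDR_HAS_TIME | SOAPY_SDR_END_BURST,
                   tx_time - int(1e6), num_rx)   # start 1 ms before the burst
chunks, rx_time_ns = [], None
while True:
    buf = np.empty(mtu, np.complex64)
    sr = sdr.readStream(rx, [buf], mtu, timeoutUs=int(1e6))
    if sr.ret <= 0:
        break
    if rx_time_ns is None:
        rx_time_ns = sr.timeNs               # timestamp of the first RX sample
    chunks.append(buf[:sr.ret].copy())
samples = np.concatenate(chunks) if chunks else np.empty(0, np.complex64)
```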

So I was curious whether there is a reliable way to account for this latency in software before doing the measurements, or should I adopt a different strategy, like continuous RX instead of burst mode, to minimise the acquisition time of my measurements?

Thanks.

Sounds like it could be a question for @joshblum. I also have a vague memory of a utility that can be used to measure latency, which I think Josh created as well.

Hi! Thanks for the response!

If you mean the MeasureDelay.py script in the SoapySDR repo, then it is not exactly what I’m concerned with. That script schedules a pulse TX 100 ms in the future, receives it via loopback and compares the timestamps. I have a similar function that calculates the same delay as that script does (although I use simple edge detection in place of a cross-correlation).
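For reference, my edge-detection version is roughly this (the names are placeholders; `samples`, `rx_time_ns`, `tx_time_ns` and `fs` come from the capture sketched above):

```python
import numpy as np

def burst_delay_ns(samples, rx_time_ns, tx_time_ns, fs, thresh_ratio=0.3):
    """Find the burst's leading edge with a simple amplitude threshold and
    return (time the edge arrived) - (time the burst was scheduled), in ns."""
    env = np.abs(samples)
    thresh = thresh_ratio * env.max()
    idx = int(np.argmax(env > thresh))   # index of the first sample above threshold
    if env[idx] <= thresh:
        raise RuntimeError("no burst found in the capture")
    edge_time_ns = rx_time_ns + int(idx / fs * 1e9)
    return edge_time_ns - tx_time_ns
```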

But what I’m looking to achieve is to understand how little lead time I can give a scheduled TX burst and still have it transmit reliably. That is what I was talking about in the opening post - sorry if I wasn’t clear enough.

For now, I can assume that a 100 ms scheduling delay should be enough and proceed with my application, however every such delay stacks up and ultimately slows down the acquisition, which I’m of course trying to keep as fast as possible.

The 100 ms is just a safe guess that leaves time to read the current timestamp and stage a transmit packet. You could probably lower that time until it stops working, possibly by a few orders of magnitude.
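Something like this could probe it empirically. It is only a rough sketch: `sdr`, `tx`, `rx`, `burst`, `mtu` and `fs` are assumed to come from a setup like the one you posted, and "made it" is judged simply by whether the burst’s peak stands well above the capture’s median envelope:

```python
import numpy as np
from SoapySDR import SOAPY_SDR_HAS_TIME, SOAPY_SDR_END_BURST

def burst_made_it(sdr, tx, rx, burst, mtu, fs, lead_ns, margin=10.0):
    """Schedule one burst 'lead_ns' ahead, capture around it, and report
    whether it actually showed up in the received samples."""
    tx_time = sdr.getHardwareTime() + lead_ns
    sdr.writeStream(tx, [burst], len(burst),
                    SOAPY_SDR_HAS_TIME | SOAPY_SDR_END_BURST, tx_time)

    num_rx = int(2e-3 * fs) + mtu
    sdr.activateStream(rx, SOAPY_SDR_HAS_TIME | SOAPY_SDR_END_BURST,
                       tx_time - int(1e6), num_rx)   # start 1 ms early
    chunks = []
    while True:
        buf = np.empty(mtu, np.complex64)
        sr = sdr.readStream(rx, [buf], mtu, timeoutUs=int(1e6))
        if sr.ret <= 0:
            break
        chunks.append(buf[:sr.ret].copy())
    sdr.deactivateStream(rx)
    if not chunks:
        return False
    env = np.abs(np.concatenate(chunks))
    return env.max() > margin * np.median(env)

# Walk the lead time down until bursts start disappearing.
for lead_ms in (100, 50, 20, 10, 5, 2, 1):
    ok = all(burst_made_it(sdr, tx, rx, burst, mtu, fs, int(lead_ms * 1e6))
             for _ in range(10))
    print("lead %3d ms -> %s" % (lead_ms, "reliable" if ok else "dropped"))
```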

Also, reading the current timestamp may be a little slow, I’m not sure; it may be faster to get a recent timestamp from the receive stream.
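That could look something like this, assuming `rx` is a continuously running RX stream (activated without an end-of-burst) and `fs`, `mtu`, `lead_ns` are as in the earlier sketches:

```python
import numpy as np
from SoapySDR import SOAPY_SDR_HAS_TIME

buf = np.empty(mtu, np.complex64)
sr = sdr.readStream(rx, [buf], mtu, timeoutUs=int(1e6))
if sr.ret > 0 and (sr.flags & SOAPY_SDR_HAS_TIME):
    # The timestamp of the newest sample we just pulled doubles as "now",
    # without a separate getHardwareTime() call.
    now_ns = sr.timeNs + int(sr.ret / fs * 1e9)
    tx_time = now_ns + lead_ns
```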