RX Timing


I am building an application using the LMS API in C++ where timing is crucial, on the order of 100 microseconds. I noticed that the LimeSDR starts sampling when the StartStream function is called. However, the RX start time still varies slightly, on the order of 1 or 2 milliseconds. How can I reduce this and make the RX timing accurate? Any thoughts on this? I am experimenting with this on both Windows 10 and macOS Big Sur, btw.

Thank you in advance,

This is a very important issue that is related to a lot of applications. Is it really not possible on LimeSDR?

The LimeSDR is a full duplex device - Why not keep the RX running all of the time and select the data when you need it - that would eliminate the starting time.


@RightHalfPlane I think that is not a viable option. I send an external trigger from a GPIO output to a signal generator, then after exactly 2 ms I have to receive the signal (accurate to the order of 10 microseconds). If I keep RX running, how would I know when the GPIO has been enabled?

If you mark the RX stream when you set the GPIO and the stream rate is 1 MSample per second, then 2 ms is 2000 samples after the mark - that is accurate to 1 microsecond, if the clock on the LimeSDR is up to it. You will likely need a calibration delay factor, but hopefully it is constant or very small.

@RightHalfPlane But how would I mark the sample when the GPIO is enabled? Is that possible with the LMS API?

You have one thread that is buffering the RX stream to memory and another thread that is processing the buffered data in delayed real time. You use a “volatile” variable to inform the delayed-real-time thread when the GPIO was enabled. If there were no delay in the system, you could just count 2000 samples and start processing, but the signal generator has a start-up time and there are other delays in the system, so you will need to add in a calibration delay factor to account for the built-in delays. The time between the GPIO being enabled and the spike from the signal generator on the RX channel gives you the starting point for finding the calibration delay factor.

Your Windows or Mac user-mode (application) code runs in a thread or threads managed by the OS, along with many other threads. These are not RTOSes. For example, on Windows your threads will routinely be suspended for milliseconds. This is probably the jitter you’re seeing. A common rule of thumb for Windows application development is to design for at least 10-millisecond suspensions, because the OS will do that (or worse) once in a while, completely beyond the control of you or the application.


You should check Synchronize two LimeSDR . It can be used to synchronize several LimeSDR together or a Lime with any other external device.

NOTE that I would advise you to leave the RX stream on instead of turning it on and off. I usually lost samples at the start (i.e. there’s a discontinuity in the timestamps; maybe it’s linked to your Corrupted Samples between Specific Indices of Samples). Also it may take a few ms to start streaming, which you may not be able to afford.

@KarlL how do you continue streaming non-stop? I imagine an infinite loop with a RecvStream call inside. But then what if the receive buffer is only partially full before the next samples come in? I think I might get duplicate or incomplete samples.

@ab1 What would you suggest? Is a real-time OS the only solution? Maybe I can get a Raspberry Pi and install a real-time Linux on it.

You could indeed have a loop, where you call RecvStream, then check the timestamp. If it’s negative, then it means the synchronization signal was received (see again Synchronize two LimeSDR).

So at first you just drop all received samples with positive timestamps; then, once you receive a negative timestamp, you know it’s the start of the signal you’re interested in. Then, depending on what you need to do with that signal, you either process the samples live, or you receive all the X following samples into a big enough buffer and then process them.

Note that the timestamp is for a block of 1360 samples (in mono channel) or 680 samples (in dual channel). This has to do with how the FPGA is programmed. You need to see if that precision is good enough for what you need.

If you don’t care about the real timestamps, then it’s fine. If you do need the real timestamps, then you’ll need to deduce the start of your signal depending on the previous or following positive timestamps.

You won’t get duplicate or incomplete samples. If you ask RecvStream to return eg 1000 samples, it won’t return until it has 1000 new samples.

@KarlL Wow, this explains a lot, thanks man! How did you learn all this? The official LMS API documentation does not mention ANY of it. Anyway, the bad news is that I do not have a second LimeSDR, so the only two options I have are: either transmit my signal using the TX channel and receive the response via the same LimeSDR after the whole signal is sent, or let a signal generator send my signal with an external trigger via the SDR’s GPIO and receive the response via the RX channel after the whole signal is sent. Also, leaving the RX on does not solve my issue, since I would have to do a lot of post-processing to extract the part I am interested in, but I will try that as an emergency backup plan.

The forum contains a lot of information, but it can be quite difficult to find exactly what you’re looking for…

I don’t know what you’re trying to achieve, but leaving RX on will not create that much post processing if you only process specific blocks of samples and just discard all others.

You were talking about using a Raspberry Pi: you could use it to control the GPIO of the Lime and of your signal generator. A simple Arduino could work too. You wouldn’t need an RTOS if all you need is to synchronize the signal generator and the Lime.