I wrote code that sends a small block of bits carrying a timestamp from the local clock. I configure the LimeSDR v2 in RF loopback, with the RX port connected to the TX port by a cable. I also wrote a digital loopback that takes the same I/Q samples and loops them from the TX to the RX side of my code without touching the hardware. The digital loopback takes about 14 milliseconds; running the same data through the LimeSDR takes about 400 milliseconds. To favor lower latency over throughput, I changed the ‘hint’ (the throughputVsLatency field) from 1 to 0 when configuring the TX and RX streams, and I also decreased the buffer size to 2K on both the TX and RX sides.
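For reference, here is a minimal sketch of how those two settings map onto the legacy LimeSuite stream setup. The helper function is mine, and it assumes the device has already been opened, initialized, and tuned (LMS_Open()/LMS_Init() and RF configuration omitted):

```cpp
#include <lime/LimeSuite.h>

// Sketch: configure RX/TX streams for low latency on an already-opened
// device. The two knobs are the ones mentioned above: a small FIFO plus
// a throughputVsLatency hint of 0 asks the driver to favor latency.
void setup_low_latency_streams(lms_device_t *device,
                               lms_stream_t *rx, lms_stream_t *tx)
{
    rx->isTx                = false;
    rx->channel             = 0;
    rx->fifoSize            = 2048;   // the 2K buffer mentioned above
    rx->throughputVsLatency = 0.0f;   // 0 = lowest latency, 1 = max throughput
    rx->dataFmt             = lms_stream_t::LMS_FMT_I16;

    *tx = *rx;                        // identical settings on the TX side
    tx->isTx = true;

    LMS_SetupStream(device, rx);
    LMS_SetupStream(device, tx);
}
```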
Questions:
How can one configure the LimeSDR for lower latency? What kind of latency should I expect going through the LimeSDR?
Could you describe in more detail how, and between which points, you’re measuring the time? A digital loopback with 14 milliseconds of latency seems strange. Is that actual latency, or a total elapsed time that you’re measuring?
I assume you’re using the legacy LimeSuite. If you are including the LMS_StartStream() function in your time measurement, then it’s going to be a lot, because that function creates the Rx/Tx threads, which is a relatively slow operation. Also, what sampling rate are you using?
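To illustrate the point, with the legacy API the stream start belongs outside the measured interval, roughly like this (stream variables as in the setup sketch above; the structure of your code is an assumption on my part):

```cpp
#include <lime/LimeSuite.h>

// Streams configured as in the earlier sketch. LMS_StartStream()
// spawns the Rx/Tx worker threads, which is comparatively slow, so it
// should complete before the first clock read, outside the timed region.
void start_streams_before_timing(lms_stream_t *rx, lms_stream_t *tx)
{
    LMS_StartStream(rx);
    LMS_StartStream(tx);
    // ...only after this returns, read the clock and send the test block.
}
```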
I use the Linux internal high-resolution clock. I grab a timestamp from the clock, then send it over the LimeSDR with the TX/RX ports connected by an RF cable. I am using LimeSuite (not NG). I start up the driver, wait until I get lock, and then send the timestamp. When I receive the timestamp, I read the Linux clock and diff the two values.
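Roughly, the measurement looks like this sketch. The send_block()/receive_block() names are placeholders for my actual modulation and LMS_SendStream()/LMS_RecvStream() path, not real LimeSuite functions:

```cpp
#include <cstdio>
#include <ctime>

// Monotonic high-resolution Linux clock, in nanoseconds.
static long long now_ns()
{
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return static_cast<long long>(ts.tv_sec) * 1000000000LL + ts.tv_nsec;
}

// Placeholder stubs: replace with the real code that modulates the
// timestamp, pushes it through LMS_SendStream(), and recovers it from
// LMS_RecvStream() on the other side of the cable.
static void send_block(long long /*timestamp*/) {}
static long long receive_block() { return 0; }

int main()
{
    // Taken only after the driver is up, lock is achieved, and both
    // streams are already running.
    long long t_sent = now_ns();
    send_block(t_sent);
    receive_block();               // blocks until the timestamp comes back
    long long t_received = now_ns();
    std::printf("round trip: %.3f ms\n", (t_received - t_sent) / 1e6);
}
```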