Hello. I am looking for some information regarding the behaviour/timing of the LimeSuiteNG SDRDevice StreamTx and StreamRx methods. I would like to implement an SFCW radar using an XTRX board, and as such need to transmit and receive samples simultaneously. I have followed the provided C++ examples, and have confirmed with benchtop measurement equipment that my C++ code for transmitting samples and for receiving samples each works as expected individually.
As a next test, I have connected the TX-A port of my XTRX to RX-A via an attenuator, and am transmitting a pulsed single-frequency sine wave (20000 samples at a 10 MHz sampling rate) via StreamTx. If I call StreamRx with a large enough rx buffer (say 3x the number of tx samples), immediately followed by a call to StreamTx, should I expect to be able to receive the output waveform if both the tx and rx LOs are tuned to the same frequency?
If I call the two streaming methods in a loop (eventually I will be calling them in a loop with increasing LO frequency) with txMeta.timestamp = 0 and txMeta.waitForTimestamp = false, writing the contents of the rx buffer to an ostringstream after each pass and finally to a .txt file, I receive the expected sine wave in only one iteration of the loop. Which iteration it is seems to depend on the number of samples and the sample rate, but is consistent for a given configuration. Roughly, each pass looks like the sketch below.
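A minimal sketch of one pass, assuming the device and stream setup from the basic examples has already been done (RunSweep, nFreqSteps and the buffer names are just placeholders for my own variables; the call pattern follows the examples):

```cpp
#include "limesuiteng/limesuiteng.hpp"
#include <vector>

// Sketch only: device configuration, StreamSetup() and StreamStart() have been
// done beforehand, following the basic LimeSuiteNG examples.
void RunSweep(lime::SDRDevice* device, uint8_t chipIndex, int nFreqSteps)
{
    constexpr uint32_t txSampleCnt = 20000;           // length of the sine pulse
    constexpr uint32_t rxSampleCnt = 3 * txSampleCnt; // oversized rx buffer

    std::vector<lime::complex32f_t> txBuffer(txSampleCnt); // pre-filled with the pulse
    std::vector<lime::complex32f_t> rxBuffer(rxSampleCnt);

    for (int step = 0; step < nFreqSteps; ++step)
    {
        lime::complex32f_t* rxPtr[1] = { rxBuffer.data() };
        const lime::complex32f_t* txPtr[1] = { txBuffer.data() };

        lime::StreamMeta rxMeta{};
        lime::StreamMeta txMeta{};
        txMeta.timestamp = 0;
        txMeta.waitForTimestamp = false;

        // Receive first, then transmit, sequentially on the same thread:
        device->StreamRx(chipIndex, rxPtr, rxSampleCnt, &rxMeta);
        device->StreamTx(chipIndex, txPtr, txSampleCnt, &txMeta);

        // ... dump rxBuffer to an ostringstream, retune the LOs, repeat ...
    }
}
```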
I have implemented a similar SFCW radar on an Ettus SDR using the UHD Python API, and had to use separate tx and rx threads to accomplish this. Is that the case here as well? I have not come across any LimeSuiteNG examples that use multithreading, and I have much more experience with Python than with C++.
Additionally, in the dualRXTX.cpp example, which loops samples back from RX to TX, it sets txMeta.timestamp = rxMeta.timestamp + samplesInBuffer * 64;. Why the * 64? I am wondering whether I am missing something important about timing here that is needed to synchronize the tx and rx streams.
Sorry for the long post, any advice on this would be much appreciated!
If you first call StreamRx(), and only then StreamTx(), then logically you can't receive your signal, as nothing has been transmitted yet, assuming that is being done sequentially on the same thread. You would receive your signal only with the next StreamRx() call.
Not using timestamps for Tx will transmit the samples as soon as possible, but because a PC is not a hard real-time system, there are no guarantees that there will be no gaps in the transmitted sequence. With timestamps, more data gets buffered and the sequence is guaranteed to be continuous.
I'm not familiar with the algorithms of SFCW radar, but I assume yes: if you needed multithreading to implement it on that SDR, then you'll need it with LimeSDR as well.
StreamRx() and StreamTx() are thread safe, so you can call them from separate threads.
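For instance, roughly like this (sketch only; StreamLoop and the buffer arguments are placeholders, and the device is assumed to be configured and streaming already, with the per-channel buffer layout from the examples):

```cpp
#include "limesuiteng/limesuiteng.hpp"
#include <atomic>
#include <chrono>
#include <thread>

// Sketch: one dedicated thread keeps calling StreamRx(), another StreamTx().
void StreamLoop(lime::SDRDevice* device, uint8_t chipIndex,
                lime::complex32f_t* const* rxPtr, uint32_t rxCnt,
                const lime::complex32f_t* const* txPtr, uint32_t txCnt)
{
    std::atomic<bool> running{ true };

    std::thread rxThread([&] {
        lime::StreamMeta rxMeta{};
        while (running)
        {
            device->StreamRx(chipIndex, rxPtr, rxCnt, &rxMeta);
            // ... pass the received buffer (and rxMeta.timestamp) to the processing code ...
        }
    });

    std::thread txThread([&] {
        lime::StreamMeta txMeta{};
        txMeta.timestamp = 0;
        txMeta.waitForTimestamp = false; // or schedule with a future timestamp
        while (running)
            device->StreamTx(chipIndex, txPtr, txCnt, &txMeta);
    });

    std::this_thread::sleep_for(std::chrono::seconds(10)); // run for as long as needed
    running = false;
    rxThread.join();
    txThread.join();
}
```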
64 is just an arbitrary number; the Tx timestamp just has to be in the future. When data is submitted to StreamTx() it still has to be processed, so the timestamp should be set far enough into the future that there is enough time to process the data and transfer it to the device. Otherwise, if the data packet arrives at the device but its timestamp is already too late for its scheduled transmission, the samples are simply dropped, nothing is transmitted, and the stream will report sample loss.
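In code it is essentially the dualRXTX.cpp pattern: take the hardware timestamp from the rx metadata and schedule the tx data some margin ahead of it (sketch only; pick a margin that suits your buffer size and sample rate):

```cpp
// Sketch of timestamp-scheduled Tx, following the dualRXTX.cpp pattern.
lime::StreamMeta rxMeta{};
device->StreamRx(chipIndex, rxPtr, samplesInBuffer, &rxMeta); // rxMeta.timestamp: hardware time of this block

lime::StreamMeta txMeta{};
// Schedule far enough into the future that the samples can be processed and
// transferred to the device before their scheduled transmission time.
txMeta.timestamp = rxMeta.timestamp + samplesInBuffer * 64; // 64 is an arbitrary margin
txMeta.waitForTimestamp = true; // transmit only once the hardware clock reaches that timestamp
device->StreamTx(chipIndex, txPtr, samplesInBuffer, &txMeta);
```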
Thanks for the insight. So with StreamTx() one can essentially schedule the start of transmission, whereas StreamRx() starts receiving samples immediately.
I will try out multithreading and see how that goes.
Following up here. With a thread pool and a dedicated thread each for tx and rx, I am able to receive the transmitted pulses reliably in an external loopback configuration, provided an appropriate tx timestamp is set.