Modifying dualRXTX.cpp example - sort of fixed now

I am trying to modify the dualRXTX example that comes with LimeSuite. Basically I am using the original code,
but rather than echoing back the received samples I am transmitting my own samples instead. Trouble is I can't
get it to work.

The Lime tunes to the right frequency, and when I call LMS_SendStream it returns the number of samples
I gave it. But nothing is visible on my spectrum analyser. Well, that is not quite true: I can see the centre
carrier leakage, but no modulation.

I was expecting LMS_SendStream to block when the FIFO is full if timeout is set to zero, but all it does is
return 0 samples sent immediately.

It seems like everything is working except the DAC is not being updated. Anyone got any suggestions?

  • Charles

Add code to enable the TX antenna:
if (LMS_SetAntenna(device, LMS_CH_TX, 0, 1) != 0)

I am afraid that didn’t make a difference.
I did notice that setting timeout to 0 really does set the timeout to 0, and does not do what it says in the
header file, which is set the timeout to infinity.

I have checked the values I am sending to the device which are valid. GetStreamStatus is reporting
sensible information and no LMS calls are returning any error codes.

It is as if the first sample in a block goes to the DAC but nothing further. I am sending a complex
signal that should output a tone 1 MHz from the carrier frequency.
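For what it is worth, the samples I am generating look something like this (a self-contained sketch; the sample rate and amplitude here are made-up illustrative values, not my actual settings):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Interleaved 16-bit I/Q samples for a complex tone that should appear
// toneFreq away from the carrier (positive offset = above the carrier).
std::vector<std::int16_t> makeTone(double toneFreq, double sampleRate,
                                   std::size_t nSamples, std::int16_t amplitude)
{
    std::vector<std::int16_t> iq(2 * nSamples);
    const double step = 2.0 * 3.14159265358979323846 * toneFreq / sampleRate;
    for (std::size_t n = 0; n < nSamples; ++n) {
        iq[2 * n]     = static_cast<std::int16_t>(amplitude * std::cos(step * n)); // I
        iq[2 * n + 1] = static_cast<std::int16_t>(amplitude * std::sin(step * n)); // Q
    }
    return iq;
}
```

The buffer then goes straight to LMS_SendStream as 16-bit interleaved I/Q.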

  • Charles

Hi @g4guo,

Is the unmodified example working OK, or did you only run into the issue after modifying the code? If so, could you share the code please?

Hi Zack,

The example appears to work so it is obviously something I am doing that is wrong.
I have the added complexity that I am compiling my code with NVIDIA's nvcc,
as I am working on a program that uses CUDA and GPUs to do the signal processing.

I recompiled the LimeSuite example using nvcc and it performs the same way as the gcc
version does. I will have a further look at what I am doing before I post the code.

I also currently have the Lime calls in a thread, so I need to check that is not the problem.

  • Charles

Seems like the problem is down to flow control of LMS_SendStream.
The example relies on RecvStream and timestamps to flow-control
the sending. If the send queue overflows, it breaks.

The sampleRate field in the status message always returns 0 on both Tx and Rx.

So I am going to have to re-design my code to flow control the transmitter.
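One way I am considering (just a sketch of the decision logic, not how the example does it): poll LMS_GetStreamStatus and only ever push what the TX FIFO has room for. The helper below is the pure logic with no hardware dependency:

```cpp
#include <algorithm>
#include <cstddef>

// Decide how many samples to push into the TX FIFO this iteration so it
// never overflows: at most the free space, capped at one block.
std::size_t samplesToSend(std::size_t fifoSize,
                          std::size_t fifoFilled,
                          std::size_t blockSize)
{
    const std::size_t freeSpace =
        (fifoFilled < fifoSize) ? fifoSize - fifoFilled : 0;
    return std::min(blockSize, freeSpace);
}
```

Each loop iteration you would feed in status.fifoSize and status.fifoFilledCount from lms_stream_status_t, and sleep briefly whenever it returns 0.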

Shame there is no transmit only example. I suspect that is for the reason
I have just encountered.

  • Charles

What is happening is the tx stream fifo is filling up.
The rx stream seems all right. Both tx and rx stream
are configured as per the example code.

I have hacked together a standalone program that
should output a 3 MHz sinewave on a 1GHz carrier.

Here is the dropbox link

  • Charles

Seems I have fixed it.

Only LMS_StartStream the streams you are going to use!
If you start two streams but only use one of them, it doesn't like it.
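In my case that means something like this (the stream variable names are illustrative, per the dualRXTX example setup):

    LMS_StartStream(&tx_stream);      // we actually feed this one
    // LMS_StartStream(&rx_stream);   // never calling RecvStream, so do not start it
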

I am still getting overruns on tx but that is due to me not understanding how
the transmit flow control works.

  • Charles

Great to hear you are making progress, Charles! Keep us posted and let us know if there is anything we can help with.

Hi Andrew,
Yes I am getting somewhere. My application will require the USB3 interface to
operate at its maximum.

I have found that setting timeout to 0 in LMS_SendStream is not a good idea.
The header file seems to indicate that it should set the timeout to infinity. It seems
better to set it to a large value instead; setting it to zero increases signal dropouts.

I have also found that to get maximum throughput it is essential to set the
throughputVsLatency field in the stream configuration to 1.0.
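For reference, the field is spelled throughputVsLatency in lms_stream_t. A sketch of the TX stream setup I mean (device and error handling as in the example, not shown; the fifoSize value is just an illustration):

    lms_stream_t txStream = {};
    txStream.channel             = 0;
    txStream.isTx                = true;
    txStream.fifoSize            = 256 * 1024;  // requested; the driver may clamp this
    txStream.throughputVsLatency = 1.0;         // 1.0 = max throughput, 0.0 = min latency
    txStream.dataFmt             = lms_stream_t::LMS_FMT_I16;
    if (LMS_SetupStream(device, &txStream) != 0)
        /* handle error */ ;
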

I have tried increasing the value of the fifoSize field in the stream configuration, but
whatever value I set it to, GetStreamStatus always reports a fifoSize of 8192.

I think to get the maximum possible throughput I need a much larger fifo (I am not worried about latency).
Basically I would like to run the board flat out in full duplex on a single channel.

  • Charles

Looking at the fifo code, I can see that there might be a bug that ignores the timeout and overwrites the oldest data immediately when the fifo is full and synchronization to timestamps is enabled.
You can try disabling synchronization to timestamps by setting:
tx_metadata.waitForTimestamp = false;

Also, you are right about the incorrect description of timeout=0 in the header file. When the fifo is full it doesn't block at all and returns with 0 samples written. Use a reasonably high timeout value for blocking instead.

I think this is a bug: the bufferLength value is always 65536.


The number of samples it reports is 8192. As I am using 16-bit samples, each complex sample requires 4 bytes:
4 * 8192 = 32768.
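Spelling out the byte arithmetic with the values from the posts above (8192 reported samples, 16-bit I plus 16-bit Q per complex sample, 65536-byte bufferLength):

```cpp
#include <cstdint>

// One complex sample = 16-bit I + 16-bit Q = 4 bytes.
constexpr long kBytesPerSample  = 2 * static_cast<long>(sizeof(std::int16_t)); // 4
constexpr long kReportedSamples = 8192;
constexpr long kFifoBytes       = kReportedSamples * kBytesPerSample;          // 32768
constexpr long kBufferLength    = 65536;  // exactly 2 * kFifoBytes
```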

I wonder if the 65536 is a physical limitation?

Being lazy I hadn’t got around to “feeling the source” but that seems to confirm what I have been seeing.