I’m using the SoapySDR API to control the RX and TX.
It seems things need to be set in a specific order. For example, if I configure and start the RX part, then configure and start the TX part, I end up with issues on the RX side, whereas if I first configure both RX and TX, then start the RX, then start the TX, everything seems to be fine.
I’m still trying to transmit at a specific timestamp. In mono channel things are working well, but I need to set things up in the right order: I configure the RX and TX, then start the TX stream, then the RX stream.
Is that normal? Should I always set everything before starting the streams?
I’m having 3 issues currently:
When in mono channel for RX and TX, I sometimes get a WARNING L in the console. When that happens I see only part of what I wanted to send, with the signal starting at the right time. So it’s as if the beginning of the signal to transmit arrived at the Lime on time, but not the rest? That’s a bit strange, since sending the data over USB is much faster than the expected duration of the signal.
In dual channel for RX and TX, I always get lots of those WARNING L messages in the console. In that case I only see a very short signal instead of my 10 ms signal, but again, those signals seem to start at the right timestamp. This is strange, especially since I get the same result even if I schedule the signal further in the future.
Finally, even when sending signals immediately, as opposed to at a specific time, I sometimes notice my signal is diluted, i.e. instead of lasting 10 ms it lasts longer. When looking at it on a waterfall, it seems the signal isn’t being sent 100% of the time, as if the Lime sends some of the signal, stops, then sends some more. I’m working at 55 MS/s for both RX and TX.
You should not set or change the sample rate after starting streaming. It is also better to avoid running calibrations or tuning filters while streaming, as those operations take some time and interrupt the sample stream, which may cause problems.
All your issues seem to be related to data not getting from the PC to the LimeSDR over USB in time.
“WARNING L” is only displayed when timestamps are used; it means that some data arrived at the FPGA too late and was not sent. If you send samples far enough into the future, the beginning of the signal should definitely be sent, as it is already waiting in the FPGA. Once the FPGA buffer depletes, everything depends on how quickly the PC is able to send further samples over USB.
When the FPGA buffer is empty, it sends zeros to RF. So if timestamps are used and there is a delay in the USB stream, a gap is transmitted to RF and the board drops some later packets (WARNING L) because of their late timestamps (their timestamps fell in the period when the board was sending zeros to RF).
If timestamps are not used, the LimeSDR sends samples from the FPGA buffer without waiting. So if there is a delay in the USB stream, a gap is transmitted to RF and the board resumes sending samples as soon as anything arrives via USB (no samples are dropped, so the gap pushes the end of the signal further out).
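The two behaviours can be illustrated with a toy model (this is only an analogy for the FPGA’s scheduling, not its actual logic; packet times and durations are arbitrary units I made up for the example):

```python
# Toy model (NOT the real FPGA firmware): packets arrive from "USB" at a
# given time, each carrying a hardware timestamp at which it should air.
# With timestamps, a packet whose timestamp has already passed is dropped
# (the board was transmitting zeros during that gap) -> like WARNING L.
# Without timestamps, every packet airs as soon as it arrives, so a USB
# stall just pushes the rest of the signal later instead of dropping it.

def simulate(packets, use_timestamps):
    """packets: list of (usb_arrival_time, hw_timestamp), each 1 unit long.
    Returns (sent, dropped): the hardware times packets aired at, and the
    timestamps of packets that were dropped for being late."""
    sent, dropped = [], []
    clock = 0.0  # hardware time
    for arrival, ts in packets:
        clock = max(clock, arrival)      # can't air before the data arrives
        if use_timestamps:
            if ts < clock:               # timestamp already in the past
                dropped.append(ts)       # -> late packet ("WARNING L")
            else:
                sent.append(ts)          # wait until ts, then air 1 unit
                clock = ts + 1
        else:
            sent.append(clock)           # air immediately; the gap stretches
            clock += 1
    return sent, dropped

# A burst of 4 packets meant to air back-to-back at t=10..13, but the
# 3rd and 4th are stalled on USB until t=14:
burst = [(0, 10), (0, 11), (14, 12), (14, 13)]
print(simulate(burst, use_timestamps=True))   # -> ([10, 11], [12, 13])
print(simulate(burst, use_timestamps=False))  # -> ([0, 1, 14, 15], [])
```

With timestamps, the start of the burst airs on time and the stalled tail is dropped; without them, nothing is dropped but the end of the burst is pushed out past t=15, which matches the “diluted” signal described above.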
AFAIK the theoretical sampling-rate limit for SISO mode is 61.44 MHz, which means that in MIMO mode you get at most 30.72 MHz per channel. That’s theory; in practice, even with a very powerful machine, 25 MHz in MIMO mode is often as far as you can go without dropping packets (and that’s just sending/receiving samples, not processing them in any way). So it looks like your sampling rate is too high, and that’s why you get your late-packet indicators.
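A quick back-of-the-envelope calculation shows the USB budget involved (assuming 16-bit I + 16-bit Q per complex sample, i.e. 4 bytes/sample; 12-bit packed samples would need about 25% less):

```python
# USB throughput needed for a given sample rate, one direction.
# Assumption: 4 bytes per complex sample (16-bit I + 16-bit Q).
BYTES_PER_SAMPLE = 4

def usb_mb_per_s(sample_rate_hz, channels):
    """Sustained one-direction USB throughput required, in MB/s."""
    return sample_rate_hz * channels * BYTES_PER_SAMPLE / 1e6

print(usb_mb_per_s(55e6, 1))  # 220.0 MB/s: one channel at 55 MS/s
print(usb_mb_per_s(55e6, 2))  # 440.0 MB/s: two channels at 55 MS/s
# USB 3.0 is 5 Gbit/s raw (~500 MB/s), and sustained bulk transfers
# typically achieve noticeably less in practice -- so 55 MS/s leaves
# little or no headroom, and that is per direction (RX and TX both run).
```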
So if I send something far enough in the future, only the beginning goes from the PC to the Lime over USB and is stored in a buffer on the Lime. Then, once it’s time to send it, the buffer frees up some space, and only then is the rest of the signal sent from the PC to the Lime over USB? That would definitely explain what I see.
In that case, is there a way to increase the size of that buffer so the Lime can hold more data?
That’s indeed what I’m seeing.
It seems that even with only one TX at 55 MS/s and no RX, samples are not always sent to the Lime fast enough when transmitting immediately.
Is there anything I could configure, like:
- using (or not using) SOAPY_SDR_END_BURST?
- setting the latency when setting up the stream? I tried both 1 (maximum throughput) and 0 (lowest latency), but it doesn’t seem to change much; I can’t really say one is better than the other.
- there are also bufferLength and txBufferLength, but I’m not sure where they are used.
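For reference, options like these are passed to SoapySDR’s setupStream as key/value stream arguments. A minimal sketch (the key names "bufferLength" and "latency" are what the LimeSuite Soapy driver accepts, as discussed above; the particular values here are assumptions to experiment with, not recommendations):

```python
# Stream arguments for SoapySDR's setupStream are plain string key/values.
# "latency" and "bufferLength" are keys the LimeSuite Soapy driver reads;
# the values below are illustrative assumptions to tune, not known-good ones.
stream_args = {
    "bufferLength": "65536",  # host-side buffer size (assumed value)
    "latency": "1.0",         # 0.0 = lowest latency ... 1.0 = max throughput
}

# With hardware attached, these would be used roughly like this:
#   import SoapySDR
#   sdr = SoapySDR.Device(dict(driver="lime"))
#   tx = sdr.setupStream(SoapySDR.SOAPY_SDR_TX, SoapySDR.SOAPY_SDR_CF32,
#                        [0], stream_args)
# SOAPY_SDR_END_BURST is different: it is a per-call flag passed to
# writeStream, marking the last samples of a burst, not a stream argument.
print(sorted(stream_args))
```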
You’re right, of course. The type of USB port you choose also matters. I’ve noticed on my Windows computer that if I plug the Lime into a USB 3.0 port (3.1 Gen 1) it is not fast enough, while a 3.1 port (3.1 Gen 2) is fine (even though the FX3 is only USB 3.0). Using a good USB cable is also important.
It seems that if I have two channels activated in RX, even if I’m not actually reading samples over USB, then I can’t send samples fast enough on a single TX: I only see the very beginning of my signal being sent. Maybe with two RX channels, even with just one TX and no samples being read, the Lime goes into MIMO mode and the internal TX buffer is smaller.
If I only have one channel activated in RX, then things work quite well with a single TX.
Ok, so if I get this right, you want to transmit and receive in burst fashion, 2xRX and 1xTX. Could you provide some parameters, such as the sampling rate for the 2xRX, the sampling rate for the 1xTX, the burst length, and the silence period between bursts? I was doing such a thing in SoapySDR with 2xRX and 2xTX some time ago and it worked quite well (assuming a sampling rate of <=25 MHz for both TX and RX).
Actually I’m using 55 MS/s for the two RX channels, so even a single TX becomes difficult. Also, right now I’m constantly receiving, even while transmitting. Maybe I could avoid receiving all the time so that the TX side isn’t impacted.