Timestamp support via SoapySDR

Dear all,

I just wanted to ask if anybody has successfully transmitted or received in a burst fashion when providing timestamps via SoapySDR? If so, can you share the code?

I am asking because I am observing really strange behavior under such conditions: in RX-only mode the hardware timer increments correctly and the first burst is received correctly, but all subsequent readStream calls block until a timeout occurs. In TX-only mode the hardware timer does not increment at all and stays at 0 the whole time, so the first writeStream call returns a time error and all following calls time out instantly.

I am sure that my timestamps are set correctly and are not passed too soon or too late. I will share my code, but unfortunately I probably will not be able to do that this week (I do not have access to the computer right now).
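In the meantime, the pattern I am trying to use looks roughly like this (a simplified sketch from memory, not my actual code; the device args, rates, burst size and the 100 ms offset are all just placeholders):

    #include <SoapySDR/Device.hpp>
    #include <SoapySDR/Constants.h>
    #include <SoapySDR/Formats.hpp>
    #include <SoapySDR/Errors.hpp>
    #include <complex>
    #include <vector>
    #include <cstdio>

    int main(void)
    {
        // open the device and configure a single RX channel
        SoapySDR::Device *dev = SoapySDR::Device::make("driver=lime");
        dev->setSampleRate(SOAPY_SDR_RX, 0, 10e6);
        dev->setFrequency(SOAPY_SDR_RX, 0, 1e9);

        SoapySDR::Stream *rx = dev->setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32, {0});

        const size_t burstSize = 8192;
        std::vector<std::complex<float>> buff(burstSize);
        void *buffs[] = {buff.data()};

        // schedule a finite RX burst 100 ms after the current hardware time
        const long long startNs = dev->getHardwareTime() + 100000000;
        dev->activateStream(rx, SOAPY_SDR_HAS_TIME | SOAPY_SDR_END_BURST, startNs, burstSize);

        int flags = 0;
        long long timeNs = 0;
        const int ret = dev->readStream(rx, buffs, burstSize, flags, timeNs, 400000 /*us timeout*/);
        std::printf("readStream returned %d (%s), timeNs=%lld\n",
                    ret, (ret < 0) ? SoapySDR::errToStr(ret) : "OK", timeNs);

        dev->deactivateStream(rx);
        dev->closeStream(rx);
        SoapySDR::Device::unmake(dev);
        return 0;
    }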

@ccsh,

You may want to check out this post here ------> Problem using Lime w/Soapy 'C' API for full-duplex TX/RX

…it has A LOT of information on the subject you're requesting info for and may explain some of the issues you're seeing with your LimeSDR and timestamping. Hope this link will help you -

73 de Marty, KN0CK

Thanks for the quick answer. I have already seen that topic, but it describes a different issue (late packets), while in my case unexpected timeouts are the root cause of the problem (along with the hardware timer not incrementing at all in TX-only operation).

Like I wrote, in my case I am sure that the timestamps are correct and are sent to the device right on time.

If what you're trying to do is similar enough to my application I'll be happy to share my code.

I'm trying to implement a pulse compression radar, which puts out a coded TX burst of a particular length at a specific time. The receiver free-runs, and range is based on the correlation delay of the TX waveform in the RX buffer(s).

While I don't have it working quite yet (I wrote the post martywittrock referenced above) I do have TX waiting for a particular time before putting out a buffer.

Let me know if you want to look at my presently-quite-ugly code.

It would be very interesting/kind of you if you could share your presently-quite-ugly code with us. Thanks in advance.

And has anyone succeeded in implementing or adapting the KIT radar toolbox echotimer for the LimeSDR?

Looking at your post again, I note that you stated the TX clock does not increment.

My understanding (confirmed by experiment) is that there is not a separate TX and RX clock. When using writeStream, the txtime is a parameter passed to the transmitter. Whenever I query txtime, I get back the last value I supplied.

When I specify a value well in the future (as long as 10 seconds in some tests) I see the TX carrier turn on as soon as I start passing samples. But the actual TX modulation does not show up until the scheduled time.

On the other hand, the rxtime increments constantly, and the times correspond to those used on the TX side - it appears to match exactly the time the TX stream uses to schedule samples.
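In code, the way I pass the TX time looks roughly like this (a trimmed-down sketch, not my actual application; the function name and error handling are just illustrative):

    #include <SoapySDR/Device.hpp>
    #include <SoapySDR/Constants.h>
    #include <complex>
    #include <cstddef>

    // queue one burst so that its first sample goes out at txTimeNs;
    // assumes dev is an open device and tx is an already-activated CF32 TX stream
    void sendTimedBurst(SoapySDR::Device *dev, SoapySDR::Stream *tx,
                        const std::complex<float> *samples, size_t numSamples,
                        long long txTimeNs)
    {
        const void *buffs[] = {samples};
        int flags = SOAPY_SDR_HAS_TIME | SOAPY_SDR_END_BURST;
        // writeStream() just queues the buffer; the hardware holds it until txTimeNs
        const int ret = dev->writeStream(tx, buffs, numSamples, flags, txTimeNs);
        if (ret != int(numSamples))
        {
            // partial write or an error code (SOAPY_SDR_TIMEOUT, SOAPY_SDR_TIME_ERROR, ...)
        }
    }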

Does that help? Or did I misunderstand the problem?

Everything you wrote is right. I have actually created a new issue on the SoapySDR GitHub page where I describe the problems I am facing: https://github.com/pothosware/SoapySDR/issues/138

I have also prepared test code (see link above), so you can see for yourself what is wrong.

@ccsh
@Zero
It seems we are all trying to build the same application: transmit a pulse and receive at the same time, getting a response synchronised to the stimulus. Timestamps are essential in order to get a synchronised response. My early version of the transmit/receive loop is here: https://github.com/gitmodimo/LimeSDRTest/blob/master/limesdr.cpp
void LimeSDRWorker::run() is the main loop.
Maybe we could team up if our objectives are similar.

@modimo, so far I am interested in a reliable way of using timestamps for TX and RX at all. In my case the TX and RX bursts are actually placed one after another on the timeline (they do not share the same timestamps), but that does not really matter. The problem is that the LimeSDR's behavior is incorrect with both the UHD and SoapySDR APIs.

Me too, because of my TDOA testing…

I don't mind working together. Eventually my stuff will be proprietary, but right now I'm working at the send-a-buffer/receive-a-buffer stage. My proprietary stuff is going to be what goes out in the TX buffer, and what happens after I get a receive buffer back.

FWIW I believe I have a (very crude! very ugly!) working example - the receive buffer begins at time 0, and the transmit buffer goes out when I ask it to. I'm just at the point where I'll be confirming this using a loopback cable.

I plan to use different lengths of cable to verify that the receive timestamp is valid relative to the transmit; it should be straightforward to calculate when the TX burst should appear in the RX buffer.
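The arithmetic for the expected position is roughly this (a sketch; the 0.66 velocity factor is just an assumption about typical coax, and the helper name is mine):

    #include <cmath>

    // expected index (in samples) of the TX burst inside an RX buffer that
    // starts at rxStartNs on the device timeline
    long long expectedRxIndex(long long txTimeNs, long long rxStartNs,
                              double sampleRate, double cableMeters)
    {
        const double velocityFactor = 0.66;  // assumed for typical coax
        const double cableDelayNs = cableMeters / (velocityFactor * 0.299792458); // ~0.3 m per ns in free space
        const double offsetNs = double(txTimeNs - rxStartNs) + cableDelayNs;
        return std::llround(offsetNs * sampleRate / 1e9);
    }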

OK, after working with the board for a few days I'm convinced the TX timestamp is just a suggestion.

I have my transmitter and receiver looped with a 1 meter cable through a 20 dB attenuator. I've made the following observations:

  1. There is a significant latency between passing a buffer to the TX stream and when the data can come out the spigot: latency is between 10 and 600 µs depending on the sample rate (higher rates = higher latency). If you set your TX time to be earlier than this latency, it will chop off the beginning of the buffer.

I believe there are several issues here. First is the latency of the samples through the various API layers and the USB buffering down the cable. Then there are numerous FIFOs in the FPGA.

It appears from the FPGA code (VHDL) that they're using a 120 MHz clock to run the timestamp comparison - there's a counter that's reset at time zero, and when the counter matches the timestamp the samples are sent to the SDR chip. This tells me several things (all bad for my radar application): granularity of the TX timestamp is not one nanosecond (which the Soapy API implies), and because the time clock is not necessarily at the same rate as the sample clock it means there will be significant start time jitter because of the clock domain transition.

  2. Even when you set your TX timestamp to be later than the pipeline latency, the loopback timing is still grossly non-deterministic. For low sample rates the error is low (on the order of a single sample), but at 61.44 MS/sec the TX to RX delay can vary by many thousands of samples.

I'm going to connect my unit to a high speed scope later today to determine if the variation is on the TX or the RX side (or, perhaps, both).
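One way to avoid the chop-off from observation 1 would be to schedule each burst with a generous safety margin relative to the device time - something like this (an untested sketch; the 1 ms margin is just a guess that covers the worst latency I measured):

    #include <SoapySDR/Device.hpp>
    #include <SoapySDR/Constants.h>
    #include <algorithm>

    // queue one burst, clamping its start time so the pipeline latency
    // cannot chop off the beginning of the buffer
    int writeBurstWithMargin(SoapySDR::Device *dev, SoapySDR::Stream *tx,
                             const void *const *buffs, size_t numSamples,
                             long long desiredTimeNs)
    {
        const long long marginNs = 1000000; // 1 ms, comfortably above the ~600 us worst case
        const long long earliestNs = dev->getHardwareTime() + marginNs;
        const long long txTimeNs = std::max(desiredTimeNs, earliestNs);
        int flags = SOAPY_SDR_HAS_TIME | SOAPY_SDR_END_BURST;
        return dev->writeStream(tx, buffs, numSamples, flags, txTimeNs);
    }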

Is anyone else making progress on this issue?

Excuse me - the timestamp clock is 160 MHz, not 120. It's the clock used to move data out of the FPGA to the SDR chip.

Here are my thoughts on timestamp-based applications with the LimeSDR, in comparison to the USRP devices I have been using so far (thanks to @joshblum for pointing out some of them):

  • you can't really get the real current device timestamp with the LimeSDR - what you get by calling getHardwareTime() is actually a cached value of the last timestamp seen by the RX thread. This can be a problem if RX calls happen rarely (say you request an RX burst every 10 seconds - then the value returned by getHardwareTime() will also only be updated every 10 seconds).
  • the hardware timer will start only when the internal RX thread is enabled - this means you cannot create a truly TX-only, timestamp-based application for the LimeSDR (it has to make RX calls as well).
  • unlike with USRP devices, the send()/write() calls on the LimeSDR do not block until the hardware time matches the timestamp requested by the user. Also, I have not seen a single error reported by these calls, even when they definitely should appear (e.g. when providing a very late timestamp).
  • the LimeSDR does not support async burst acknowledgement, so you cannot be sure whether your TX bursts were successfully transmitted over the air in their entirety.

To sum up, for burst-like applications RX works like a charm, while TX is not really reliable, in the sense that you cannot be sure your bursts are actually transmitted in their entirety unless you check them over the air with another device. For example, the necessary RX-TX time offset cannot be determined from reported error codes; you rather have to observe your TX bursts over the air to check that they are not clipped. It also makes it difficult to spot missing individual samples or problems that occur only occasionally (say, for some reason, 5 samples are missing at the beginning of every 100th TX burst - you probably won't be able to see that with e.g. an oscilloscope).

I haven't been checking the accuracy of TX/RX burst placement on the timeline, so I cannot confirm the issues reported by @Zero.

At 15.36 MS/sec timing seems spot-on; I can hit a target in the receive buffer with my transmit burst with an error of less than one sample (on average). But at 61.44 MS/sec I can't hit my target any closer than +/-20K samples.

Looking over the Lime Soapy driver, it seems to have no sample rate dependence. Since I certainly see a dependence, the implication is that the problem is not the code.

I have a couple more Lime SDRs coming in a couple of weeks - I'll repeat my experiments with them, and see if perhaps I have a hardware issue.

To clarify: the WriteStream and ReadStream code in SoapyLMS7 does not have a sample rate dependence.

I'm curious whether that's the same reason the RFID reader I was trying to make didn't work: RFID library issues - Popping from TX?

getHardwareTime() is kept up to date by the internal rx stream thread. You do have to activate the rx stream, but you don't need to call readStream(). The timestamp will be kept up to date regardless.
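So for a transmit-only application, something like this should be enough to keep the timestamp advancing (a minimal untested sketch; the function name is just for illustration):

    #include <SoapySDR/Device.hpp>
    #include <SoapySDR/Constants.h>
    #include <SoapySDR/Formats.hpp>

    // set up and activate an rx stream purely so the internal rx thread runs
    // and keeps getHardwareTime() updated; readStream() is never called
    void runTxOnlyWithTimestamps(SoapySDR::Device *dev)
    {
        SoapySDR::Stream *rx = dev->setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32, {0});
        dev->activateStream(rx);

        // ... do the transmit-only work here; getHardwareTime() now advances ...

        dev->deactivateStream(rx);
        dev->closeStream(rx);
    }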

I agree that keeping the rx stream activated for a transmit only application is annoying and unexpected. I think register access to the last timestamp counter would be preferable in this case. @Zack thoughts?

I disagree that the UHD send blocks on the timestamp. The send() call only blocks when the internal buffer is filled. It's actually identical to the LimeSDR WriteStream() behaviour.

Now you might have seen this call block for certain buffer sizes and certain devices (when the burst size > internal buffering). In this case the FPGA fills up first, then any PC buffering, then the host blocks until the FPGA starts draining. So it's not actually blocking the host API call on the timestamp (which would actually be pretty disastrous in terms of rampant underflows), but it's a sort of dangerous API use. What if the device is swapped out for something with larger buffering - a bigger FPGA, more libusb buffers, N200 vs B200, etc… Anyway, that's just a guess. The LimeSDR does have a lot more internal buffering, so that might explain what you were seeing (depending upon the burst size).

True, there isn't burst acknowledgement. But you can know: if getHardwareTime() > burstTime + burstDuration, then you can read the stream status without a timeout; if there is no error, you know the burst occurred.

It seems round-about, but I was considering emulating the burst ack indication this way internally. Basically if the time expired and there was no error, then it must have occurred… I believe this is also the recommendation for the bladeRF API, just for comparison, FWIW.
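In code, that emulation would look something like this (just a sketch of the idea, not something that exists in the driver today; the helper name is made up):

    #include <SoapySDR/Device.hpp>
    #include <SoapySDR/Errors.h>

    // returns true once a burst scheduled at burstTimeNs with the given duration
    // appears to have gone out: its end time has passed and the stream status
    // queue reports no error for it
    bool burstLikelyCompleted(SoapySDR::Device *dev, SoapySDR::Stream *tx,
                              long long burstTimeNs, long long burstDurationNs)
    {
        if (dev->getHardwareTime() <= burstTimeNs + burstDurationNs)
            return false; // too early to tell, the burst has not finished yet

        size_t chanMask = 0;
        int flags = 0;
        long long timeNs = 0;
        const int ret = dev->readStreamStatus(tx, chanMask, flags, timeNs, 0 /*no timeout*/);
        // SOAPY_SDR_TIMEOUT means no event is pending, i.e. nothing bad was reported
        return (ret == SOAPY_SDR_TIMEOUT || ret == 0);
    }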

Not sure what you mean. Is this about measuring worst-case turnaround time? I have seen some applications that try to send late packets intentionally because they want to test which time delays are too tight and will regularly cause late packets. Just curious.

The nanoseconds are just sort of a rate-independent time representation. Nearly every device that supports timestamps basically has actual precision down to the sample rate. Even in the case of the LimeSDR, the nanoseconds are converted to ticks in the baseband clock rate domain:

Example writeStream():
metadata.timestamp = SoapySDR::timeNsToTicks(timeNs, _conn->GetHardwareTimestampRate());

Example readStream():
timeNs = SoapySDR::ticksToTimeNs(metadata.timestamp, _conn->GetHardwareTimestampRate());
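So the usable granularity is one tick of GetHardwareTimestampRate(), not one nanosecond. A rough illustration of the round trip (the 30.72 MHz tick rate below is just an assumed value; the exact rounding inside the helpers may differ slightly):

    #include <SoapySDR/Time.hpp>
    #include <cstdio>

    int main(void)
    {
        const double tickRate = 30.72e6;  // assumed hardware timestamp rate in Hz
        const long long timeNs = 12345;   // requested time
        const long long ticks = SoapySDR::timeNsToTicks(timeNs, tickRate);
        const long long backNs = SoapySDR::ticksToTimeNs(ticks, tickRate);
        // one tick is ~32.6 ns at 30.72 MHz, so backNs comes back quantized to that grid
        std::printf("%lld ns -> %lld ticks -> %lld ns\n", timeNs, ticks, backNs);
        return 0;
    }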

I have this example floating around that I often bring up. It uses a timed transmit burst and timed reception to measure a pulse delay. I will leave it with you in case it's any help. I admit that your timing issue at 61.44 MS/sec could be another bug entirely, since you seem to have 15.36 MS/sec working.

python MeasureDelay.py --rate=10e6 --freq=1e9 --rxGain=20 --txGain=20 --rxAnt=LNAL --txAnt=BAND1

In the above example I just put some rubber duck antennas on channel A LNAL and BAND1 ports.


Josh -

I had seen, and tried, the Python loopback delay measurement, but have never really gotten it to work. Tried again this morning to confirm; here's the output with TX1 band 1 looped to RX1 lowband through a 20 dB attenuator:

Traceback (most recent call last):
  File "MeasureDelay.py", line 187, in <module>
    if __name__ == '__main__': main()
  File "MeasureDelay.py", line 184, in main
    dumpDir=options.dumpDir,
  File "MeasureDelay.py", line 139, in measure_delay
    raise Exception('correlation(%d) does not match argmax(%d), probably bad data'%(rxCoorIndex, rxArgmaxIndex))
Exception: correlation(9819) does not match argmax(1289), probably bad data

Then it segfaults. I also tried with 20 dB less attenuation in the path (to make up for the inline pad) with identical results.

I'm starting to suspect my Lime is a lemon. I have a couple more coming in a week or so, and will see if they behave any differently.

Re: nanoseconds to sample rate - I figured out that one nanosecond granularity on TX was unrealistic. What bugs me, though, is that the clock that is used is not related to the sample rate - it's the clock used on the RX JESD207 transfer bus. I can check again, but I believe that runs at a fixed 160 MHz; am I mistaken?

 TNX