Synchronize two LimeSDR

I’m using the modified FPGA file http://www.phas.ubc.ca/~michal/capture_timer.rbf .

@Zack @andrewback Do you know if there are any timing constraints on the negative timestamp?

I’m receiving packets of 128 samples or even 64, and want to detect whenever such a packet has a negative timestamp. I’m using an Arduino to set and clear the FPGA[0]. If I only set the FPGA[0] for a very short time, e.g. ~3µs, I’ll sometimes not see the negative timestamp (as if the FPGA[0] wasn’t set for long enough to be detected), and when I do see the negative timestamp it lasts too long (e.g. 10µs instead of the expected 3µs).

It’s really important for me to handle short times and also to be sure that the timestamps behave properly.

@cmichal I’ve done other tests with the FPGA file you provided. I wonder if there is some delay in the timestamp becoming negative and positive.

I’m receiving 100M blocks of 128 samples at 60MS/s, and my Arduino is setting the FPGA[0] for 12µs then clearing it for 34µs. I’m not expecting the Arduino to be infinitely accurate, but I would expect to see more or less the same number of blocks with negative timestamps, corresponding roughly to 12µs, and the same for positive timestamps, corresponding to 34µs.
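For reference, the Arduino side is just toggling a pin. Something like this minimal sketch is all I mean (the pin number and wiring are only an example, and delayMicroseconds() is itself only approximate at this scale):

const int triggerPin = 2;   // example pin wired to the Lime's FPGA[0] input

void setup() {
  pinMode(triggerPin, OUTPUT);
}

void loop() {
  digitalWrite(triggerPin, HIGH);   // set FPGA[0]
  delayMicroseconds(12);            // ~12µs high
  digitalWrite(triggerPin, LOW);    // clear FPGA[0]
  delayMicroseconds(34);            // ~34µs low
}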

But then I’d see 5, 6, 10 or 11 negative blocks in a row (I’d expect around 5 or 6), then 15, 16, 21 or 22 positive blocks in a row (I’d expect around 16). I find it strange to see increments of 5.

If I lower the time the FPGA[0] is set for, then I would miss some switches between positive and negative, as if the timestamp were never modified at all (note that at 60MS/s I should receive a block of 128 samples in less than 3µs, so waiting 10µs should be more than enough).

Do you have any idea what could be the problem?

I do have a guess. The FPGA packs samples into packets of 1360 samples. If you’re using a buffer size of 128, several of your blocks come from each packet. The timestamp only gets updated in the header of each packet. Are you receiving on both channels? If so, then the 5-block increment makes sense, since you get 5.3 of your blocks out of each 1360-sample packet.

I’d suggest trying a block size of 680 if you’re using two channels, or 1360 if just one.
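As a rough check of that arithmetic (a throwaway sketch, assuming the 12-bit format’s 1360 samples per packet shared across two channels):

#include <cstdio>

int main() {
    const double samplesPerPacket = 1360.0;  // FPGA packet size, 12-bit format
    const double channels = 2.0;             // dual-channel RX
    const double blockSize = 128.0;          // host-side buffer size
    // each packet carries 680 samples per channel, i.e. ~5.3 of the 128-sample blocks
    std::printf("%.2f blocks per packet\n", samplesPerPacket / channels / blockSize);
    return 0;
}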

I’m curious about what the sequence of timestamps looks like. Can you paste in a list of the last few positive timestamps before it goes negative, then the list of 10 or 11 negative ones?

Thanks, I thought there might indeed be something like that (though I thought I had read something about 1040 samples). I’m using 2 channels so I’ll try with block sizes of 680.

With my previous tests I would have up to 11 negative timestamps. The positive timestamps go up by 2133 or 2134 each time (128 samples at 60MS/s). I get something like:

1761202933
1761205067
1761207200
1761209333
-6148914689475294922
-6148914689475292789
-6148914689475290655
-6148914689475288522
-6148914689475286389
-6148914689475284255
-6148914689475293455
-6148914689475291322
-6148914689475289189
-6148914689475287055
-6148914689475284922
1761234933
1761237067
1761239200
1761241333

BTW we have 1761234933 = 1761209333 + 12 * (128*1000/60) so we correctly go back to positive timestamps as if nothing ever happened.
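As a quick sanity check of that arithmetic (plain integer maths, nothing LimeSDR-specific):

#include <cstdint>
#include <cstdio>

int main() {
    const int64_t lastPositiveNs = 1761209333;     // last timestamp before the negatives
    const int64_t gapNs = 12LL * 128 * 1000 / 60;  // 12 blocks of 128 samples at 60MS/s = 25600 ns
    // per block that is 128*1000/60 ≈ 2133.3 ns, hence the alternating 2133/2134 increments
    std::printf("%lld\n", (long long)(lastPositiveNs + gapNs));  // prints 1761234933
    return 0;
}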

The negative timestamps themselves don’t make much sense as is though.

The 680-sample figure matches what I’ve seen previously: 680 samples are received in 11.333µs, and the shortest I could leave the FPGA[0] set was 12µs; below that I’d sometimes not see negative timestamps.

I tried setting the FPGA[0] for 12µs then clearing it for 36µs, and I get 1 or 2 negative blocks and 3 or 4 (sometimes 5) positive ones. It makes sense as the timing of the set and clear can’t be perfect.

A lot of this makes sense. In the list of timestamps you posted, the negative ones increment by the same 2133/2134. The incrementing must happen in the LimeSuite library on the host computer. Then halfway through the negative timestamps, they jump back again, since another packet was received that contained the same negative timestamp as the first one. Everything is a bit confused though because of the misalignment of the 128-sample buffers with the 680-sample packets.

So for the negative timestamps to make sense, we need the buffer size to be aligned with the native packet size (which actually depends on the data format: it’s 1360 samples for 12-bit samples, but 1020 for 16-bit samples). It’s OK if you use a buffer size that is an integral multiple of the packet size. The pulse on GPIO0 needs to be at least one buffer size long, and yes, you will sometimes see one negative timestamp and sometimes two.
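If it helps, here is a tiny helper sketching that alignment rule (just an illustration using the per-packet numbers above, not anything taken from the library):

#include <cstdio>

// Round a requested buffer size up to a whole number of FPGA packets, given the
// data format (1360 samples/packet for 12-bit, 1020 for 16-bit) and the number
// of channels sharing each packet.
int alignedBufferSize(int requested, bool sixteenBit, int channels) {
    const int perChannel = (sixteenBit ? 1020 : 1360) / channels;
    const int packets = (requested + perChannel - 1) / perChannel;
    return packets * perChannel;
}

int main() {
    std::printf("%d\n", alignedBufferSize(128, false, 2));  // -> 680 (two channels, 12-bit)
    std::printf("%d\n", alignedBufferSize(128, false, 1));  // -> 1360 (one channel, 12-bit)
    return 0;
}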

I’m confused though about why your stamps increment by 2133/2134. I would have expected the timestamps to increase simply by 128 each time. Are you multiplying by 1000/60?

If so, by doing that you make it much harder to figure out when the GPIO actually went high. If you take out the leading bit of -6148914689475294922, you get a number that doesn’t make any sense as a timestamp. If you go back to what I would expect the raw timestamps to be:
105672176 (your first one multiplied by 60/1000)
105672304
105672432
105672560
The first negative was then approximately at stamp 105673328. If you then set the MSB (1<<63), that gives -9223372036749102480, which, if you then /60*1000, gives -6148914689475295072, which is in the ballpark of your first negative number. Going backwards is going to be really tough.

I’d suggest not multiplying by 1000/60 until after you’ve sorted out the negative timestamps; then you can just remove the leading bit, and what’s left tells you the correct time value.
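In other words, something along these lines, assuming you have the raw un-converted timestamp in hand (the example value is the one estimated above):

#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t kFlag = 1ULL << 63;                      // trigger flag set by the modified gateware
    const double sampleRate = 60e6;                         // 60MS/s in this test

    int64_t raw = (int64_t)((1ULL << 63) | 105673328ULL);   // example flagged timestamp
    bool triggered = raw < 0;                               // MSB set means GPIO0 went high
    uint64_t sampleIndex = (uint64_t)raw & ~kFlag;          // strip the flag bit
    double timeNs = sampleIndex * (1e9 / sampleRate);       // only convert to ns afterwards

    std::printf("triggered=%d sample=%llu t=%.0f ns\n",
                (int)triggered, (unsigned long long)sampleIndex, timeNs);
    return 0;
}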

Thanks a lot again for all the explanations.

In dual channel, with 12-bit samples and blocks of 680 samples, I indeed see constant negative timestamps, and it now makes sense that for smaller blocks the timestamps changed a bit.

Concerning the timestamps I showed, I’m just using the Soapy interface, which returns the buffer’s timestamp in nanoseconds.

Looking further, it seems SoapyLMS7 converts the FPGA timestamp into nanoseconds.

Great. Then I think it all makes sense. I’m not sure what’s best to suggest in terms of finding what time the GPIO trigger occurred though; the conversion to ns messes things up quite a bit. I’d be tempted just to hack Soapy to not do that and replace that call to ticksToTimeNs with just timeNs = metadata.timestamp; otherwise the backwards conversion is going to be pretty ugly.

Hi, I’m also using an Arduino to drive the GPIO of the Lime. The problem is that the two don’t use the same connector pitch, so I had to hack an existing cable, and the result is not really pretty.

What Arduino are you using? And were you able to find a ready-made cable to connect your Arduino and Lime?

Thanks.

My solution is not pretty. I bought a cable that has the right connector for the GPIOs on both ends and cut it in half. There are things like this: https://www.adafruit.com/product/2094 that might make it prettier. I have some vague recollection of spotting a cable with a 10-pin 0.05" connector at one end and a 0.1" 10-pin connector at the other, but even that wasn’t going to be great.

Hello, I’m curious if you ever found a reasonable solution for your issue @KarlL? I’m trying to do something similar.

I’m still digesting all of the details given throughout this thread, but the timers and timing conversions in both HW and SW seem to make it messy. If the timestamp handling leads to some inconsistency, is there possibly a different way to set it up so that samples are only added to packets when a trigger signal is active? The most blunt way of doing this doesn’t seem to be workable (disabling the LMS RX hardware entirely; the pin is not accessible), but it might be possible with some shenanigans in the rx_path logic. I briefly tested some of this, but there’s a lot I didn’t understand at the time about how it sets up the packets. The above discussion helped with that, at least!

For my purposes, I only really care about samples that occur during an external trigger signal; the rest can be dumped, and the hardware state reset and held to a neutral configuration until the next trigger. Essentially, I want the LimeSDR to act like an oscilloscope. I’ll do some more fiddling this afternoon and see if the packet timing state can be easily reset.

I’m not sure what you’re trying to achieve. For my purposes, it’s fine: whenever I see the timestamp become negative, it means the external signal was triggered. If you only have one LimeSDR, this is useful to know that something happened; if you have several LimeSDRs, you can use it to synchronize them quite precisely.

I don’t really care about the real timestamp when the timestamp is negative.

I work in dual channel, so I receive in blocks of 680 samples (see the discussion earlier in this thread for why).

Sorry I didn’t mention it in my earlier post, but I’m looking to maximize bandwidth as much as possible. In that case it is beneficial if I can avoid adding samples I don’t care about to the queue, which allows a higher sampling rate without getting bottlenecked by the USB3 comms… in theory, anyway. Messing with timestamps on top of that might be useful for downstream processing, but is probably not required.

I installed Quartus Prime Version 17.1.0 Build 590. I simply opened the project lms7_trx.qpf, then in the Project Navigator I chose Revisions and clicked Compile All. The compilation went through, but when testing the rbf file it does not seem to work, i.e. I never see the timestamp become negative.

The rbf file is around 560kB while the one provided by @cmichal is around 564kB.

Were you able to compile it and use it? Are there other things to do than just click Compile All?

I haven’t tried to build from that repo, but I have done more work for my application and “packaged” my patches a bit. If you visit http://www.phas.ubc.ca/~michal/LimeSDR there are links to two versions of the patches. One is similar to what was posted above but has some additional modifications; as long as you don’t need to use other GPIOs, you can completely ignore those other modifications.

The other patch (labelled untested) does all the same stuff, but is to be applied to a much newer version of the gateware. There was a pretty substantial reworking of the top level of the gateware at some point, and I adjusted my patches accordingly. Those files have readmes in them that say which commit of the gateware they apply cleanly to.

I didn’t have a chance to test the later patches thoroughly, but I believe the situation is that the “untested” gateware worked, while there was a problem in the LimeSuite version from that same time period. So you could use the gateware built from the newer patches but might want to stick with the older LimeSuite, or just stick with the earlier versions of both.

There are some build instructions in the readmes - but basically yes, I just hit Compile All in Quartus.

I should add that in the newer versions the patches are a bit more readable, because the graphical top-level layout is all gone and all of the source files touched are VHDL.

So I am trying to do the same thing here with a LimeSDR Mini. I have just started my final year project at university, where I am trying to design and build a system capable of tracking the 3D flight path of ‘an object’ by listening to the object’s telemetry packets from multiple locations and then computing the cross-correlation of the IQ streams from the independent receivers to work out the TDOA, and hence the location of the object. The system therefore requires multiple independent SDRs to be synchronized easily, ideally to within 1 or 2 samples at 30MS/s.

In order to do this, the obvious choice is to use the 1Hz PPS output from a GPS module as an external trigger to synchronize the SDRs as described above.

The gateware for the Lime Mini has a slightly different top-level setup but is very similar to that of the Lime USB, so I have been able to successfully modify the gateware to implement the external trigger functionality. The PPS is fed into a GPIO pin and, as above, the sample index of the 0-to-1 transition is embedded in the USB packet header, with the MSB set as a flag for the PC.

With this flashed to the FPGA, I also modified the file ‘streaming.cpp’ in LimeSuite/SoapyLMS7 and rebuilt, so that the raw timestamp from the USB packet is returned to the user application:

//timeNs = SoapySDR::ticksToTimeNs(metadata.timestamp, sampleRate);   
timeNs = metadata.timestamp;

Next, a Python script was written to look for packets with a modified timestamp and then report the number of samples between the current and previous sync events.

import SoapySDR
from SoapySDR import * #SOAPY_SDR_ constants
import numpy #use numpy for buffers

#enumerate devices
results = SoapySDR.Device.enumerate()
for result in results: print(result)

#create device instance
#args can be user defined or from the enumeration result
args = dict(driver="lime")
sdr = SoapySDR.Device(args)

#apply settings
sdr.setSampleRate(SOAPY_SDR_RX, 0, 30e6)
sdr.setFrequency(SOAPY_SDR_RX, 0, 868e6)

#setup a stream (complex floats)
rxStream = sdr.setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32)
sdr.activateStream(rxStream) #start streaming

#create a re-usable buffer for rx samples
buff = numpy.array([0]*1020, numpy.complex64)

pps = 0
prev = 0
for i in range(1000000):
    sr = sdr.readStream(rxStream, [buff], len(buff))
    ts = int(sr.timeNs)
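    # negative timeNs means the gateware flagged this packet (MSB of the raw timestamp set);
    # adding 2**63 clears that bit and recovers the raw sample count of the trigger edge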
    if(ts < 0):
        prev = pps
        pps = ts + 9223372036854775808 
        if(pps!=prev):
            print(pps-prev)

#shutdown the stream
sdr.deactivateStream(rxStream) #stop streaming
sdr.closeStream(rxStream)

At 30MS/s with a 40µs wide PPS pulse, the output from this script was as follows:

matt@ubuntu:~/Documents/iib-project/lime-sdr$ python3 delta-test.py 
{addr=24607:1027, driver=lime, label=LimeSDR Mini [USB 3.0] 1D3AC940C7E517, media=USB 3.0, module=FT601, 
name=LimeSDR Mini, serial=1D3AC940C7E517}
[INFO] Make connection: 'LimeSDR Mini [USB 3.0] 1D3AC940C7E517'
[INFO] Reference clock 40.00 MHz
[INFO] Device name: LimeSDR-Mini
[INFO] Reference: 40 MHz
[INFO] LMS7002M calibration values caching Disable
[INFO] Rx calibration finished
25030102
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000016
30000020
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000018
30000020
30000018
30000018
30000018
30000018
30000018
30000018

Now this seems to suggest that everything is working as intended; however, there is something that is confusing me.

Why does this only work when I read the samples 1020 at a time? If I set the buffer size to 1360 then it all goes quite wrong. I see from the stream protocol that 16-bit samples are sent 1020 to a packet: does the SDR automatically choose the largest bit depth that it can happily spew out within the USB bandwidth?

From looking at the VHDL and the FT601 datasheet it is not obvious what is going on, so if somebody could shed a little light on the specifics of the USB packets on the Lime Mini, that would be really helpful.

In addition I am happy to share the modified gateware should anyone want it. Big thanks to @cmichal for your work on the gateware for the Lime USB.

Matt

It doesn’t surprise me at all that things go bonkers unless you set your buffer size to a multiple of the FPGA packet size. As soon as it’s not a multiple, the library has to fiddle with the timestamps from the FPGA, and I’m sure the negative numbers get badly confused.

The number of samples you can fit in a packet does depend on the data format. I’ve been using the 12-bit packet format (LMS_FMT_I12), which gets 1360 samples per packet. If you use the 16-bit format you only get 1020 per packet. You should be able to set the buffer size to 2040 or 4080 and have things work OK.

In the LimeSuite library, you can choose the data format in the dataFmt field of the stream, which you can set before calling LMS_SetupStream.

Do you have an example program that sets up a stream using the LMS_FMT_I12 data format and then looks at the timestamp of each packet, making use of the LMS API? I’m struggling to find a decent example online.

There are a couple of examples included in the LimeSuite source code; dualRXTX.cpp is a useful starting point. You can find them in the LimeSuite source tree.
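For orientation, a stripped-down skeleton along the lines of those examples (error handling omitted; the frequency, sample rate and loop count are just placeholders, so treat it as a sketch rather than a ready-made program):

#include <lime/LimeSuite.h>
#include <cstdint>
#include <cstdio>

int main() {
    lms_device_t* device = nullptr;
    LMS_Open(&device, nullptr, nullptr);            // first device found
    LMS_Init(device);
    LMS_EnableChannel(device, LMS_CH_RX, 0, true);
    LMS_SetSampleRate(device, 30e6, 0);             // 0 = default oversampling
    LMS_SetLOFrequency(device, LMS_CH_RX, 0, 868e6);

    lms_stream_t rx = {};
    rx.channel = 0;
    rx.fifoSize = 1024 * 1024;
    rx.throughputVsLatency = 0.5;
    rx.isTx = false;
    rx.dataFmt = lms_stream_t::LMS_FMT_I12;         // 12-bit format, 1360 samples per packet
    LMS_SetupStream(device, &rx);
    LMS_StartStream(&rx);

    int16_t buffer[1360 * 2];                       // interleaved I/Q, one packet per read
    lms_stream_meta_t meta = {};
    for (int i = 0; i < 1000; ++i) {
        int n = LMS_RecvStream(&rx, buffer, 1360, &meta, 1000);
        if (n > 0 && (meta.timestamp >> 63))        // MSB set by the modified gateware
            std::printf("trigger at sample %llu\n",
                        (unsigned long long)(meta.timestamp & ~(1ULL << 63)));
    }

    LMS_StopStream(&rx);
    LMS_DestroyStream(device, &rx);
    LMS_Close(device);
    return 0;
}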

Thanks, I’ll take a look at that. I was looking through the documentation, including the PDF on compilation, and I cannot seem to find instructions on how to compile the programs using g++. Where can I find the appropriate command-line instructions?