Synchronize two LimeSDRs

There seems to be some issue with my modification. I tested it fairly carefully looking at the synchronization of transmitted samples with respect to an external clock, and all looked good. I’ve just started looking at receiving now, and there seems to be a big issue. It looks like the latency between tx and rx varies each time you hit the GPIO timestamp reset button. The timestamp at which I receive an external event varies by ~1000 samples, seemingly independent of the sample rate (tested 10 MSPS and 25 MSPS). It almost seems as though a packet in progress is discarded when the reset arrives? Any insight would be appreciated.

Tagging @Zack to see if he can perhaps advise.

That would be great. I spent some more time looking at the VHDL, and I think I’m starting to see why it doesn’t work. There are two timers keeping track of the sample number: one appears to count the samples as they are received from the LMS7002M, the other appears to count them as they get loaded into packets. It looks to me like the former runs regularly, but the latter is bursty. So both get reset at the same time, but they didn’t necessarily hold the same value at the moment of reset.

I’m working on a new plan: instead of resetting the sample count, I’ll try to report the time at which the GPIO goes from L->H. While the GPIO is high, the time value reported in the rx packet headers will be the stored time from the timer that counts incoming samples. So that receiving software can tell the difference, I flip on the highest bit in the time word. At this point the strategy is functional; I still need to test whether it gives good synchronization.

I’m a complete novice at VHDL, so my interpretation of what’s going on here should certainly be read with appropriate caution.

So this new strategy seems to do the trick. Now my rx events start in the same place, give or take one or two samples, every time!

Changed source files are: and
The output binary is:

To use it, you need to hold GPIO0 high for at least one full rx buffer, and then watch for rx buffers with the leading bit set. Remove the leading bit, and what’s left is the last timestamp before GPIO0 went high. That same number gets transmitted in the timestamp field with every packet until GPIO0 goes low. There is some bit shifting happening in the timer in circumstances I didn’t fully understand, so it might be safest to check for a set bit in either of the top two.
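If it helps, here is a minimal sketch of how receiving software might decode these headers. It assumes a 64-bit timestamp word with the flag in one of the top two bits, as described above; the function name and masks are mine, not anything from the modified gateware.

```python
# Hypothetical decoder for the modified rx packet timestamps described above.
# Assumes a 64-bit timestamp word with the GPIO flag in the top bit; we mask
# the top two bits to be safe, per the note about the timer's bit shifting.

FLAG_MASK = 0b11 << 62        # top two bits of the 64-bit time word
TS_MASK = (1 << 62) - 1       # the remaining timestamp bits

def decode_timestamp(word: int):
    """Return (flagged, timestamp) for a 64-bit rx header time word."""
    flagged = (word & FLAG_MASK) != 0
    return flagged, word & TS_MASK
```

With something like this, a receiver can watch for the first buffer whose flag is set and treat its masked timestamp as the moment GPIO0 went high.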

@KarlL and @mleech, if you’re using the older version, you might want to give this a try.


Thanks, I actually installed the Quartus software a few days ago, made the modification and tried it. Strangely, without touching GPIO 0, the timestamps on received samples were incorrect. Instead of multiples of 34133 like 0, 34133, 68267, 102400, … I was getting multiples of 133, wrapping back to 0 after reaching 11200, like so: 0, 133, 267, 400, 533, 667, 800, 933, 1067, 1200, …, 10933, 11067, 11200, 0, 133, 267, …

I did use the latest master where I would regularly see huge jumps in timestamps.

So with your new strategy, setting GPIO 0 high sets the highest bit of the timestamp to 1, and setting it low sets the highest bit back to 0? I was actually interested in doing exactly the same, as it lets you keep the timestamp while still knowing when GPIO 0 was changed. I’ll try it when I have the chance.

Did you use the latest code from the master branch?


I don’t actually have multiple LimeSDRs, but this will be useful for a project in Kelowna that I’m peripherally involved with. Once you have it where you want it and it can be generalized, maybe it should be picked up by the devs here.

My thought is that one GPIO pin is used to enable this feature, and the other acts as a trigger.
For normal use, without the “enable this feature” pin, the device would act normally.

What is your use-case? I’m guessing NMR related, but maybe not?


Pinged @Zack to see if it’s something we can roll-in to a future update.

Thanks. Will pick it up and consider rolling it into a future update.

Does that mean the timestamp stays constant until GPIO0 goes low? Or does it continue to run as normal, only with the leading bit set?

I’m actually playing with a switch and would like to know when the switch happened.

That would be really neat to have that feature in the official FPGA.

This is actually based on a snapshot of the FPGA repo from Feb 6, last commit is:

I’ve had the best luck making things behave if I keep the buffer size to be an integral multiple of the native packet size. For data format I12, that’s 1360.
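For what it’s worth, a tiny helper for picking buffer sizes under that constraint. The 1360-sample packet size for I12 is from the note above; the helper itself is just illustrative:

```python
PACKET_SAMPLES_I12 = 1360  # native packet size for the I12 data format, per above

def aligned_buffer_size(requested: int) -> int:
    """Round a requested buffer size up to a whole number of native packets."""
    packets = -(-requested // PACKET_SAMPLES_I12)  # ceiling division
    return packets * PACKET_SAMPLES_I12
```

So e.g. a requested buffer of 4096 samples would be rounded up to 5440 (4 packets).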

The GPIO usage is what you said. Keep GPIO 0 for normal timestamps. When you set GPIO 1 high, the last timestamp seen before it went high (with the leading bit flipped on) gets transmitted for as long as GPIO 1 is high. When you set it low again, everything goes back to normal.

Did you try my image?

Yes, multi-channel NMR spectrometer. So the rf from multiple channels needs to be nicely synchronized, and also synchronized with logic events controlled by another device. Things are working reasonably well at this point; the start-up to start-up jitter is down to ~2 sample periods. It’s almost good enough.

I’ve also been thinking about triggering - something a bit different from what you’ve mentioned though. The issue I’d be interested in addressing is getting the remaining jitter in the timing down even further. I think the source of it is that the decimation of the sample clocks on multiple boards is out of phase. What would be really great for me would be some way of triggering streaming start-up so that the sample clock (CGEN) decimation on multiple boards was all in phase. Any advice? (@Zack?)

We have essentially the same requirement here. If one can do a “trigger streaming start” with an external GPIO or two (one to enable the functionality, one as the trigger), then with boards fed from a common REFCLOCK, phase-and-time synchronization should be really good.

The only issue will be that frac-N synthesizers, even fed from the same clock, can be up to +/- pi out of phase with one another at PLL lock time. Some devices have a phase-resync signal that helps with this. Dunno about the LMS7002M.


Marcus - how good does your synchronization need to be? My understanding of the sampling clock is: there is a sample clock VCO that always runs at a frequency in the range ~1900 - 2900 MHz. This VCO frequency is divided down by some even number ( 2(H+1) ). For my purposes, I don’t care about the phase of the VCO at all (which I think is what you’ve mentioned); what I care about is the start of decimating that ~2.5 GHz down to my 10 - 25 MHz sample rate. Maybe that’s good enough for you too?
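To make that concrete, here is a rough sketch of the divider arithmetic as I understand it from the description above. The VCO range and the 2(H+1) divisor are from the post; the upper limit on H is my assumption:

```python
def cgen_candidates(target_hz, vco_min=1.9e9, vco_max=2.9e9, h_max=255):
    """List (H, vco_hz) pairs where vco = target_hz * 2 * (H + 1) lands
    inside the CGEN VCO range. h_max is an assumed limit, not from the thread."""
    out = []
    for h in range(h_max + 1):
        vco = target_hz * 2 * (h + 1)
        if vco_min <= vco <= vco_max:
            out.append((h, vco))
    return out
```

For a 20 MHz output clock, for instance, there are dozens of valid (H, VCO) combinations, which is part of why the phase of the VCO itself matters less than where the divider starts counting.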

I need good phase coherence among all of the receiver inputs. Fixed phase offsets of a sample
or two aren’t that big a deal. But once you start getting larger offsets, then the array is
essentially not pointing where you think it’s pointing, and as it gets worse still, you don’t get
any useful beam at all.

My primary requirement is low mutual phase noise among all the receivers; this is largely accomplished by having the synthesizers all referenced to a single REF CLOCK.

The secondary requirement is low mutual phase-and-sample offset among all the receivers,
where “low” is necessarily a bit fuzzy. One can calibrate phase offsets in the array using
various techniques that may not be available for NMR work.

Just building a conventional N-antenna interferometer array, you need mutual phase coherence
and low timing and phase offsets. I’m considering an experimental 4 x 4 FFT beamformer
operating at 21cm, if I can get our University partner to cover the costs… They’re currently
using our 2-channel 21cm spectrometer, but a beamformer that is looking at 16 beams
on the sky all at once would be quite a nice thing, and not necessarily very expensive…

OK, I think we’re mostly after the same thing here. I would hope to get to the point where the synchronization between boards is a small fraction of a sample dwell. As for phase synchronization of the actual rf signals themselves on different boards, I don’t much care. I’m generally working at different frequencies on different boards, and their phase coherence doesn’t matter. I need the timing of the timestamps to be accurately known, though.

For you, it’s true: I think you will always have a phase issue amongst the rf signals on multiple boards. I don’t know any way around that.

I spent a little more time looking at how the ADC and DAC sample clocks are generated, and syncing them between multiple boards looks tricky. I think I do see a path to doing it, but that may just be naive optimism from someone who doesn’t know enough about FPGAs to know better.

The DAC and ADC clocks both come from the divided down CGEN VCO. The barrier in my mind is syncing the divisors. The CGEN PLL seems to have two possibly relevant signals: RESET_N_SYNC and PD_FDIV_O_CGEN. The former sounds like it might be just the ticket, though it seems to reset quite a few pieces of the PLL, and I wonder if that means that the loop would drop out of lock? And if so would it lock again with the same timing on all boards necessarily?

The PD_FDIV_O_CGEN just powers down the divider, presumably leaving the VCO locked. Perhaps if you powered the divider down on all the boards and powered them back up simultaneously, they’d all be in sync.

Unfortunately both of those signals are bits in registers in the LMS7002M, accessed by SPI, and not obviously easy to toggle from the FPGA. Perhaps one could have a module in the FPGA that, when triggered externally by GPIO, hijacked the SPI interface and bit-banged the instructions to toggle the RESET_N_SYNC line or the divider power? Unfortunately, this looks to be somewhat (i.e., way) beyond my ability, but I’d be very interested in having it work.

One thing that I could conceivably try, though, is doing this in software at very low sample rates. If the sample rate could be brought down to say 1 kHz or so, then toggling those registers with the existing mechanisms seems likely to tell whether this strategy is worth pursuing.

Are you working on CHIME? If so, it seems likely we know some people in common.

No, I’m not working on CHIME. Would be fun, but, no.

I run this little adventure:

I made a branch with these files on github for easier reference:

The salient changes to the file lms7_trx_top(2).bdf weren’t obvious from a textual diff, is there a pin remapping in there somewhere? Or is that file entirely generated? (I’m installing Quartus now… maybe the answer will be obvious if there is some graphical diff tool there…)

A different version of Quartus was used, so even if you generate the bdf from the same code, you’ll get a very different bdf.

The changes in the bdf are pretty minimal. Just added one input to the rx_path_top module and hooked it to gpio pin 0.

I’ve realized why the variations in my synchronization are sometimes as big as two sample periods. With only one channel running, the timestamps only count every second sample. I have a plan to try to improve matters, but it will be a few days till I get the chance to try it.


Some progress here. I’ve modified the sample timestamp capture so it is accurate to single-sample precision. I should figure out how to do a pull request…

In some overzealous moments, I went further and figured out how to synchronize the RX sample clocks on multiple boards to each other to within a few ns. I hijack a GPIO input on each board and feed it with a clock oscillating at the ADC frequency divided by 16, run that into an XOR gate (inside the FPGA), and compare against bit 3 of the sample count. The output of the XOR is exported on another GPIO pin, goes through a low-pass filter, and is measured as an analog value on an ADC of an Arduino. Then, by fiddling with the VCO frequency that generates the Lime’s ADC/DAC clock, you can bring the Lime ADC clock into synchronization: sort of a poor man’s PLL.
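The reason the low-pass-filtered XOR output works as a phase error signal: for two square waves of the same frequency, the time-averaged XOR output rises linearly from 0 to 1 as the phase offset goes from zero to half a period. A quick numerical illustration (this is just a simulation of the idea, not the FPGA logic):

```python
import numpy as np

def xor_detector_level(phase_offset: float, n: int = 10000) -> float:
    """Mean XOR output of two same-frequency square waves, with the second
    shifted by phase_offset (expressed as a fraction of one period)."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    a = (t % 1.0) < 0.5                    # reference square wave
    b = ((t + phase_offset) % 1.0) < 0.5   # shifted square wave
    return float(np.mean(np.logical_xor(a, b)))
```

In the setup above, the Arduino’s ADC reads this averaged level after the RC filter, and the feedback nudges the VCO until the level sits where you want it.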

If there is interest, I could post the changes needed to make this work.

Unfortunately, the situation for TX doesn’t seem quite so perfect. Even with the ADC clocks on the boards synchronized to within a couple of ns as above, there is still some variation in the timing of the TX output, which I believe is due to the dividers in the TX and RX clocks. For a 10 Msps sample rate, the CGEN clock runs at 80 MHz and is divided down by 4, then by 2. I now see timing variations spanning +/- 25 ns in steps of 12.5 ns. It feels like the dividers in the tx and rx chains are out of step with each other. It’s not bad really: good to about 1/2 a sample dwell. The timing difference only changes when you fiddle with the sample rate; once you set it, things stay synchronized.
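Those numbers fit the divider chain. A sketch of the arithmetic, taking the 80 MHz CGEN figure and the /4, /2 divider chain from the post (the interpretation of the step size is mine):

```python
CGEN_HZ = 80e6        # CGEN clock for the 10 Msps case described above
DIV1, DIV2 = 4, 2     # divider chain: /4 then /2

cgen_period_ns = 1e9 / CGEN_HZ                     # one CGEN cycle = 12.5 ns
sample_period_ns = cgen_period_ns * DIV1 * DIV2    # 100 ns per sample at 10 Msps

# A divider that starts an integer number of CGEN cycles out of step shifts
# its output in 12.5 ns increments, which is consistent with the observed
# variation spanning +/- 25 ns in steps of 12.5 ns.
```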


It turns out that calls to set the transmitter gain also mess up sync between TX and RX within the rf chip.

So - I’ve added one more XOR gate in the FPGA to compare the phase of the TX clock and the RX clock, and pipe that phase-detected signal onto a GPIO. I don’t have a great systematic way to correct that phase, but you can just reset it repeatedly till it settles at the position you want. My solution here only works if the DAC and ADC rates are identical, though with a bit of fiddling you could probably get something similar to work if they differed by a factor of 2 or 4. I am also using interpolation/decimation of 2 in the TSP. If that factor were bigger, this might get ugly.

At this point now, I have the DAC and ADC clocks synchronized on multiple LimeSDR boards with an external oscillator with reproducibility on the order of a couple of ns. You could probably do a bit better than that if motivated.