VCTCXO calibration procedure / management of clock correction value

For pretty much anything cellular related, a highly stable clock is required. The VCTCXO on the LimeSDR products is not sufficient to meet the relevant 3GPP specs, but it should be sufficient for practical experiments without having to permanently attach some bulky external OCXO or GPS-DO.

I’m currently looking for best practices on how people perform the actual calibration procedure, as well as how they manage the resulting correction value.

What is the designated procedure for initial calibration of the clock error? A random LimeSDR Mini shows about 1 kHz of carrier frequency error after LMS_Init(). There’s the VCTCXO DAC in hardware that can be used to compensate for the error. However:

  • what’s the standard software that people use to calibrate the error?

I’m looking for something that can be used e.g. with an external reference RF generator. Or even something that would use a public GSM carrier as a reference to calibrate against.

  • how and where is such a clock correction value handled by LimeSuite? After all, it would be rather ugly if each and every application program has to maintain some application specific configuration on where to store the correction value for each given unit.

In the ideal world [tm], I would hope there’s some non-volatile storage on each LimeSDR board (like an I2C EEPROM), where the unit-individual clock calibration value can be stored. This way, one can swap one LimeSDR board for another one and automatically the unit-specific clock correction value would be used, completely transparent to any application.

A slight side note: what I also notice is that the VCTCXO DAC resolution is super coarse. I’m seeing a slope of more than 200 Hz per LSB at a DCS1800 carrier frequency. As far as I remember, the GSM carrier frequency requirement is 30 ppb, i.e. we have 56.31 Hz of tolerance at a 1.877 GHz carrier. Is this coarse step size expected with the DAC?
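As a sanity check on those numbers, here is the arithmetic as a small hypothetical snippet (the function name is mine; the figures come from this post, not from quoting any spec text):

```c
/* Hypothetical sanity check: frequency tolerance in Hz from a ppb budget.
 * tolerance_hz = carrier_hz * tolerance_ppb * 1e-9 */
static double tolerance_hz(double carrier_hz, double ppb)
{
    return carrier_hz * ppb * 1e-9;
}

/* At 1.877 GHz and 30 ppb this gives roughly 56.3 Hz of headroom, while the
 * observed slope is >200 Hz per 8-bit DAC LSB: a single DAC step can
 * overshoot the entire tolerance window. */
```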

I received some out-of-band communication from Lime Microsystems:

  • the DAC really is only an 8-bit DAC, resulting in rather coarse frequency correction steps
  • the DAC value is already persisted in the EEPROM on the board, i.e. a DAC value written using the LimeSuite API will remain present even after power-cycling the LimeSDR.

@LaF0rge, hi, I recently bought a low-cost GPSDO (the one-port Leo Bodnar Electronics low-jitter GPS-controlled reference clock) to get improved precision over what the VCTCXO provides. This appears to involve removing the VCTCXO circuit path to a clock buffer chip, and feeding the output of the GPSDO to the buffer instead.

Taking a peek at the schematic for my v1.4 board, I see that IC22 is an AD5601 8-bit DAC, for which there are two drop-in replacements: the AD5611 (10-bit) and the AD5621 (12-bit). Replacing the chip with an AD5621 should improve the frequency correction resolution per step, shouldn’t it?
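If I read the AD5601/AD5621 datasheets correctly, both parts take a 16-bit SPI frame with two power-down bits at the top and the data MSB-aligned below them, so the firmware difference should mainly boil down to the shift amount. A sketch (please verify against the datasheets; the function names are mine):

```c
#include <stdint.h>

/* Sketch, based on my reading of the AD5601/AD5621 datasheets:
 * 16-bit frame = [PD1 PD0][data, MSB-aligned][don't-care bits],
 * PD = 00 for normal operation. */
static uint16_t ad5601_frame(uint8_t val)    /* 8-bit data sits in DB13..DB6 */
{
    return (uint16_t)(val << 6);
}

static uint16_t ad5621_frame(uint16_t val)   /* 12-bit data sits in DB13..DB2 */
{
    return (uint16_t)((val & 0x0FFF) << 2);
}
```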

I’m leery of removing SMD parts from this board, to feed the clock buffer from my external clock generator. I don’t know which option to take: replace the DAC, or hack the clock input to feed my LimeSDR from the GPSDO. What would you experiment with first?

Yes, you can rework the DAC to a higher-resolution part. But then you’ll also have to touch the entire software stack, from the device firmware up to the host software, to deal with anything larger than an 8-bit value there.

Another idea that came up was to PWM the DAC. The question is at what rate we can toggle it, and whether the filtering between the DAC and the VCTCXO input must be adjusted to make sure the VCTCXO really only sees a filtered/averaged voltage instead of the PWM pulses.
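To illustrate the idea, here is a sketch of one way to dither between two adjacent DAC codes (this is an assumed scheme, not existing gateware; the type and function names are made up):

```c
#include <stdint.h>

/* Sketch only: approximate a fractional 8-bit DAC code by toggling between
 * two adjacent codes with a first-order sigma-delta accumulator.  The RC
 * filtering between the DAC output and the VCTCXO control pin would have to
 * average these pulses into a steady voltage. */
typedef struct { uint32_t acc; } dither_t;

/* 'target' is a 16-bit fixed-point code: the upper 8 bits are the base DAC
 * code, the lower 8 bits are the fraction of updates spent on base+1.
 * The caller must keep base < 255 to avoid wrapping. */
static uint8_t dither_next(dither_t *d, uint16_t target)
{
    uint8_t base = (uint8_t)(target >> 8);
    uint8_t frac = (uint8_t)(target & 0xFF);

    d->acc += frac;
    if (d->acc >= 256) {          /* carry out: emit the higher code this update */
        d->acc -= 256;
        return (uint8_t)(base + 1);
    }
    return base;
}
```

Averaged over many updates, the DAC then spends frac/256 of the time on the higher code, giving an effective resolution well below one LSB.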

Took a closer look at the datasheet. The DAC parts AD56{01,11,21} use SPI at 30 MHz max and can continuously receive sixteen-bit data frames containing 8, 10 or 12 data bits. The DAC updates on the sixteenth bit of the frame, so it might be able to do 1.875 M conversions per second; the actual rate will be limited by the FPGA gateware.

I’m intrigued, and only slightly daunted by the modifications to the board that that idea entails.

Did you ever replace the DAC? I’m about to do so on the LimeSDR Mini using the pin/function-compatible 12-bit DAC.

I actually don’t think there should be any software modifications required, as the DAC value is treated as 16-bit everywhere because the register is 16 bits wide. If you’re interested, I’ll let you know how it goes.

@mc955 I’d ordered and received my second LimeSDR USB mid-August to pursue this, but have got caught up in other things meanwhile.

As you’ll see in other threads on this forum, there is plenty of interest in mods to tweak ‘n’ improve sampling performance. If you succeed, and the increased DAC resolution improves timing accuracy, then the osmocom project (among others) will benefit from whatever you find out. I’ll climb out of this SS7 rabbit hole I’d fallen into, eventually, and follow this up next.

Will you be trying to substitute an AD5621? It’s in a tight spot on the LimeSDR USB, but should be straightforward to rework; I figured to use a fingernail-sized copper foil mask with a small aperture through which to lift the desoldered part, to protect the surrounding chip capacitors. And to get a spare AD5601 to put back in case the modification entails too much work going through the software stack, as @LaF0rge warned, to bring this $299 beauty back to working order (I do expect to break mine).

I’m doing this on the LimeSDR-Mini, which uses a different DAC, the Texas Instruments DACx311.

Reworking the IC should be pretty easy on the Lime Mini as it’s right near the edge. In terms of improving performance, I think the easiest thing is to use an external GPSDO; however, in my particular application I will not have access to a valid GPS signal all the time. My plan therefore is to let the Lime reach thermal equilibrium and then characterize how the frequency varies with the DAC voltage. I want to be able to set the frequency to within 1 Hz of a target, hence the increased bit depth. Preliminary tests have shown that the on-board VCTCXO is stable enough once thermal equilibrium is reached.

In terms of the software modifications required, I believe the only tweaks needed will be to the NIOS CPU firmware. The value is actually sent to the FPGA as a float through an LMS_WriteCustomBoardParam() call. The NIOS firmware catches this here:

We should in theory just be able to remove the check that the MSB is empty in line 698 and then formulate the two bytes dac_data[0] and dac_data[1] correctly for the 12-bit version of the DAC.
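If the DACx311 frame layout is what I think it is (two power-down bits, twelve data bits MSB-aligned, two don’t-care bits at the bottom), the byte assembly could look like this sketch (my reading of the TI datasheet; illustrative only, not the actual NIOS code):

```c
#include <stdint.h>

/* Sketch of the dac_data[] assembly for the 12-bit DACx311:
 * 16-bit frame = [PD1 PD0][12 data bits][x x], so normal mode (PD = 00)
 * is just the value left-shifted by two.  Verify against the datasheet. */
static void dacx311_bytes(uint16_t val, uint8_t dac_data[2])
{
    uint16_t frame = (uint16_t)((val & 0x0FFF) << 2); /* clamp to 12 bits, DB13..DB2 */
    dac_data[0] = (uint8_t)(frame >> 8);              /* sent first, MSB */
    dac_data[1] = (uint8_t)(frame & 0xFF);
}
```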

So far, as a test, I have tried just removing the check and sending a value greater than 255 to see whether the FPGA reports an error. Sadly, I can’t work out how to compile the new NIOS firmware, as Quartus doesn’t do it when you build the project. @Zack Would you be able to advise how to do this?


@avahilario I have managed to do this successfully!

I replaced the 8-bit DAC on the LimeSDR-Mini with the 12-bit version, the Texas Instruments DAC7311.

I then only had to modify a few lines in the NIOS firmware to get it to work. You can see the diff here:

There are no modifications to LimeSuite required to get this to work. It won’t error out if you send a number greater than 4095, so beware of that. I’ll probably add in a check tomorrow and push the update. It won’t be the same for the LimeSDR as it is a different DAC, but it should be fairly easy to work out how to assemble the bytes that get written out over SPI from the original value, using the datasheet.

Let me know if you need any assistance.


Whoa…! Congratulations, @mc955! This is great; I’ll go ahead and order the AD5621 for my boards. Knowing this is possible for the LimeSDR Mini, I may get one of those, and a pair of DAC7311s. @LaF0rge will likely find this very interesting too.

Taking a closer look at the code chunk in main.c, I’d guess we’d need to extend the size of dac_val to a 16-bit unsigned short int to accommodate the additional four bits of resolution (I’ll take a closer look this weekend).

It also seems to me that we may only need to touch the gateware at this point. However, applications using the LimeSuite API that expect to read back VCTCXO DAC register values will almost certainly need to be refactored. (Is there such an API call? I haven’t gone that far into the code, but expect there to be, as the LimeSuite GUI should be able to read those via the FPGA.)

LimeSuite will still work fine with the modified gateware, as its API calls LMS_Write_VCTCXO and LMS_Read_VCTCXO are actually sending and receiving a 16-bit payload anyway.


Very cool, @mc955 - you’re way ahead of me here. I’ve put in an order for AD5621s for my Lime USB, and expect the parts to arrive in a week or so.

I’ll be sticking with the LimeUSB, as I’m experimenting with the osmocom 2G stack, which already works just fine with my stationary setup. I’m wondering whether this will improve EDGE throughput; and also how an un-modded LimeUSB behaves compared to one using a 12-bit DAC when used for LTE.

Sounds good, let me know how you get on. I am using my DAC to trim the VCTCXO to the PPS output of a GPSDO I made. Basically, I am counting the number of samples that occur between PPS edges and adjusting the DAC accordingly. I can happily get to within 0.1 Hz at 30.72 MHz. The GPSDO I have uses a u-blox module which has rather poor holdover when you lose lock, hence I am relying on the very good short-term stability of the Lime’s VCTCXO.
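For anyone wanting to replicate this, here is a rough sketch of such a discipline loop as I understand it (my reconstruction of the described approach, not @mc955’s actual code; the names, the gain constant and the sign convention are assumptions, since the sign depends on the tuning slope polarity):

```c
#include <stdint.h>

/* Sketch: count sample clocks between two PPS edges; the deviation from the
 * nominal rate is the VCTCXO error, which is nudged out via the DAC. */
#define NOMINAL_RATE 30720000.0            /* 30.72 MHz sample clock */

static double freq_error_hz(uint32_t samples_between_pps)
{
    /* error in Hz at 30.72 MHz; positive means the VCTCXO runs fast */
    return (double)samples_between_pps - NOMINAL_RATE;
}

static int dac_adjust(int dac_val, double err_hz, double hz_per_lsb)
{
    int next = dac_val - (int)(err_hz / hz_per_lsb);  /* proportional step */
    if (next < 0) next = 0;                           /* clamp to 12-bit range */
    if (next > 4095) next = 4095;
    return next;
}
```

With the 12-bit DAC, hz_per_lsb is small enough that this simple proportional correction can settle inside a sub-Hz window, which an 8-bit part with >200 Hz per step could never reach.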

An updated version of my main.c can be found here:

I fixed a small bug that would give you an error if you sent a value less than 255. It now checks that the value is in the range 0–4095 and only updates the DAC if this is the case.

Just curious, the DAC7311’s datasheet says

These devices are pin-compatible with the DAC8311 and DAC8411, offering an easy upgrade path from 8-, 10-, and 12-bit resolution to 14- and 16-bit.

So I guess with the corresponding changes to main.c, using a 16-bit DAC8411 should work just fine. Is this correct?

Yes if that is the case then I don’t see why not.
