I am trying to get a deeper understanding of how LimeSuite and the LimeSDR work in general. When using LimeSuite, I can configure my device and then pop open the FFT viewer to get something like this:
I have begun to write some C programs to read information from the LimeSDR, and I have a few questions about how I might construct such graphs.
For the I/Q plots, what exactly are the units being used? Both the I and Q axes seem to range from -2000 to 2000. Is this just some arbitrary scale (instead of going from -1 to 1)?
In my C program, I read samples at what I assume is a fixed center frequency when I call SoapySDRDevice_readStream(). If that is the case, how is the amplitude vs frequency graph constructed? Does the device quickly sweep over the various frequencies in some range to take readings?
In the amplitude vs frequency graph, how is the dBFS (amplitude) calculated? When I read a stream in my C program, I get a pair of I/Q values. How can I calculate what the decibel level is from that? I imagine I need to read something else along with the values, but I am not quite sure what. Can someone offer some guidance?
dBFS has 0 assigned to the maximum possible digital level (which can be increased with higher levels of decimation). So technically it is a relative scale. If you wanted dBm, you would need a known calibrated external source (or sources), measurements at all gain settings and frequencies, and a lookup table to convert from dBFS to dBm.
What this document shows differs from actual I/Q decimation, but the core idea is similar enough that if you understand it, you should understand why decimation is used. http://ww1.microchip.com/downloads/en/AppNotes/doc8003.pdf
With decimation you gain an extra 0.5 bits, or 3 dB, of dynamic range every time you halve the bandwidth (and it also reduces the processing requirements by lowering the data rate). But to provide a benefit, it requires that some signal-plus-noise is tickling at least the LSB of the ADC. It is one of the few cases where additional noise is useful.
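The simplest way to picture that gain is a boxcar (averaging) decimator: here is a minimal sketch. This is not how the LMS7002M's decimation filters are actually implemented (they use proper filter chains), just an illustration of the averaging step that produces the processing gain.

```c
#include <stddef.h>

/* Sketch: decimate-by-r via averaging. Each doubling of r (halving the
 * bandwidth) buys roughly 3 dB, i.e. ~0.5 bit, of dynamic range --
 * provided noise is exercising at least the ADC's LSB. */
static size_t decimate_avg(const short *in, size_t n, int r, double *out)
{
    size_t m = 0;
    for (size_t i = 0; i + (size_t)r <= n; i += (size_t)r) {
        double acc = 0.0;
        for (int k = 0; k < r; k++)
            acc += in[i + (size_t)k];  /* averaging is the gain-producing step */
        out[m++] = acc / r;
    }
    return m;  /* number of output samples: roughly n / r */
}
```

Note the output is a double, not a short: the fractional part of the average is exactly where the extra sub-LSB resolution shows up.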
For the FFT, let me parrot my understanding to see if it's correct. When I tune my LimeSDR to a particular frequency, I also set some bandwidth. My LimeSDR then converts what's happening at that frequency within the range of the set bandwidth. Based on some radio stuff I don't understand, this is converted to I/Q values. If I collect enough of these I/Q values, I can perform an FFT on them to get the component signals. Is that correct?
If so, I imagine that would mean any component signals found would have to fall within the bandwidth being looked at. Is that also correct?
The word you are looking for is heterodyne. It is where two sine waves are multiplied together in a mixer, and the result is two new sine waves: one at the sum of the two frequencies and one at the difference of the two frequencies. Usually one image is discarded by attenuating it with a filter.
That could be fed into a single ADC sampling at double the Nyquist frequency, but having the signal in in-phase and quadrature components is considered so useful that two ADCs, each sampling at the Nyquist frequency, are typically used for RF applications. This very old article was useful in helping me to understand IQ: https://www.arrl.org/files/file/Technology/tis/info/pdf/020708qex013.pdf
Any signal can be analytically decomposed into a series of sinusoids at different frequencies.
The FFT is effectively calculating the correlation function between each of these sinusoids and your signal.
An FFT of finite length is an estimate of this; the properties of that estimate change with things like the choice of windowing function, integration time, etc.