Is the LimeSDR sensitive enough to pick up RF from brain activity?

So if an EEG measures macroscopic electrical activity in the brain, and changing electric currents radiate electromagnetic waves, then the brain should presumably also be producing RF. Would the LimeSDR be sensitive enough to receive these RF signals if, for instance, you were inside a Faraday cage?

I want to get something like an Intel Skylake server with a C236 chipset, which has 10 USB 3.0 ports, and then connect 10 LimeSDRs around the brain with some type of Faraday cage surrounding it. Each LimeSDR has 6 RX U.FL connectors and 2x2 MIMO, so you should be able to wire up something like a 60-antenna array around the brain and then do an FFT frequency scan in parallel. Then I want to feed all these signals as sensory input into a deep learning neural network running on about 30k CUDA cores.
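For what it's worth, the "FFT frequency scan in parallel" part is easy on the host side. Here's a minimal numpy sketch, with random noise standing in for real LimeSDR stream reads and 20 channels as a placeholder count:

```python
import numpy as np

# Placeholder capture: 20 RX channels x 65536 complex baseband samples each.
# (Random noise stands in for actual stream reads from the SDR boards.)
rng = np.random.default_rng(0)
samples = rng.standard_normal((20, 65536)) + 1j * rng.standard_normal((20, 65536))

# One batched call transforms every channel at once (axis=-1 means per row),
# which is how you'd compute all channels' spectra "in parallel" on the host.
spectra = np.fft.fft(samples, axis=-1)
power_db = 10 * np.log10(np.abs(spectra) ** 2 + 1e-12)

print(power_db.shape)  # one power spectrum per channel
```

The same batched-FFT shape maps directly onto a GPU library if the CPU can't keep up with the aggregate sample rate.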

This should be capable of spatial filtering using computational signal processing, functioning as a three-dimensional EEG of the brain that could precisely locate the source and propagation of epileptic activity within the brain's neural networks.
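To be concrete about the spatial-filtering idea: the simplest form is delay-and-sum beamforming, where each channel is advanced by its propagation delay from a candidate source location before summing, so signals from that location add coherently. A sketch under idealised assumptions (free-space propagation, perfectly calibrated channels; the function name and interface are mine):

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, source_pos, fs, c=3e8):
    """Delay-and-sum beamformer steered at a candidate source location.

    signals:    (n_sensors, n_samples) real samples, one row per antenna
    sensor_pos: (n_sensors, 3) antenna coordinates in metres
    source_pos: (3,) candidate source location in metres
    fs:         sample rate in Hz; c: propagation speed in m/s
    Returns the coherently summed signal for that steering point.
    """
    dists = np.linalg.norm(sensor_pos - source_pos, axis=1)
    delays = (dists - dists.min()) / c          # relative arrival delays, seconds
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        # Advance each channel by its delay via a linear phase shift,
        # so wavefronts arriving from source_pos line up before summing.
        spec = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spec, n)
    return out / len(signals)
```

Scanning `source_pos` over a 3-D grid and comparing output power at each point is the basic localisation idea; a real system would additionally need per-channel calibration and a propagation model far better than free space.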

No -

I won’t argue with the fact that the signals are very weak, but I think it’s foolish to assume the brain only operates on delta, theta, alpha, beta, and gamma brainwave activity. The visible spectrum runs from about 430 THz to 770 THz, and the eyes are electromagnetic quantum sensors that feed the brain’s neural networks with up to about 187 terabytes per second of data, according to the Shannon–Hartley channel capacity theorem. Some animals can actually see into the infrared range as well, and further still, magnetoreception has already been demonstrated in other animals too. So I don’t think those bands are even close to the right ballpark.

The thing is, though, you don’t really need to know the exact frequencies. If you know anything about square waves, FFT harmonics will get you the input signals you want, and once you have those signals, the deep learning neural network you’re feeding the RF sensor data into will find the cause-and-effect relationship between input and output. All you may need are some visual, auditory, and motor test patterns to calibrate the brain’s neural networks against the computer’s neural networks. Then you can do FFT spectrum analysis using beamforming and a three-dimensional computational model to map out the neural networks.
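On the square-wave point specifically: a square wave's Fourier series contains only odd harmonics, with amplitude 4/(πn) for the n-th harmonic, and an FFT recovers them directly. A quick numpy check:

```python
import numpy as np

fs = 1000                      # sample rate, Hz
t = np.arange(0, 1, 1 / fs)    # 1 second of samples
f0 = 10                        # square-wave fundamental, Hz

# Exact 50% duty-cycle square wave (avoids sign(sin) zero-crossing quirks)
square = np.where((t * f0) % 1.0 < 0.5, 1.0, -1.0)

spec = 2 * np.abs(np.fft.rfft(square)) / len(t)   # single-sided amplitude
freqs = np.fft.rfftfreq(len(t), 1 / fs)           # resolution is exactly 1 Hz

# Fourier series of a square wave: only odd harmonics, amplitude 4/(pi*n)
for n in (1, 3, 5, 7):
    k = n * f0   # bin index, since the frequency resolution is 1 Hz
    print(f"harmonic n={n} at {freqs[k]:.0f} Hz: {spec[k]:.4f} "
          f"(theory {4 / (np.pi * n):.4f})")
```

The even-harmonic bins come out at essentially zero, and the odd ones match 4/(πn) to within sampling error.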

Then if that actually works, the next logical step would be transmit beamforming for low-power transcranial stimulation, and if that works, perhaps the step after that could be non-invasive ablation of the areas that trigger the epileptic activity in treatment-resistant patients.

I’m afraid not. The LimeSDR-USB is indeed 2x2 MIMO, but that means 10 of them would give you 20 channels, not 60. There are more U.FL ports than TX/RX channels because each channel has multiple ports, each optimised for a particular frequency range, which are then selected as part of configuration.