I see that my posting implied E-field measurements, while actually the second half of my post was off topic and the first half was the one relevant to the topic.
Going OT on the first post of a thread! That is probably a personal record.
Anyway, bear with me; I'm trying to understand how this works.
One of the reasons I brought this up is that I see several stations not registering strikes closer than approx. 50 km. I would have thought they were the best sources of timing information, since their signals have been subject to less of the atmosphere's uncertain effects.
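To put rough numbers on that (my own back-of-envelope, nothing from the project): a sferic covering 50 km at roughly the speed of light arrives after only about 167 µs, so at those ranges the propagation is nearly ideal and timestamp quality is what matters.

# Back-of-envelope one-way propagation delays. Assumes free-space
# propagation at c; the actual VLF ground wave is slightly slower.
C = 299_792_458.0  # speed of light, m/s

for distance_km in (10, 50, 200, 1000):
    delay_us = distance_km * 1_000 / C * 1e6
    print(f"{distance_km:5d} km -> {delay_us:8.1f} us")

Since 1 µs corresponds to about 300 m of travel, even a few microseconds of timestamp error already cost kilometres of location accuracy.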
I believed it was common for the server to control the gain, making it a sort of AGC. That would seem ideal to me, as the server "knows" which stations will be close to a thunderstorm and what the local disturbances at each station are. To do that, it might need to turn the gain all the way down to zero through the whole amplifier chain; in a thunderstorm, the receiver is after all located inside the tank circuit of the transmitter! I would think attenuation rather than gain would be in order. I thought the manual gain control was there just to let the user optimize the antenna and its placement.
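For what it's worth, the loop I was imagining looks roughly like the sketch below. To be clear, every name, number, and threshold here is my own invention for illustration, not anything from the actual firmware or servers.

# Hypothetical server-side gain policy; all values invented.
def station_gain(distance_to_storm_km: float, local_noise_db: float) -> float:
    """Pick a gain in [0.0, 1.0] for one station; 0.0 means full attenuation."""
    if distance_to_storm_km < 50.0:
        # Storm effectively overhead: attenuate rather than amplify.
        return 0.0
    # Farther stations get more gain, backed off for noisy sites.
    gain = min(1.0, distance_to_storm_km / 1000.0)
    return max(0.0, gain - local_noise_db / 100.0)

# The server would push the result down to each station it tracks:
for station, (dist_km, noise_db) in {"A": (20, 3), "B": (400, 10)}.items():
    print(station, station_gain(dist_km, noise_db))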
That is why I thought a PGA as the first stage might be a problem: even with the gain turned all the way down to zero, it might be driven into clipping, and what usually happens then is that it takes some time to come out of it. That recovery time destroys the timing information and thus renders the station unable to supply anything meaningful.
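Here is a toy simulation of what I mean. The recovery model and all the numbers are made up; the point is only to show how a front end that goes deaf after clipping shifts the detected arrival time of the next strike.

import numpy as np

fs = 1_000_000                                # 1 MHz sample rate (arbitrary)
t = np.arange(0.0, 1.5e-3, 1.0 / fs)

def pulse(t0, amp, width=2e-5):
    return amp * np.exp(-((t - t0) / width) ** 2)

# A huge nearby strike followed 400 us later by a weaker one.
signal = pulse(0.5e-3, 5.0) + pulse(0.9e-3, 0.8)

clip = 1.0
out = np.clip(signal, -clip, clip)

# Crude recovery model (numbers invented): after the input exceeds the
# rail, the front end stays deaf for a fixed recovery interval.
recovery_s = 0.38e-3
sat = np.where(signal > clip)[0]
if sat.size:
    out[sat[0]:sat[-1] + int(recovery_s * fs)] = 0.0

def crossing_us(x, after_s, thresh=0.5):
    hits = np.where((x > thresh) & (t > after_s))[0]
    return round(float(t[hits[0]]) * 1e6, 1) if hits.size else None

print("2nd strike, clean front end:", crossing_us(signal, 0.7e-3), "us")
print("2nd strike, after clipping :", crossing_us(out, 0.7e-3), "us")
# ~18 us of timestamp error here is roughly 5 km of position error at c.

With a longer recovery time the second strike disappears completely instead of just being mistimed.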
Anyway, it may all be a moot point if the spacing of the station grid is small enough. Then data overload on the servers might be more of a problem than signal overload at some stations. Signal processing at each station may be how they do, or will, deal with that.
Did any