I have a Davis VP2, my fourth Davis weather station over about 20 years. Let me stipulate at the outset that I am not an atmospheric scientist, physicist, meteorologist, or otherwise credentialed person in meteorology or almost anything else STEM. Let me also stipulate that my instrument siting is less than ideal, at least as measured against specifications by the WMO, for instance. No doubt many of us amateurs are in the same boat.
Recently I changed out my Davis VP2 tipping bucket rain sensor for the newer tipping spoon sensor. It had seemed to me generally, though oddly not every time, that the Davis tipping buckets under-reported by roughly 20-30% compared with what I collected in the standard 4" plastic rain gauge we use for the CoCoRaHS project, which I view as authoritative: no moving parts, no electronics, and none of the associated errors. In my recent experience the tipping spoon has been more accurate than the buckets, i.e., more closely aligned with the plastic gauge. But in a recent very heavy rain, the 4" gauge collected 3.40 inches while the tipping spoon measured only 2.38 inches, about 30% less. Others in this thread report the opposite: that the tipping spoon has over-reported heavy rain.
It's unsettling to find reliable reports of substantially different tipping bucket or tipping spoon rain measurements in similar conditions. Presumably factory settings are consistent; it would make sense if discrepancies as measured against some standard (the 4" plastic gauge) were consistent too. They seem not to be.
We should also recognize, as the instructions for my NovaLynx rain gauge calibrator advise, that a "rain gauge can only be calibrated to one rainfall rate at a time, and accuracy falls off above and below that rate because of the systematic error." "Tipping bucket rain gauges," the NovaLynx people say, and presumably tipping spoon rain gauges as well, "are subject to a systematic mechanical error which is a function of rain intensity . . . the error is non-linear, so a calibration curve is sometimes used to correct the data."
So there are two issues. The first Davis might solve by stating the rain intensity to which their gauges are calibrated, characterizing the error their gauges exhibit above and below that intensity, and providing a way to correct for it. Ideally, Weatherlink would show an "absolute" rain amount as measured by the sensor, but also a "corrected" rain amount based on compensatory software in Weatherlink. I don't know what to do about the second issue: that some report the same mechanism, buckets or spoon, variously over-reporting or under-reporting in similar conditions.
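To make the "compensatory software" idea concrete, here is a minimal Python sketch of the kind of calibration-curve correction the NovaLynx instructions describe. Everything numeric below is an assumption for illustration: the rate/multiplier pairs are invented, not Davis or NovaLynx data, and the 0.01"-per-tip resolution and one-minute logging interval are just stated assumptions. The idea is simply that each interval's raw accumulation gets scaled by a multiplier interpolated from a curve keyed to the observed rain rate.

```python
import bisect

# Hypothetical calibration curve: (rain rate in in/hr, correction multiplier).
# These points are made-up illustration values, not published Davis/NovaLynx data.
CALIBRATION_CURVE = [
    (0.5, 1.00),   # the rate the gauge was calibrated at: no correction
    (1.0, 1.05),
    (2.0, 1.12),
    (4.0, 1.25),   # heavy rain: assumed ~25% undercatch
]

def correction_factor(rate_in_per_hr: float) -> float:
    """Linearly interpolate a correction multiplier from the calibration curve."""
    rates = [r for r, _ in CALIBRATION_CURVE]
    if rate_in_per_hr <= rates[0]:
        return CALIBRATION_CURVE[0][1]
    if rate_in_per_hr >= rates[-1]:
        return CALIBRATION_CURVE[-1][1]
    i = bisect.bisect_left(rates, rate_in_per_hr)
    (r0, f0), (r1, f1) = CALIBRATION_CURVE[i - 1], CALIBRATION_CURVE[i]
    t = (rate_in_per_hr - r0) / (r1 - r0)
    return f0 + t * (f1 - f0)

def corrected_total(tips_per_interval, inches_per_tip=0.01, interval_hr=1 / 60):
    """Sum per-interval tip counts, scaling each interval by its rate's factor."""
    total = 0.0
    for tips in tips_per_interval:
        raw = tips * inches_per_tip          # raw accumulation this interval
        rate = raw / interval_hr             # implied rain rate, in/hr
        total += raw * correction_factor(rate)
    return total
```

For example, sixty one-minute intervals of 10 tips each give a raw 6.00 inches at an implied 6 in/hr; with the (assumed) curve above the corrected total would be 7.50 inches. A real implementation would need a curve measured for the specific gauge, which is exactly the data one would want Davis to publish.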
This makes me wonder about the "official" measurement instruments at, e.g., the National Weather Service, the Met Office, and similar agencies elsewhere. Are they subject to similar errors? How do they correct for them? Or do they? Surely they do, we think, but do they really, and how?