If a perfect calibration is 5.44 ml per tip, and these are all seemingly calibrated to 5.00 ml per tip, then they will tip more often than they should and read high with a positive bias, right? Could Davis have done this intentionally? Maybe their testing showed that stations have to be mounted higher than is recommended for rain measurement alone, and that wind and the smaller funnel (compared with the more accurate 8" gauges) cause missed raindrops, so they decided to build in a positive bias to account for those losses.

As I read all these comments comparing the new tipper to the higher-quality, larger, rain-specific gauges, it seems most people who had both units in essentially the same spot got essentially identical readings from the two. If all those units are calibrated to 5.00 ml per tip (I don't know that they are, just assuming), doesn't that mean they are reading as accurately as they can?
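
To put a rough number on that, here is a quick back-of-the-envelope sketch in Python. It assumes the 5.44 ml "perfect" and 5.00 ml "actual" volumes above and that each tip is still counted as 0.01 in of rain; the variable names are just mine for illustration.

# Back-of-the-envelope check of the over-read from a bucket that tips
# at 5.00 ml when a "perfect" tip would be 5.44 ml of water.
PERFECT_ML_PER_TIP = 5.44   # volume that should trigger one 0.01 in tip
ACTUAL_ML_PER_TIP = 5.00    # volume the buckets appear to be set to

# Less water per tip means more tips for the same rainfall, so the
# gauge reads high by the ratio of the two volumes.
bias = PERFECT_ML_PER_TIP / ACTUAL_ML_PER_TIP - 1.0
print(f"Expected positive bias: {bias:.1%}")              # about 8.8%
print(f"1.00 in of rain reported as ~{1.00 * (1 + bias):.2f} in")

If those volumes are right, the calibration difference alone would push readings roughly 9% high before any wind or siting losses pull them back down.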