#1 rule with a remote Davis station is to never lose power
#2 rule is never lose power
I don't disagree in the slightest with the sentiment or objective here.
However, in the real world, at sites where no mains power is available, there _are_ going to be occasions when the power drops out - potentially for several days at a time - unless you spend a large (and potentially unacceptable) amount of money over-engineering eg a solar/wind PSU. This is going to be especially true during mid/high-latitude winters. The question then is how best to deal with that situation.
Unavoidably, this situation will lead to some loss of data. The aim of the remote AWS design should therefore be to minimise those losses. And if you are going to use a Davis station in this situation (still, arguably, the best choice for a cost-effective AWS), then I'd suggest the best solution is based on:
1. Using an Envoy console (so that it will reboot automatically when power resumes);
2. Using Loop data as the primary source of weather data (because it avoids timestamp issues);
3. Using a 1-2W miniature PC to preprocess the data at the remote site because, realistically, this is the only way to control the data flow and assure its validity (eg that timestamps are correct). Of course, this PC also needs to be configured to reboot automatically into the correct processing state and to resume data uploads as soon as possible after a power outage.
(The Davis Vantage Connect product looks like an alternative way of meeting the same criteria, although you're then locked into the Davis way of doing things and the associated plan prices.)
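The "resume uploads after an outage" part of point 3 can be sketched roughly as a spool-and-forward loop on the mini PC: every Loop reading is timestamped and written to disk *before* any upload attempt, so a reboot loses at most the reading in flight, and the backlog is flushed oldest-first once connectivity returns. This is only a minimal sketch - `spool_reading`, `flush_spool`, and the `upload` callback are hypothetical names, not part of any Davis or vendor API:

```python
import json
import time
from pathlib import Path

# On-disk buffer that survives reboots; location is an assumption.
SPOOL = Path("spool")
SPOOL.mkdir(exist_ok=True)

def spool_reading(reading: dict) -> Path:
    """Stamp the reading with the PC's clock and persist it before upload."""
    reading["ts"] = reading.get("ts", time.time())
    path = SPOOL / f"{reading['ts']:.3f}.json"
    path.write_text(json.dumps(reading))
    return path

def flush_spool(upload) -> int:
    """After (re)boot, push any backlog oldest-first; keep files on failure."""
    sent = 0
    for path in sorted(SPOOL.glob("*.json")):
        try:
            upload(json.loads(path.read_text()))
        except OSError:
            break  # link still down: leave the file, retry next cycle
        path.unlink()
        sent += 1
    return sent
```

On boot (eg via a cron `@reboot` entry or a systemd unit), the PC would call `flush_spool` first and then drop back into the normal read-spool-upload cycle, which is what makes the architecture tolerant of unattended power cycles.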
So my rules would be slightly different:
[1] Try, as far as is practicable and affordable, to reduce the chances of a power outage to an absolute minimum;
[2] Use a data architecture that does allow automatic rebooting (albeit with some data loss) should a power outage happen in practice.