So data can be collected at roughly 6-second intervals - I understand your hardware and software latency argument. So what is the best way forward to get the data "out" of the console as soon as it is available? Can the WFL write the data to a share, or can I have a server poll the data from the WFL and manage the sync?
In my opinion, the best option for custom export is one that doesn't exist yet: PHP upload.
MQTT keeps a connection with the server permanently open, and the connection has to be negotiated. That is a little more work for the WFL.
FTP needs to send a set of commands, and the WFL has to wait for an answer to each one. The timeout is set to 15 sec. 3-4 sec is the minimum time for an FTP transfer. FTP also uses two ports.
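To see why FTP has that 3-4 sec floor, here is a rough back-of-the-envelope model (my sketch, not WFL code): each FTP command is a round trip the client must wait out before sending the next, so the handshake alone costs several network round-trip times before any data moves.

```python
# Rough model of FTP upload latency: each command waits for a server
# reply, so total time is at least round_trips * network RTT, plus the
# data transfer itself. This is the minimal sequence for one upload.
FTP_ROUND_TRIPS = [
    "TCP connect -> 220 greeting",
    "USER -> 331",
    "PASS -> 230",
    "TYPE I -> 200",
    "PASV -> 227 (opens the second, data port)",
    "STOR -> 150",
    "data transfer -> 226",
    "QUIT -> 221",
]

def min_upload_time(rtt_seconds: float) -> float:
    """Lower bound on one FTP upload, ignoring server processing time."""
    return len(FTP_ROUND_TRIPS) * rtt_seconds
```

With a 100 ms round trip the command exchange alone costs about 0.8 s; on a slower link or a busy server, the 3-4 sec figure is easy to reach.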
PHP upload is the same technique used by all the weather hosting services (WL.com, WU, PWS, AWEKAS, WOW, etc.), and the transfer is the fastest. Only one port is opened and the data are simply printed to the server.
All the calculation work is done by PHP plus a database server. On regular cheap hosting you won't build a second WL.com, but for 20-50 stations MySQL + PHP should be fine.
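To make the server side concrete, here is a sketch of the receiving end (written in Python for brevity; the real receiver would be a PHP script on the hosting). The field names follow the WU-style GET upload, where the station "prints" all its readings as one query string over a single HTTP connection; the table schema is my assumption.

```python
from urllib.parse import parse_qs

def parse_upload(query_string: str) -> dict:
    """Turn the station's GET query string into typed readings.
    Field names follow the common WU-style upload (ID, tempf, ...)."""
    raw = parse_qs(query_string)
    return {
        "station_id": raw["ID"][0],
        "temp_f": float(raw["tempf"][0]),
        "humidity": int(raw["humidity"][0]),
        "baro_in": float(raw["baromin"][0]),
    }

def to_sql(reading: dict) -> tuple:
    """Parameterized INSERT for an assumed MySQL readings table."""
    sql = ("INSERT INTO readings (station_id, temp_f, humidity, baro_in) "
           "VALUES (%s, %s, %s, %s)")
    params = (reading["station_id"], reading["temp_f"],
              reading["humidity"], reading["baro_in"])
    return sql, params

# Example: one upload frame from the console
reading = parse_upload("ID=KTEST1&tempf=71.2&humidity=45&baromin=29.92")
```

One GET request, one port, no command/response ping-pong: that is why this path is so much faster than FTP for the same payload.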
You asked about transferring data whenever changes occur. The WFL does not work like that. Cumulus opens a connection with the console and receives real-time console data at the console's own 2-2.5 sec interval. The WFL, by contrast, sends a command to the console only just before a job (WU, PWS, FTP exports). When you don't set any activity for the WFL, it sits on standby, waiting for a connection from PC software. The console also has a sleep mode: 2 minutes after the last transfer it starts saving energy.
Continuous LOOP data reading is a very nice feature, but the WFL has to be more flexible. It has to be ready to read:
- real time data
- archive data
- settings
- and give access to PC software
That is why, in my opinion, it was better to set a 3 sec interval for MQTT and RapidFire than to change the whole logic to continuous console readings.
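If you go the polling route from the original question, the same 3 sec cadence works on the server side too. A minimal sketch (my assumption of how a poller would be structured; `fetch_frame()` stands in for whatever transport you use, e.g. HTTP GET or an MQTT subscription):

```python
# Hypothetical server-side poller running on a fixed 3 s grid, matching
# the WFL's fastest export cadence. Keeping polls on a grid (instead of
# "sleep 3 s after each response") stops a slow response from drifting
# the schedule.
POLL_INTERVAL = 3.0  # seconds

def next_poll_time(last_poll: float, now: float) -> float:
    """Next grid point strictly after `now`, anchored at `last_poll`."""
    missed = max(0, int((now - last_poll) // POLL_INTERVAL))
    return last_poll + (missed + 1) * POLL_INTERVAL

# Usage sketch:
#   while True:
#       frame = fetch_frame()          # placeholder: HTTP GET / MQTT msg
#       store_or_sync(frame)           # placeholder: write to share / DB
#       time.sleep(next_poll_time(t0, time.monotonic()) - time.monotonic())
```

If a poll takes 7.5 s (timeout, retries), the next one lands on the 9 s grid point rather than sliding to 10.5 s.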
The only thing I could add is a function that checks whether the frame is the same as the one transferred 3 sec earlier, and if so, drops the unnecessary transmission.
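That duplicate-frame check could be sketched like this (my illustration, not WFL firmware; hashing avoids keeping the whole previous frame around):

```python
import hashlib

# Before each export, compare the current frame to the one sent 3 s
# earlier and skip the transmission if nothing changed.
_last_digest = None

def should_transmit(frame: bytes) -> bool:
    """Return True only when the frame differs from the previous one."""
    global _last_digest
    digest = hashlib.sha256(frame).digest()
    if digest == _last_digest:
        return False  # identical frame: drop the unnecessary transmission
    _last_digest = digest
    return True
```

On a calm night, when consecutive LOOP frames are identical, this would silently halve (or better) the upstream traffic without changing the 3 sec polling logic.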