From that flowmeter page, regarding repeatability:
*4 This specification is valid when the flow velocity distribution is stable. This value does not take into account the effects of pulsation or fluctuations in flow velocity distribution due to facility factors. Convert the F.S. (full scale) listed in the table according to the rated flow range.
Are you using a diaphragm pump, or does it perhaps have a pulsation dampener?
As someone else said, if we assume the system characteristic does not change much over time (say, between daily calibrations), I might calibrate a simple timed application: if the calibration determines that 123.456s from pump-on to pump-off yields 5gal, then you create a rung with a TON driving the pump on/off output. The TON is triggered by a start bit, which is ORed with the pump_on bit for a seal-in, and the rung is broken by the .DN (done) bit.
Code:
      START      gal5.DN         gal5
----+---] [---+-----]/[-----+----[TON]----+----
    |         |             |             |
    | pump_on |             |   pump_on   |
    +---] [---+             +----( )------+
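For readers more comfortable with text than ladder, here is a minimal sketch of that seal-in/TON rung as a scan loop. The tag names (start, pump_on, gal5_dn) mirror the rung above; everything else (the dict-based state, the 123.456s preset) is illustrative, not any particular PLC's API:

```python
# Sketch of the seal-in + TON rung, evaluated once per scan.
# dt is the scan time in seconds; the TON accumulates while the
# rung-in condition is true and resets when it goes false.

PRESET_S = 123.456  # hypothetical preset from calibration: seconds per 5 gal

def scan(state, start, dt):
    """One PLC scan. state holds pump_on, gal5_dn (the .DN bit), acc."""
    # Rung-in condition: (START OR pump_on seal-in) AND NOT gal5.DN
    rung_in = (start or state["pump_on"]) and not state["gal5_dn"]
    if rung_in:
        state["acc"] += dt                      # TON accumulates
        state["gal5_dn"] = state["acc"] >= PRESET_S
    else:
        state["acc"] = 0.0                      # TON resets
        state["gal5_dn"] = False
    # Parallel pump_on coil; simplified so it drops out the same
    # scan that .DN sets, which breaks the seal-in on the next scan.
    state["pump_on"] = rung_in and not state["gal5_dn"]
    return state
```

With a one-shot start bit, the pump seals in, runs until the accumulator reaches the preset, then drops out and stays out until the next start.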
Then you are not fiddling around with second-order effects of pulsation, scan-time variability, non-linear behavior at on/off, or the flow meter and its scaling; instead you make the system characteristic your flow meter. One calibration per day, or per week, involving a bucket and a weighing device, could be described in a one-page procedure, or re-calibration could even be integrated into the PLC:
- hook up weigh scale output to PLC
- put empty bucket (that's Bouquet) on scale
- press calibrate
- PLC runs pump for 10s, 20s, and 30s, with 15s between runs
- PLC reads the weight 10s after each run ends
- assesses linearity of the results
- if linearity is acceptable, predicts and sets the timer preset for 5gal
- issues a "calibration complete" or "calibration failed" result
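The fit-and-predict step above could be sketched like this. Assumptions labeled in the code: a scale reading in pounds, water at roughly 8.34 lb/gal, and a made-up residual tolerance for the linearity check; none of these come from the original post:

```python
# Sketch of the calibration math: least-squares line through the timed
# runs, a linearity check, then solve for the 5gal timer preset.
# GAL_LB and max_resid_lb are assumptions for illustration.

GAL_LB = 8.34       # lb per gallon of water (assumed)
TARGET_GAL = 5.0

def fit_preset(runs, max_resid_lb=0.5):
    """runs: list of (seconds, pounds) from the timed runs.
    Fits lb = m*s + b; returns the preset (seconds) for TARGET_GAL,
    or None if any point misses the line by more than max_resid_lb."""
    n = len(runs)
    sx = sum(t for t, _ in runs)
    sy = sum(w for _, w in runs)
    sxx = sum(t * t for t, _ in runs)
    sxy = sum(t * w for t, w in runs)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope, lb/s
    b = (sy - m * sx) / n                           # intercept, lb
    # Linearity check: every run must sit close to the fitted line
    if any(abs(w - (m * t + b)) > max_resid_lb for t, w in runs):
        return None                                 # "calibration failed"
    return (TARGET_GAL * GAL_LB - b) / m            # "calibration complete"
```

Note the intercept b absorbs the non-linear on/off transients, which is exactly the compensation you would otherwise be hand-tuning.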
I know some (including me) will say this violates KISS, and if a calibration shows the second-order effects are small, and counting pulses or summing per-scan ([flow measurement] times [scan time]) values is accurate enough, then certainly go that route as a first pass. But if not, and you end up adding offsets and other compensations to that model, then calibrating the system as a whole to solve for a timer preset is probably the KISS approach.
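For completeness, the "first pass" totalizer mentioned above might look like this. The names (flow_gpm, dt_s) are illustrative, not from any particular PLC or meter:

```python
# Sketch of per-scan flow totalization: each scan, add the flow reading
# times the scan time, and stop the pump once the target volume is hit.

TARGET_GAL = 5.0

def totalize(samples):
    """samples: iterable of (flow_gpm, dt_s) pairs, one per scan.
    Returns (total gallons, scan index where target was reached),
    with index None if the target was never reached."""
    total = 0.0
    for i, (gpm, dt) in enumerate(samples):
        total += gpm * (dt / 60.0)   # gal = (gal/min) * min
        if total >= TARGET_GAL:
            return total, i
    return total, None
```

This is where pulsation bites: if flow_gpm oscillates within a scan, the sampled value may not equal the true average, and that error integrates into the total.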
OT but somewhat relevant/another drbitboy's dad story:
My dad was the GE Large Steam Turbine representative for a throat-tap nozzle calibration at the Cornell University hydraulics lab
(which was very good because its head came from Beebe Lake and so provided stable flow over each calibration run, unlike the unsteady pumps at other institutions' labs; it was eventually a drawback, though, because that available head was fixed and limited the maximum flow rate as turbines, and the nozzles to be calibrated, got bigger).
Dad shows up and Professor (forgot his name) shows him the multi-sheet calibration procedure. My dad asks "What's this? A flow calibration should be a one-page calculation." Professor says "Well, we have a leaky weigh tank, and we calibrate the leak on each run."
So what I am saying is: calibrate your system as a whole, and then running it is simple.