dcooper33
Lifetime Supporting Member + Moderator
Got into a bit of a debate the other day, and wanted to get some gurus' opinions/expertise on the subject.
When the duration of an output's ON (or OFF) time is critical, and that control is done in a fast periodic task (say 2.0ms), is it "better" to use a TON within the task, or to count the scans that the output is on, incrementing the count by the task period each time?
We have a roll-yer-own control algorithm that is periodically adjusting the preset ON time of an injector solenoid based on flow-meter feedback. My contention was that not only is a TON simpler for a technician to understand, but that its potential error is always less, because the error can only ever be in one direction (actual time > preset time), and the max error <= the max elapsed time between task triggers.

If you are counting scans, the max error is still subject to elapsed-time error on the "final" scan, but that method is also subject to cumulative error across all of the "duration" scans. Of course we all know that a 2ms period is not always exactly 2.0. It is probably going to be extremely close most of the time, but there is no guarantee that 2.0 doesn't occasionally take 3.25, or 1.6. Then there is task overlap: every time an overlap occurs while the solenoid is on, you add >= one task period to your error.
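For illustration, here's roughly what I mean by the two approaches, as a minimal Structured Text sketch. The tag names (InjectorCmd, InjTimer, etc.) and the 12ms preset are just placeholders, and I'm assuming an IEC-style TON running inside the 2.0ms periodic task:

VAR
    InjTimer     : TON;            (* IEC on-delay timer *)
    InjectorCmd  : BOOL;           (* request to fire the injector *)
    InjectorOut  : BOOL;           (* output, TON method *)
    InjectorOut2 : BOOL;           (* output, scan-count method *)
    PresetTime   : TIME := T#12ms; (* commanded ON time *)
    PresetMs     : REAL := 12.0;   (* same preset, in milliseconds *)
    ElapsedMs    : REAL := 0.0;    (* accumulated nominal ON time *)
END_VAR

(* Method 1: TON. The timer reads the controller clock, so the   *)
(* only error is how late this scan fires after PT has elapsed.  *)
InjTimer(IN := InjectorCmd, PT := PresetTime);
InjectorOut := InjectorCmd AND NOT InjTimer.Q;

(* Method 2: scan counting. Add the nominal period every scan;   *)
(* any scan that actually took more or less than 2.0ms adds      *)
(* error that never gets corrected.                              *)
IF InjectorCmd THEN
    ElapsedMs := ElapsedMs + 2.0;  (* assumes nominal task period *)
ELSE
    ElapsedMs := 0.0;              (* reset for the next shot *)
END_IF;
InjectorOut2 := InjectorCmd AND (ElapsedMs < PresetMs);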
Now, the other argument is that the PLC "averages" the time between periodic task executions so that it very nearly equals the nominal period, but I've never read any literature on the matter, so I don't know what kind of time base we are talking about here.
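If the averaging claim can't be pinned down, one way I could imagine taking the guesswork out of it would be a hybrid: keep the accumulator, but add the measured elapsed time each scan instead of the nominal 2.0. This is just a sketch, reusing the tags from above, and SystemMicros() is a made-up stand-in for whatever free-running clock your controller actually exposes (e.g., the CST read via GSV on a Logix platform):

VAR
    NowUs, LastUs, DeltaUs : DINT; (* free-running microsecond counts *)
END_VAR

NowUs   := SystemMicros();         (* hypothetical free-running us counter *)
DeltaUs := NowUs - LastUs;         (* actual time since the previous scan *)
LastUs  := NowUs;

IF InjectorCmd THEN
    (* accumulate measured time, so period jitter and overlaps *)
    (* no longer pile up as uncorrected error                  *)
    ElapsedMs := ElapsedMs + DINT_TO_REAL(DeltaUs) / 1000.0;
ELSE
    ElapsedMs := 0.0;
END_IF;
InjectorOut2 := InjectorCmd AND (ElapsedMs < PresetMs);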
So my feeling is that overall there is much more uncertainty with the scan-counting method than with the real-time clock built into a TON. But I know a lot of programmers, some of whom I highly respect, consider scan counting a "best practice", so I'd like to hear some other perspectives on the matter. What do you guys use? What are some pros/cons of each approach?
Cheers,
Dustin