I have some 1000 ms TON instructions in a ControlLogix L62 that, on .DN, totalize a flow reading. We noticed that the result is only about 75% of what it should be.
We did a test where we counted the .DN pulses from the timer in a CTU block and compared this to the system clock seconds (Datetime[5]). Over time, Datetime[5] pulls steadily ahead of the counter's .ACC.
These rungs were in a continuous task with a typical scan time of ~10ms. I tried the same test in a scheduled task and got the same result.
Is this a known issue of some kind? Could it be something we are doing wrong with our processor setup? We are using only a fraction of the available memory.
Thanks,
Warren
tomalbright hinted at the inherent inaccuracy of timers; let's take it a step further....
Firstly, let's get one thing out of the way and move on....
Timers in A-B PLCs are NOT timers at all - they are instructions that perform a bit of maths and set some flags. The "timing" element of a timer is always referenced back to the system clock.
Here's how TON timers work, in sufficient detail to understand why I say they aren't timers - TOF and RTO work the same way - with the obvious differences...
When the rung evaluates as false, the timer accumulator is reset to zero, and the EN, TT and DN flags are reset.
The first time the rung evaluates as true, the EN and TT flags are set, and the current system clock data is recorded in the unused bits in the Timer tag (no, you can't get at them).
The next time the timer rung is scanned (assume it is still true), the current system clock data is compared to the stored data, and a bit of maths works out how many milliseconds have elapsed since the instruction was last scanned. These milliseconds are added onto the accumulator value, and the stored system clock data is updated with the new time. Then the instruction compares the accumulator value to the preset value and, if it is greater than or equal, sets the DN bit and resets the TT bit. No further accumulation of time will happen once the DN bit is set (i.e. the timer freezes).
Go back one paragraph to see what happens on the next scan of the rung.
So, if you're happy with that description, it's easy to see that a timer does not "count" milliseconds into its accumulator; it is simply "bumped up" by however many milliseconds have elapsed since the previous scan of the instruction. The number of milliseconds added is the time between scans, usually the same as the program scan time.
Consequences of this .....
1. A timer can (and will) "overshoot" the preset value. If you had a scan time of 50 ms, your 1000 ms timer could easily "time-out" (i.e. set the DN bit) after 1049 ms.
2. Never attempt an EQU instruction on the accumulator value; it will invariably miss the target altogether. Use GEQ or GRT to trigger whatever it is you want (with a one-shot if you need to).
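Consequence 2 is easy to demonstrate with a quick sketch. Assume, for simplicity, a perfectly regular 50 ms scan, so .ACC only ever takes values in 50 ms steps; a target of 1025 never appears:

```python
# Hypothetical perfectly regular 50 ms scan: .ACC is bumped in 50 ms
# steps, so it can jump straight past any value you test with EQU.
scan_ms = 50
acc_values = list(range(0, 1100, scan_ms))   # 0, 50, 100, ... 1050

target = 1025
print(target in acc_values)                  # EQU-style test  → False
print(any(a >= target for a in acc_values))  # GEQ-style test  → True
```

The EQU-style test misses the target altogether; the GEQ-style test catches it on the first scan at or past it.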
Now, what if you are using a self-resetting timer to generate a "time-period" trigger? Usually this is done with an XIO of the DN bit on the rung to reset the timer. Be aware that it will then take an additional 2 program scans before the timer starts timing again. The scan after the DN bit gets set will reset the timer, thus resetting the DN bit, which will be picked up on the next scan to start it timing again.
All said and done, it is best NOT to use timers where you need any accuracy, such as your case, where you are using one to calculate a time-based parameter - flow totalisation.
If you want to do something every 1000 ms, create a periodic task (scheduled at 1000 ms) and choose its priority level depending on how accurately you need the task to run - the highest priority has absolute control and, therefore, absolute precision.
I hope this explanation gives you food for thought; it is fundamental to good program design to understand the instructions and the way the processor evaluates them.
Warren, can you post your test rungs? (Zip the ACD file)
I could write a test routine myself easily, but I'm intrigued by your 75% error findings and want to know how you tested it.