ControlLogix and time/timers

CapinWinky

I've recently written several bits of code that needed to record relative time between events to a high accuracy. I used the [WallClockTime -> CurrentValue], which is a LINT of microseconds since 1970, but everything goes to **** if the operator changes the system time from the HMI. That made me look for an alternative, and it looks like I have several to choose from. I was hoping someone who knows for certain could shed some light on the differences between them (a rough GSV sketch of reading each one follows the list).

CST -> CurrentTime: This seems like the base value that Wallclock is offset from. Does it change if the system time is changed?

TimeSynchronization -> CurrentTimeMicroseconds: Not sure how this is different from CST. Is this a separate clock for CIP?

TimeSynchronization -> SystemTimeAndOffset -> SystemTime: Is this exactly the same as CurrentTimeMicroseconds?

WallClockTime -> CurrentValue: Microseconds since 1970. Since we use this to display the current date/time on the HMI and allow you to set the date/time from the HMI, this is a bad choice for timing in the program.

TON/TOF: Which time do these use to run? How high can they count? I know they require a scan every 69 minutes, which is probably because they are using a DINT with microseconds as their source (it would roll over after 71.58 minutes).
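
For reference, here's roughly how I'm reading them in Structured Text (tag names are mine; the LINT destination assumes a firmware revision that supports it, and the class/attribute spellings should be checked against the GSV browser):

    (* WallClockTime: microseconds since 1970; user-settable from the HMI *)
    GSV(WallClockTime, , CurrentValue, wct_us);

    (* CST: CurrentTime is a DINT[2] -- the lower and upper halves of a
       64-bit microsecond count *)
    GSV(CST, , CurrentTime, cst_now[0]);

    (* CIP Sync (PTP) time *)
    GSV(TimeSynchronization, , CurrentTimeMicroseconds, ptp_us);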
 
Not sure how you came up with 69 minutes. The max DINT is 2147483647 ms / 60000 = 35791 minutes / 60 = 596.5 hours / 24 = 24.85 days.

But your situation might be best served using a periodic task.
 
I believe he was referring to how often the timer instruction must be executed, not how long a timer can time for. When a timer instruction is executed it stores a system time reference in the CLX TIMER tag (32 bit) or in the lower half of word 0 for SLC/MLX/PLC5 (8 bit). If the timer instruction is not executed before the stored time reference rolls over, updating the .ACC and refreshing the time reference, then timer accuracy is lost. See technote 21729 for details.

To OP, you might want to take a look at using event tasks.

edit to add tech note number.
 
The 69 minute number is valid, but it is not what you (cwal61) are thinking it is. It is not the maximum time a timer can time up to. It is the maximum interval between scans for a timer to continue timing and maintain accuracy.

Usually when you start a timer such as a TON, you keep the rung true so the timer will continue to time. But imagine that the timer is in a subroutine, and you want to trigger the subroutine to start the timer, but for whatever reason you do not want to continue calling the subroutine. So you stop calling the subroutine and the timer accumulator freezes. Then, say, two minutes later, you call that subroutine again and you will notice that the timer updates, with the elapsed time between those two scans added to the accumulator. Even though the accumulator was not updating (since the logic was not being scanned), the elapsed time was still accruing.

So the timer can keep an accurate track of this interval as long as it is scanned at least once every 69 minutes.

As for the total time a single timer can run, that is indeed the 24+ days number mentioned above.

Timing off of the CST or the WALLCLOCKTIME will be affected by a user changing the date/time. A timer will not. But timers can have their own accuracy issues.

OG
 
I am using an event task, which is doing a good job of letting me record the times with high accuracy. My only issue is which time I should be recording. I don't care about date/time, I'm only worried about the time elapsed between events. I can't use the timer functions like TON or RTO because they only work in milliseconds (I'm trying to time with true 0.1 ms precision); even if I were only doing ms precision, you actually lose some time calling/resetting them, which could be a problem if the machine is actually running continuously when the timer finishes at 24 days, 20 hours, 31 minutes, 23.647 seconds (the maximum possible value for PRE).
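
What I'm doing in the event task boils down to this (ST sketch with made-up tag names; CST is shown, but any of the microsecond sources would slot in; it assumes the firmware will GSV CST CurrentTime into a LINT and do LINT math, otherwise it's a DINT[2] pair and manual 64-bit handling):

    GSV(CST, , CurrentTime, now_us);   (* timestamp this event, in us *)
    delta_us := now_us - last_us;      (* elapsed us since the previous event *)
    tenths_ms := delta_us / 100;       (* record at 0.1 ms resolution *)
    last_us := now_us;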

I've switched from WallClockTime to CST to try and avoid problems from the operator changing the date, but the documentation for CST has some vague information about update services changing time set points without actually saying why that would happen.

I would really like a micro or nanosecond timer that starts at zero when you power up and just keeps counting up/rolling over without any risk of it being altered by synchronizing with the network or any other BS. Of course, if said timer rolled over too frequently, that would be pretty useless too; a nanosecond LINT should be good for about 585 years, but a nanosecond DINT is only good for 4.3 seconds.
 
Nanosecond precision is a bit ambitious, seeing as electricity can travel only about 11 inches in a nanosecond. Add semiconductor switching times and the microsecond range starts to look a bit more realistic. Pile computer microprocessor processing time on top of that, with its associated task-switching time and memory write/fetch requirements, and now you are talking about millisecond precision as a realistic target.

Because a TON is not a device, but rather a computer instruction that operates on a memory location, you can actually use multiple TON instructions addressed to the same tag in multiple places in a program. In fact, AB recommended this for long-scan routines on the older platforms with the 8-bit time reference. So if your TIMER is globally scoped, you can still use event tasks but also periodically execute a timer instruction addressed to the TIMER tag in a periodic task, just to refresh the time stamp reference and update the .ACC.
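
In Structured Text the analogue would be a TONR call, executed from a slow periodic task (TONR uses an FBD_TIMER tag rather than the ladder TIMER type, so it isn't literally the same tag as a ladder TON; the tag name here is illustrative):

    (* executing the timer instruction here refreshes its stored time
       reference and updates .ACC, even when the main logic that normally
       runs this timer is not being scanned *)
    LongTimer.TimerEnable := 1;   (* keep it timing *)
    TONR(LongTimer);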
 
I think 100 or 200 microsecond precision is a perfectly reasonable expectation, and empirical testing is showing that the CST time gets us into that ballpark (it looks closer to a quarter ms, which is on the hairy edge of acceptable). I just don't want the CST clock to randomly jump time to perform some mystery sync routine and produce a wildly inaccurate duration record and a resulting cluster fudge of timing errors (like what happened when someone changed the system time on the HMI yesterday, back when I was still using WallClockTime).

If I was working in the ms precision range, I still wouldn't use a TON, since this particular piece of equipment is being sold for continuous operation with maintenance downtime every 60 days. The way TON is implemented by Rockwell, it would lose time to rounding on every scan of the TON instruction (a TON that is called more often loses time faster). Then you would lose a few scans' worth of time when you reached the 25 day limit and had to restart the timer. If you avoided resets by subtracting time from the accumulated time, you would still lose time to the rounding problem. Essentially, every time a TON is called, the fractions of a ms are lost when the elapsed time is added to the accumulator; call it every 1 ms and you would average roughly half a millisecond of loss per call, which would put you off by days at the 25 day mark.

Just to adjust your thinking a little bit, I did an application on B&R that was demonstrably precise to single digit microseconds timing the duration of a 5V input without doing anything fancy. With their reACTION tech you can get an input and set an output with 1 microsecond precision.
 
Thanks for the info. That's the best thing about this forum: you can count on learning something you didn't know. I have not had an instance where I would have that type of issue with a timer, but it is good to know.

Thanks again.
 
Not sure how you would ever get truly into the realm of 100 uSec accuracy with a CLX processor, even with WallClockTime and event tasks, since the overhead of calling an event task is higher than your tolerance. But one method I've used to time event durations is to keep a microsecond accumulator: do a GSV each time I enter the routine and add the microsecond delta (with 1,000,000 rollover) to the accumulator, and each time the accumulator rolls over 60,000,000, add a minute to a counter. You get the idea. This method is not affected by changing the system time. I'm not aware of any method (with the possible exception of CST synchronization to a master PLC) that would alter the system time at the uSec level.
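
A hedged ST sketch of that accumulator (my tag names, and I'm reading the DINT[7] DateTime form of WallClockTime, whose last element is the microseconds-within-second field; the original post may have read a different attribute):

    (* runs every pass through the routine; must execute at least once per
       second or a whole second can slip past the delta. Only the sub-second
       microsecond field is used, the idea being that date/time edits don't
       touch the clock at the microsecond level. *)
    GSV(WallClockTime, , DateTime, wct_dt[0]);
    delta := wct_dt[6] - us_last;        (* us elapsed since the last pass *)
    IF delta < 0 THEN
        delta := delta + 1000000;        (* handle the 1,000,000 us rollover *)
    END_IF;
    us_last := wct_dt[6];
    us_acc := us_acc + delta;
    IF us_acc >= 60000000 THEN
        us_acc := us_acc - 60000000;     (* carry a whole minute *)
        minutes := minutes + 1;
    END_IF;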
 
