TIA Portal - Strange Behavior with Add Function

Jieve

I have a small system controlled by a Siemens S7-1200 PLC. I created a totalizer function block (TIA v17), where I'm counting total revolutions from an RPM value every program cycle (~13ms). The output (Real) is incremented by a running sum using an "Add" instruction.



What I've found is that the result works fine and is very accurate as long as the number of revolutions remains small. However, once it starts getting large (7-8 digits), the add function no longer adds correctly. Each program cycle I'm adding a small number (~0.4) to the total. At 2000 rpm and 1,000,000 revolutions things seem fine; every minute 2000 revs are added. However, at 9,000,000, instead of adding 2000 revolutions every minute, it might add only 100 revs/minute. It's as if the add instruction doesn't have enough time to process before the next cycle.
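For anyone wanting to reproduce this off the PLC: the following is a minimal C sketch of the same arithmetic, not the FB itself; the 2000 rpm, 13 ms scan, and ~0.4 rev increment are taken from the description above. With a constant increment the 32-bit total stalls completely near 8.4 million (2^23); with a fluctuating rpm you get the partial adds described at 9,000,000.

Code:
#include <stdio.h>

int main(void) {
    /* same arithmetic as the FB: add ~0.433 rev per 13 ms scan at 2000 rpm */
    const float revs_per_scan = 2000.0f / 60.0f * 0.013f;
    float total = 0.0f;   /* the Real totalizer */
    double truth = 0.0;   /* wider accumulator, for comparison */
    for (long scan = 1; scan <= 40000000L; scan++) {
        total += revs_per_scan;
        truth += revs_per_scan;
        if (scan % 5000000L == 0)
            printf("true %12.0f   Real total %12.0f   revs lost %10.0f\n",
                   truth, total, truth - total);
    }
    return 0;
}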



Has anyone ever seen this before?
 
Don't use reals for summing; use either a long integer (LINT) or a double integer (DINT).


For example, any number less than or equal to 8192.0/(2^24) added to 8192.0 will have no effect when using 32-bit floating-point arithmetic. The exponents of the two numbers being added have to be made equal first, so by the time the bits of the small number have been shifted to match the large number's exponent, the small number is zero.
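A quick single-precision check of that exact threshold (plain C rather than SCL, but it is the same IEEE-754 arithmetic either way):

Code:
#include <stdio.h>

int main(void) {
    float big   = 8192.0f;
    float small = 8192.0f / 16777216.0f;  /* = 2^-11, exactly half a ULP at 8192.0 */
    printf("%.10f\n", (double)(big + small));        /* 8192.0000000000 : no effect */
    printf("%.10f\n", (double)(big + 2.0f * small)); /* 8192.0009765625 : one ULP up */
    return 0;
}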
 
Interesting.


Also managed to solve it using LReal instead of Real.



Tech support from my local Siemens supplier found this statement in the S7-1200 manual:


Calculations that involve a long series of values including very large and very small numbers can produce inaccurate results. This can occur if the numbers differ by 10 to the power of x, where x > 6 (Real), or 15 (LReal). For example (Real): 100 000 000 + 1 = 100 000 000.
 
Using LReals was what I was going to suggest; an LReal is about 20 bits, or a factor of a million, better than a 32-bit integer.

Floating point data types comprise three bit fields: sign; exponent; mantissa.

32-bit Reals comprise 1 sign bit, 8 exponent bits, and 24 mantissa bits (23 stored plus one implicit). 2^24 is 16,777,216, which is why around the sixteen-millionth ADD the code starts losing precision. Also note that 10^3 ≈ 2^10 and 3/10 = 0.3, i.e. log10(2) ≈ 0.3. So 24 bits (binary digits) can represent roughly the same range as 0.3 × 24 ≈ 7.2 decimal digits, which is why "once it starts getting large (7-8 digits)" the ADD operation starts losing precision, i.e. not enough bits.

64-bit LReals comprise 1 sign bit, 11 exponent bits, and 53 mantissa bits. So your current solution should be good until around 0.3 × 53 ≈ 16 decimal digits. It is up to you to understand your process and decide if that is enough.
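Both limits are easy to demonstrate; here is a minimal C sketch, with float and double standing in for Real and LReal:

Code:
#include <stdio.h>

int main(void) {
    float  r   = 16777216.0f;          /* 2^24 */
    float  r1  = r + 1.0f;             /* stored back into a Real */
    double lr  = 9007199254740992.0;   /* 2^53 */
    double lr1 = lr + 1.0;
    printf("%d %d\n", r1 == r, lr1 == lr);  /* 1 1: the +1 vanishes in both,
                                               the LReal just 2^29 times later */
    /* the manual's Real example, and the LReal fix: */
    float  m  = 100000000.0f + 1.0f;
    double md = 100000000.0  + 1.0;
    printf("%.1f %.1f\n", (double)m, md);   /* 100000000.0 100000001.0 */
    return 0;
}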


Do they not teach this stuff any more?
 
The Patriot Missile Lesson

Research the 1991 Dhahran incident, where a Patriot battery failed to intercept an incoming Scud and 28 soldiers were killed, with roughly 100 more wounded. It traced back to totalizing time using a truncated binary approximation of a tenth of a second instead of pure large-integer arithmetic. I understand it was on the way to being a recognized bug in the Raytheon firmware before the incident, but the fix came too late for those involved.

People tend to think that the easy availability of floating point in today's PLCs means we don't need to understand how the numbers are represented, and that any problem can be solved by using more bits. I'm not saying that a real is wrong for your application, but I suspect that it is.

- - - - -

When studying issues from bad numerics, it's interesting to look at the French rocket (Ariane 5) that blew up after a 64-bit real to 16-bit integer conversion overflow, and the US Navy ship (USS Yorktown) left dead in the water by a divide-by-zero error.

All this is to say that we need to be constantly aware of the capabilities of our datatypes, apply them carefully, and examine the WHY and HOW of every type conversion. I personally prefer a PLC that strongly enforces data types and requires explicit conversions whenever they are needed. This may be in opposition to some people's preferred products.

- - - - -

On a related note - someone told me that whenever you use a 'Cast' type conversion operation in C you are telling your compiler that you are smarter than it is. This is not a reason to never use casts - it is a reason to look closer at wherever you are using one.
 
Exactly.

As I implied, using an LReal reference to utilize more bits to solve this problem only kicks the can down the road; heck, it kicks it out past Pluto; it is incumbent on OP to determine whether that is far enough or whether the process could trip over that can again.

If I were OP, I would be far more interested in why LReal solved the problem and what the limits of that solution are, than that the problem was solved.


TL;DR (stream of consciousness)


There are no variables in C (or in any language for that matter), only references; once we understand that we become better programmers.

A data type declaration is a declaration to the compiler how to interpret the bits (or bit) starting at a reference.

I am better than the compiler when nuance is required. Except when I am not.

If I want to point a weapon at my foot and turn off the safety, ...?

It is only ones and zeros; it cannot be hard.

C: all the power of assembler with all the convenience of assembler.

There are roughly π × 10^7 seconds in a year. A 32-bit Real stops incrementing by 1.0 after it reaches 16,777,216.0, which is about 1.7 × 10^7, so about half a year's worth of seconds.

Interestingly, adding 1.0 to a Real with an initial value of 16,777,218.0 will increase it, but by 2.0 to 16,777,220.0. Do you know why?
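A hint in executable form, for anyone who would rather run it than take my word for it (plain C; the comments give the game away):

Code:
#include <stdio.h>

int main(void) {
    /* above 2^24 the Real grid spacing is 2.0; 16,777,219 is exactly
       halfway between 16,777,218 and 16,777,220, and ties round to
       the neighbor with the even mantissa */
    float a = 16777218.0f + 1.0f;  /* -> 16777220.0 */
    float b = 16777216.0f + 1.0f;  /* -> 16777216.0: here the even neighbor is below */
    printf("%.1f %.1f\n", (double)a, (double)b);
    return 0;
}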
 
If I were OP, I would be far more interested in why LReal solved the problem and what the limits of that solution are, than that the problem was solved.


Who said I wasn't interested in the why? ;) And I did go back to check that this wouldn't turn out to be an issue ... I think my calculation came out to something on the order of 64 years of runtime before it became an issue again (although don't quote me on that, I may be mis-remembering).
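For what it's worth, here is a back-of-envelope check in C of where the LReal version would hard-stall, using the same assumed 2000 rpm and 13 ms scan as above; gradual accuracy loss sets in well before this point, so treat it as an upper bound rather than a guarantee:

Code:
#include <math.h>
#include <stdio.h>

int main(void) {
    /* assumed numbers from the thread: 2000 rpm, 13 ms scan */
    double addend = 2000.0 / 60.0 * 0.013;  /* ~0.433 revs added per scan */
    double stall  = ldexp(1.0, 52);         /* 2^52: LReal spacing reaches 1.0,
                                               so a 0.433 addend rounds away */
    double years  = stall / addend * 0.013 / (3600.0 * 24.0 * 365.0);
    printf("hard stall after roughly %.1e years\n", years);  /* ~4.3e+06 */
    return 0;
}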



This raises the question though: for general functions where the inputs could have varying magnitudes, would a better solution be to simply start with LReal wherever calculations are involved? Of course, if you know your limits, then they can be applied/tested. But for the general case, using LReal covers more bases.



Curious what others do in these cases.
 
REALs and LREALs are fine and usually the best for most math applications. But just mindlessly using REALs or LREALs without thinking about the application is wrong.

Even if switching to LREAL seems to fix the problem for now, the proper solution to any counting is to use integers. So count in INT, DINT, or LINT, and only convert to REAL as the last step. Not only does this method not saturate the way REALs or LREALs eventually will, it also does not produce a value with a false appearance of high resolution.
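A C sketch of that approach (the micro-revolution scale factor and the helper names are illustrative choices, not from any post above):

Code:
#include <stdint.h>
#include <stdio.h>

int64_t total_urev = 0;  /* running total in micro-revolutions */

void scan_cycle(double rpm, double scan_s) {
    /* scale to integer units before accumulating */
    total_urev += (int64_t)(rpm / 60.0 * scan_s * 1e6 + 0.5);
}

double total_revs(void) {
    /* convert to floating point only as the last step */
    return (double)total_urev / 1e6;
}

int main(void) {
    for (long i = 0; i < 1000000L; i++)
        scan_cycle(2000.0, 0.013);
    printf("%.3f revolutions\n", total_revs());
    return 0;
}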
 
I would avoid reals when doing any sort of counting function. Counting tends to be unsigned, and you don't get the full benefit of all 64 bits in that case. You also have to remember that reals are not an exact representation of most values; they are an approximation.

A real can do a great job with a half or a quarter, but it is clueless about what to do with a tenth. It can only do the best that it can with a tenth and hope that is good enough. It's been a while since I read about the Patriot missile problem, but I believe it was totalizing tenths of a second. Since binary cannot exactly represent a tenth, there was an accumulation of approximation, which became a serious error. A better solution would have been to accumulate (count) it into a big integer, which could then have been converted to an LReal when needed for calculation purposes.
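The drift is easy to reproduce in miniature; this C sketch sums a million tenth-of-a-second ticks both ways:

Code:
#include <stdio.h>

int main(void) {
    float t = 0.0f;   /* totalizing tenths of a second as a Real */
    long ticks = 0;   /* counting the same ticks as an integer */
    for (long i = 0; i < 1000000L; i++) {  /* ~28 hours of 0.1 s ticks */
        t += 0.1f;
        ticks++;
    }
    printf("Real total:    %.2f s\n", t);            /* well off the mark */
    printf("integer total: %.2f s\n", ticks / 10.0); /* exactly 100000.00 */
    return 0;
}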

In your case, if you are counting integer counts, the approximation of the stuff to the right of the decimal point is not an issue, but you still cannot utilize all 64 bits: the sign and exponent bits are wasted. Only a big fat unsigned integer can use all of them, with no approximation and no accumulating error.

This is coming from an old guy that remembers a world where floating point was not an option. Some things are much better now...
 
But just mindlessly using REALs or LREALs without thinking about the application is wrong.

When I was learning how to program, this was beaten into you for any variable type. Granted most of programming was for constrained microcontrollers, but the reasoning still stands for all platforms. :)
 
ULINTs (64-bit unsigned integers) only kick the can about 2,000 times (2^11) further down the road than LReals. OP was saturating Reals within a few days; and if the values being totalized were reals in the first place, then totalizing into any kind of integer involves fixed-point decisions and other messiness.

OP did their homework and LReals solved it.

Certainly a blanket "always use LReals for totalizing" statement is too broad, but that is not what OP said, and "LReals as probable first choice, plus homework" is not too broad.

For Allen-Bradley, if gradually-increasing absolute inaccuracy of the total is acceptable, but a PLC fault is a dealbreaker, then reals are the only choice.
 
Certainly a blanket "always use LReals for totalizing" statement is too broad, but that is not what OP said, and "LReals as probable first choice, plus homework" is not too broad.
Yes that seems to be what Jieve writes:
This raises the question though: for general functions where the inputs could have varying magnitudes, would a better solution be to simply start with LReal wherever calculations are involved? Of course, if you know your limits, then they can be applied/tested. But for the general case, using LReal covers more bases.

I disagree with the "LReals as probable first choice [..]".
First understand the problem, then choose the solution.
Don't choose a solution and then go back and change it once you have understood the problem.

Maybe I am being cranky here.
When I started we had at most 16-bit integers, and shift left for multiplications and shift right for divisions. And if the multiplications or divisions were not by a power of 2, you had to combine multiple SHLs or SHRs plus ADDs or SUBs to get the desired result.
And you had to think hard about how to not saturate and also not lose resolution.
The luxury of having floating point math available does not mean you should not understand the problem as well as the solution.
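For the youngsters, the trick looks like this in C (x * 10 decomposed as 8x + 2x):

Code:
#include <stdio.h>

int main(void) {
    int x = 123;
    int y = (x << 3) + (x << 1); /* 8x + 2x = 10x, no multiply instruction */
    printf("%d %d\n", y, x * 10); /* 1230 1230 */
    return 0;
}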
 
