ControlLogix (RSLogix 5000) Scaling Mystery

Seems a whole lot of something over nothing... The module scaling has always been just that... scaling. It's not limiting. 95% of the automation world understands this, so the condescending habit of calling out-of-range values "garbage" is getting ridiculously old. If you want to limit the value for an HMI or any other reason, it's a single compare instruction and a CLR or MOV of 0. I do it for levels all the time, where a negative number is impractical.
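As a minimal illustration, here's roughly what that looks like in Structured Text (the tag names are made up for the example):

(* Clamp a copy of the scaled level for the HMI; a negative level is impractical *)
IF Tank_Level_Scaled < 0.0 THEN
    Tank_Level_HMI := 0.0;   (* the ST equivalent of the compare + CLR / MOV 0 in ladder *)
ELSE
    Tank_Level_HMI := Tank_Level_Scaled;
END_IF;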

In my mind, something that is garbage is useless. Yet he gives value to these numbers: a value lower than his engineering limits is treated as the lower engineering limit. Seems contradictory and confusing to me. But since he can't (or won't) divulge what he is trying to accomplish and how, I don't think I'll be able to help.
 
Seems a whole lot of something over nothing... The module scaling has always been just that... scaling. It's not limiting. 95% of the automation world understands this, so the condescending habit of calling out-of-range values "garbage" is getting ridiculously old. If you want to limit the value for an HMI or any other reason, it's a single compare instruction and a CLR or MOV of 0. I do it for levels all the time, where a negative number is impractical.

Well, I apologize if you took it that way. I'm not a PLC programmer, and I just got immediately aggravated when I saw the engineering range defined, but then started seeing these out-of-range values coming into the program ... and I didn't know why.

Now I know.

But I *still* call them "garbage" because they are not values that the program itself needs to see. They are values that a *programmer* cares about if he chooses to write code for them. And I can tell you that this program I'm looking at was written by a well-known, respected Integrator, and it contains *no accommodations* for the handling of out-of-range values.

So not everyone cares about these out-of-range numbers. And what it sounds like to me is - for at least some of those who say they *do* care about them ... all they're doing is filtering them out. So what good is *that*?
 
It can be quite useful for operators and maintenance techs, even if the programmer doesn't program in something to detect the out-of-range and alarm accordingly.

Many times in my life as a maintenance tech, I'd get called to a control room to look at, say, a tank level that wasn't reading right. One time, it's reading zero, but I can see water in the tank. Walk straight over to the level transmitter and open up the valve connecting it to the tank. Another time, it's reading -25%. Grab a spare transmitter on my way over to the tank, switch the plugs, and prove that the old transmitter is faulty by plugging in the new one and seeing the value come back to 0%.

When I was an apprentice, I worked in quite an old factory that had huge amounts of analog pneumatics. So instead of 4-20mA it was 20-100kPa. Some of the really old stuff was 0-15psi (roughly 0-100kPa), but again - someone at some point realised that having a "live zero" made it a whole lot easier to tell if your tank was empty, or if you'd blown an air hose. It became industry standard very quickly, because it's inherently self-checking.

The same principle applies to 4-20mA. You may say it's junk and not worth anything, but it's been done that way for longer than I've been breathing air, and it's not for no reason. Sure, some modern PLCs have "under-range" and "over-range" bits. But if all I can see is "under-range", I don't know if it's 3.99mA because the tank is dead empty and the zero on the transmitter has drifted 0.06%, or because someone has driven a forklift through my cable tray. If I get a "sensor under-range" alarm and I can see the raw value, then I can tell which one it is instantly, without having to get out my test equipment and complete a hazard analysis to open the electrical panel and disconnect some wiring to test.

If I came across a PLC that artificially clamped values without me configuring it to do so, I'd throw it in the bin.
 
You might like to consider that inputs of 3.99 or 20.01 mA are not really "out-of-range" for a 4-20 mA transmitter. Analog measurement, and the conversion from the physical quantity to a current output and back to data, is never going to be "spot-on".

You have to make a call that some values below and above range are just as valid as those "in-range", and not "garbage".

And if your program, as supplied, "contains *no accommodations* for the handling of out-of-range values", then I suggest it is either time to write a specification that they work to, or time to change your software supplier.
 
daba:

So I will be filtering out these negative values, but should I also include logic to calculate when these negative values cross some threshold requiring a Maint Tech to investigate/calibrate? Or do people just display the negative number?

And I was trying to figure out where that 3.96 mA value came from ... are these negative numbers always going to immediately jump to -25.xxx for a 4-20mA range? And the .xxx part is the amount below 4 mA? So if the number drifted to -26.5, that would be 1.5 below 4mA, or 3 mA less .5/16, or 2.97 mA? Is that it?
 
daba:

So I will be filtering out these negative values, but should I also include logic to calculate when these negative values cross some threshold requiring a Maint Tech to investigate/calibrate? Or do people just display the negative number?

Yes. I normally look for < 3.0 mA or > 21 mA.

And I was trying to figure out where that 3.96 mA value came from ... are these negative numbers always going to immediately jump to -25.xxx for a 4-20mA range? And the .xxx part is the amount below 4 mA? So if the number drifted to -26.5, that would be 1.5 below 4mA, or 3 mA less .5/16, or 2.97 mA? Is that it?

No. It will not "jump" unless the actual analog raw counts jump.

The scaling will be linear throughout the range of the card. I don't know what your card range is, but often it will be from 0 to 22.0 or 21.5 mA... check your card documentation.

3.96mA is just an example of what a sensor might give you that is very close to 4mA but not necessarily a bogus value. You may want to clamp a copy of the scaled number so that if it is less than the expected 4mA, you just use the engineering units that should correlate with 4mA in subsequent math, logic and displays.

Sometimes it can be useful to let the out-of-scale numbers be shown. That can be a troubleshooting aid. If the end user sees "-25.0 units" he can be pretty sure that a sensor is lost, a fuse is blown, something is wrong. If you do clamp the result and display only valid numbers, then you definitely should include an alarm and, somewhere handy, show the raw counts or the value in milliamps for technicians to reference when working on the problem.

Another reason that the card doesn't clamp the signal for you is that it allows you to set up your logic so that you "teach" or record two points anywhere along the slope without having to ... for example ... completely empty a tank or completely fill it up. You know that you have 7 feet of liquid in a tank and you get 6.243 mA and with 23 feet you have 13.6mA...set up your scale block for those two points and now it should be reasonably accurate from 0 to however tall the tank is up to the limits of the sensors.

I pulled those numbers out of my hat so don't take my example too literally.
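If it helps, a rough Structured Text sketch of that two-point "teach" idea, using the made-up numbers above (tag names are only for illustration):

(* Two recorded points along the slope, captured during commissioning *)
mA_Point1    := 6.243;   Level_Point1 := 7.0;    (* feet *)
mA_Point2    := 13.6;    Level_Point2 := 23.0;

(* Work out slope and intercept once, then apply them to the live reading *)
Slope      := (Level_Point2 - Level_Point1) / (mA_Point2 - mA_Point1);
Intercept  := Level_Point1 - (Slope * mA_Point1);
Tank_Level := (Slope * Analog_Input_mA) + Intercept;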
 
Yes. I normally look for < 3.0 mA or > 21 mA.

I typically use 3.5 mA and 20.5 mA. Two reasons: first, some analog cards (e.g. 1769 CompactLogix) only read down as low as 3.2 mA - anything below that still shows as 3.2 mA, so 3 mA is no good. Likewise, some cards only go as high as 20.5 mA, so you'll never read 21 mA. Second, some sensors output 21 mA on fault. If their "21 mA" is actually "20.99 mA", I will again fail to detect it if my threshold is set to 21 mA. In any case, the exact figures you use will have to be determined by the specifications of both your analog input module and the sensor connected to it.
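A minimal Structured Text sketch of that check (the thresholds and tag names are assumptions - set them from your module and sensor specs):

(* Flag a probable sensor or wiring fault when the loop current leaves the plausible band *)
Sensor_Fault_Low  := Analog_Input_mA < 3.5;    (* broken wire, dead transmitter, off-scale-low fault *)
Sensor_Fault_High := Analog_Input_mA > 20.5;   (* off-scale-high fault output *)
Sensor_Fault      := Sensor_Fault_Low OR Sensor_Fault_High;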


Sometimes it can be useful to let the out of scale numbers be shown. That can be a troubleshooting aid. If the end user sees "-25.0 units" he can be pretty sure that a sensor is lost, fuse blown, something is wrong.
Exactly - an operator who sees "0%" on a tank level might just start a pump to refill the tank and think nothing else of it until the tank is overflowing, and they finally notice the alarm text indicating a sensor fault. Whereas, if they see -25%, then with or without an alarm, the operator is going to know something is rotten in the state of Denmark, and will hopefully call maintenance instead of overflowing a tank.
 
OkiePC:

I realize the scaling is linear across the scaling of the card, but that's on the engineering units side. I was talking about the *negative* side. I was thinking it *wasn't* linear there, and it jumps directly to -25 (if it's below 4 mA) because 4-to-20 mA range is a span of 16, 4mA is 25 %, therefore -4mA is -25%, and then anything further negative than -25.xx% would lower the -4 mA number downward.

Is this not correct? I thought that's how people were coming up with the 3.96 mA figure. It *does* calculate out that way...

ASF:

But the tank won't overfill as long as you have alarm limits set to shut the valves, which you *should* have anyway. Now, if the instrument is just plain *dead*, then you should either have a secondary protection instrument (ie: a high-level switch) which will tell you, OR some kind of "No Weight/Level Change" logic to detect the level not changing with the valve open.

But anyway - for this particular card (which is a 0-20 or 0-21 mA raw range), this program currently has the scaling set to 4-20 mA with an engineering range of 0 to 10,000 (ppm). Are you saying I should change that to something like 3.5 mA to 20.5 mA with the same engineering range? And if so, wouldn't that conflict with what the instrument is supposedly being calibrated to (4-20 mA)? Wouldn't it be cleaner to just leave it 4-20 mA with 0-10,000, and then write code to interpret the over/under range number to see if it's far enough out to justify maintenance taking a look at it? I mean, -25.xxx *looks* pretty bad, but it's only a 'hair' below 4 mA, so that's probably not going to be a maintenance concern, right?
 
OkiePC:

I realize the scaling is linear across the scaling of the card, but that's on the engineering units side. I was talking about the *negative* side. I was thinking it *wasn't* linear there, and it jumps directly to -25 (if it's below 4 mA) because 4-to-20 mA range is a span of 16, 4mA is 25 %, therefore -4mA is -25%, and then anything further negative than -25.xx% would lower the -4 mA number downward.

Is this not correct? I thought that's how people were coming up with the 3.96 mA figure. It *does* calculate out that way...

ASF:

But the tank won't overfill as long as you have alarm limits set to shut the valves, which you *should* have anyway. Now, if the instrument is just plain *dead*, then you should either have a secondary protection instrument (ie: a high-level switch) which will tell you, OR some kind of "No Weight/Level Change" logic to detect the level not changing with the valve open.

But anyway - for this particular card (which is a 0-20 or 0-21 mA raw range), this program currently has the scaling set to 4-20 mA with an engineering range of 0 to 10,000 (ppm). Are you saying I should change that to something like 3.5 mA to 20.5 mA with the same engineering range? And if so, wouldn't that conflict with what the instrument is supposedly being calibrated to (4-20 mA)? Wouldn't it be cleaner to just leave it 4-20 mA with 0-10,000, and then write code to interpret the over/under range number to see if it's far enough out to justify maintenance taking a look at it? I mean, -25.xxx *looks* pretty bad, but it's only a 'hair' below 4 mA, so that's probably not going to be a maintenance concern, right?

Linear scaling is just slope/intercept (i.e. y = mx + b). If you plot that, you'll see it stays linear across all values of x.

So in your case, if we use the mA values, your 'equation' works out to:
scaled_value = (mA_value * 625) - 2500
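
A quick Structured Text spot-check of that same line (the tag names are just placeholders):

(* 4-20 mA scaled to 0-10,000 ppm:  ppm = 625 * mA - 2500 *)
Scaled_PPM := (Input_mA * 625.0) - 2500.0;

(* Spot checks:  4.00 mA ->      0 ppm
                20.00 mA -> 10,000 ppm
                 3.96 mA ->    -25 ppm
                 0.00 mA -> -2,500 ppm *)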
 
OkiePC:

I realize the scaling is linear across the scaling of the card, but that's on the engineering units side. I was talking about the *negative* side. I was thinking it *wasn't* linear there, and it jumps directly to -25 (if it's below 4 mA) because 4-to-20 mA range is a span of 16, 4mA is 25 %, therefore -4mA is -25%, and then anything further negative than -25.xx% would lower the -4 mA number downward.

Is this not correct? I thought that's how people were coming up with the 3.96 mA figure. It *does* calculate out that way...

Not quite...
A 4-20 mA signal scaled 0-10,000 ppm:
At 0 mA the scaled value is -2,500 ppm, not "-4 mA".
At 3.96 mA the scaled value is -25 ppm.

y = mx + b, where m = (y1 - y2)/(x1 - x2).
Going from ppm back to mA: mA = (20 - 4)/(10,000 - 0) * ppm + 4, which gives 3.96 mA at -25 ppm.

There are no weird discontinuities on the negative side.
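If it's useful for the techs, the same relationship can be run backwards so the HMI can also show the loop current - a minimal Structured Text sketch with hypothetical tag names:

(* Back-calculate the loop current from the scaled value:  mA = ppm * (20 - 4)/10,000 + 4 *)
Loop_mA_Display := (Scaled_PPM * 0.0016) + 4.0;
(* e.g. -25 ppm -> 3.96 mA,  0 ppm -> 4.0 mA,  10,000 ppm -> 20.0 mA *)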
 
Wouldn't it be cleaner to just leave it 4-20 mA with 0-10,000, and then write code to interpret the over/under range number to see if it's far enough out to justify maintenance taking a look at it? I mean, -25.xxx *looks* pretty bad, but it's only a 'hair' below 4 mA, so that's probably not going to be a maintenance concern, right?

You should scale it 4-20 mA = 0-10,000 ppm. Anything else is just weird.

If you want to program maintenance alerts, that's a separate issue.

-25 on a 0-10,000 scale doesn't look bad at all. It's 0.25% off, which is very good for an analog measurement. You are obsessing over it because it is negative. Next month it could very easily be +25; what will you do then? It is the nature of an analog signal. While digital electronics have improved many 4-20 mA transmitters' performance, it is still common to see +/-0.5% error on a 4-20 mA signal.
 
ASF:

But the tank won't overfill as long as you have alarm limits set to shut the valves, which you *should* have anyway. Now, if the instrument is just plain *dead*, then you should either have a secondary protection instrument (ie: a high-level switch) which will tell you, OR some kind of "No Weight/Level Change" logic to detect the level not changing with the valve open.
Absolutely. But "should" or "as long as" or "some kind of" doesn't help you when a thousand litres of caustic solution is overflowing your bund. I can tell the poor sucker who has to clean that up until I'm blue in the face that "someone should have put an emergency high level cutout on that tank", but he's not going to give a flying f***. The more tools you give an operator to detect a problem before it becomes a disaster, the better - if you artificially limit the values so that it looks right when something is wrong, you're not helping them.

But anyway - for this particular card (which is a 0-20 or 0-21 mA raw range), this program currently has the scaling set to 4-20 mA with an engineering range of 0 to 10,000 (ppm). Are you saying I should change that to something like 3.5 mA to 20.5 mA with the same engineering range? And if so, wouldn't that conflict with what the instrument is supposedly being calibrated to (4-20 mA)? Wouldn't it be cleaner to just leave it 4-20 mA with 0-10,000, and then write code to interpret the over/under range number to see if it's far enough out to justify maintenance taking a look at it? I mean, -25.xxx *looks* pretty bad, but it's only a 'hair' below 4 mA, so that's probably not going to be a maintenance concern, right?

If your sensor is calibrated so that 4-20 mA = 0-10,000 ppm, then scale it exactly that way in your PLC. The talk of 3.5 mA / 20.5 mA etc. relates to how you handle out-of-range values.

If, having scaled your input 4-20 mA = 0-10,000 ppm, you get a value of -25 ppm, that represents a value of 3.96 mA - close enough to 4 mA that you know the sensor is fine, your concentration has just bottomed out, and there is a very slight error in the calibration (0.25%). Not an issue. However, if you get a value of -310 ppm, that represents right around 3.5 mA, which probably indicates a problem with your transmitter - it has most likely faulted and is by default putting out an off-scale-low value. Likewise, if you get a value of 10,320 ppm, it's faulted and is putting out an off-scale-high value. If you get a value of -2,500 ppm, you know that your analog input is reading 0 mA, so there's either a wiring fault or your transmitter is completely cooked. And as long as you're not masking your out-of-range values, you can ascertain all of these things just by looking at your HMI, without so much as picking up a screwdriver or opening a laptop.

How you handle those out of range values as far as alarming or inhibiting the process is entirely up to you and the process.
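For what it's worth, a hedged Structured Text sketch of detecting those bands in engineering units (the limits below just restate this example's figures: -312.5 ppm and 10,312.5 ppm correspond to 3.5 mA and 20.5 mA on a 4-20 mA = 0-10,000 ppm scaling; tag names are made up):

(* Probable fault if the scaled value leaves the plausible band *)
Sensor_Fault_Low   := Scaled_PPM < -312.5;     (* below ~3.5 mA: off-scale-low fault or wiring problem *)
Sensor_Fault_High  := Scaled_PPM > 10312.5;    (* above ~20.5 mA: off-scale-high fault output *)
Sensor_Fault_Alarm := Sensor_Fault_Low OR Sensor_Fault_High;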
 
A true story from the days of my apprenticeship. I got a call from an operator telling me that the level sensor on a certain tank was faulty. I walked up to the control room, and had a look at the SCADA screens. The tank was reading 0%. Straight away I smelled a rat, because if the sensor had failed, I was expecting to see it reading -25%, like every other failed sensor I'd seen. I checked the trend for that tank level and observed a nice gentle slope from about 80% all the way down to 0% over the course of a fortnight.
"Uh...Jimmy? When was the last time you had a delivery of [product]?"
"No idea. Why?"
"Maybe you should find out."
I showed him the graph and his face dropped a mile, when he realised he'd forgotten to order the truck to refill that tank. The sensor was fine, the tank was empty.

It took me longer to walk to the control room than it did to work out the "problem". Had all the sensors on that site been clamped to zero, I'd have spent a good half hour tracking down the electrical drawings, physically inspecting the tank level, getting online with an S5 PLC from the 1970s, breaking into the current loop to test for mA, or otherwise fault-finding a nonexistent fault. Sure, it was the presence of a trend that allowed me to prove it without resorting to any of that, but the fact that there was a clear and obvious difference between "hardware error" and "operator error" meant that I cottoned on to where the fault actually was straight away.
 
