Deviation Alarm Calculation

seat14

In my PLC all my formulas are in Celsius, but at the HMI the operator can enter Celsius or Fahrenheit. The problem I am having is when I calculate my temperature deviation for alarms and someone enters the deviation in Fahrenheit: 1 deg. F converts to -17.22222 deg. C, so I end up subtracting -17.22222 from the setpoint in deg. C.
Actual <= Temp Setpoint - Neg_Deviation_Temp = Alarm
70 deg. C <= 80 - (-17.222) = Alarm
 
Customer requirements

I do work in Canada and the United States, so all programs have the capability to convert and show both.
 
For every added 1°F, add 0.5555556°C. You converted the deviation as an absolute temperature when what you need is a difference, i.e. a deviation from setpoint.

Freezing water = 0°C or 32°F
Boiling water = 100°C or 212°F

1°F difference = (100-0)/(212-32)°C difference

Should end up like this:
70 °C (actual value) <= 80 °C (SP) - 0.5555556*x = Alarm

where x is the max allowed deviation from SP, entered in °F.
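The distinction above can be sketched in a few lines (a minimal illustration, not code from any particular PLC or HMI platform; the function names are made up for the example). Converting an absolute temperature uses the 32-degree offset; converting a deviation does not:

```python
def f_to_c_absolute(temp_f):
    """Convert an absolute temperature reading from F to C (offset applies)."""
    return (temp_f - 32.0) * 5.0 / 9.0

def f_to_c_delta(delta_f):
    """Convert a temperature *difference* from F to C (no offset)."""
    return delta_f * 5.0 / 9.0

setpoint_c = 80.0          # PLC setpoint, always deg C internally
deviation_entered_f = 1.0  # operator enters the deviation in deg F

# Wrong: treating the deviation as an absolute temperature
wrong = f_to_c_absolute(deviation_entered_f)   # about -17.22 deg C
# Right: converting only the difference
right = f_to_c_delta(deviation_entered_f)      # about 0.556 deg C

low_alarm_c = setpoint_c - right               # about 79.44 deg C
```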
 
Maybe my understanding of deviation is wrong, but if I enter 1 deg. F, I would expect it to be converted to 0.55 deg. C, so it should end up as
80 - (-0.55), which is different from your example.
 

What HMI are you using?

My approach to this problem would be to have a hardcoded (or not) variable in the PLC that is used wherever units may come into play and use that variable to decide whether to display/calculate in F or C.

The same variable could also be used by the HMI to determine which unit to display to the operator so that you only "configure" the system once. I am, obviously, assuming that the same system doesn't have to change variables daily.

Some SCADA systems will have a range conversion facility in the analog signals that you can use to convert units both ways. I worked in a system that had the facility of being used by Americans or Europeans that did the entire conversion on the SCADA level and the PLC worked in metric units only. The solution did involve a fair amount of scripting and care into how the signal was defined, but worked well.
 

I believe there's a better way to implement your logic, based on what I understand. You should keep the conversion for the setpoint and the target only. The deviation should be in degrees of difference and should not require any conversion.

For example, Setpoint can be set to 100C or 212F. The maximum deviation is +/- 5degrees. Regardless of the scale, the system will "fault" or "alert" at 95 / 105 if in C or 207 / 217 if in F.


The other route I'd consider is as follows:
Allow them to enter the deviation and calculate the min/max. In other words, if they enter a deviation of 1 deg. F, don't convert that to Celsius; instead calculate the two alarm temperatures. Example: setpoint = 100 °F (37.78 °C), deviation ±1 deg. => high alarm 101 °F (38.33 °C), low alarm 99 °F (37.22 °C).
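That second route can be sketched as follows (an illustrative snippet only; `alarm_limits_c` and its arguments are names invented for the example). The deviation is applied in the entered unit first, and only the two resulting limit temperatures are converted, so no delta conversion is ever needed:

```python
def f_to_c(temp_f):
    """Convert an absolute temperature from F to C."""
    return (temp_f - 32.0) * 5.0 / 9.0

def alarm_limits_c(setpoint, deviation, unit):
    """Return (low, high) alarm limits in deg C from operator-entered values.
    The +/- deviation is applied in the entered unit, then each limit is
    converted as an absolute temperature."""
    high = setpoint + deviation
    low = setpoint - deviation
    if unit == "F":
        return f_to_c(low), f_to_c(high)
    return low, high

low_c, high_c = alarm_limits_c(100.0, 1.0, "F")
# 99 F -> about 37.22 C, 101 F -> about 38.33 C
```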

Cheers.
 
For my projects I do all calculations and math in one standard, e.g. metric.

All the conversion is done in a separate routine.

If the client wishes to use inches instead of metric, they only see the input and output of the conversion routine. All calculations are still completed in metric within the CPU.
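That boundary-conversion pattern could look something like this (a sketch only, with invented names; all internal math stays metric and a single routine wraps the HMI edge):

```python
MM_PER_INCH = 25.4

def hmi_to_internal(value, imperial):
    """Convert an HMI length entry to internal millimetres."""
    return value * MM_PER_INCH if imperial else value

def internal_to_hmi(value_mm, imperial):
    """Convert an internal millimetre value for HMI display."""
    return value_mm / MM_PER_INCH if imperial else value_mm
```

Because the conversion lives in one place, switching a client between unit systems is a single flag rather than a change to every calculation.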
 
Do it in whatever scale you want, then when you select the range in the HMI make it set a digital bit, so off = deg. C and on = deg. F. The maths is simple:
Deg. C to F = Temp * 9 / 5 + 32 (212 = (100 * 9) / 5 + 32)
Deg. F to C = (Temp - 32) * 5 / 9 (100 = (212 - 32) * 5 / 9)
So two calculation blocks can be used depending on the state of the digital bit. If you choose deg. C in the PLC for all your work, use the NOT of the bit to transfer the values directly, or if the bit is on, process them through the calculation logic.

Note: this is the formula I was taught in junior school.
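The two bit-selected paths could be sketched like this (illustrative only, not any vendor's instruction set; note the F-to-C direction subtracts 32 before scaling):

```python
def convert_for_display(temp_c, deg_f_bit):
    """PLC works in deg C; convert for the HMI when the unit bit selects F."""
    if deg_f_bit:
        return temp_c * 9.0 / 5.0 + 32.0   # C -> F
    return temp_c

def convert_from_entry(value, deg_f_bit):
    """Convert an HMI entry back to the PLC's internal deg C."""
    if deg_f_bit:
        return (value - 32.0) * 5.0 / 9.0  # F -> C (subtract 32 first)
    return value
```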
 
I think everyone is surprised when they come across the C to F conversion which is just 9/5. This is for the difference between two temperatures.

You only add or subtract 32 when you are talking about a temperature of something.

Consider for example being allowed to heat up at 1°C/minute. This is the same as being allowed to heat up at 1.8°F/minute, and you will be very very wrong if you heat up at 33.8°F/minute.
 

+1 to this
 

That is why I give them a choice of "Ratio" or "Temp" when they are converting between C and F.

[Attached image: Time Temp Calculator.JPG]
 
The alarm limits on temperature are a bit of a problem; that's why in Europe decimalisation has become the standard (well, in most cases). Deg. C is easy, litres instead of gallons, kilos instead of pounds, and thank god in England we got rid of the gallon, as it was different from the Japanese & American ones. Apart from temperature, the units are reasonably easy to convert. However, some old stick-in-the-mud types still prefer to recognise 70 deg. F as the base for a good summer temperature (they cannot get their heads round 21 deg. C), even though water freezing at 0 deg. C and boiling at 100 makes decimal the sensible choice.
We have the Babylonians & Egyptians to thank for the duodecimal base of 12 and the 365-day year, with the last-minute thought of an extra day every 4th one. In England, although most things electrical or electronic have converted to metric, the hammer bashers (Mech. Eng.) still use BSP and inches etc. Anyway, a percentage deviation seems the best way out in this case, or perhaps the best way is for the rest of the world to convert totally to metric; then perhaps the Hubble telescope might have worked first time LOL.
 