bgtorque
Member
Hi All
I'm from an Allen-Bradley background and I'm using an S7-300 on a project. When scaling a channel in AB I had a scaling block set up which would take a raw input (counts) and multiply it by a scalar to return a value in the appropriate engineering units. I then had an ADD block that let me add in an offset. So to calibrate the channel, say for a pressure transducer, I would simulate a voltage or current input and note the counts, then simulate a different voltage or current and note the counts again. Looking at the gradient versus what those currents or voltages should represent in kPa, I would derive a scalar to apply to the counts. After applying this scalar I would then simulate another pressure and apply the appropriate +ve or -ve value in the offset to get the correct reading in kPa. All I would then need to do in the HMI is give the appropriate calibration engineer access to change the scalar and offset values (for calibration you would initially set the offset to 0 and the scalar to 1, and then do the calibration).
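For reference, the two-point gain-and-offset method described above can be sketched as follows. This is just a minimal illustration of the maths, not any vendor's block; the count values in the usage example are made up (S7 analog inputs nominally read 0-27648 over the measuring range, but the method works for any two noted points):

```python
def calibrate_two_point(counts1, eu1, counts2, eu2):
    """Derive scalar and offset from two noted (counts, engineering-unit) pairs."""
    # The gradient between the two points gives the scalar (EU per count)
    scalar = (eu2 - eu1) / (counts2 - counts1)
    # The offset corrects the reading so the line passes through the first point
    offset = eu1 - scalar * counts1
    return scalar, offset

def counts_to_eu(counts, scalar, offset):
    """The runtime scaling: multiply by the scalar, then add the offset."""
    return counts * scalar + offset

# Example: simulated inputs gave 5530 counts at 100 kPa and 27648 counts at 1000 kPa
scalar, offset = calibrate_two_point(5530, 100.0, 27648, 1000.0)
print(counts_to_eu(5530, scalar, offset))   # back to 100 kPa
print(counts_to_eu(27648, scalar, offset))  # back to 1000 kPa
```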
With TIA there is a scaling function already under the basic instructions, called SCALE. It looks like you stipulate the theoretical high and low values of the sensor (say 100 kPa to 1000 kPa) and the block works it out for you. My question is really: how do you calibrate this in reality, since that pressure transducer might actually return 4-20 mA = 87.558 kPa - 1057.849 kPa, for example?
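As far as I understand it, a SCALE-style block is just a linear interpolation from the raw count range onto the stated LO/HI limits, so entering the calibrated real-world limits instead of the theoretical ones amounts to the same gain-and-offset correction. A rough sketch of that idea (assuming the S7 nominal raw range of 0-27648 for 4-20 mA; the kPa figures are the ones from the question):

```python
RAW_LO, RAW_HI = 0, 27648  # nominal S7 raw range for a 4-20 mA input (assumption)

def scale_block(raw, lo, hi):
    """Linear interpolation: RAW_LO maps to lo, RAW_HI maps to hi."""
    return (raw - RAW_LO) * (hi - lo) / (RAW_HI - RAW_LO) + lo

# With the theoretical limits, mid-range counts give the theoretical midpoint
print(scale_block(13824, 100.0, 1000.0))

# Calibrating = entering the real limits derived from two measured points
print(scale_block(0, 87.558, 1057.849))      # 4 mA end reads 87.558 kPa
print(scale_block(27648, 87.558, 1057.849))  # 20 mA end reads 1057.849 kPa
```

So instead of exposing a scalar and an offset on the HMI, you could expose the LO and HI parameters of the block and let the calibration engineer enter the engineering values measured at the two simulated inputs.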