clay.float
Member
Greetings all,
In a project I'm working on, we have a 4-20 mA temperature transmitter linked to an S7-1200's on-board analog input (0-10 V, reading 2-10 V, 12-bit) via a 500 Ω dropping resistor. We're having some issues in the field where the reported temperature deviates from the reference calibration probe by more than we would expect.
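(For reference, here's roughly the scaling math we're doing, shown as a Python sketch rather than our actual PLC logic; it assumes the standard Siemens raw range of 0..27648 for the nominal 0-10 V input, with the resistor value and probe span from our setup.)

```python
# Sketch of the signal chain and scaling -- illustration only, the real
# logic lives in the PLC. Assumes 0..27648 raw counts = 0..10 V.

SHUNT_OHMS = 500.0                     # dropping resistor across the input
RAW_FULL_SCALE = 27648                 # Siemens nominal raw value at 10 V
SPAN_LOW_C, SPAN_HIGH_C = 0.0, 100.0   # transmitter range

def raw_to_temperature_c(raw: int) -> float:
    """Convert the raw analog word back to degC for a 4-20 mA transmitter
    read as 2-10 V through the 500 ohm resistor."""
    volts = raw / RAW_FULL_SCALE * 10.0       # 0..27648 -> 0..10 V
    milliamps = volts / SHUNT_OHMS * 1000.0   # V = I*R, so 2-10 V is 4-20 mA
    fraction = (milliamps - 4.0) / 16.0       # 4-20 mA -> 0..1 of span
    return SPAN_LOW_C + fraction * (SPAN_HIGH_C - SPAN_LOW_C)

# Example: mid-scale signal (12 mA ~ 6 V ~ raw 16589) should read ~50 degC
print(round(raw_to_temperature_c(16589), 1))
```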
Would we improve reliability, accuracy, precision, or general frame of mind by moving the temperature transmitter to an analog input like the one on the SM-1234 (0-20 mA, reading 4-20 mA, 13-bit (12 + sign))?
Specs for the S7-1200 on-board input give an accuracy of ±3%/3.5%, while the SM-1234 is ±0.1%/0.2%. On a 0-100 °C probe, a 3.5% error is over 6 °F, whereas a 0.2% error is only about 0.4 °F.
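(Working that out explicitly, a quick sketch of the arithmetic, assuming those accuracy figures apply to the full 0-100 °C span:)

```python
# Back-of-the-envelope check of what the accuracy specs mean in
# temperature terms on a 0-100 degC span.

SPAN_C = 100.0

def error_in_f(accuracy_pct: float) -> float:
    """Worst-case error in degF for a given full-scale accuracy percentage."""
    error_c = SPAN_C * accuracy_pct / 100.0
    return error_c * 9.0 / 5.0   # an error/span converts by 9/5 only (no +32)

for pct in (3.0, 3.5, 0.1, 0.2):
    print(f"{pct}% of span -> +/-{error_in_f(pct):.2f} degF")
# 3.0% -> 5.40, 3.5% -> 6.30, 0.1% -> 0.18, 0.2% -> 0.36
```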
Could that be the source of our inconsistencies? Otherwise, are there any specific advantages / disadvantages to using a native 0-20 mA input vs 0-10 V via a resistor?
Many thanks in advance!