I am trying to understand how the PID block behaves, specifically how the settings in the Scaling Tab affect the output of the Controller.
I loaded up a test program on a development system and for the purposes of this test I have the following:
Independent Gains: Kc = 1.0, Ki = 0, Kp = 0.
Setpoint = 50.0
Feedback (PV) = 10.0
Bias = 0.
CV Scaling: 0% = 0. 100% = 100.
Thus, Error = 40.
The reason for this setup is that I can (supposedly) calculate the Output easily, as Kc * Error.
On the Scaling Tab, if I have the Scaling set to 0 to 100 for BOTH the Unscaled PV and Engineering Unit Min / Max, I get a 40% output from the Controller.
But if I set the PV and Engineering Unit scaling to 0 to 500, I get an 8% output, even though the Error remains at 40!
So the Output is clearly not just the Error multiplied by Kc. What am I missing? I know it has something to do with scaling...
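For what it's worth, the two outputs I'm seeing (40% and 8%) are exactly what I'd expect if the block normalized the error to a percentage of the PV engineering span before applying Kc. This is just my own guess at the arithmetic, sketched below; the function name and structure are mine, not the vendor's internals:

```python
def p_only_output(sp, pv, kc, eu_min, eu_max):
    """P-only output (% CV), assuming error is taken as % of the PV span."""
    span = eu_max - eu_min
    error_pct = (sp - pv) / span * 100.0  # error expressed as % of span
    return kc * error_pct

# Span 0..100: raw error 40 is 40% of span -> 40% output
print(p_only_output(50.0, 10.0, 1.0, 0.0, 100.0))  # 40.0

# Span 0..500: same raw error 40 is only 8% of span -> 8% output
print(p_only_output(50.0, 10.0, 1.0, 0.0, 500.0))  # 8.0
```

Both of my test cases match this, which is why I suspect the scaling tab is re-expressing the error in percent terms rather than engineering units.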