Slow Acting Long Time Interval PID Tuning

I have seen PID algorithms in PLCs and in panel temperature controllers where the derivative effect produces the rebound you describe, but the sum of both corrections usually moves the control action in the correct direction.

But if you only regulate every 45 seconds, that first correction in the opposite direction may last too long and be harmful.

In the past I installed about a hundred large industrial furnaces, with tens of tons of material inside, and always used a fixed PID run period of 3 seconds.

I'm back at 5 seconds now; 45 seconds was much too long. I can't confirm a change immediately because the 4 tanks I have active are not calling for cooling at the moment.
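To make the run-period point concrete, here is a minimal positional PID sketch showing exactly where the sample period dt enters the integral and derivative terms. This is a generic textbook form, not the Logix PIDE algorithm, and the gains are purely illustrative:

```python
# Minimal positional-form PID sketch. The run period dt (3 s vs. 45 s)
# directly scales the I accumulation and the D difference quotient, so
# changing it changes the effective tuning unless the algorithm compensates.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt            # run period in seconds
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt                      # I term scales with dt
        if self.prev_error is None:
            derivative = 0.0                                  # no history on first call
        else:
            derivative = (error - self.prev_error) / self.dt  # D term divides by dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=3.0)
cv = pid.update(setpoint=40.0, pv=42.0)   # first call: derivative term is zero
```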
 
Peter, I do not know where you get a deadtime in hours from!
In the attached image I have added three lines and an ellipse. To me, as soon as the cooling valve output is at 0.00% (the light brown line I added at 7 pm), the temperature starts climbing. OK, maybe not instantly, but it is closer to five minutes than the hours-long deadtime Peter suggests.

This is correct: although the media inside the tank has a lot of thermal inertia, it responds quickly to the addition or removal of coolant. The graph is a good example: when the valve gets to 0% and shuts off completely, the media, which is always heating itself, will immediately change direction and start to climb again. On the top side, when the valve gets to about 6-8% open, the temperature rounds off and changes direction.
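That behavior can be sketched as a toy energy balance: a constant internal heat load plus cooling proportional to valve opening. All numbers below are illustrative placeholders, not measured plant values:

```python
# Toy energy-balance sketch of a self-heating tank: with the valve shut the
# internal heat load wins and temperature climbs; at roughly 8% open the
# cooling term overtakes it and temperature falls. Coefficients are made up.

def step_temp(temp, valve_pct, dt=60.0,
              self_heat=0.002,       # deg per s from the product's internal heat load
              cool_per_pct=0.0004):  # deg per s removed per % of valve opening
    return temp + dt * (self_heat - cool_per_pct * valve_pct)

t = 40.0
t_closed = step_temp(t, 0.0)   # valve shut: temperature climbs
t_open   = step_temp(t, 8.0)   # ~8% open: cooling overtakes the heat load
```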

Trying to keep this response as short as possible...

A couple other follow-up pieces of information from the OP include valve type (ball valve), presence of valve stiction leading to intentional 1% output resolution, and a programmed deadband.

... One could try the phase shift time (~120 minutes) as a starting point, or maybe less if there is concern with such a large change.

In summary, one or more open-loop step response tests, up and down, will provide more definitive information than what can be inferred from the closed-loop snapshot. In the absence of that, a starting point for incremental tuning is to increase P action and decrease I action; halfway to some IMC-based estimates for the configured PIDE is P=50, I=60, D=0, with zero deadband.
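For reference, one common IMC-PI tuning rule for a first-order-plus-deadtime model looks like this. The model numbers below are placeholders, not fitted values from this thread, and the result is in generic gain units, which will not necessarily match the configured PIDE's units:

```python
# One common IMC-PI form for a first-order-plus-deadtime (FOPDT) model:
#   Kc = tau / (K * (lambda + theta)),  Ti = tau
# K = process gain, tau = time constant, theta = dead time,
# lam = desired closed-loop time constant (the tuning knob).

def imc_pi(K, tau, theta, lam):
    kc = tau / (K * (lam + theta))
    ti = tau
    return kc, ti

# placeholder model, not a fit of the tank data
kc, ti = imc_pi(K=0.014, tau=16.7, theta=2.0, lam=8.0)
```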

So the output on the ball valve can be whatever I want it to be. Currently I have it chopped to 1% increments that vary from 0-8%, more or less; however, I can limit the span of the higher range of the valve (after the PID CV output) to give the valve more resolution in CV position. I have about 9900 positions to play with between 0 and 100%. I just need to be careful not to make the step too small, or else the valve won't move at all, or will return to the same position.
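That chopping can be sketched as post-PID output conditioning: clamp the CV to the usable span, then quantize to a fixed step so the valve only receives moves it can reliably make. The step size and span limits here are examples, not the actual configuration:

```python
# Post-PID output conditioning sketch: limit the CV span, then chop it to
# repeatable increments so stiction cannot eat sub-step moves.

def condition_cv(cv, lo=0.0, hi=8.0, step=1.0):
    cv = min(max(cv, lo), hi)        # limit the usable valve span
    return round(cv / step) * step   # quantize to the chosen step size

condition_cv(3.4)    # -> 3.0
condition_cv(12.0)   # -> 8.0 (clamped to the span)
```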

I uploaded the CSV file for Peter so he can run it through a Smith predictor. I wonder if he will come to something close to what you have suggested as a start.

Those responses are very good by the way 👨🏻‍🏫
 
DR: I must have missed the bit about the temp probe not being in the product; never heard of that before, really weird.

The temperature probe (pt100) is inside of a temperature probe well that extends 14" into the tank. The space between the jackets is insulated with a thermal barrier. It is well designed for this application.

I did have a problem this week where a tank was heated by steam and the temperature probe (an E+H TMR31) was right next to the condensate return line and got heated artificially. Very annoying, to say the least; I was not happy with the manufacturer.
 
How big is the tank (volume)? Is it water or water-ish?


What are typical in and out temperature pairs for the glycol at say 2, 4, 6, 8% CV?
 
Here 'tis.


The initial run (on my Linux laptop) is looping at:
Code:
sse = 332140.57780977595




Update: there is a problem with the data; I updated Tank_data.txt to replace 0.0 PV values with estimates, and added Tank_data.png (plotted from the bad data):
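For anyone curious, one way to do that replacement is to treat the 0.0 samples as missing and linearly interpolate between valid neighbors. This is a plain-Python sketch; the actual massaging script may have done something different:

```python
# Replace spurious 0.0 PV samples with linear-interpolation estimates
# between the nearest non-zero neighbors. Leading/trailing zeros with no
# neighbor on one side are left untouched.

def fill_zeros(pv):
    out = list(pv)
    n = len(out)
    for i, v in enumerate(out):
        if v == 0.0:
            j = i - 1
            while j >= 0 and out[j] == 0.0:   # nearest good sample to the left
                j -= 1
            k = i + 1
            while k < n and pv[k] == 0.0:     # nearest good sample to the right
                k += 1
            if 0 <= j and k < n:
                frac = (i - j) / (k - j)
                out[i] = out[j] + frac * (pv[k] - out[j])
    return out

fill_zeros([40.0, 0.0, 0.0, 43.0])   # interpolates the two zeros
```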

P.S. this really has my laptop fan earning its crust.


[Attachment: Tank_data.png]




with 0s replaced by estimates:


[Attachment: Tank_data_massaged.png]
 
Once more, with feeling. Data points that repeat and are contiguous are merged; drops the number of samples by a factor of 5+.
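That merge step might look something like this. The column layout (time, CV, PV) is an assumption; adjust to the actual Tank_data file format:

```python
# Collapse contiguous samples whose (CV, PV) pair repeats, keeping only the
# first row of each run; the timestamp column is ignored in the comparison.

def merge_repeats(rows):
    merged = []
    for row in rows:
        if not merged or row[1:] != merged[-1][1:]:   # compare (CV, PV) only
            merged.append(row)
    return merged

data = [(0, 2.0, 40.0), (1, 2.0, 40.0), (2, 2.0, 40.1), (3, 2.0, 40.1)]
merge_repeats(data)   # -> [(0, 2.0, 40.0), (2, 2.0, 40.1)]
```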


[Attachment: Tank_data_auto_massaged.png]
 
@drbitboy
I am surprised the program didn't do better, but the data isn't very good.
Now that you have fixed the data a bit I will look at it.
The program will print out a dead time. What was it? It looks like the control output starts to increase at about 700 seconds, and this causes the process variable to increase at about 2400 seconds. That is a dead time of about 1700 seconds. It doesn't look like the estimated value is taking the dead time into account.
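As a back-of-envelope check of that dead-time figure (assuming the CO starts moving at roughly 700 s and the PV responds at roughly 2400 s, which is consistent with the quoted ~1700 s):

```python
# Rough dead-time estimate: the lag between the control output starting to
# move and the process variable starting to respond. Onset times are the
# approximate values read from the trend, assumed to be in seconds.

co_onset = 700     # s, control output starts to increase
pv_onset = 2400    # s, process variable starts to respond
dead_time = pv_onset - co_onset   # -> 1700 s
```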
 
I'm surprised that this has not been thought out very well. Using ball valves is not recommended for this type of control; they are notoriously bad for it. I did misunderstand the bit about the temp probe. Also, the OP has not stated what this product is; I would be very concerned if this is food. The splitting problem is often caused by a number of things: using modified starch rather than natural (if present), the introduction of fat-based creams, too-quick heating to high temperatures, and too-slow or too-fast cooling; all of these can cause fat globules to form (splitting), as seems to be the problem here. No agitator? I think I read that right; all it needs is a slow scraper type. All our sauce cooking/cooling systems used separate vessels (cook in one, cool in the other) and we mainly used vacuum cooling; the product contained large particles (veg), so agitation had to be controlled to prevent break-up. If there is no agitation, then how can you ensure temperature validation with a probe stuck in the middle of the product? How do you control hot spots? The list goes on.
 
Hey Peter,


Update: Deadtime printed out is 0.006 (what are the units?)


Here is the STDOUT from the scripts, with many, many (many) intermediate iterations removed.


Best regards,


Brian T. Carcich



Code:
$ python SysID_SOPDT_drbitboy.py Tank_data_massaged.txt 
sse = 113461.0375571126
sse = 119575.10740949938
...
sse = 3.2094083008643457
sse = 3.1889388420142573
then
Code:
   final_simplex: (
array([[1.42646764e-02, 6.15809816e+00, 1.05357171e+01, 1.19369230e+01, 5.80300182e-03],
       [1.42646764e-02, 6.15809816e+00, 1.05357171e+01, 1.19369230e+01, 5.80300182e-03],
       [1.42646764e-02, 6.15809816e+00, 1.05357171e+01, 1.19369230e+01, 5.80300182e-03],
       [1.42646764e-02, 6.15809816e+00, 1.05357171e+01, 1.19369230e+01, 5.80300182e-03],
       [1.42646764e-02, 6.15809816e+00, 1.05357171e+01, 1.19369230e+01, 5.80300182e-03],
       [1.42646764e-02, 6.15809816e+00, 1.05357171e+01, 1.19369230e+01, 5.80300182e-03]]),
array([3.18893884, 3.18893884, 3.18893884, 3.18893884, 3.18893884, 3.18893884]))
            fun: 3.1889388420142573
       message: 'Optimization terminated successfully.'
          nfev: 529
           nit: 200
        status: 0
       success: True
             x: array([1.42646764e-02, 6.15809816e+00, 1.05357171e+01, 1.19369230e+01, 5.80300182e-03])
Is sse sum of squared errors?

Agreed that the data may not be the best for this approach; I wonder if the data are not suited to this algorithm (only a few dozen discrete PVs, and fewer than a dozen discrete CVs).


Update: Whoops! Sorry @PeterN, more data are printed out after the plot is closed:


Code:
RMS error          =   0.084
The open loop gain =   0.014 PV/%CO
Time constant 0    =   6.158
Time constant 1    =  10.536
Ambient PV         =  11.937 in PV units
Dead time          =   0.006
Time units are the same as provided in input file
The closed loop time constant =   1.054
The controller gain           = 1104.699 %CO/unit of error
The integrator time constant  =  16.694
The derivative time constant  =   3.886
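For what it's worth, the printed controller values appear to follow from the fitted SOPDT parameters via a lambda-tuning style calculation. The formulas below are inferred from the numbers above, not taken from the script source, so treat them as a reconstruction:

```python
# Reconstructing the printed PID numbers from the fitted SOPDT parameters
# (the x array from the optimizer output). Formulas are inferred, assuming:
#   tc = T1/10, Kc = (T0+T1)/(K*(tc+dead_time)), Ti = T0+T1, Td = T0*T1/(T0+T1)

K  = 0.0142646764    # open loop gain, PV/%CO
T0 = 6.15809816      # time constant 0
T1 = 10.5357171      # time constant 1
dt = 0.00580300182   # dead time

tc = T1 / 10.0                    # closed-loop time constant -> ~1.054
kc = (T0 + T1) / (K * (tc + dt))  # controller gain -> ~1104.7 %CO/unit of error
ti = T0 + T1                      # integrator time constant -> ~16.694
td = T0 * T1 / (T0 + T1)          # derivative time constant -> ~3.886
```

If these relationships are right, the fit's near-zero dead time (0.006, whatever its units) is what drives the very large controller gain.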
 
This is somewhat similar to the process @GrizzlyC is dealing with:


https://www.irjet.net/archives/V4/i4/IRJET-V4I4242.pdf


Level in that paper's modeling is analogous to heat in @GrizzlyC's process, where temperature is a proxy for heat*.


I'm working on a diagram to make this more clear; this is similar to a ChemE lab we did at Clarkson four and a half decades or more ago; I'll bet @OldChemEng remembers it.


* I am being sloppy when I say "heat;" sometimes it is the total heat, sometimes it is specific heat (not specific heat coefficient).
 
Yes, SSE is the sum of squared errors.
The problem that sometimes occurs is that there are local minima. I used two minimization routines in hopes of not getting stuck in a local minimum. The problem is that if a bad initial guess is made, the optimizing routine may always go to the wrong local minimum. Try again with a dead time of 180 minutes or so. You see, the algorithm may be trying to sync up with the wrong cycle.
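One cheap way to act on that advice is a multi-start wrapper: run the same minimizer from several initial guesses (e.g. different dead times) and keep the best result. The objective below is a toy stand-in with two minima, not the thread's SOPDT fit:

```python
# Multi-start minimization sketch: the toy objective has a local minimum
# near x = +2 and the global minimum near x = -2; starting from several
# guesses and keeping the lowest result avoids getting stuck in the wrong one.

from scipy.optimize import minimize

def sse(params):
    x = params[0]
    return (x**2 - 4.0)**2 + 0.1 * x   # two minima; the x < 0 one is lower

best = None
for guess in (-3.0, 0.5, 3.0):          # e.g. several dead-time guesses
    res = minimize(sse, [guess], method='Nelder-Mead')
    if best is None or res.fun < best.fun:
        best = res                       # keep the lowest-SSE result
```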

It is good to see someone making use of the code.

I have a video about basic system identification
https://www.youtube.com/watch?v=qzr6eL90Aok
This is a simple example with only two parameters. This is like being able to go any direction, where the SSE is the elevation. The goal is to go downhill to the lowest elevation. However, in the video you can see the valley floor does not have much of a slope in the time-constant direction, so almost any time constant will do unless the tolerance for being done is very small.

This is a more advanced example, with one gain and 3 poles.
It is basically the same: one is trying to go downhill in 4 dimensions.
https://youtu.be/lermULNDz3M
 

One watch-out as P gain is increased is the potential to bottom-out the valve for longer periods of time, adding non-linearity to the system and making it difficult to eliminate the oscillation. As drbitboy pointed out, one option is to increase the glycol temperature so that more flow is required for the same heat removal. Less desirable might be some means of reducing flow capacity to the control valve (e.g., a fixed flow restriction in the cooling fluid line). In either case I see it as better to move up the valve curve, especially with the favorable (though not necessarily ideal) ball-valve characteristic, than to frequently reach a CV limit in automatic control.

(Also, as pointed out by others, it is not ideal to accommodate a process equipment design flaw with the control system. Based on the limited information in this one snapshot and discussion, it would be speculative to make that charge, or to judge whether physical changes could be cost-justified.)

As far as output resolution goes, more is better, up to the point where it becomes unreliable. I presume that whatever mechanism is in place to get the PID CV output to the valve itself is fast and reliable, implying the specified resolution of about 0.01% is sufficient over the designed travel. Since you are dealing with a symptom common to valve stiction, my inclination would be to reduce the current 1% step as low as possible, while still ensuring repeatable valve movement at each step. This may be hard to determine precisely without a flow measurement. I would err on the side of a larger, yet predictable, output resolution as opposed to one so fine that the actual movement is not repeatable in time and distance.
 

Two suggestions: (1) apply smoothing functions to the plant data, and add a CV offset to create a smooth bottom to its oscillation instead of the discontinuity at zero; and (2) force a first-order-with-deadtime model so there are only three optimization parameters (if I understand correctly), run a series of evenly spaced grid "searches," and then plot the objective on a series of 3D plots to visually interpret the "surfaces" that simplex is operating on. This may help guide the starting values toward the minimum of interest.
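Suggestion (2) might be sketched like this: evaluate the SSE of a first-order-plus-deadtime model over an even grid of (gain, time constant, dead time) and take the lowest node as a starting value for the optimizer. The model, data, and grid values here are synthetic placeholders, not the tank data:

```python
# Grid "search" over the three FOPDT parameters, computing the objective at
# each node. The per-node SSE values could then be sliced into 3D surface
# plots to see where simplex should start.

import numpy as np

def fopdt_sse(k, tau, theta, t, cv, pv):
    """SSE between pv and an Euler-simulated first-order-plus-deadtime response."""
    dt_step = t[1] - t[0]
    lag = int(round(theta / dt_step))      # dead time in whole samples
    y = float(pv[0])
    est = []
    for i in range(len(t)):
        u = cv[max(i - lag, 0)]            # delayed input
        y += dt_step * (k * u - y) / tau   # Euler step of dy/dt = (k*u - y)/tau
        est.append(y)
    return float(np.sum((np.asarray(est) - pv) ** 2))

# synthetic step-response data generated with k=0.14, tau=6, theta=3
t  = np.arange(0.0, 50.0, 1.0)
cv = np.where(t >= 5.0, 10.0, 0.0)                             # step input at t=5
pv = 0.14 * 10.0 * (1.0 - np.exp(-np.clip(t - 8.0, 0.0, None) / 6.0))

grid = [(k, tau, theta, fopdt_sse(k, tau, theta, t, cv, pv))
        for k in (0.07, 0.14, 0.28)
        for tau in (3.0, 6.0, 12.0)
        for theta in (1.0, 3.0, 5.0)]
best = min(grid, key=lambda g: g[3])       # lowest-SSE node as a starting point
```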
 
