Key Insights to Control System Dynamics

The post Key Insights to Control System Dynamics first appeared on the ISA Interchange blog site.

The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.


Caroline Cisneros, a recent graduate of the University of Texas who became a protégé about a year ago, is gaining significant experience working with some of the best process control engineers in an advanced control applications group. Caroline asks some questions about the dynamics that play such a big role in improving control systems. The questions are basic but have enormous practical implications, as seen in the answers.

Caroline Cisneros’ Question

Is an increase/decrease in process gain, time constant, dead time, controller gain, reset, and rate good or bad in terms of effects on loop performance?

Greg McMillan’s Answers

This is an excellent question with widespread and significant implications. I offer here some key insights that can lead to better career and system performance. The first obstacle is terminology, which over the years has resulted in considerable misconceptions and a failure to recognize the source and nature of problems and the solutions needed. To overcome what is preventing a more common and better understanding, see the Control Talk Blog Understanding Terminology to Advance Yourself and the Automation Profession. Also, for much more on how all of these dynamic terms affect what you do with your PID and the consequences for loop performance, see the ISA Mentor post How to Optimize PID Settings and Options.

Process Gain

Increases in process gain can be helpful but challenging.

In distillation control, the tray that shows the largest temperature change for a change in reflux to feed ratio (largest process gain) in both directions provides the best temperature to be used as the controlled variable. This location offers much better control because of the increased sensitivity of the temperature, which is an inferential measurement of column composition. Tests are done in simulations and in plants to find the best locations for temperature sensors.


Join the ISA Mentor Program

The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.


In pH control, a titration curve (plot of pH versus ratio of reagent added to sample volume) with a slope that goes from flat to extremely steep due to strong acids and strong bases can create an incredibly large and variable process gain. The X axis (abscissa) is converted to a ratio of reagent flow to influent flow, taking into account engineering units. The shape stays the same, and if volumetric units are used and the concentrations are the same in the lab and plant, the X axis has the same numeric values. The slope of the curve is the process gain. The slope, and thus the process gain, can theoretically change by a factor of 10 for every pH unit of deviation from neutrality for a strong acid and strong base. The straight, nearly vertical line at 7 pH seen in a plot of a laboratory titration curve is actually another curve if you zoom in on the neutral region, as seen in Figure 1. If only a few data points are provided between 8 and 10 pH (a common problem), you will not see the curve. The lab needs to be instructed to dramatically reduce the size of the reagent addition as the titrated pH gets closer to 7 pH.


Figure 1: Titration Curve for Strong Acid and Strong Base


The steep slope provides incredible sensitivity to changes in hydrogen ion concentration, but less than ideal mixing will create enormous noise, and any stiction in the control valve will create enormous oscillations. The amplitude from stiction can be larger than 2 pH for even the best control valve. Even if we could have perfect mixing and a perfect control valve, we would not appreciate the orders of magnitude improvement in hydrogen ion control because we are only looking at what we measure, which is pH. Thus, for pH control we seek to have weak acids and weak bases and conjugate salts to moderate the slope of the titration curve. There is also a flow ratio gain that occurs for all composition and temperature control loops, as detailed in the Control Talk Blog Hidden Factor in our Most Important Control Loops.
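The factor-of-10 behavior of the titration curve slope can be reproduced numerically. The sketch below is a simplified charge-balance model, assuming a strong acid influent and strong base reagent at illustrative 0.01 M concentrations (all names and numbers here are my own assumptions, not from the post); it computes pH versus reagent-to-influent flow ratio and the local slope, which is the process gain seen by the pH loop:

```python
import math

KW = 1.0e-14  # water ionization constant at 25 deg C

def mixed_ph(acid_conc, base_conc, ratio):
    """pH of a strong acid influent after adding strong base reagent at
    the given reagent-to-influent volumetric flow ratio (charge balance)."""
    excess_acid = (acid_conc - base_conc * ratio) / (1.0 + ratio)
    # Solve [H+] - Kw/[H+] = excess_acid for the positive root
    h = (excess_acid + math.sqrt(excess_acid ** 2 + 4.0 * KW)) / 2.0
    return -math.log10(h)

def titration_slope(acid_conc, base_conc, ratio, dr=1e-6):
    """Local slope of the titration curve (pH per unit flow ratio)."""
    return (mixed_ph(acid_conc, base_conc, ratio + dr)
            - mixed_ph(acid_conc, base_conc, ratio - dr)) / (2.0 * dr)
```

Evaluating the slope at ratios a couple of pH units from neutrality versus far from it shows the slope growing by orders of magnitude as the mixture approaches 7 pH, consistent with the factor-of-10-per-pH-unit rule for strong acids and bases.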

Often the term “process gain” includes the effect of more than the process. The better term is “open loop gain,” which is the product of the manipulated variable gain (e.g., valve gain), the process gain, and the measurement gain (e.g., 100%/span). The valve gain (the slope of the installed flow characteristic, that is, the flow change in engineering units per signal change in percent) must not be too small (e.g., large disk or ball valve rotations where the installed characteristic is flat) or too large (e.g., a quick opening characteristic) because the stiction or backlash expressed as a percent of signal translates to a larger amount of errant flow. Oversized valves cause an even greater problem because of operation near the closed position, where stiction is greatest from seal and seat friction. Small measurement spans that cause a high measurement gain may be beneficial when the accuracy of the measurement is a percent of span. The use of thermocouple and RTD input cards, rather than transmitters with spans narrowed to the range of interest, introduces too much error. In conclusion, automation system gains must not be too small or too large. Too small a valve gain or measurement gain is problematic because of less sensitivity and greater error, reducing the ability to accurately see and completely correct a process change. Too high a valve gain is also bad because it increases the size of the flow change associated with backlash and stiction, which reduces the precision of a correction for a process change and increases the amplitude of oscillations (e.g., a limit cycle).
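The open loop gain product just described can be sketched in a few lines. The function and the example numbers below are illustrative assumptions for a temperature loop, not values from the post:

```python
def open_loop_gain(valve_gain, process_gain, span):
    """Open loop gain = valve gain x process gain x measurement gain.
    valve_gain:   slope of the installed flow characteristic (eng units/% signal)
    process_gain: PV change per unit flow change (eng units/eng units)
    span:         measurement span in PV engineering units (gain = 100%/span)"""
    measurement_gain = 100.0 / span  # %/eng unit
    return valve_gain * process_gain * measurement_gain

# Illustrative loop: 2 gpm/% valve, 0.5 degC/gpm process, 50 degC span
gain = open_loop_gain(2.0, 0.5, 50.0)  # dimensionless %/% seen by the PID
```

Because the result is %/%, the PID sees one dimensionless gain; a narrower span (higher measurement gain) or a steeper installed characteristic (higher valve gain) each scale it directly.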

Process Time Constant

An increase in the largest (primary) time constant in a self-regulating process (a process that reaches steady state in manual given no continuing upsets) is beneficial because it enables a large PID gain. The process time constant also slows down process input disturbances, giving the PID more time to catch up. While this proportionally decreases peak and integrated errors, a large time constant is perceived by some as bad. The tuning is more challenging, requiring greater patience and time commitment for the open loop tests that seek to identify the primary time constant. The time needed to identify the dynamics for tuning the loop can be reduced by 80% or more for some well mixed vessel temperature loops by identifying just the dead time and initial ramp rate (treating the process like an integrating process). Extensive test results have verified that a loop with a process time constant larger than 4 times the dead time should be classified as near-integrating. Integrating process tuning rules are consequently used to enable more immediate feedback correction that can potentially stop a process excursion within 4 dead times. The tuning parameter changes from a closed loop time constant for self-regulating process tuning rules to an arrest time for integrating process tuning rules in order to take advantage of the ability to increase the proportional and integral action to reject load disturbances.
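The 4-dead-times classification and the near-integrating shortcut can be sketched as follows. This is a simplified illustration of the heuristic above; the function names and example values are my own:

```python
def classify_loop(primary_time_constant, total_dead_time):
    """Apply the heuristic above: treat a self-regulating loop as
    near-integrating when its primary time constant exceeds 4 times the
    total loop dead time, so integrating process tuning rules apply."""
    if primary_time_constant > 4.0 * total_dead_time:
        return "near-integrating"
    return "self-regulating"

def near_integrating_gain(open_loop_gain, primary_time_constant):
    """Shortcut used with the initial-ramp-rate identification: the
    pseudo integrating process gain (1/sec) is the open loop gain
    divided by the primary time constant (sec)."""
    return open_loop_gain / primary_time_constant
```

For example, a well mixed vessel temperature loop with a 200 s primary time constant and 10 s of dead time classifies as near-integrating, so only the dead time and initial ramp rate need to be identified.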

While the largest time constant is beneficial if it is in the process, the second largest process time constant effectively creates dead time and is detrimental. It can be largely cancelled by the rate time setting. Going from a single loop to a cascade loop, where a secondary loop encloses a process time constant smaller than the largest time constant, converts a term with a bad effect (the secondary time constant increasing dead time in the original single loop) into a term with a good effect (a primary time constant slowing down disturbances in the secondary loop). The reduction in the dead time also decreases the ultimate period of the primary loop.

For true integrating and runaway processes, any time constant is detrimental. It becomes more important to cancel the time constant with a rate time equal to or larger than the time constant.

Any time constant in the automation system is detrimental. A measurement and control valve time constant slows down the recognition and correction, respectively, of a disturbance. An automation system time constant also effectively creates dead time. Signal filters and transmitter damping settings add time constants. See Figure 2 to help recognize the many time constants in an automation system.

A measurement time constant larger than the process time constant can be deceptive in that, for self-regulating processes, it enables a larger PID gain, and the amplitude of oscillations may look smaller due to the filtering action. However, the key realization is that the actual process error or amplitude in engineering units is larger, and the period of the oscillation is longer. All measurement and valve time constants should be less than 10% of the total loop dead time for the effect on loop performance to be negligible. This objective for a valve time constant is difficult to achieve in liquid flow, pressure control, and compressor surge control because the process dead times in these applications are so small. A valve time constant becomes large for large signal changes (e.g., > 40%) due to stroking time, particularly for large valves. A valve time constant becomes large for small signal changes (e.g., < 0.4%) due to backlash, stiction, and poor positioner and actuator sensitivity. For more on how to identify and fix valve response problems, see the article How to specify valves and positioners that don’t compromise control.
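The deceptive filtering effect of a measurement time constant can be quantified with the standard first-order frequency response. A sketch, assuming a sinusoidal oscillation with times in consistent units (seconds here):

```python
import math

def measured_amplitude_ratio(meas_time_constant, oscillation_period):
    """Fraction of the true process oscillation amplitude that a
    first-order measurement lag passes through: 1/sqrt(1 + (w*tau)^2),
    where w = 2*pi/period is the oscillation frequency."""
    w = 2.0 * math.pi / oscillation_period
    return 1.0 / math.sqrt(1.0 + (w * meas_time_constant) ** 2)
```

For example, a 60 second measurement lag on a 120 second oscillation passes only about 30% of the true amplitude, so the trend chart looks roughly three times better than the process actually is.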

Dead Time

Dead time anywhere in the loop is detrimental because it delays the recognition or correction of a change in the process variable. For a setpoint change, dead time in the manipulated variable (e.g., manipulated flow) or process delays the start of the change in the process, and dead time in the measurement or controller further delays the appearance of the process variable response to the setpoint change. The minimum possible peak error and integrated error for an unmeasured load disturbance are proportional to the total loop dead time and the dead time squared, respectively. The total loop dead time is the sum of all dead times in the loop.

The dead time from digital devices and algorithms is ½ the update rate (execution rate or scan time) plus the latency (the time required to communicate a change in the digital output after a change in the digital input). Most digital devices have negligible latency. Simulation tests that always have the disturbance arrive immediately before, instead of after, the PID execution do not show the full adverse effect of the PID execution rate, which leads to misconceptions about that effect. On average, the disturbance arrives in the middle of the interval between PID executions, which is consistent with the dead time being ½ the execution rate for negligible latency. The latency for complex modules with complex calculations may approach the update rate. The latency for most at-line analyzers is the analyzer cycle time, since the analysis is not completed until the end of the cycle. The result is a dead time that is 1.5 times the cycle time. Most of a time constant much smaller than the process time constant, or any time constant in an integrating process, can be taken as equivalent dead time. Since dead time is nearly always underestimated, I simply sum up all of the small time constants as equivalent dead time. The block diagram in Figure 2 shows many but not all of the sources of dead time.


Figure 2: Automation System and Process Dynamics in a Control Loop


The dead time from backlash and stiction is insidious in that it does not show up for step changes in signal. The dead time is the dead band or resolution limit divided by the signal rate of change.
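The dead-time rules above for digital devices and for backlash/stiction translate directly into two small formulas. A sketch, assuming times in seconds and signals in percent:

```python
def digital_dead_time(update_interval, latency=0.0):
    """Dead time from a digital device or algorithm: half the update
    (execution) interval plus the latency. For an at-line analyzer the
    latency is the full cycle time, giving 1.5 times the cycle time."""
    return 0.5 * update_interval + latency

def deadband_dead_time(dead_band_pct, signal_rate_pct_per_sec):
    """Dead time from backlash or stiction: the dead band (or resolution
    limit) in percent divided by the signal rate of change in %/sec.
    It vanishes for a step change (effectively infinite rate of change),
    which is why step tests hide it."""
    return dead_band_pct / signal_rate_pct_per_sec
```

For example, an analyzer with a 10-minute cycle contributes digital_dead_time(600, 600) = 900 seconds of dead time, and a 0.5% dead band on a signal ramping at 0.1%/sec contributes 5 seconds.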

Simulations typically do not have enough dead time because volumes are perfectly mixed and the dead time from transportation delays is missing, particularly the delays from dip tubes and piping to sensors or sample lines to analyzers, as well as valve response time, backlash, stiction, sensor lags, thermowell lags, transmitter damping, wireless update times, and analyzer cycle times.

For pH applications with extremely large and nonlinear process gains due to strong acids and strong bases, there is a particularly great need to minimize the total loop dead time. This reduces the pH excursion on the titration curve, reducing the extent of the operating point nonlinearity seen. Poor mixing, piping design, valve response, and coated, dehydrated, or old electrodes can introduce incredibly large dead times, killing a pH loop. My early specialty of pH control sensitized me to making sure the total system design, including equipment, agitation, and piping, would enable a pH loop to do its job by minimizing dead time. For much more on the implications for total system design from a very experience-oriented view, see the ISA book Advanced pH Measurement and Control.

PID Gain

The proportional mode provides a contribution to the PID output that is the error multiplied by the PID gain. Except for dead time dominant loops, humans tend not to use enough proportional action due to the perceived bad aspects given in the reasons listed below to decrease PID gain. For more on the missed opportunities, see the Control Talk Blog Surprising Gains from PID Gain.

Reasons to increase PID gain:

  1. Reduce peak and integrated errors from load disturbances.
  2. Add negative feedback action missing in the process (e.g., near and true integrating and runaway processes) and provide the needed overshoot of the final resting value of the PID output.
  3. Provide a sense of direction, since a decrease in error reverses the direction of the PID output.
  4. Reduce the dead time from dead band (e.g., backlash) and resolution (e.g., stiction).
  5. Reduce limit cycle amplitude from dead band in loops with two integrators.
  6. Eliminate oscillation from poor actuator and positioner sensitivity.
  7. Make setpoint response faster for batch operations potentially reducing cycle time.
  8. Make secondary loop faster in rejecting disturbances and meeting primary loop demands.
  9. Stop slow oscillations in near and true integrating and runaway processes (the product of gain and reset time must be greater than twice the inverse of the integrating process gain).
  10. Get the right valve open as the PID output approaches the split range point.
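The criterion in reason 9 can be checked directly. A sketch, assuming PID gain in %/%, reset time in seconds, and integrating process gain in 1/sec (names and example values are illustrative):

```python
def avoids_slow_oscillations(pid_gain, reset_time_sec, integrating_gain):
    """Check the rule cited above for near/true integrating and runaway
    processes: the product of PID gain and reset time must exceed twice
    the inverse of the integrating process gain to avoid slow
    oscillations from too much integral relative to proportional action."""
    return pid_gain * reset_time_sec > 2.0 / integrating_gain
```

Note the counterintuitive consequence: when slow oscillations appear, lowering the PID gain (the instinctive move) makes the inequality harder to satisfy; increasing the gain or the reset time is what stops them.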

Reasons to decrease PID gain:

  1. Reduce abrupt responses in dead time dominant loops and to setpoint changes in all loops, which upset operators and other loops. Setpoint rate limits on the analog output or secondary loop setpoint, plus external-reset feedback (dynamic reset limit) with the manipulated variable as BKCAL_IN, can smooth out these changes without needing to retune the PID.
  2. Reduce resonance and interaction. Making the faster loops faster and eliminating oscillations through better tuning and better valves may alleviate this concern.
  3. Increases in process gain, valve gain, or dead time, or decreases in the primary time constant, necessitate a lower PID gain. In general, a gain margin of 6 or more is advised, achieved by a closed loop time constant or arrest time of 3 or more times the dead time.
  4. Eliminate overshoot of the PID output final resting value in balanced and dead time dominant processes. While eliminating overshoot is useful here, overshoot is needed in other processes.
  5. Reduce amplification of noise. A better solution is reducing the source of the noise and using a judicious filter that is less than 10% of the dead time. Note that fluctuations in the PID output smaller than the resolution or sensitivity limit do not affect the process.
  6. Reduce faltering as the process variable approaches setpoint. Too much proportional action will momentarily halt the approach until integral action takes over, resuming the approach.

PID Reset Time

The integral mode provides a contribution to the PID output that is the integral of the error multiplied by the PID gain and divided by the reset time. External-reset feedback (dynamic reset limit) suspends this action (further changes in output from the integral mode) when the manipulated variable stops changing. Except for dead time dominant loops, humans tend to use too much integral action due to the perceived good aspects given in the reasons listed below to decrease reset time.

Reasons to increase PID reset time (decrease reset action):

  1. Reduce the lack of a sense of direction that causes continual change for the same error sign.
  2. Reduce continual movement, since reset is never satisfied (the error is never exactly zero).
  3. Reduce overshoot of setpoint.
  4. Prevent SIS and relief activation from high pressure or high temperature.
  5. Stop slow oscillations in near and true integrating and runaway processes (the product of gain and reset time must be greater than twice the inverse of the integrating process gain).
  6. Get the right valve open as the PID output approaches the split range point.

Reasons to decrease PID reset time (increase reset action):

  1. Reduce integrated errors from load disturbances.
  2. Eliminate offset from setpoint.
  3. Keep a valve from opening until setpoint is reached. This is sometimes stated as an objective for surge control, but it requires a larger margin between the PID setpoint and the actual surge curve, resulting in less efficient operation until user flows are increased, which closes the surge valve. A better solution is a smaller margin and the use of PID gain action to preemptively open the surge valve.
  4. Provide a gradual response with less reaction to noise.
  5. You have a dead time dominant process.
  6. You love Internal Model Control.

PID Rate Time

The derivative mode provides a contribution to the PID output that is the derivative of the error (PID on error structure) or the derivative of the process variable (PI on error and D on PV structure) multiplied by the PID gain and the rate time. It provides an anticipatory action, essentially projecting the PV one rate time into the future based on its rate of change. Some plants have mistakenly decided not to use derivative action anywhere due to the perceived bad aspects given in the reasons listed below to decrease rate time. Good tuning software could have prevented this bad practice of only allowing PI control (rate time always zero).
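The three mode contributions described in these sections can be pulled together in a minimal discrete sketch of an ISA Standard Form PID with P and I on error and D on PV. This is an illustration only, with assumed names and units; real DCS blocks add derivative filtering, output limits, anti-reset windup, and external-reset feedback:

```python
class SimplePID:
    """Minimal ISA Standard Form PID sketch:
    out = Kc * (e + integral(e)/Ti - Td * dPV/dt), with e = SP - PV."""

    def __init__(self, gain, reset_time, rate_time, dt):
        self.kc, self.ti, self.td, self.dt = gain, reset_time, rate_time, dt
        self.integral = 0.0   # running integral of error divided by Ti
        self.last_pv = None   # previous PV for the derivative on PV

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt / self.ti          # reset contribution
        # Derivative acts on PV so a setpoint change does not kick the output
        dpv = 0.0 if self.last_pv is None else (pv - self.last_pv) / self.dt
        self.last_pv = pv
        return self.kc * (error + self.integral - self.td * dpv)
```

With gain 2, reset time 100 s, no rate action, and a 1 s execution interval, a 10-unit error produces an output of 2 × (10 + 0.1) = 20.2% on the first execution: mostly proportional action, with the integral slowly accumulating underneath.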

Reasons to increase PID rate time:

  1. Provide anticipation of approach to setpoint reducing overshoot.
  2. Cancel out effect of secondary time constant.
  3. Reduce the dead time from backlash and stiction.
  4. Prevent runaway reactions.

Reasons to decrease PID rate time:

  1. Reduce abrupt responses in dead time dominant loops and to setpoint changes in all loops, which upset operators and other loops. Setpoint rate limits on the analog output or secondary loop setpoint, plus external-reset feedback (dynamic reset limit) with the manipulated variable as BKCAL_IN, can smooth out these changes without needing to retune the PID.
  2. Prevent oscillations from the rate time exceeding the reset time for the ISA Standard Form.
  3. Reduce amplification of noise. A better solution is reducing the source of the noise and using a judicious filter that is less than 10% of the dead time. Note that fluctuations in the PID output smaller than the resolution or sensitivity limit do not affect the process.
  4. Reduce the kick on a setpoint change. A better solution is to use the PID structure to eliminate derivative action on setpoint changes (e.g., PI on error and D on PV).



See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).



About the Author
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

Source: ISA News