Thank You Sponsors!

CANCOPPAS.COM

CBAUTOMATION.COM

CONVALPSI.COM

DAVISCONTROLS.COM

ELECTROZAD.COM

EVERESTAUTOMATION.COM

HCS1.COM

MAC-WELD.COM

SWAGELOK.COM

THERMON.COM

VANKO.NET

WESTECH-IND.COM

WIKA.CA

IIoT Solves Problems Previously Considered Unsolvable

The post IIoT Solves Problems Previously Considered Unsolvable first appeared on the ISA Interchange blog site.

This post was authored by Andrew Hird, vice president and general manager of digital transformation at Honeywell Process Solutions.

At a conference I attended, the keynote speaker observed that today is the first time in history that tools in our personal lives are better than those in our work lives. Through mobility and the pervasiveness of the Internet, we can book an Uber, change the thermostat in our homes, and even locate our children from anywhere in the world. But we cannot be confident that the boilers or seals or pumps at our multibillion-dollar plants are not going to fail prematurely.

That speaker was correct, but I do not think it will be this way for long.

The Internet of Things (IoT) that drives much of our personal lives is not only invading the world of manufacturing, it is becoming the world of manufacturing. I predict that within 10 years, every modern manufacturing facility will have taken advantage of the Industrial Internet of Things (IIoT). The corporations that do it sooner will improve competitiveness and increase revenues faster.

Early on, manufacturers questioned the value of the IIoT. Skeptics are a shrinking minority.

Manufacturers were consistent about the three areas where they expect to benefit from IIoT technologies: eliminating unexpected downtime, reducing off-spec production, and integrating the enterprise supply chain.

Early adopters of digitization are achieving excellent results, with profitability gains in the millions of dollars. Mineral processing companies have centralized process knowledge and provided collaborative support to remote locations. Refineries have increased overall equipment effectiveness by 1 to 2 percent. Chemical companies have reduced inventories and improved customer responsiveness. Paper companies have solved key knowledge retention issues.

Although exposure to and interest in IIoT are growing rapidly, what may still be unclear is how to implement it and how to get started now. There are three important aspects to adopting the IIoT at a facility. If you get all three right, you can extract huge value.

The first step is data consolidation. Disparate systems of data need to be integrated in an asset model to apply predictive equipment analytics. This includes process data in the distributed control system, asset data, plant environment data, and data stranded in unconnected systems, such as analyzers or rotating equipment. Next, you need to be able to move that data, protected by top cybersecurity, from the individual plant or unit into the enterprise system. There, you can use the advanced process analytics capabilities and expertise that exist across the organization to identify trends.

Those first two are the more easily accomplished of the three tasks. The third step is where IIoT value is created, and it is where very few IIoT suppliers excel: the ability to use deep process and equipment domain knowledge to transform analyzed data and trends into meaningful actions.

Make no mistake, the value the IIoT brings is not about the amount of data generated—there is already an enormous amount of data available—rather it is about what is being done with that data. It is about predictive modeling and prevention. It is about combining that diagnostic knowledge with proven technologies to help predict and prevent failure. Ultimately, it is about plants that can self-diagnose problems before they happen. It is about increased run time, products that consistently meet specifications, and fully integrated supply chains that can run more efficiently with real-time visibility.

It is about solving problems that were previously considered unsolvable.

About the Author
Andrew Hird is vice president and general manager of digital transformation at Honeywell Process Solutions. Andrew has been involved in the process industries for more than 20 years, holding engineering, sales, marketing, sales management, and general management positions.

Connect with Andrew
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

How Long Before Watson Comes to Engineering?

The post How Long Before Watson Comes to Engineering? first appeared on the ISA Interchange blog site.

This post was authored by Paul Gruhn, president of ISA 2019.

Hundreds of years ago, experienced master builders knew everything about their craft, designing and overseeing the building of pyramids, cathedrals, and bridges. Now the world is vastly more complicated, and no single person can know everything in a professional field.

For example, in the early 20th century, to become a doctor required a high school diploma and a one-year medical degree. By the end of the century, doctors needed a college degree, a four-year medical degree, and three-to-seven years residency training, which some believe is not enough. A doctor could spend all waking moments reading medical journals attempting to stay up to date. Unfortunately, there is too much information to absorb, and patients need to be treated.

The same is true in engineering, with debate about whether four years of college is enough or additional education should be required. Engineering has as many specialty fields as medicine. Think how many specialties there are in the field of process automation alone: analyzers, instrumentation, valves, control systems, control theory, alarm management, interface design, functional safety, cybersecurity, and more. No single person can know all these topics in depth. Companies may not be able to find and hire specialists in every field. Experienced baby boomers are retiring from industry, creating a skills gap. What is industry to do?

Enter computers and artificial intelligence. IBM’s Deep Blue computer beat Garry Kasparov in chess in 1997. IBM developed Watson, which won in Jeopardy in 2011. IBM then applied Watson in the field of medicine. Imagine what a doctor could do with access to every known ailment and treatment. Watson has such access, and doctors and nurses are starting to take advantage of it. If your doctors had access to Watson when treating you, would you want them to?

How long will it be before Watson—or something like it—makes it to the field of engineering? Probably not long. What will it mean to engineers when it does? What is the impact of Watson on doctors now and in the near future? If Watson knows everything, and you have access to it, just what are you going to study in school?

Think how our lives, learning, and knowledge have changed just in the past few decades with current technology. Rather than remembering our friends’ phone numbers, they are stored on our cell phones. People in my parent’s generation could do math calculations in their heads, but with today’s calculators we no longer have to burden ourselves. How many remember the multiplication tables? We used to remember how to drive somewhere, but GPS navigation has almost turned people into automatons. Driverless cars will probably mean that we will no longer even know how to do that. Engineers use many different design software packages, yet how many can duplicate or verify what the software is telling them? Would you want to drive across a new bridge designed by someone who could not verify what the design software recommended?

There are people in LinkedIn forums with titles implying they are in a responsible position. Yet they are asking innocent questions that clearly indicate they do not have the knowledge required to do their job. Rather than take the time to learn the topic, they merely ask strangers online (almost like asking “baby” Watson). Is this what the field of medicine and engineering are being reduced to? If doctors and engineers have access to Watson, what do they really need to know themselves? Heck, why even have the person in the middle at all? I will just skip the doctor and ask Watson myself. The drug store will eventually be automated and will dispense the medicine I need. Robots will eventually be able to perform the surgery I need, so why will we need surgeons? What will we need taxi and truck drivers for if vehicles become autonomous?

We are getting eerily close to the singularity with artificial intelligence. We may have just engineered ourselves out of existence.

About the Author
Paul Gruhn PE, CFSE, and ISA Life Fellow, is a global functional safety consultant with aeSolutions, a process safety, cybersecurity and automation consulting firm. As a globally recognized expert in process safety and safety instrumented systems, Paul has played a pivotal role in developing ISA safety standards, training courses and publications. He serves as a co-chair and long-time member of the ISA84 standard committee (on safety instrumented systems), and continues to develop and teach ISA courses on safety systems. He also developed the first commercial safety system modeling program. Paul has written two ISA textbooks, numerous chapters in other books and dozens of published articles. He is the primary author of the ISA book Safety Instrumented Systems: Design, Analysis, and Justification. He earned a bachelor of science degree in mechanical engineering from Illinois Institute of Technology, is a licensed Professional Engineer (PE) in Texas, and both a Certified Functional Safety Expert (CFSE) and an ISA84 safety instrumented systems expert.

Connect with Paul
LinkedIn | Twitter | Email

 

A version of this article also was published at InTech magazine



Source: ISA News

AutoQuiz: What Type of Switch Is Designed to Detect the End of Travel of a Valve?

The post AutoQuiz: What Type of Switch Is Designed to Detect the End of Travel of a Valve? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

What type of switch is designed to detect the end of travel of a valve?

a) a limit switch
b) a terminator
c) a solenoid
d) Form C contacts
e) none of the above

Click Here to Reveal the Answer

Answer B is not correct; a terminator is an electrical device that is placed at the end of a fieldbus trunk line to prevent reflections of electrical signals back through the cable.
Answer C is not correct; a solenoid is an electrical inductive device that converts electrical energy into linear motion.

Answer D is not correct; Form C contacts refer to a type of electrical contact that is composed of a normally closed and a normally open contact operated by the same device, with a common electrical connection.

The correct answer is A, “a limit switch.” Limit switch is a general term to describe the class of devices that are used to detect the end of travel of a valve, louver, or any other item that may be in motion. Limit switches are now also commonly used to detect jams in conveyor systems or to prove the position of a device or component (such as a gate or lane rail).

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

What Are the Opportunities for Nonlinear Control in Process Industry Applications?

The post What Are the Opportunities for Nonlinear Control in Process Industry Applications? first appeared on the ISA Interchange blog site.

The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. These questions come from Flavio Briguente and Syed Misbahuddin.

Model predictive control (MPC) has a proven history of success in providing extensive multivariable control and optimization. The applications in refineries are extensive, forcing the PID in most cases to take a backseat. These applications tend to employ very large MPC matrices and extensive optimization by linear programs (LP). The models are linear and may be switched for different product mixtures. The plants tend to have more constant production rates and greater linearity than seen in specialty chemical and biological processes.

MPC is also widely used in petrochemical plants. The applications in other parts of the process industry are increasing but tend to use much smaller MPC matrices focused on a unit operation. MPC offers dynamic decoupling, disturbance rejection, and constraint control. To do the same with PID requires dynamic compensation of decoupling and feedforward signals plus override control. The software to accomplish dynamic compensation for the PID is not well explained or widely used. Also, interactions and override control involving more than two process variables are more challenging than most practitioners can address. MPC is easier to tune and has an integrated LP for optimization.

Flavio Briguente is an advanced process control consultant at Evonik in North America, and is one of the original protégés of the ISA Mentor Program. Flavio has expertise in model predictive control and advanced PID control. He has worked at Rohm and Haas Company and Monsanto Company. At Monsanto, he was appointed to the manufacturing technologist program, and served as the process control lead at the Sao Jose dos Campos plant in Brazil and a technical reference for the company's South American sites. During his career, Flavio focused on different manufacturing processes, and made major contributions in optimization, advanced control strategies, Six Sigma and capital projects. He earned a chemical engineering degree from the University of São Paulo, a post-graduate degree in environmental engineering from FAAP, a master's degree in automation and robotics from the University of Taubate, and a PhD in material and manufacturing processes from Aeronautics Institute of Technology.

Syed Misbahuddin is an advanced process control engineer for a major specialty chemicals company with experience in model predictive control and advanced PID control. Before joining industry, he received a master’s degree in chemical engineering with a focus on neural network-based controls. Additionally, he is trained as a Six Sigma Black Belt, which focuses on utilizing statistical process controls for variability reduction. This combination helps him implement controls utilizing physics-based, as well as, data-driven methods.

The considerable experience and knowledge of Flavio and Syed blur the line between protégé and resource, leading to exceptionally technical and insightful questions and answers.

Flavio Briguente’s Questions

Can the existing MPC/APC techniques be applied to batch operations? Is there a nonlinear MPC application available? Is there a known case in operation in the chemical industry? What are the pros and cons of linear versus nonlinear MPC?

Mark Darby’s Answers

MPC was originally developed for continuous or semi-continuous processes. It is based on a receding horizon where the prediction and control horizons are fixed and shifted forward at each execution of the controller. Most MPCs include an optimizer that optimizes the steady state at the end of the horizon, which the dynamic part of the MPC steers towards.
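
To make the receding horizon idea concrete, here is a minimal sketch in Python. The first-order model, horizons, and tuning values are illustrative assumptions, not any vendor's MPC; constraints and move suppression are omitted for brevity.

```python
import numpy as np

# Minimal receding-horizon sketch: assumed first-order process y[k+1] = a*y[k] + b*u[k].
# Each execution re-solves a finite-horizon least-squares problem for the future MV
# moves, applies only the first move, and shifts the horizon forward.
a, b = 0.9, 0.1          # illustrative process model
P, M = 20, 5             # prediction and control horizons
setpoint = 1.0

def prediction_matrices():
    """Build y_pred = F*y0 + G*u, holding the MV at its last planned move beyond step M."""
    F = np.array([a ** (k + 1) for k in range(P)])
    G = np.zeros((P, M))
    for k in range(P):
        for j in range(k + 1):
            G[k, min(j, M - 1)] += b * a ** (k - j)
    return F, G

F, G = prediction_matrices()

def plan_moves(y0):
    target = np.full(P, setpoint) - F * y0           # error the future moves must remove
    moves, *_ = np.linalg.lstsq(G, target, rcond=None)
    return moves

y = 0.0
for _ in range(50):
    u = plan_moves(y)[0]     # apply only the first planned move, then re-plan next cycle
    y = a * y + b * u        # plant responds; the horizon recedes
print(f"final PV: {y:.3f}")
```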

Batch processes are by definition non-steady-state and typically have an end-point condition that must be met at batch end and usually have a trajectory over time that controlled variables (CVs) are desired to follow. As a result, the standard MPC algorithm is not appropriate for batch processes and must be modified (note: there may be exceptions to this based on the application).  I am aware of MPC batch products available in the market, but I have no experience with them. Due to the nonlinear nature of batch processes, especially those involving exothermic reaction, a nonlinear MPC may be necessary.

By far, the majority of MPCs applied industrially utilize a linear model. Many of the commercial linear packages include provisions for managing nonlinearities, such as using linearizing transformations, changing the gain, dynamics, or the models themselves. A typical approach is to apply a nonlinear static transformation to a manipulated variable or a controlled variable, commonly called Hammerstein and Wiener transformations. An example is characterizing the valve-flow relationship or controlling the logarithm of a distillation composition. Transformations are performed before or after the MPC engine (optimization) so that a linear optimization problem is retained.
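
As a rough illustration of placing static transformations outside the linear engine, the sketch below assumes an equal-percentage valve characteristic for the manipulated variable and a logarithmic transformation of an impurity composition for the controlled variable; the function names and numbers are invented for the example.

```python
import numpy as np

# Hammerstein side: linearize the MV by converting a requested flow into a valve
# position using an assumed installed equal-percentage characteristic.
# Wiener side: linearize the CV by controlling the logarithm of an impurity composition.

def flow_to_valve_position(flow_demand, flow_max=100.0, rangeability=50.0):
    """Invert an assumed equal-percentage characteristic: flow = flow_max * R**(x - 1)."""
    x = 1.0 + np.log(max(flow_demand, 1e-6) / flow_max) / np.log(rangeability)
    return float(np.clip(x, 0.0, 1.0))

def composition_to_cv(ppm_impurity):
    """Controlling log(composition) gives a roughly constant gain over decades of purity."""
    return np.log10(max(ppm_impurity, 1e-9))

# The linear MPC or PID works in the transformed coordinates ...
cv = composition_to_cv(35.0)                        # measured 35 ppm -> transformed CV
flow_demand = 42.0                                  # linear controller output, engineering units
valve_out = flow_to_valve_position(flow_demand)     # ... and the transform is undone at the MV
print(cv, valve_out)
```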

Given the success of modeling chemical processes, it may be surprising that linear, empirically developed models are still the norm. The reason is that it is still quicker and cheaper to develop an empirical model, and linear models most often perform well for the majority of processes, especially with the nonlinear capabilities mentioned previously.

Nonlinear MPC applications tend to be reserved for those applications where nonlinearities are present in both system gains and dynamic responses and the controller must operate at significantly different targets. Nonlinear MPC is routinely applied in polymer manufacturing.  These applications typically have less than five manipulated variables (MVs). A range of models have been used in nonlinear MPC, including neural nets, first principles, and hybrid models that combine first principle and empirical models.

A potential disadvantage of developing a nonlinear MPC application is the time necessary to develop and validate the model. If a first principle model is used, lower level PID loops must also be modeled if the dynamics are significant (i.e., cannot be ignored). With empirical modeling, the dynamics of the PID loops are embedded in the plant responses. Compared to a linear model, a nonlinear model will also require more computation time, so one would need to ensure that the controller can meet the required execution period based on the dynamics of the process and disturbances. In addition, there may be decisions around how to update the model, i.e., which parameters or biases to adjust. For these reasons, nonlinear MPC is reserved for those applications that cannot be adequately controlled with linear MPC.

My opinion is that we'll be seeing more nonlinear applications once it becomes easier to develop nonlinear models. I see hybrid models being critical to this. Known information would be incorporated and unknown parts would be described using empirical models built with a range of techniques that might include machine learning. Such an approach might actually reduce the time of model development compared to linear approaches.

Greg McMillan’s answers

MPC for batch operations can be achieved by translating the controlled variable from batch temperature or composition with a unidirectional response (e.g., increasing temperature or composition) to the slope of the batch profile (temperature or composition rate of change), as noted in my article Get the Most out of Your Batch. You then have a continuous type of process with a bidirectional response. There is still potentially a nonlinearity issue. For a perspective on the many challenges, see my blog Why batch processes are difficult.
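
A minimal sketch of that translation: the slope of the batch temperature profile, estimated over a trailing window, becomes the controlled variable. The window length and temperature values are illustrative.

```python
import numpy as np

# Translate a ramping batch temperature into its slope (rate of change) so the
# controller sees a bidirectional, continuous-like variable. A least-squares slope
# over a trailing window gives a noise-tolerant rate estimate.
def profile_slope(times, temps):
    """Rate of change (deg/min) from a trailing window of (time, temperature) samples."""
    t = np.asarray(times, dtype=float) - times[0]
    return float(np.polyfit(t, temps, 1)[0])

window_t = [0, 1, 2, 3, 4, 5]                     # minutes (illustrative)
window_T = [40.0, 40.6, 41.1, 41.7, 42.2, 42.8]   # batch temperature ramping up
print(profile_slope(window_t, window_T))           # ~0.56 deg/min becomes the CV
```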

I agree with Mark Darby that the use of hybrid systems where nonlinear models are integrated could be beneficial. My preference would be in the following order in terms of ability to understand and improve:

  1. first principle calculations
  2. simple signal characterizations
  3. principal components analysis (PCA) and partial least squares (PLS)
  4. neural networks (NN)

There is an opportunity to use principal components as neural network inputs to eliminate correlations between inputs and to reduce the number of inputs. With black box approaches like neural networks, you are much more vulnerable to inadequacies in the training data. More details about the use of NN and recent advances will be discussed in a subsequent question by Syed.
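
A short sketch of that idea using scikit-learn, with synthetic data containing two nearly collinear inputs; the component count and network size are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Correlated process inputs are reduced to a few principal components before
# feeding a small neural network. The data below is synthetic, for illustration only.
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 3))
X = np.column_stack([raw[:, 0],
                     0.9 * raw[:, 0] + 0.1 * raw[:, 1],   # nearly collinear with the first input
                     raw[:, 2]])
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.normal(size=500)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=2),                      # drop the redundant direction
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.score(X, y))                      # R^2 on the training data
```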

There is some synergy to be gained by using the best of what each of the above has to offer. In the literature and in practice, experts in a particular technology often do not see the benefit of other technologies. There are exceptions, as seen in papers referenced in my answer to the next question. I personally see benefits in running a first principle model (FPM) to understand causes and effects and to identify process gains. Often not realized is that the FPM parameters in a virtual plant (a digital twin running in real time with the same setpoints as the actual plant) can be adapted by use of an MPC. In the next section we will see how NN can be used to help a FPM.

Signal characterization is a valuable tool to address nonlinearities in the valve and process as detailed in my blog Unexpected benefits of signal characterizers. I tried using NN to predict pH for a mixture of weak acids and bases and found better results from the simple use of a signal characterizer. Part of the problem is that the process gain is inversely proportional to production rate as detailed in my blog Hidden factor in our most important control loops.
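
A signal characterizer is essentially a piecewise-linear lookup. The sketch below uses an invented breakpoint table (not a real titration curve) to map measured pH onto an approximately linear reagent-demand axis.

```python
import numpy as np

# Piecewise-linear characterization of a pH measurement: the breakpoints map pH back
# to a normalized reagent-demand axis so the loop sees a more constant gain.
ph_points     = [2.0, 4.0, 6.0, 7.0, 8.0, 10.0, 12.0]        # illustrative breakpoints
demand_points = [0.0, 0.10, 0.45, 0.50, 0.55, 0.90, 1.0]     # normalized reagent demand

def characterize(ph_measured):
    """Linearized CV for the pH loop: interpolate between breakpoints."""
    return float(np.interp(ph_measured, ph_points, demand_points))

print(characterize(6.5))   # measured pH 6.5 -> ~0.475 on the linearized axis
```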

Since dead time mismatch has a big effect on MPC performance as detailed in the ISA Mentor Post How to Improve Loop Performance for Dead Time Dominant Systems, an intelligent update of dead time simply based on production rate for a transportation delay can be beneficial.
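
The production-rate-based update can be as simple as dividing the plug-flow volume by the current volumetric flow; the numbers below are illustrative.

```python
# Transportation delay scales inversely with throughput:
# dead time = plug-flow volume / volumetric flow. Values are illustrative.
def transport_dead_time(volume_m3, flow_m3_per_min):
    return volume_m3 / max(flow_m3_per_min, 1e-6)   # minutes

print(transport_dead_time(volume_m3=3.0, flow_m3_per_min=1.5))    # 2.0 min at full rate
print(transport_dead_time(volume_m3=3.0, flow_m3_per_min=0.75))   # 4.0 min at half rate
```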

Syed Misbahuddin’s follow-up question

Recently, there has been an increased focus on the use of deep neural networks for artificial intelligence (AI) applications. Deep signifies many hidden layers. Recurrent neural networks have also been able in some cases to insure relationships are cause and effect rather than just correlations. They use a rather black box approach with models built from training data. How successful are deep neural networks in process control?

Greg McMillan’s answers

Pavilion Technologies in Austin has integrated neural networks with model predictive control. Successful applications in the optimization of ethanol processes were reported a decade ago. In the Pavilion 1996 white paper “The Process Perfector: The next step to Multivariable Control and Optimization,” it appears that process gains, possibly from step testing of a FPM or bump testing of the actual process for an MPC, were used as the starting point. The NN was then able to provide a nonlinear model of the dynamics given the steady-state gains. I am not sure what complexity of dynamics can be identified. The predictions of NN for continuous processes have had the most notable successes in plug flow processes, where there is no appreciable process time constant and the process dynamics simplify to a transportation delay. Examples of successes of NN for plug flow include dryer moisture, furnace CO, and kiln or catalytic reactor product composition prediction. Possible applications also exist for inline systems and sheets in pulp and paper processes and for extruders and static mixers.

While the incentive is greater for high value biologic products, there are challenges with models of biological processes due to multiplicative effects (neural networks and data analytic models assume additive effects). Almost every first principle model (FPM) has specific growth rate and product formation rate as the result of a multiplication of factors, each between 0 and 1, that detail the effect of temperature, pH, dissolved oxygen, glucose, amino acid (e.g., glutamine), and inhibitors (e.g., lactic acid). Thus, each factor changes the effect of every other factor. You can understand this by realizing that if the temperature is too high, cells are not going to grow and may in fact die. It does not matter if there is enough oxygen or glucose. Similarly, if there is not enough oxygen, it does not matter if all the other conditions are fine. One way to address this problem is to make all factors as close to one and as constant as possible except for the factor of greatest interest. It has been shown that data analytics can be used to identify the limitation and/or inhibition FPM parameter for one condition, such as the effect of glucose concentration via the Michaelis-Menten equation, if all other factors are constant and nearly one.
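
A sketch of that multiplicative structure, with an assumed Michaelis-Menten limitation for glucose and a simple inhibition term; the parameter names and values are illustrative, not from any specific FPM.

```python
# Illustrative multiplicative specific-growth-rate model: each factor is between 0 and 1,
# so any single limitation drags the product down regardless of the other conditions.
def specific_growth_rate(mu_max, f_temperature, f_ph, f_dissolved_o2,
                         glucose, K_s, inhibitor, K_i):
    f_glucose   = glucose / (K_s + glucose)      # Michaelis-Menten limitation term
    f_inhibitor = K_i / (K_i + inhibitor)        # simple inhibition term
    return mu_max * f_temperature * f_ph * f_dissolved_o2 * f_glucose * f_inhibitor

# With everything near one except dissolved oxygen, oxygen alone sets the rate:
print(specific_growth_rate(0.5, 0.98, 0.97, 0.10,
                           glucose=5.0, K_s=0.5, inhibitor=0.2, K_i=2.0))
```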

Process control is about changes in process inputs and consequential changes in process outputs. If there is no change, you cannot identify the process gain or dynamics. We know this is necessary in the identification of models for MPC and PID tuning and feedforward control. We often forget this in the data sets used to develop data models. A smart Design of Experiments (DOE) is really best to get the data sets to show changes in process outputs for changes in process inputs and to cover the range of interest. If setpoints are changed for different production rates and products, existing historical data may be rich enough if carefully pruned. Remember neural network models like statistical models are correlations and not cause and effect. Review by people knowledgeable in the process and control system is essential.

Time synchronization of process inputs with process outputs is needed for continuous but not necessarily for batch models, explaining the notable successes in predicting batch end points. Often delays are inserted on continuous process inputs. This is sufficient for plug flow volumes, such as dryers, where the dynamics are principally a transport delay. For back mixed volumes, such as vessels and columns, a time lag and delay should be used that are dependent upon production rate. Neural network (NN) models are more difficult to troubleshoot than data analytic models and are vulnerable to correlated inputs (data analytics benefits from principal component analysis and drill down to contributors). NN models can introduce localized reversal of slope and bizarre extrapolation beyond training data not seen in data analytics. Data analytics’ piecewise linear fit can successfully model nonlinear batch profiles. To me this is similar in principle to the use of signal characterizers to provide a piecewise fit of titration curves.
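
One illustrative way to synchronize a continuous input with the output before model fitting is to apply a dead time (from residence time at the current production rate) followed by a first-order lag for a back mixed volume; the sample counts below are arbitrary assumptions.

```python
import numpy as np

# Shift an input history by its dead time and pass it through a first-order lag so
# input and output samples line up before fitting a continuous-process data model.
def align_input(x, dead_time_samples, lag_samples):
    x = np.asarray(x, dtype=float)
    shifted = np.roll(x, dead_time_samples)          # transportation delay
    shifted[:dead_time_samples] = x[0]               # pad the start with the first value
    alpha = 1.0 / (1.0 + lag_samples)                # simple first-order filter constant
    y = np.empty_like(shifted)
    y[0] = shifted[0]
    for k in range(1, len(shifted)):
        y[k] = y[k - 1] + alpha * (shifted[k] - y[k - 1])
    return y

feed = np.random.default_rng(1).normal(1.0, 0.05, 200)     # synthetic input history
aligned = align_input(feed, dead_time_samples=8, lag_samples=15)
print(aligned[:3])
```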

Process inputs and outputs that are coincidental are an issue for process diagnostics and predictions by MVSPC and NN models. Coincidences can come and go and never even appear again. They can be caused by unmeasured disturbances (e.g., concentrations of unrealized inhibitors and contaminants), operator actions (e.g., largely unpredictable and unrepeatable), operating states (e.g., controllers not in highest mode or at output limits), weather (e.g., blue northers), poor installations (e.g., unsecured capillary blowing in wind), and just bad luck.

I found a 1998 Hydrocarbon Processing article by Aspen Technology Inc. “Applying neural networks” that provides practical guidance and opportunities for hybrid models.

The dynamics can be adapted and cause and effect relationships increased by advancements associated with recurrent neural networks as discussed in Chapter 2 Neural Networks with Feedback and Self-Organization in The Fundamentals of Computational Intelligence: System Approach by Mikhail Z. Zgurovsky and Yuriy P. Zaychenko (Springer 2016).

ISA Mentor Program

The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

Mark Darby’s answers

The companies best known for neural net-based controllers are Pavilion (now Rockwell) and AspenTech. There have been multiple papers and presentations by these companies over the past 20 years with many successful applications in polymers. It’s clear from reading these papers that their approaches have continued to evolve over time and standard approaches have been developed. Today both approaches incorporate first principles models and make extensive use of historical data. For polymer reactor applications, the FPM involves dynamic reaction heat and mass balance equations and historical data is used to develop steady-state property predictions. Process testing time is needed only to capture or confirm dynamic aspects of the models. 

Enhancements to the neural networks used in control applications have been reported.  AspenTech addressed the extrapolation challenges of neural nets with bounded derivatives.  Pavilion makes use of constrained neural nets in their fitting of models.

Rockwell describes a different approach to the modeling and control of a fed-batch ethanol process in a presentation made at the 2009 American Control Conference, titled “Industrial Application of Nonlinear Model Predictive Control Technology for Fuel Ethanol Fermentation.” The first step was the development of a kinetic model based on the structure of a FPM. Certain reaction parameters in the nonlinear state space model were modeled using a neural net. The online model is a more efficient nonlinear model, fit from the initial model, that handles nonlinear dynamics. Parameters are fit by a gain-constrained neural net. The nonlinear model is described in a Hydrocarbon Processing article titled Model predictive control for nonlinear processes with varying dynamics.

Regarding Syed’s follow-up question about deep neural networks: deep neural networks require more parameters, but techniques have been developed that help deal with this. I have not seen results in process control applications, but it will be interesting to see if these enhancements, developed and used by the Google types, will be useful for our industries.

In addition to Greg’s citations, I wanted to mention a few other articles that describe approaches to nonlinear control. A FPM-based nonlinear controller was developed by ExxonMobil, primarily for polymer applications. It is described in a paper presented at the Chemical Process Control VI conference (2001) titled “Evolution of a Nonlinear Model Predictive Controller,” and in a subsequent paper presented at another conference, Assessment and future directions of nonlinear model predictive control (2005), entitled NLMPC: A Platform for Optimal Control of Feed- or Product-Flexible Manufacturing. The motivation for a first principles model-based MPC for polymers included the nonlinearity associated with both gains and dynamics, constraint handling, control of new grades not previously produced, and the portability of the model/controller to other plants. In the modeling step, the estimation of model parameters in the FPM (parameter estimation) was cited as a challenge. State estimation of the CVs, in light of unmeasured disturbances, is considered essential for the model update (feedback step). Finally, the increased skills necessary to support and maintain the nonlinear controller were mentioned, in particular the ability to diagnose and correct convergence problems.

A hybrid modeling approach to batch processes is described in a 2007 conference presentation at the 8th International IFAC Symposium on Dynamics and Control of Process Systems by IPCOS, titled “An Efficient Approach for Efficient Modeling and Advanced Control of Chemical Batch Processes.” The motivation for the nonlinear controller is the nonlinear behavior of many batch processes. Here, fundamental relationships were used for mass and energy balances and an empirical model for the reaction energy (which includes the kinetics), which was fit from historical data. The controller used the MPC structure, modified for the batch process. Future predictions of the CVs in the controller were made using the hybrid model, whereas the dynamic controller incorporated linearizations of the hybrid model.

I think it is fair to say that there is a lack of nonlinear solvers tailored to hybrid modeling. An exception is the freely available software environments APMonitor and GEKKO, developed by John Hedengren's group at BYU. They solve dynamic optimization problems with first principle or hybrid models and have built-in functions for model building, updating, and control. Here is a link to the website that contains references and videos for a range of nonlinear applications, including a batch distillation application.
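
For readers who want to experiment with GEKKO, here is a minimal dynamic-control sketch; the process model, tuning values, and horizon are illustrative assumptions rather than anything from the applications cited above.

```python
import numpy as np
from gekko import GEKKO   # pip install gekko

# Minimal nonlinear dynamic-control sketch with an assumed first-order process that
# has a nonlinear gain. Values are illustrative only.
m = GEKKO(remote=False)
m.time = np.linspace(0, 10, 41)

u = m.MV(value=10, lb=1, ub=100)        # manipulated variable
u.STATUS = 1                             # let the optimizer move it
u.DCOST = 0.05                           # penalize MV moves

x = m.CV(value=0)                        # controlled variable
x.STATUS = 1
x.SP = 5.0                               # setpoint

tau, k = 3.0, 0.2
m.Equation(tau * x.dt() == -x + k * u**0.8)   # assumed nonlinear process model

m.options.IMODE = 6                      # simultaneous dynamic control
m.options.CV_TYPE = 2                    # squared-error objective against SP
m.solve(disp=False)
print(u.value[0], x.value[-1])
```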

Hunter Vegas’ answers

I worked with neural networks quite a bit when they first came out in the late 1990s. I have not worked with them much since, but I will pass on my findings, which I expect are as applicable now as they were then.

Neural networks sound useful in principle. Give a neural network a pile of training data, let it ‘discover’ correlations between the inputs and the output data, then reverse those correlations in order to create a model which can be used for control. Unfortunately, actually creating such a neural network and using it for control is much harder than it looks. Some reasons for this are:

  1. Finding training data is hard. Most of the time the system is running fairly normally and tends to draw flat lines. Only during upsets does it actually move around and provide the neural network useful information. Therefore you only want to feed the network upset data to train it. Then you need to find more upset data to test it. Finding that much upset data is not easy to do. (If you train it on normal data, the neural network learns to draw straight lines, which does not do much for control.)
  2. Finding the correlations is not so easy. The marketing literature suggests you just feed it the data and the network “figures it out.”  In reality that doesn’t usually happen. It may be that the correlations involve the derivative of an input, or the correlation is shifted in time, or perhaps there is correlation of a mathematical combination of inputs involving variables with different time shifts. Long story short – the system usually doesn’t ‘figure it out’ – YOU DO!  After playing with it for a while and testing and re-testing data you will start to see the correlations yourself which allows you to help the network focus on information that matters.  In many cases you actually figure out the correlation and the neural network just backs you up to confirm it.
  3. Implementing a multivariable controller is always a challenge. The more variables you add, the lower the reliability becomes, because you have to make the controller smart enough to handle input data failures gracefully. So even when you have a model, turning it into a robust controller that can manipulate the process is not always such an easy thing.

I am not saying neural networks do not work – I actually had very good success with them. However, when all was said and done, I pretty much figured out the correlations myself through trial and error and was able to utilize that information to improve control. I wrote a paper on the topic and won an ISA award because neural networks were all the rage at that time, but the reality was I just used the software to reinforce what I learned during the ‘network training’ process.

Additional Mentor Program Resources

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

About the Author
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

Connect with Greg
LinkedIn



Source: ISA News

Proportional Integral Estimator-Based Clock Synchronization Protocol for Wireless Sensor Networks [technical]

The post Proportional Integral Estimator-Based Clock Synchronization Protocol for Wireless Sensor Networks [technical] first appeared on the ISA Interchange blog site.

This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.

Abstract: Clock synchronization is an issue of vital importance in applications of wireless sensor networks (WSNs). This paper proposes a proportional integral estimator-based protocol (EBP) to achieve clock synchronization for wireless sensor networks. As each local clock skew gradually drifts, synchronization accuracy will decline over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of the clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, since a truly synchronous protocol is unrealistic in practice. Numerical simulations illustrate the performance of the proposed protocol.
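
As a generic illustration only (not the protocol proposed in the article), the sketch below shows proportional-integral correction of a clock-rate compensation factor from the offsets observed at each synchronization exchange; the gains, sync period, and skew value are assumed.

```python
# Generic PI skew-tracking sketch: a node applies a rate compensation factor to its
# logical clock and adjusts it with proportional plus integral action on the offset
# observed against a neighbor at each synchronization exchange.
kp, ki = 0.6, 0.2                  # assumed estimator gains
period = 1.0                       # seconds between sync messages (assumed)
true_relative_skew = 1.0005        # neighbor's clock runs 500 ppm fast (assumed)

integral, error = 0.0, 0.0
logical = neighbor = 0.0
for _ in range(200):
    neighbor += period * true_relative_skew
    comp = 1.0 + kp * error + ki * integral    # PI output sets the rate compensation
    logical += period * comp
    error = neighbor - logical                 # offset seen in the next sync message
    integral += error

print(f"compensation {comp:.6f} vs. true skew {true_relative_skew}, residual offset {error:+.1e}")
```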

Free Bonus! To read the full version of this ISA Transactions article, click here.

 

Enjoy this technical resource article? Join ISA and get free access to all ISA Transactions articles as well as a wealth of other technical content, plus professional networking and discounts on technical training, books, conferences, and professional certification.

Click here to join ISA … learn, advance, succeed!

 

 

2006-2019 Elsevier Science Ltd. All rights reserved.

 

 



Source: ISA News

AutoQuiz: What Is the Benefit of Industrial RFID Tags Over Barcode Systems?

The post AutoQuiz: What Is the Benefit of Industrial RFID Tags Over Barcode Systems? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

When compared to traditional barcode systems, a primary benefit of radio frequency identification (RFID) tags is:

a) low-voltage power drawn from the battery
b) faster data transmission that can be read from farther away
c) the number of software applications that can process RFID data
d) cost savings of tags
e) none of the above

Click Here to Reveal the Answer

RFID tags consist of silicon chips and an antenna that can transmit data to a wireless receiver. Unlike barcodes, which need to be scanned manually and read individually, RFID tags do not require line-of-sight for reading. It is possible to automatically read hundreds of tags a second within the field of a wireless reading device.

The other answers may describe secondary benefits in some cases, but each is highly dependent upon the type and performance of different manufacturers’ tags, readers, and software. In general, the physical RFID tags are more expensive than other forms of ID, such as barcodes, but RFID tags can have read/write capability as well as the ability to store many pieces of data, such as location or expiration dates.

The correct answer is B, “faster data transmission that can be read from farther away.”

Reference: Nicholas Sands, P.E., CAP and Ian Verhappen, P.Eng., CAP., A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

How Does Heat Transfer Affect Operation of Your Natural Gas or Crude Oil Pipeline?

The post How Does Heat Transfer Affect Operation of Your Natural Gas or Crude Oil Pipeline? first appeared on the ISA Interchange blog site.

This guest blog post was written by Edward J. Farmer, PE, industrial process expert and author of the ISA book Detecting Leaks in Pipelines. To download a free excerpt from Detecting Leaks in Pipelines, click here. If you would like more information on how to obtain a copy of the book, click this link.

There are four methods of heat transfer in most pipeline situations. Each involves significantly different methodology and hence different calculations. It’s hard to condense a book (e.g., J. P. Holman’s Heat Transfer, with 550 pages) into a thousand words. Hopefully this overview will convey the concepts without getting lost in the specifics.

There are usually several thermal “environments” along a pipeline. Suppose it starts in 600 feet of water at an offshore production well and follows the seabed up a continental shelf, onto land where in some cases it is buried in earth and in others runs exposed on the surface or in above-grade racks. Heat transfer takes place in each environment, but the mechanism can be quite different. Depending on the specific situation some heat transfer methods may be active or absent.

Advection

Heat transfer by advection involves moving something from one place to another. A common example is carrying a hot water bottle from the bathroom to a bed. The more common pipeline situation is pumping hot oil into a cold pipeline. The pumping activity moves some amount of heat, contained in the product, from a production well or plant into a pipeline where it fills an empty pipe or displaces existing fill.

It can be managed by controlling the mechanical means enabling the transfer. For example, when hot oil is pumped into a pipeline containing colder oil, the heat energy in the pipeline segment increases with the flow rate. The rate of heat transfer, the heat flux rate, is proportional to the characteristics of the fluid (specific heat and density) and the velocity at which it is being pumped.
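
A back-of-the-envelope sketch of that advective heat flux; the fluid properties, pipe area, and temperatures are illustrative assumptions.

```python
# Advective heat delivered by pumping hot fluid into a cooler line segment:
# flux scales with density, specific heat, velocity (volumetric flow) and the
# temperature difference. Property values below are rough illustrations.
def advective_heat_flux_kw(density_kg_m3, cp_kj_kg_k, velocity_m_s,
                           pipe_area_m2, t_in_c, t_line_c):
    vol_flow = velocity_m_s * pipe_area_m2                              # m^3/s
    return density_kg_m3 * cp_kj_kg_k * vol_flow * (t_in_c - t_line_c)  # kJ/s = kW

print(advective_heat_flux_kw(850.0, 2.0, 1.5, 0.073, 60.0, 20.0))  # warm crude into a 12 in line
```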

Conduction

Heat transfers by conduction when there is a thermal path between areas with different temperatures. A buried pipe, for example, has intimate contact with the backfill, setting up a thermal path from the usually warm petroleum product through the pipe wall and insulation, into the earth or water surrounding it. Sometimes heat transfer from a non-flowing (static) fluid in a pipeline becomes important for assessing its changing hydraulic conditions and from them, what may be necessary to reinstate motion after an outage.

The common methodology is to consider the fluid to be a set of concentric annuli, each containing some amount of thermal (heat) energy and also providing some resistance to heat conduction. It’s a problem of inner annuli transferring heat into outer annuli being impeded by the insulating qualities (thermal resistance) of the annuli in between.
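
A minimal sketch of that annular-resistance calculation for steady radial conduction; the radii and thermal conductivities are illustrative, not from any particular pipeline.

```python
import math

# Steady radial conduction through concentric cylindrical layers (pipe wall,
# insulation, surrounding earth treated as one more annulus): each layer adds a
# thermal resistance R = ln(r_outer/r_inner) / (2*pi*k*L), and the heat loss is
# the temperature difference divided by the total resistance.
def radial_heat_loss_w(t_inner_c, t_outer_c, length_m, layers):
    """layers: list of (r_inner_m, r_outer_m, k_w_per_m_k) ordered from the inside out."""
    total_r = sum(math.log(ro / ri) / (2 * math.pi * k * length_m)
                  for ri, ro, k in layers)
    return (t_inner_c - t_outer_c) / total_r

layers = [(0.146, 0.152, 45.0),    # steel pipe wall (illustrative radii, m)
          (0.152, 0.202, 0.04),    # 50 mm of insulation
          (0.202, 1.0, 1.2)]       # surrounding soil out to an assumed far radius
print(radial_heat_loss_w(60.0, 10.0, length_m=1000.0, layers=layers))   # watts per km of line
```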

If you would like more information on how to purchase Detecting Leaks in Pipelines, click this link. To download a free 37-page excerpt from the book, click here.

Convection

Convection results from fluid motion over a thermally active surface. Common examples include wind on an exposed or elevated pipeline, or ocean currents (e.g., due to tidal flows) over submerged pipe, or the flow in the pipe passing over the internal surface of the pipe containing it.

Radiation

Radiation is the movement of energy by electromagnetic radiation. A common example is heating of exposed piping by sunlight shining on it. A hot pipeline may also radiate energy to its environment and even out into space.

There are also some events that can affect the temperature in a pipeline. For example, suppose the physical characteristics of the flow environment change – perhaps due to a leak decreasing the pressure or the operation of some process control equipment (e.g., a pressure safety valve). Expanding the fluid’s environment produces fluid expansion, which can result in Joule-Thomson cooling, essentially the cooling effect used in household refrigerators. Whether this becomes a problem depends on the specifics of the fluid and situation. Freezing a valve intended for some particular function can produce process disturbances.

Mapping process flow

Mapping process flow on a pressure-enthalpy diagram can be very useful in studying and identifying regions of operation that are sensitive to various temperature related problems. A long section of exposed pipe can heat a fluid beyond the capability of a meter to accurately measure it. I did a paper years ago on an ammonia plant with a transient heat pickup problem and it was interesting stuff.

Why does any of this matter? After all it’s in the pipe so who cares about the details?

  • What’s in the pipe might not change, but how it presents itself to process equipment certainly can. Transporting a gas can be significantly different than transporting a liquid, yet a liquid can become a gas as environmental and operating conditions change. NGL, for example, may be liquid until some unanticipated condition occurs, at which point it becomes multiphase. Meters and valves intended for liquid flow don’t produce accurate or appropriate results in multiphase or gas flow situations. The errors in measurement, for example, can impact custody transfer and leak detection.
  • A pressure safety valve sized for gas might have performance issues when what is intended as a natural gas stream becomes two-phase or liquid instead. The cause of this might be the kind of conditions the monitoring or control system was intended to detect and mitigate.
  • In some cases, operating conditions in the pipe, what is liquid and what is gas, can result in concentration of corrosive fluids at places where the design and pipe choice envisioned a nice, dry gas. Purpose changes over time, and with it the sensitivity to various kinds of risk.

Heat transfer issues are not all that common on well-designed pipelines operating according to the original intentions, but awareness of the issues is important in evaluating changes in fluids, operating conditions, flow rates, safety systems, and objectives. Even if you are not charged with servicing the details it is good to understand the generalities so these “demon details” can be anticipated and controlled when the need occurs.

About the Author
Edward Farmer has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. His work has produced three books, numerous articles, and four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. He is the owner and president of EFA Technologies, Inc., manufacturer of the LeakNet family of pipeline leak detection products.

Connect with Ed
LinkedIn | Email

 



Source: ISA News

Webinar Recording: Future-Proof Your Automation Solutions with Standards-Based Design

The post Webinar Recording: Future-Proof Your Automation Solutions with Standards-Based Design first appeared on the ISA Interchange blog site.

This ISA webinar on standards-based design was presented by Rick Slaugenhaupt, a consultant for MAVERICK Technologies, and Nicholas P. Sands, P.E., CAP, an ISA leader, author, and educator who serves as senior manufacturing technology fellow at DuPont.


Capital improvement opportunities for automation are infrequent at best, so we need to squeeze every bit of possible value from these efforts when we get the chance. Solving long-standing problems while adding new enabling features and technologies will most certainly be high on the list of expectations, but what about agility and longevity? Just like the shiny new car that seems dull, outdated, and under-powered in a few short years, automation systems that aren’t easily updated will inevitably frustrate users looking for the features and performance needed to stay competitive in a world market. We discuss a practical way to achieve sustainable benefits with the proper application of ISA standards during the design phase.

Topics included in this webinar:

  • Common obstacles to modernizing operations
  • Methods for achieving capable, agile and sustainable solutions
  • Several common ISA standards and their uses

Would you like to view ISA Standards? ISA members can access all standards and technical reports as a benefit of membership. Click this link to go to the ISA Standards web page.

About the Presenter
Rick Slaugenhaupt is a consultant for MAVERICK Technologies with more than 30 years of industrial controls experience. Prior to joining MAVERICK, he served as a plant engineer, software designer and independent consultant for small and large companies alike. His work has involved all aspects of engineering design & construction of production equipment, processes and systems  for continuous and discrete manufacturing, metals, powders, chemicals, water treatment, facilities management and security.

Connect with Rick
LinkedIn

About the Presenter
Nicholas P. Sands, P.E., CAP, serves as senior manufacturing technology fellow at DuPont, where he applies his expertise in automation and process control for the DuPont Safety and Construction business (Kevlar, Nomex, and Tyvek). During his career at DuPont, Sands has worked on or led the development of several corporate standards and best practices in the areas of automation competency, safety instrumented systems, alarm management, and process safety. Nick is: an ISA Fellow; co-chair of the ISA18 committee on alarm management; a director of the ISA101 committee on human machine interface; a director of the ISA84 committee on safety instrumented systems; and secretary of the IEC (International Electrotechnical Commission) committee that published the alarm management standard IEC62682. He is a former ISA Vice President of Standards and Practices and former ISA Vice President of Professional Development, and was a significant contributor to the development of ISA’s Certified Automation Professional program. He has written more than 40 articles and papers on alarm management, safety instrumented systems, and professional development. Nick is a licensed engineer in the state of Delaware. He earned a bachelor of science degree in chemical engineering at Virginia Tech.

Connect with Nick
LinkedIn | Email

 



Source: ISA News

AutoQuiz: When Should Operating Instructions Be Developed for an Industrial Process?

The post AutoQuiz: When Should Operating Instructions Be Developed for an Industrial Process? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Which of the following is true of operating instructions?

a) they require a set of books covering the total operation
b) OSHA requires operating procedures for all installations
c) they may be included in a functional specification or operating description
d) ISA standards guide in the development of operating instructions
e) none of the above

Click Here to Reveal the Answer

Answers B and D are not correct. There are no ISA standards available to aid in developing operating instructions. OSHA requires operating procedures for installations handling hazardous chemicals, but not for all installations.

Answer A is not correct. Operating instructions can be printed (e.g., books or manuals) or electronic (e.g., PDF) and can cover anything from one step of the process to an entire operation. In each use, operating procedures are usually limited to a single operation or set of operations.

The correct answer is C, “They may be included in a functional specification or operating description.” Operating instructions may be included in a functional specification or operating description and may range from a few pages describing how to operate one part of a plant to a complete set of directions covering all parts of a facility.

Reference: Nicholas Sands, P.E., CAP, and Ian Verhappen, P.Eng., CAP, A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

OPC UA and IEC 61131-3: The Integration of Control and the HMI

The post OPC UA and IEC 61131-3: The Integration of Control and the HMI first appeared on the ISA Interchange blog site.

This post was authored by Gary L. Pratt, PE, president of ControlSphere Engineering, and Timothy L. Triplett, PE, founder and chief executive officer at Sacks Parente Golf and previously president of Coherent Technologies.

In the beginning…the world was flat. Or at least the industrial control system (ICS) programming namespace portion of the world was flat. In the 1970s when systems consisted of only a small number of tags, tag names could be simple (like T2). However, as systems grew in the 1980s, tag naming quickly became unwieldy. Engineers first began to add pseudohierarchy to names by embedding underscores (like M123_T2).

Then, in the 1990s, data structures (i.e., user-defined data structures [UDTs]) were introduced to the ICS programming world and became very popular over the next decade. With data structures, tags could now be structured, and multiple instances could be differentiated with the “dot” convention (M123.T2). However, this still required creating and instantiating structures and copying values into and out of these structures. In this decade, new standards allow direct access to function block hierarchical I/O, eliminating the need for UDTs, tags, and copying data.
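To make this progression concrete, the following Structured Text sketch contrasts the three eras: a flat underscore-style tag, a UDT instance accessed with the dot convention, and a function block whose hierarchical I/O is read directly. All names here (MixerData, FB_Mixer, Mixer123) are hypothetical and chosen only for illustration.

(* 1990s approach: a user-defined data structure (UDT) *)
TYPE MixerData :
    STRUCT
        T2      : REAL;   (* temperature *)
        Running : BOOL;
    END_STRUCT
END_TYPE

(* Current approach: a function block with its own hierarchical I/O *)
FUNCTION_BLOCK FB_Mixer
    VAR_INPUT
        Setpoint : REAL;
    END_VAR
    VAR_OUTPUT
        T2      : REAL;
        Running : BOOL;
    END_VAR
    (* control logic for the mixer would go here *)
END_FUNCTION_BLOCK

PROGRAM Main
    VAR
        M123_T2  : REAL;       (* 1980s: flat tag with embedded pseudohierarchy *)
        M123     : MixerData;  (* 1990s: UDT instance, accessed as M123.T2 *)
        Mixer123 : FB_Mixer;   (* today: FB instance with directly accessible I/O *)
        Temp     : REAL;
    END_VAR

    Mixer123(Setpoint := 80.0);
    Temp := Mixer123.T2;       (* read the FB output directly; no copy into a UDT *)
END_PROGRAM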

Similarly, in the beginning, there was ladder logic. It was great for representing electrical equipment and simple discrete logic. However, as programming size and complexity increased, the choice of industrial control languages offered by controller vendors did not keep pace. As a result, ladder logic was recruited for purposes for which it was never intended and was poorly suited. Fortunately, the latest standards provide programming languages and techniques that fill this gap, give 21st century ICS programmers the tools they need to produce large, scalable, and maintainable programs, and allow ladder logic to return to the purpose for which it is best suited.

Just as UDTs transformed the 1990s, new features in OPC UA released in 2008 and IEC 61131-3 released in 2013 are transforming application programming in this decade. The new capabilities provided by these standards deliver unprecedented integration of control and the human-machine interface (HMI).

One of the most powerful features of IEC 61131-3 is its ability to nest function blocks (FBs) to any arbitrary width and depth using any of the IEC 61131-3 languages, and then to easily navigate the hierarchy by simply double clicking on any block to pop into its underlying code. This feature allows the ICS engineer to create a precise hierarchical representation of the plant and build each function within the plant in the best language for the task. For instance, engineers can use Continuous Function Chart (CFC) for high-level block diagrams, Sequential Function Chart (SFC) for state-based control, Ladder Diagram (LD) for discrete logic, and Structured Text (ST) for complex math, conditionals, looping, and bit manipulation.

The IEC 61131-3 CFC graphical language is a great tool for building a representation of the plant hierarchy. Typically, this begins with a single top-level block diagram of the plant called the plant view (PV), which instantiates additional subsystem PV block diagrams as necessary and ends with the control-and-equipment (C&E) view diagrams. The C&E view shows the complete control of a subsection of a plant with input equipment on the left, the control in the middle, and the output equipment on the right.
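In textual form, this nesting might look like the hedged Structured Text sketch below, in which a hypothetical plant-view block (PV_Plant) instantiates control-and-equipment blocks (CE_Dryer). In practice, the upper levels would usually be drawn graphically in CFC, so this fragment only illustrates how the hierarchy composes.

(* Control-and-equipment view for one subsection of the plant *)
FUNCTION_BLOCK CE_Dryer
    VAR_INPUT
        FlowRate_LPM : REAL;   (* measured inlet flow *)
    END_VAR
    VAR_OUTPUT
        AugerRun : BOOL;       (* command to the auger motor *)
    END_VAR
    (* lower-level equipment and control blocks, each written in SFC, LD,
       or ST, would be instantiated here and drilled into from the editor *)
    AugerRun := FlowRate_LPM > 5.0;   (* placeholder logic *)
END_FUNCTION_BLOCK

(* Top-level plant view instantiating the C&E views *)
FUNCTION_BLOCK PV_Plant
    VAR_INPUT
        Flow1_LPM : REAL;
        Flow2_LPM : REAL;
    END_VAR
    VAR
        Dryer1 : CE_Dryer;
        Dryer2 : CE_Dryer;
    END_VAR
    Dryer1(FlowRate_LPM := Flow1_LPM);
    Dryer2(FlowRate_LPM := Flow2_LPM);
END_FUNCTION_BLOCK

A technician navigating the real project would open Dryer1 to see the CE_Dryer internals and continue downward through the hierarchy in exactly the way described above.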

Within the C&E view, equipment models can be written in LD or ST, and typically deal with scaling, alarming, signal quality, latching, and manual override. The exact nature of the control block will depend on the type of control required. For instance, a process plant may use a CFC containing a startup sequence in SFC; proportional, integral, derivative (PID) from libraries; and other low-level control code written in ST. Control in a batch or discrete plant usually consists of an SFC describing the process sequence.
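As a rough illustration of such an equipment model, the Structured Text block below combines scaling, a latched high alarm, and a manual override. The names, span, and alarm limit are assumptions made for the example rather than values from the designs discussed here.

FUNCTION_BLOCK EQ_TempTransmitter
    VAR_INPUT
        RawCounts   : INT;      (* e.g., 0..27648 from the analog input card *)
        AlarmAck    : BOOL;     (* operator acknowledge *)
        ManualMode  : BOOL;     (* manual override enable *)
        ManualValue : REAL;     (* value to use while in manual *)
    END_VAR
    VAR_OUTPUT
        PV      : REAL;         (* scaled process value, degC *)
        HiAlarm : BOOL;         (* latched high alarm *)
    END_VAR
    VAR CONSTANT
        SpanLo : REAL := 0.0;
        SpanHi : REAL := 150.0;
        HiLim  : REAL := 120.0;
    END_VAR

    (* scaling from raw counts to engineering units *)
    PV := SpanLo + (INT_TO_REAL(RawCounts) / 27648.0) * (SpanHi - SpanLo);

    (* manual override *)
    IF ManualMode THEN
        PV := ManualValue;
    END_IF

    (* latched high alarm, cleared only by acknowledge once the value is healthy *)
    IF PV > HiLim THEN
        HiAlarm := TRUE;
    ELSIF AlarmAck THEN
        HiAlarm := FALSE;
    END_IF
END_FUNCTION_BLOCK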

A typical multilevel hierarchical view is illustrated in figures 2 and 3. The plant consists of two levels of PVs, one level of C&E view, and several additional levels, each implemented in the language that is the best fit for the purpose. In this example, the PID and low-pass filters are from the OSCAT open-source industrial control library, and the block to integrate the incoming flow rate and compare it to the summation of the auger shaft-encoder pulses is implemented in ST. Imagine how simple this hierarchical multilanguage approach is for a plant technician to understand: drill down in the hierarchy to find the appropriate C&E view, examine the state of the control signals to determine if the problem is in the control or in the equipment, then push into that to diagnose the issue.

As alluded to earlier, a powerful benefit of the multiple languages of IEC 61131-3 is the ability to use the same tools for discrete, batch, and continuous process programming. In all types of programming, the plant-level views are similar, as are the input and output equipment in the C&E view. The only significant difference is the control block, which in a batch process is typically an SFC. Figure 4 shows a typical C&E view for a batch process with the control implemented in SFC, the temperature switch in ST, and the auger motor in traditional LD.

Obviously, an integrated control system is not complete without a seamless connection to its human-machine interface. Fortunately, the OPC UA standard makes this seamless connection possible with its platform independence, encryption, full hierarchical browsing, and meta-tags. Platform independence allows the OPC server to be placed directly in the industrial controller hardware (eliminating the expense and security vulnerability of an OPC server PC), and encryption ensures the security of the data and control. Hardware vendors can use true random number generators, crypto-coprocessors, and deeply embedded root of trust to further secure the connections to both the programming software and the HMI. Programming and HMI connections can be made through the open Internet while remaining protected from cyberattack or mischief.

Figure 5 shows how OPC UA makes the entire hierarchy within the process plant available within an OPC UA tag browser (without explicitly connecting tags to objects or data structures within the ICS design or exporting tag lists). Within the control development environment, programmers can expose the entire namespace tree or select only certain branches. Tags can also be exposed directly in the code where their corresponding variable is declared (figure 6). The latter is especially handy for library parts with inputs and outputs that are intended to be used by the HMI.
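Exactly how a variable is flagged for the OPC UA server is vendor specific. The sketch below uses a CODESYS-style symbol attribute pragma purely as an illustration of declaring the exposure next to the variable itself; other IEC 61131-3 environments use different markers, and the block and variable names are hypothetical.

FUNCTION_BLOCK EQ_AugerMotor
    VAR_INPUT
        {attribute 'symbol' := 'readwrite'}
        HMI_Start : BOOL;      (* intended to be written by the HMI *)
    END_VAR
    VAR_OUTPUT
        {attribute 'symbol' := 'read'}
        Running : BOOL;        (* intended to be read by the HMI *)
    END_VAR
    Running := HMI_Start;      (* placeholder logic *)
END_FUNCTION_BLOCK

The appeal of this style is that library parts carry their own exposure information with them, which is part of what makes the automatic HMI connection described below practical.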

 

Although figure 5 illustrates how all the necessary data is available throughout the design hierarchy, we would never want to deal with that complexity manually. Fortunately, with OPC UA, the HMI can browse the server and create matching complex tags with drag-and-drop simplicity. And if an HMI project is defined with a library of the same base objects as the control design, OPC UA provides enough information for the HMI to automatically create all the complex tags and their structures based on those library objects.

To carry out this automation, the HMI begins by examining the tree from the top. Where it encounters objects in the OPC UA tree that have a corresponding item in the coordinated library, it instantiates that library object. Where it encounters objects that do not, it creates a new folder. It then continues down the tree, either finding and instantiating library objects, or creating further new folders until the complex tag is fully defined and instantiated. All the tags are automatically connected as this process proceeds. At the end, all that is left for the HMI is to organize the visual presentation.

In addition, meta-tags can be added to the control function blocks to provide additional information to the HMI system for it to automatically perform much of the visual presentation adjustment. For example, meta-tags can differentiate the type of process equipment associated with a complex tag structure, determining the default image presented by the HMI.

Figure 7 shows how the project hierarchy in the HMI system matches the project hierarchy in the control design. Figures 8 and 9 show the HMI screens corresponding to the continuous process control design in figures 2 and 3. Notice how the connectivity between the entire ICS and entire HMI designs is made with just the top-level tag name. Thousands of tags below may be automatically connected based on the hierarchy of the design.

Figure 10 shows the corresponding screens for the batch control. Notice that the “ReactorSequence” block in the ICS library has a corresponding object in the HMI library that represents the current state of the process and allows the operator to manually override the process and select new active steps if an unusual situation occurs. Also notice that the HMI has pop-up screens for the motors in the process plant, and that these are all automatically created and connected based on the OPC UA hierarchy and associated library object templates.

Figure 11 shows how the same IEC 61131-3 modeling can be used to create a complete plant simulation, which allows control system designs to be error-free the first time. With development systems that include a complete run-time PC with embedded OPC UA server, ICS engineers can create their control project and HMI screens, and completely test the entire system on a laptop. This results in the confidence that the design is complete and correct before commissioning begins.
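A simulation block of this kind can be as simple as a first-order model standing in for the real equipment. The Structured Text sketch below shows a hypothetical tank model (names, capacity, and cycle time are assumed); a real plant simulation would be considerably more detailed, but the principle of swapping a model block in for the equipment block is the same.

FUNCTION_BLOCK SIM_Tank
    VAR_INPUT
        InFlow_LPM  : REAL;        (* commanded inlet flow *)
        OutFlow_LPM : REAL;        (* commanded outlet flow *)
        dt_s        : REAL := 0.1; (* task cycle time in seconds *)
    END_VAR
    VAR_OUTPUT
        Level_L : REAL;            (* simulated level, litres *)
    END_VAR

    (* integrate net flow each scan; clamp at an assumed 1000 L capacity *)
    Level_L := Level_L + (InFlow_LPM - OutFlow_LPM) * dt_s / 60.0;
    Level_L := LIMIT(0.0, Level_L, 1000.0);
END_FUNCTION_BLOCK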

The features of IEC 61131-3, OPC UA, and the latest ICS and HMI systems greatly streamline the process of creating ICS and HMI designs. The process is simply:

  1. Create an ICS design by instantiating items from the coordinated ICS/HMI library and user-created function blocks created from coordinated library objects.
  2. Connect the HMI system to the OPC UA server and read in the design hierarchy.
  3. Have the HMI system build a corresponding design using parts from the coordinated library and new subobjects.
  4. Organize the visual aspects of the HMI screens.
  5. Deploy the project.

The features in the IEC 61131-3 and OPC UA standards implemented in the latest ICS and HMI systems give automation system designers unprecedented integration capabilities. More than ever before, they can leverage best-in-class hardware and software to create larger, more scalable, more reliable, more maintainable, and more secure control systems. This stands as an example of how those who create and advance standards are paving the way for development of the tools that ICS programmers need for 21st century industrial control systems.

About the Author
Gary L. Pratt, PE, is president of ControlSphere Engineering. He previously was applications engineering manager for Bedrock Automation. Pratt has a broad background in technology that includes instrumentation and control, medical imaging, PCB, FPGA, IC, software design, and engineering and marketing management. He holds several patents in industrial controls.

Connect with Gary
LinkedIn

About the Author
Timothy L. Triplett, PE, is founder and chief executive officer at Sacks Parente Golf. He previously was president of Coherent Technologies, Inc., an automation engineering company, and has more than 40 years of experience in distributed control system and programmable logic controller discrete, continuous, and batch control applications. He also holds several patents. A graduate of Texas A&M University, Triplett has been both an end user and a solution provider during his career.

Connect with Tim
LinkedIn

A version of this article also was published at InTech magazine.



Source: ISA News