
IIoT is Approaching the Network Edge for Almost Every Industrial Application

The post IIoT is Approaching the Network Edge for Almost Every Industrial Application first appeared on the ISA Interchange blog site.

This post was written by Peter Fuhr, PhD, Marissa Morales-Rodriguez, PhD, Sterling Rooke, PhD, and Penny Chen, PhD.

Industrial instrumentation and sensors are purpose-built for applications. Rugged and proven for field applications in harsh environments, such as on an oil platform or in a copper mine 5,000 feet below ground, these instruments must deliver reliability and performance. Before the turn of the millennium, industrial technology, and information technology (IT) in particular, drove these systems, and they often exceeded the abilities of consumer products. Today, however, commercial Internet of Things (IoT) technology has advanced rapidly, with industrial control systems lagging in intelligence and features.

Experienced owner-operators of industrial facilities recognize the buzz surrounding the Industrial Internet of Things (IIoT), but often shun the notion of consumer-grade devices being installed and integrated into an operational control system. At an ISA Process Control and Safety (PCS) conference, ISA’s Communication Division convened a panel to focus on IIoT.

Experienced industrial and control engineers on the panel expressed concerns and reservations with IIoT. Whereas some acknowledged an interest in the topic, others did not recognize it as an inevitable part of the industrial controls landscape. Granted, IIoT is still mostly a vision in the instrumentation and automation landscape; however, its place on stage is coming into view. During the opening session, then ISA President Jim Keaveney rhetorically asked the audience if IoT had peaked and also wondered if “cyber” would be the next area for innovation. This blog explores the nexus of “domestic” IoT and how product evolution will drive its development toward that of IIoT.

U.S. government to promote IIoT evolution?

At a National Telecommunications and Information Administration (part of the U.S. Department of Commerce) IoT workshop, discussions included how IoT and IIoT were set to converge around common threads. An important area of convergence will be onboard components and subsystems, with software as a runner-up because of cost and the innate drive to be first to market. However, as the Samsung Note 7 battery failure and subsequent recall have shown, releasing a product with flaws that are later discovered in the field by your customers is a bad idea. Despite this, the hype surrounding IoT is truly at its peak relative to other emerging technologies.

The U.S. has targeted infrastructure as a key focus. The proposed infrastructure buildout, combined with growth in U.S. industrial capacity, would benefit from incorporation of IIoT sensors and systems. At a Consumer Electronics Show (CES), many companies demonstrated attempts to deploy IIoT in industrial facilities, often with woefully inadequate performance and cybersecurity. The rush to develop and deploy will no doubt increase the use of consumer IoT for IIoT purposes, accelerating the IoT and IIoT convergence.

At the U.S. Department of Commerce (DOC) IoT workshop, participants spent a significant amount of time discussing the important role government can play in setting IoT standards. IoT experts at the workshop made it clear that with assistance from the federal government, specifically the U.S. Department of Energy (DOE) and DOC and their national laboratories, guidelines for cybersecure and robust IoT could be developed. With help from the government and organizations such as ISA, industry will have a clearer path to develop industry-centric IIoT rather than rushing to field consumer IoT devices. The question is whether, even with this framework, simple cost and a discounting of risk will dominate, so that IIoT essentially becomes IoT wrapped in a harder shell. We propose that this is what will occur.

However, we must take heed when it comes to cybersecurity and realize that with commercial IoT, an industrial target could be attacked in a manner similar to an attack on a commercial target, but with very different consequences. The solution? Although we should accept that IoT and IIoT will converge, there must be clear distinctions in cyberarchitecture and associated protections. This goes for implementation as well as for regulations and guidelines proposed by governments and for industry standards developed by organizations like ISA.

Figure 1. IoT for electric grid

 

Industrial Internet of Things

Could seemingly trivial items, such as Amazon Echo/Alexa, be worthy of consideration for industrial automation and applications? Many stalwarts of the status quo voice concerns about safety standards and the dangers of this technology in the industrial setting. Although these are valid concerns, a more important issue is that commercial IoT standards and best practices do not always address IIoT requirements.

IIoT is a specialized IoT implemented in ruggedized packages suitable for industrial application environments. In fact, legacy industrial control devices, such as programmable logic controllers, will be compatible, for the time being, with IIoT running alongside. IIoT benefits from data flowing through standards-based and common networks. From a networking standpoint, IIoT systems will break the ongoing practice of using proprietary networks and put in place a common, standards-based networking technology. The convergence of IT and operational technology (OT) knowledge for industrial automation environments is well underway. Soon IIoT will approach the network edge for almost every industrial application. IIoT installations can include hundreds or even thousands of sensors across a large facility. To handle all of this information, one approach is for IIoT to leverage the cloud in a manner similar to the Alexa example with IoT. In his book Internet of Things with Python, Gaston Hillar illustrates how sensor readings from IoT devices compound into a situation that must be managed.

A typical industrial practice involves acquiring one measurement per second from each IoT device. The number of measurements, from just one device, is:

  • 60 measurements for all the variables per minute
  • 3,600 (60 x 60) measurements per hour
  • 86,400 (3,600 x 24) measurements per day
  • 31,536,000 (86,400 x 365) measurements per year (assuming a nonleap year)

Consider the situation where an industrial facility has 3,000 IIoT devices running the same code, thereby generating 94,608,000,000 (31,536,000 x 3,000) measurements per year. (Each IIoT device is generating one reading per second.) In addition, it is envisioned that a data ingestion engine may analyze and acquire information from other data sources, such as tweets about weather-related issues in the locations in which the sensors are capturing data. The net result is huge volumes of both structured and unstructured data to analyze computationally to reveal patterns and associations.
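
To put the arithmetic in concrete terms, the short Python sketch below (purely illustrative) reproduces the per-device and facility-wide counts above. The one-reading-per-second rate and the 3,000-device count come from the example in the text.

    # Reproduce the measurement-count arithmetic from the example above:
    # one reading per second per device, 3,000 devices, nonleap year.
    READINGS_PER_SECOND = 1
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365            # 31,536,000
    DEVICE_COUNT = 3_000

    per_device_per_year = READINGS_PER_SECOND * SECONDS_PER_YEAR
    facility_total = per_device_per_year * DEVICE_COUNT

    print(f"{per_device_per_year:,} measurements per device per year")
    print(f"{facility_total:,} measurements facility-wide per year")  # 94,608,000,000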

From a convergence standpoint, many of these big data repositories and data manipulation centers will be the same for both IoT and IIoT. The key differences are cost and technical capability, and these commercial repositories are quite capable of servicing IIoT data at a low cost. Commingling data from a home toaster oven (IoT) with data from a cement kiln (IIoT), for example, is not the real worry. The greater concern is a denial of service attack on the large provider. If we consider Amazon and its Amazon Web Services (AWS) and similar technologies, we can appreciate how an attack on a commercial business such as Amazon could disrupt critical processes supported by IIoT in a factory.

What constitutes IoT and the technology levels associated with IoT use in, for example, the electric grid? Figure 1 illustrates the situation.

Figure 2. Spectrum sensing, sharing, decision, and mobility functional components of an IoT device for dense deployments in industrial settings

 

Intersecting technologies

With the introduction and promulgation of IoT devices in an industrial setting, a wide range of questions and problems arise, including the following examples:

1. How do wireless IoT devices all share the frequency spectrum? Issues of spectrum congestion, such as numerous devices sharing the same frequency spectrum, are lumped into the general category of the “spectrum crunch.” One possible answer is for the IoT device to incorporate levels of spectrum sensing (in essence acting as a spectrum analyzer for the frequencies of “interest”) while having spectrum mobility (being able to change operating frequencies easily and quickly). The spectrum subsystem elements for such an adaptable IoT device are shown in figure 2, and a brief decision-logic sketch follows this list.

2. Process control systems speak a wide range of protocols. Should an IIoT device or system speak one or all of these protocols? Or is having a logical system element perform protocol translation sufficient?

3. “IP addressable to the edge,” as in most IoT device and system designs, upends the logical element and subsystem design that is foundational to the vast majority of today’s industrial networks. IP-to-the-edge can provide wonderful integration into IT-centric networks, thereby allowing IT security applications to have visibility of the entire network. A “flat architecture” provided by IP-to-the-edge allows for an everything-to-everything level of connectivity. It also allows users to partition a variety of working zones based on operational or business needs. We believe the constraints should be set by application and business needs, not by technology incompatibility. This should be one basic concept of IIoT.

4. The IoT movement has also advanced sensing technology. IoT edge devices may have varying levels of complexity and functionality, with some vendors leaning toward sophisticated and relatively energy-intensive operation, while others promote the advancing technology of passive wireless sensor tags (no batteries, extremely low cost, and intrinsically safe operation). The ISA Communications Division has collaborated with the U.S. National Aeronautics and Space Administration, DOE, and other organizations during the past six years to conduct passive wireless sensor tag workshops and promote new types of low-cost wireless sensing technologies for IIoT.

5. In essence, the question distills to the following: What does the industrial network have to look like for IIoT devices to be used? Several important standardization activities have been initiated in IEEE and the Internet Engineering Task Force (IETF), such as IEEE 802.1 Time-Sensitive Networking and IETF Deterministic Networking. These technologies address the needs of IIoT by allowing a single network to share its resources while remaining deterministic, reserving bandwidth for time-critical applications. An initial set of functionality and performance “answers” for IoT devices in a factory automation setting is provided in figure 3.
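
Returning to the spectrum question in item 1, the toy Python sketch below illustrates the sense-decide-move loop implied by the functional components of figure 2. The channel list, congestion threshold, and the RSSI "measurement" are hypothetical stand-ins for real spectrum-sensing hardware.

    import random

    CHANNELS = [11, 15, 20, 25, 26]        # hypothetical operating channels
    CONGESTION_THRESHOLD = -75.0           # dBm; hypothetical decision threshold

    def sense_channel_occupancy(channel):
        """Spectrum sensing: stand-in for an averaged RSSI reading in dBm."""
        return random.uniform(-95.0, -55.0)

    def choose_channel(current):
        """Spectrum decision and mobility: stay if quiet, otherwise hop to the quietest channel."""
        readings = {ch: sense_channel_occupancy(ch) for ch in CHANNELS}
        if readings[current] < CONGESTION_THRESHOLD:
            return current                        # current channel is quiet enough
        return min(readings, key=readings.get)    # move to the least-occupied channel

    print(choose_channel(current=11))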

Figure 3. An industrial Internet-ready, time-sensitive network architecture

 

Industrial Internet applications must:

1. Communicate outside the plant in standard ways
2. Have object-oriented data models that relate to their physical objects
3. Not interfere with the reliability, integrity, or security of control applications
4. Be portable to devices on the plant floor
5. Share the existing infrastructure
6. Anticipate various forms of wireless media
7. Be able to easily add or reconfigure applications without affecting the existing plant.

Data of “things”

In the preceding section, we introduced the data footprint of IIoT with some simple calculations. In this section, we delve deeper into why IIoT will simply ride alongside or leverage data technology from commercial IoT. Does this affect security in the industrial (IIoT) space? Even if a company does not use the same data storage systems as AWS or other commercial IoT, its software could have many of the same security flaws. For custom applications, such as a factory IIoT system, only small portions of original code are introduced; the rest of the software leverages preexisting objects and modules born in the commercial IoT sector.

Thus, is this wave of data really all the same ocean from a storage and software standpoint? In other words, because of modular programming and reuse, are commercial IoT flaws present in often-unpatched IIoT systems? Once a vulnerability in the commercial space (IoT) is known to the hacker community, hackers can easily develop exploits and payloads that leverage the same vulnerability in unpatched IIoT systems.

Integration of IoT devices into a control system world of supervisory control and data acquisition (SCADA), distributed control systems (DCSs), and industrial control systems (ICSs) will lead to required changes to the decades-old ISA-95 Purdue model or the related ISA-88 factory automation network architecture. Such statements simply follow from the fact that IP-addressable devices (most, but not all, IoT devices and systems), when integrated into network-centric architectures, logically lead to a change in the deployment fabric. An illustrative architecture is presented in figure 4. What is most noteworthy about such an IoT architecture, or data fabric, is that it follows an IT-centric network architecture, thereby allowing standard IT cybersecurity tools to be suggested for use.

We are not promoting the network architecture shown in figure 4 as a possible replacement for current SCADA/DCS/ICS architectures. It is provided simply as an illustration of an integrated and collaborative IIoT architecture.

The Industrial Internet Consortium-like many similar groups-has developed a conceptual architecture that presents one “view” of IIoT, shown in figure 5.

Again, where do standards come into play? Maciej Kranz of Cisco states, “The IoT World Forum has been working on a common model to drive interoperability across all IoT components: devices and controllers, networks, edge computing, data storage, applications, and analytics. The IoT World Forum Reference Model organizes these components into layers and provides a graphical representation of IoT and all that it entails.” Kranz concludes with this bold statement, “The IoT World Forum Reference Model opens the door to an ‘Open IoT’ system, with guaranteed interoperability.” (Readers who are knowledgeable or who participated in the ISA95 and ISA100 development processes can attest to the difficulty of achieving simply stated goals, such as “guaranteed interoperability.”) The reference model is presented in figure 6.

Figure 4. An example of multifunction IoT architecture

 

Figure 5. An example of conceptual IIoT architecture from the Industrial Internet Consortium
Source: Infosys

 

Figure 6. An example of conceptual IIoT architecture
Source: IoT World Forum Architecture Committee

 

Cybersecurity, IoT, and the attack surface

The introduction of IoT devices, in particular IP-addressable devices, into an industrial setting most assuredly increases the number of elements and devices that are vulnerable to cyberattack. The situation was illustrated by the 2016 distributed denial of service (DDoS) cyberattack, in which IoT devices were first infected with malware and then coordinated in a DDoS attack on major Internet infrastructure.

Does this warrant avoidance of IoT devices in an industrial setting? As directors and directors-elect of two of ISA’s technical divisions, the authors of this paper answer that question with a resounding “no.” However, such cybersecurity instances do illustrate the need for a change from the decades-old defense-in-depth ISA-99 (ISA/IEC 62443) model. In a future article, we will present a bold design for a cybersecure network architecture appropriate for 2017 and beyond.

Lower costs, enhanced features, and higher cyberrisks: these are what we can expect as IoT and IIoT converge. Standards and guidelines can help carve an orderly path forward. A path for IoT in industry will be needed, because infrastructure initiatives will likely invite rapid IIoT deployment.

ISA’s Communications Division and Test & Measurement Division currently have a joint working group focused on IIoT, with an associated examination of functional and operational security if and when IoT devices are deployed into a control system. Although the term “cyber” is often overused, it truly applies to IIoT: new sensor and control capabilities bring expanded attack surfaces.

In follow-up articles in this series, the authors will discuss cybersecurity implications for our overall critical infrastructure and drones for remote inspection to uphold cybersecurity assurance.

ISA Cybersecurity Resources

ISA offers standards-based industrial cybersecurity training, certificate programs, conformity assessment programs, and technical resources.

About the Author
Peter Fuhr, PhD, is a distinguished scientist at Oak Ridge National Laboratory and also serves as the technology director for the Unmanned Aerial Systems (UAS) Research Laboratory. He is the director of the ISA Test & Measurement Division.

About the Author
Marissa Morales-Rodriguez, PhD, is a research and development scientist at Oak Ridge National Laboratory. She has been working in the area of chemical sciences, concentrating on applications related to sensing, additive manufacturing, and document security. She is director-elect of the ISA Test & Measurement Division.

About the Author
Sterling Rooke, PhD, is the founder of X8 LLC, a technology company focused on industrial sensors with an eye toward cyber- and energy security. On a part-time basis, Rooke is the director of training within a Cyber Operations Squadron in the U.S. Air Force. In his role as a reserve military officer, Rooke leads airman through training exercises to prepare for future conflicts in cyberspace. He is the director-elect of the ISA Communication Division.

About the Author
Penny Chen, PhD, is a senior principal technology strategist at Yokogawa US Technology Center (USTC), responsible for technology strategy and standardization focusing on wireless, networking, and related security, and exploring new technologies for industrial applications. Chen is actively involved in ISA100, Wireless Systems for Automation, and a variety of IoT standardization activities, including IEEE P2413 IoT Architecture Reference Framework. Chen received a PhD in electrical engineering from Northwestern University. She is the director of the ISA Communications Division.

A version of this article also was published at InTech magazine



Source: ISA News

How to Optimize Closed-Loop Control Through a Better Understanding of the PID Equation

The post How to Optimize Closed-Loop Control Through a Better Understanding of the PID Equation first appeared on the ISA Interchange blog site.

This post was written by Bill Dehner, technical marketing engineer at AutomationDirect.

Machines and processes are controlled using many strategies, from simple ladder logic to custom algorithms for specialized process control, but proportional-integral-derivative (PID) is the most common control method. Different programmable logic controllers (PLCs) handle PID control loops in different ways. Some loops need to be set manually, while others can use an autotune process embedded in the PLC’s software.

Even before loop tuning starts, the design may have created a slow-to-respond control loop with built-in lag. For example, a temperature sensor positioned a long distance from a heater can slow response to dynamic changes.

Changes in machines and processes due to disturbances and set point changes are why PID control is often needed. The amount, length of time, and rate of change of the process error are all part of the PID equation, as is correcting the error to bring the process variable closer to the set point. This blog post will look at the PID equation and some tuning tips, along with a brief review of autotuning and applications benefiting from PID control.

What is PID control?

The application almost always determines whether open- or closed-loop analog control is used. Many applications will work with on-off, closed-loop control using an analog sensor measuring temperature, pressure, level, or flow as an input to control a discrete output. The same analog sensors are also used for PID closed-loop control, but in a more complex strategy.

In on-off, closed-loop control of room temperature, for example, a heating or cooling cycle is triggered by hysteresis in the thermostat. When the room temperature is approximately 1°F or 2°F above the set point, a cooling cycle is turned on. Once the set point, or possibly 1°F or 2°F below the set point, is reached, the cooling cycle is turned off. This can result in a 2°F to 4°F swing in room temperature. The temperature swing can be even worse with a slight overshoot at the turn-off point and undershoot at the turn-on point. This temperature swing (error) around the set point is not accurate enough for many industrial control processes.
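
The on-off behavior just described can be sketched in a few lines of Python; this is illustrative only, with the set point and the 2°F hysteresis band standing in for the thermostat example above.

    SET_POINT = 72.0      # deg F; hypothetical room set point
    HYSTERESIS = 2.0      # deg F band above the set point

    def on_off_cooling(temperature, cooling_on):
        """Return the new cooling-output state for the measured temperature."""
        if temperature >= SET_POINT + HYSTERESIS:
            return True           # too warm: start a cooling cycle
        if temperature <= SET_POINT:
            return False          # at (or below) the set point: stop cooling
        return cooling_on         # inside the band: hold the current state

    state = False
    for temp in [71.5, 73.0, 74.2, 73.1, 71.9]:
        state = on_off_cooling(temp, state)
        print(f"{temp:5.1f} F -> cooling {'ON' if state else 'OFF'}")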

To reduce the process variable error, the closed-loop control function in PID controllers is commonly used. A PID controller reads a process variable (PV), compares it to a desired set point (SP) value, and uses a continuous feedback loop to adjust the control output.

The equation behind PID loops

For many control system programmers, PID loops can be difficult to set and tune. Many have forgotten the calculus involved or never learned it, but a look at the PID equation can be helpful when tuning a loop.

The PID equation and the following discussion is for basic reference only. Extensive analysis of this and similar equations is available in a variety of process control textbooks. Additionally, some PID controllers allow selection of the algorithm type, most commonly position or velocity. The position algorithm is the choice for most applications, such as heating and cooling loops, and for position and level control applications. Flow control loops typically use a velocity control algorithm.

The proportional term (P), often called gain, drives a corrective action proportional to the error. The integral term (I), often called reset, causes changes to the control output proportional to the error over time, specifically, the integral sum of the error values over a period of time. The derivative term (D), often called rate, changes the control output proportional to the error rate of change, anticipating error.

Using the equation below, a PID controller receives the PV and calculates the corrective action to the control output based on error (proportional), the sum of all previous errors (integral), and the error rate of change (derivative). The following is a discrete position form of a PID equation, where the control output is calculated to respond to displacement of the PV from the SP:

Mn = Kc * en + Ki * ∑ (i = 1 to n) ei + Kr * (en – en–1) + Mo

where:

Mn is the control output at the moment of time n. This is the gain or response output, such as 0-100%, sent to the controlled device.

en is the error at the moment of time n, calculated by subtracting the actual process variable from the desired set point (en = SP – PVn).

Kc * en is the proportional term (P). Kc is the proportional gain coefficient and becomes fixed once the proper value is found during tuning.

Ki * ∑ (i = 1 to n) ei is the integral term (I). This is the sum of the calculated errors from the first sample (i = 1) to the current moment n, multiplied by Ki, the integral coefficient. Ki is calculated using the formula: Ki = Kc * (sample rate/integral time).

Kr * (en – en–1) is the derivative term (D). It is the error now (en) minus the previous sample error (en–1), with the result multiplied by the derivative coefficient Kr, which is calculated using the formula: Kr = Kc * (derivative time/sample rate). The derivative term looks at the error now and the error before, determines how rapidly the error is increasing or decreasing, and adjusts the output as needed.

Mo is the control output initial value. It is also the value transferred when switching from manual to automatic loop control.

A PI example

Applying this PID equation to a temperature control example shows how the P, I, and D terms work together. In this example, an oven is controlled to a desired temperature set point of 350°F (figure 1). As a starting point, the following parameters are used.

  • Kr = 0 (This sets the derivative time to zero, making this a PI controller, which is a good starting point.)
  • Kc = 3 (proportional gain)
  • Ts = 60 second sample rate
  • Ki = 1 (set integral time to 180 seconds, as Ki = Kc * (sample rate/integral time), or Ki = 3 * 60/180 = 1)
  • M(0) = 30 (initial control output)

The graph (figure 2) charts temperature fluctuation over the past 9 minutes. The graph shows that the temperature is stable for the first five samples before dropping 20°F at sample six. The error at sample six is: e(6) = SP – PV = 350 – 330 = 20. The sum of all errors can also be calculated: ∑ ei = e(1) + e(2) + e(3) + e(4) + e(5) + e(6) = 0 + 0 + 0 + 0 + 0 + 20 = 20.

Combining the parameters and calculated values, the loop at sample six is solved as follows, with the proportional, integral, and derivative terms shown in order:

M(6) = Kc * e(6) + Ki * ∑ ei + Kr * (e(6) – e(5)) + M(0) = (3 * 20) + (1 * 20) + 0 + 30 = 110

The result is 80 more than the initial control output of 30, with the proportional term providing most of the increase. The controller converts this result into an analog output to the controlled device, a heating element in this example, in the form of a 4-20 mA or 0-10 VDC signal.

As the control output drives the temperature toward the set point, at sample seven the error is decreasing, so e(7) = SP – PV = 350 – 340 = 10. At this time, the sum of all the sample errors is ∑ ei = e(1) + e(2) + e(3) + e(4) + e(5) + e(6) + e(7) = 0 + 0 + 0 + 0 + 0 + 20 + 10 = 30. At this moment in time, the result of the equation is:

M(7) = (3 * 10) + (1 * 30) + 0 + 30 = 90

As the error shrinks, the output decreases due to the drop in the proportional term, even though the integral term increased. At sample eight, the PV temperature recovers to 349.5°F, making the error 0.5 and the sum of all errors 30.5. At this moment in time, the control output is:

M(8) = (3 * 0.5) + (1 * 30.5) + 0 + 30 = 62

Now the proportional term is approaching zero, and the integral bias is having more effect on the control output. While the proportional term has decreased with the error from 60 to 30 to 1.5, the integral term has increased from 20 to 30 to 30.5. This highlights a loop tuning tip: with large error, the proportional term drives the output, and with small error, the integral term takes control.

Figure 1. Temperature in this industrial oven can be accurately controlled using the PID algorithm.

Figure 2. This chart shows how the process variable changes in a PID temperature control application for an industrial oven.

Add a little D for stability

The derivative term adds stability to the control loop. When a rapid rate of change occurs, such as the 20°F drop at sample six, there is a risk of instability, which the derivative term reduces, while adding correction.

For example, if the derivative coefficient is changed to Kr = 1 by setting the derivative time (Td) to 20 seconds (Kr = Kc * (derivative time/sample rate), or Kr = 3 * 20/60 = 1), the control output at samples six, seven, and eight becomes 130, 80, and 52.5, respectively: the derivative term adds Kr * (en – en–1) = 20, –10, and –9.5 to the earlier PI results of 110, 90, and 62.
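
The arithmetic in both examples can be checked with a short Python sketch of the discrete position-form equation discussed above. This is illustrative only, not vendor PID code; the function name and structure are ours, and the PV sequence reproduces the oven example (five stable samples, a drop to 330°F, then recovery through 340°F and 349.5°F).

    def pid_position(pv_samples, sp, kc, ki, kr, m0):
        """Discrete position-form PID: Mn = Kc*en + Ki*sum(e) + Kr*(en - en-1) + M0."""
        outputs, errors = [], []
        for pv in pv_samples:
            e = sp - pv                               # en = SP - PVn
            prev_e = errors[-1] if errors else 0.0
            errors.append(e)
            outputs.append(kc * e + ki * sum(errors) + kr * (e - prev_e) + m0)
        return outputs

    pv = [350, 350, 350, 350, 350, 330, 340, 349.5]
    print(pid_position(pv, sp=350, kc=3, ki=1, kr=0, m0=30)[-3:])  # PI:  110, 90, 62
    print(pid_position(pv, sp=350, kc=3, ki=1, kr=1, m0=30)[-3:])  # PID: 130, 80, 52.5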

P, PI, then PID manual tuning

A process does not always require a three-mode PID control loop. Many applications run fine on only PI terms and some on just the P term, so it is common to disable parts of the PID equation. Selecting appropriate values for the gain (Kc), reset (Ti), and rate (Td) makes a P, PI, PD, I, ID, and D loop possible. This is also what is done to tune the PID loop manually.

It is difficult to tune all values of a PID control loop at once. To start, it is better to cancel out the integral and derivative, and tune the proportional term. Find a proportional value that provides a quick response of the process variable. Raise the proportional term until the PV becomes unstable and oscillates, and then reduce it until a stable response is achieved with slight oscillation or error. With a stable PV, add a small value for the integral term to help the error reach zero. The error will get smaller when this term is approaching the optimal value. At this point, the PI control should have the desired response with a stable PV and minimal error.

With many applications, especially temperature control, a PI loop is all that is needed. The derivative term is used in other applications, but it can slow the control response. It reduces gain when large rates of change in the PV are detected, to limit possible overshoot. This often causes undershoot, an overdamped and slower response. Although each PID term can be manually entered to tune the control loop, the PID processing engine in some PLCs and advanced controllers can autotune the loop by automatically calculating the values.

Autotuning

Autotuning, if available in the controller, often reduces or eliminates the trial and error of manual PID tuning. In most cases, autotuning a control loop will provide terms close to optimal values. However, it is often necessary to perform some manual tuning to attain optimal values.

Many temperature controllers and PLCs provide an automatic tuning function (figure 3). During the autotune cycle, a controller controls the output value while measuring the rate of change, overshoot, and response time of the process. The Ziegler-Nichols method is then often used to calculate the controller PID term values.

Typically, the Ziegler-Nichols method creates a square wave on the control output to create a step response that is measured and analyzed. Based on the response of the PV, the autotune function calculates the terms and the sample time. Several full-span step cycles are used to compute the terms/gains.

When manually tuning a loop, take the time to understand the equation and start with the proportional term. With a responsive and stable PV, add the integral to the equation, and then add derivative if necessary. When an autotune function is used, a little adjustment to the terms may be all that is needed to optimize the control loop.

About the Author
Bill Dehner has spent most of his engineering career designing and installing industrial control systems for the oil and gas, power, and package-handling industries. He holds a bachelor’s degree in electrical engineering with an associate’s in avionics from the U.S. Air Force. He is currently a technical marketing engineer at AutomationDirect.

Connect with Bill
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

How to Achieve Pilot-Scale Process Control Flexibility and Agility

The post How to Achieve Pilot-Scale Process Control Flexibility and Agility first appeared on the ISA Interchange blog site.

This post was written by Chris Marinucci, director of advanced manufacturing at O’Brien & Gere.

Pilot-scale process control has posed some of the biggest automation challenges I have faced working in an advanced manufacturing environment. Extreme turndown ratios, process modularity, and rapid and frequent data acquisition, combined with the need for high accuracy and repeatability, are hallmarks of any research and development process. The request from our client was straightforward: make the pilot lab more flexible, accurate, and productive while maintaining the Class 1, Division 2 hazardous area classification.

The existing pilot lab was a combination of rigidly constructed and permanently affixed pumps, valves, heat exchangers, tanks, and instruments. Over the years, a spaghetti-like arrangement of bypasses and spool pieces had been added to suit the process testing needs.

Our solution was to break down each unit process operation into single systems and make them portable. Existing pumps, Coriolis flowmeters, and heat exchangers were mounted on wheeled carts, making it easy to mix and match the correct equipment for the pilot run. Cam-lock hoses replaced rigid stainless-steel piping, and 250-gallon totes or 55-gallon drums became the vessels of choice. The programmable logic controller (PLC) control panel itself was mounted in a console-style cabinet with a graphical interface and placed on casters.

Another challenge was how to deal with all the different kinds of instruments (flow, pressure, turbidity, color, and temperature) that could not be permanently affixed to the process equipment. Each trial posed unique challenges for process control and data acquisition, which needed to mimic real constraints at a variety of manufacturing plants in North America. The instruments had to be just as modular as the process equipment and allow the technician to place them anywhere in the process. Further complicating matters were the specialty instruments, like turbidity and color analyzers, that were large, heavy, and expensive.

With dozens of instruments and process elements that could be combined in seemingly infinite combinations, a wireless networking solution could tie all our pieces and parts together, but which wireless solution? Splitting our connectivity needs into real-time and periodic ones, we focused on wireless Ethernet for real-time applications and WirelessHART for periodic applications.

Wireless Ethernet provided near real-time control and data feedback for our process equipment. The centrifugal and positive displacement pump carts used variable frequency drives networked to a wireless Ethernet radio. The Coriolis flowmeter had to provide near real-time feedback to operate either pump in a closed-loop mode, so it, too, was fitted with a wireless Ethernet radio. Lastly, our control panel was fitted with a wireless Ethernet radio. To coordinate all the wireless Ethernet devices, a wireless Ethernet access point was mounted in the center of the 2,000-square-foot lab space on a beam 20 feet above the process equipment.

For those devices that required only periodic monitoring, a WirelessHART system was used. Various battery-operated pressure and temperature instruments came with HART wireless thumbs, allowing them to broadcast their data back to the HART gateway approximately every 5 seconds. Being battery powered, the periodic pressure- and temperature-sensing devices could be placed anywhere in the process by simply selecting the correct hose and pipe fitting.

This left our specialty analytical instruments, as well as our real-time pressure and temperature devices, to be placed. All of these instruments found a home mounted to the back of our PLC/human-machine interface console. This allowed us to use a combination of hardwired network cables and traditional 4-20 mA signals directly to the PLC.

The HART wireless gateway and the wireless access point were hardwired together through a managed switch with CAT 6 Ethernet cable. A Modbus TCP card allowed the PLC to read the HART wireless device data through the HART gateway for the purposes of alarming and graphical display. The hardwired Ethernet network linked the supervisory control and data acquisition (SCADA) workstation to the HART wireless gateway and the wireless Ethernet gateway, giving the SCADA system access to the PLC.
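
For readers curious what such a Modbus TCP read looks like on the wire, here is a minimal Python sketch using only the standard library. The gateway address, unit ID, and register addresses are hypothetical; a real WirelessHART gateway publishes its own Modbus register map, and production code would use a maintained Modbus library with proper error handling.

    import socket
    import struct

    GATEWAY_IP = "192.168.1.50"   # hypothetical gateway address
    UNIT_ID = 1                   # hypothetical Modbus unit ID
    START_REGISTER = 0            # hypothetical: first mapped wireless pressure reading
    REGISTER_COUNT = 2

    def read_holding_registers(ip, unit, start, count, port=502):
        """Minimal Modbus TCP 'Read Holding Registers' (function code 0x03) request."""
        with socket.create_connection((ip, port), timeout=3.0) as sock:
            pdu = struct.pack(">BHH", 0x03, start, count)            # function, address, quantity
            mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)    # transaction, protocol, length, unit
            sock.sendall(mbap + pdu)
            # Response: MBAP (7 bytes) + function code + byte count + register data.
            # Sketch-level read; production code would loop until all bytes arrive.
            header = sock.recv(9)
            _, _, _, _, func, byte_count = struct.unpack(">HHHBBB", header)
            if func != 0x03:
                raise IOError(f"Modbus exception response, function code 0x{func:02X}")
            data = sock.recv(byte_count)
            return list(struct.unpack(f">{count}H", data))

    print("Raw registers:", read_holding_registers(GATEWAY_IP, UNIT_ID, START_REGISTER, REGISTER_COUNT))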

To maintain the area classification, the pump variable frequency drive panels and PLC panels used panel purge units. Hardwired devices used intrinsically safe I/O, while hardwired network devices were explosion-proof and used rigid conduit. The wireless network system has operated without failure of service. It has proven to be every bit as reliable as a wired solution, while providing much-needed simplification and flexibility to the pilot lab process.

About the Author
Chris Marinucci is director of advanced manufacturing at O’Brien & Gere. His career has focused on taking the control and mechanical systems associated with a variety of processes used in industry and designing the control systems and graphical interfaces to make them work as a single purpose-built system. O’Brien & Gere is a certified member of the Control System Integrators Association.

Connect with Chris
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

Why Bypassing the Factory Acceptance Test at Startup Is a Bad Idea

The post Why Bypassing the Factory Acceptance Test at Startup Is a Bad Idea first appeared on the ISA Interchange blog site.

This post was written by Michael B. Fedenyszen, II, a senior instrumentation and controls engineer at R.G. Vanderweil Engineers.

The investment in a solid factory acceptance test (FAT) pays big dividends and makes a large contribution to a successful project. Rushing to startup, taking shortcuts, or bypassing the FAT typically means many engineering mistakes and problems that will need to be fixed later in the field, where it is significantly more expensive and time consuming.

The plant control system (PCS) usually includes distributed control systems consisting of complex control panels housing programmable logic controllers (PLCs), remote terminal units, and various other control technologies that operate and control the purposed processes.

Today’s startup and commissioning environment requires many operations personnel, trades contractors, technicians, and engineers to be on hand during this scheduled event. System component manufacturers will also be onsite tweaking and proving their equipment, balancing systems, and adjusting loops. When troubleshooting becomes necessary, it is easy to see how the cost of labor can add up when workers are standing ready while the pertinent troubleshooting players sort out the system quirks.

Specifications traditionally call for witness testing and review of the PCS at the manufacturer’s facility before it is shipped to the site. A hardware factory acceptance test (HFAT) and a software factory acceptance test (SFAT) are required tests in order to confirm that the hardware selection and its installation and wiring are in conformance with specifications, and that the software, code, and inputs and outputs operate as required.

Items to look for during the HFAT are specific. The control panels and panel hardware are visually inspected and documented as complete, clean, and ready for shipment to the site. All ground bonding is checked, and circuits are checked for isolation from short circuits. Should UL 508A have been specified, fusing amperage and terminal torque requirements must be identified inside the panel enclosure; components must be UL and FM approved; and the UL 508A certification label must be affixed within the enclosure. Wiring in general will be reviewed for conformance to industry standards and best practices.

The SFAT will include all reference documents, drawings, and process and instrument diagrams and follow a predetermined test procedure defining the acceptance criteria, detailed instructions, and expected results. The procedure is generated by the system integrator and approved by the engineer before the FAT.

With the system powered up, primary and redundant power supplies, as well as primary and redundant PLCs, will be verified for failover operation. All digital inputs and outputs will be tested with temporary switches or jumpers for their operation. If human-machine interfaces (HMIs) are configured into the system, input and outputs could be checked from the HMI touchscreen. Analog inputs will be confirmed using a current simulator signaling 4 mA, 12 mA, and 20 mA while witnessing the corresponding value in the code representative of 0, 50, and 100 percent of the control value. Analog outputs would be read with a current meter while manipulating a 0, 50, and 100 percent process signal in the code.
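
The analog scaling used in those loop checks is simple enough to capture in a couple of lines. The Python snippet below (illustrative only) maps the simulated 4 mA, 12 mA, and 20 mA signals to the 0, 50, and 100 percent control values the witnesses expect to see in the code.

    def ma_to_percent(milliamps):
        """Linear scaling of a 4-20 mA signal to 0-100 percent of the control value."""
        return (milliamps - 4.0) / 16.0 * 100.0

    for ma in (4.0, 12.0, 20.0):
        print(f"{ma:4.1f} mA -> {ma_to_percent(ma):5.1f} %")   # 0, 50, and 100 percent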

FAT results, incidents, or abnormal findings are recorded in a document approval record and, at the conclusion of the test, signed by both the systems integrator and the owner’s representative. All incidents would then be attended to before the control system is released for shipment to the site. Once concluded, the HFAT and SFAT prove most worthwhile, preventing delays in system startup and commissioning caused by PCS integration.

About the Author
Michael B. Fedenyszen, II, is a senior instrumentation and controls engineer at R.G. Vanderweil Engineers, LLP, and ISA’s Publications Department vice president elect. He is an active life member of ISA, serving in many leadership roles and has received numerous awards. Fedenyszen has a 30-year history of integrating and optimizing instrumentation and controls requirements for combined heat and power plants and central utility plants.

Connect with Michael
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

Process Control Strategies for Wastewater Treatment to Reduce Greenhouse Gas Emissions [technical]

The post Process Control Strategies for Wastewater Treatment to Reduce Greenhouse Gas Emissions [technical] first appeared on the ISA Interchange blog site.

This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.

Abstract: The application of control strategies is increasingly used in wastewater treatment plants with the aim of improving effluent quality and reducing operating costs. Due to concerns about the progressive growth of greenhouse gas (GHG) emissions, these are also currently being evaluated in wastewater treatment plants. The present article proposes a fuzzy controller for plant-wide control of the biological wastewater treatment process. Its design is based on 14 inputs and 6 outputs in order to reduce GHG emissions, nutrient concentration in the effluent, and operational costs. The article explains and shows the effect of each one of the inputs and outputs of the fuzzy controller, as well as the relationships between them. Benchmark Simulation Model No. 2 Gas is used for testing the proposed control strategy. The simulation results show that the fuzzy controller is able to reduce GHG emissions while improving, at the same time, the common criteria of effluent quality and operational costs.

Free Bonus! To read the full version of this ISA Transactions article, click here.

Enjoy this technical resource article? Join ISA and get free access to all ISA Transactions articles as well as a wealth of other technical content, plus professional networking and discounts on technical training, books, conferences, and professional certification.

Click here to join ISA … learn, advance, succeed!

Copyright © 2019 Elsevier Science Ltd. All rights reserved.



Source: ISA News

AutoQuiz: Technical Characteristics of the Open Systems Interconnection Model

The post AutoQuiz: Technical Characteristics of the Open Systems Interconnection Model first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Translation of data between a networking service and an application, including character encoding, data compression, and encryption/decryption, is the responsibility of which layer of the Open Systems Interconnection (OSI) model?

a) application
b) presentation
c) session
d) transport
e) none of the above

Click Here to Reveal the Answer

The application layer addresses high-level application programming interfaces, including resource sharing, remote file access, directory services, and virtual terminals. The session layer deals with continuous exchange of information transactions between two nodes. This layer is often not required in instrumentation bus protocols. The transport layer provides reliable transmission of data segments between points on a network, such as TCP and UDP.

The correct answer is B, presentation. The presentation layer for protocols used in control systems provides the translation of data received into a usable format, such as binary data from a programmable logic controller into floating point data for presentation in a distributed control system data table.
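
As a concrete, purely illustrative example of that presentation-layer translation, the Python snippet below reassembles two raw 16-bit words, as they might arrive from a PLC register read, into an IEEE 754 floating-point value for display in a DCS data table. The word order and register values are assumptions; real devices vary.

    import struct

    high_word, low_word = 0x42C8, 0x0000   # hypothetical raw register values from a PLC
    value = struct.unpack(">f", struct.pack(">HH", high_word, low_word))[0]
    print(value)                            # 100.0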

Reference: Nicholas Sands, P.E., CAP, and Ian Verhappen, P.Eng., CAP, A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

An Introduction to Operations Research in the Process Industries

The post An Introduction to Operations Research in the Process Industries first appeared on the ISA Interchange blog site.

This guest blog post is part of a series written by Edward J. Farmer, PE, ISA Fellow and author of the new ISA book Detecting Leaks in Pipelines. To download a free excerpt from Detecting Leaks in Pipelines, click here. If you would like more information on how to purchase the book, click this link. To read all the posts in this series, scroll to the bottom of this post for the link archive.

A long time ago (back in 480 BCE), King Leonidas of the Greek city-state of Sparta confronted an invasion launched by the ambitious Persian King Xerxes, whose army, estimated at between 70,000 and 300,000 experienced warriors, was on its way across Greece to capture Sparta.

Limited by political issues in Sparta, Leonidas was forced to confront the Persian force with his personal guard: 300 of Sparta’s very best soldiers. Aided by militia contributed by a few other cities, the Greek force ended, at least temporarily, Xerxes’ ambitions. How could this happen?

Operations research.

Applying mathematical, scientific, and logical techniques to the management of a problem or process has become the field of operations research. In the case of the 300 Spartans it involved development of special shields, weapons, and tactics that made them effective against a force two or three orders-of-magnitude larger. In contemporary times, it is usually considered how the military debacle of World War I became the directed, fast-moving, and effective methods of World War II. It is also the basis of modern automation and process control theory.

Operations research concepts are well-known in the field of project management and are evident in Gantt charts and critical path method organization and planning. The basis of critical path method as well as the tools for using it flow from operations research.

In my early days I learned process control concepts from Benjamin Kuo’s book, Automatic Control Systems. It required a good understanding of differential equations applied to how processes were structured and operated. For optimization, we often studied how things had to be organized versus how they were viewed in the thinking of the time. We wrote equations, such as the transfer function, that mathematically described the relationships between the things that could cause changes and the things that mattered.

We tested and analyzed to develop a sensitivity function describing what happened to an output of a process when an input was tweaked. Many processes in those days had not been designed with those concepts in mind, and many improvements in critical performance could be gained from a better and more complete understanding of cause and effect, along with a quantifiable (mathematical) understanding of how changes affected the issues of importance.

Automation engineering work usually paired experienced people with little formal engineering education, who knew from years of experience how things worked, with fresh new people like me, who helped develop the equations and concepts involved in making the transition to intelligent automation and optimization. In one plant that was very good to me, a team included several veterans of the “valve shop” working with instrumentation and control guys who could usually recognize a valve when they saw one.

If you would like more information on how to purchase Detecting Leaks in Pipelines, click this link. To download a free 37-page excerpt from the book, click here.

This required more than just a physical understanding of the chemistry or mechanism; we also needed to assess the effects of time. Much of the help developed along the way ended up mostly in statistics books; I like Mendenhall and Scheaffer’s Mathematical Statistics with Applications, but there are others. Over the years, ISA Fellow Ronald Dieck’s book Measurement Uncertainty has been useful, especially for problems common in process control.

Eventually the benefits of automation were exhibited, analyzed and proven to everyone’s satisfaction. This resulted in machines (what we would consider primitive computers today) analyzing situations and making decisions that were provably faster and closer to optimal than experienced operators could manage.

The number of control loops an operator could handle increased from “a few” to over a hundred, and process throughput improved because decisions were made more quickly and accurately a higher percentage of the time. Sometimes analysis would expose problem-causing areas and ways could be designed to circumvent or minimize them. It was often noticed that automatic control kept all variables closer to proper values and that overall stability improved the performance of even non-automated loops since they had less-extreme conditions to manage. Improvement efforts were focused on issues that sensitivity analysis exposed as the most critical in achieving intended results – improvement money went to the right places.

Knowing that something does work is comforting. Knowing how it works is satisfying, and useful. Is there a way to make it work better? Once a process is deeply understood a range of opportunities can open. Perhaps results would be better with higher quality inputs. On the other hand, perhaps there are ways to adjust processing that allows using less expensive input or producing additional outputs that improve the quality of a primary product while producing a lower value secondary product. Of course, it all depends on what’s involved but the opportunities can be mind-bending.

Operations research has been useful to me from my first experience with the U.S. Army in 1966 when it helped me develop methodology for getting out of bed, the bed made, dressed, with field gear, and in formation, all with time to help my squad do the same things, all in less than 15 minutes.

Whenever it occurs to you that there may be a completely correct, and perhaps optimal, way to do something that others think through from scratch every time, it is often useful to map out the steps involved, model or diagram the situation, and construct a solution method. It can mean the difference between “an answer” and the optimization for which everyone was hoping.

Think like a Spartan! The 300 Spartans would have been very pleased to observe improvement of several orders of magnitude in a dependable and predictable way.

About the Author
Edward Farmer, author and ISA Fellow, has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. He has authored three books, including the ISA book Detecting Leaks in Pipelines, plus numerous articles, and has developed four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. During his long industry career, he established EFA Technologies, Inc., a manufacturer of pipeline leak detection technology.

Connect with Ed
LinkedIn | Email

 



Source: ISA News

Mechatronics: The Evolution From Mechanical to Information-Based Industrial Automation

The post Mechatronics: The Evolution From Mechanical to Information-Based Industrial Automation first appeared on the ISA Interchange blog site.

This post was written by Kenneth J. Ryan, MD, director of education at BE.services GmbH and co-founder of the Center for Applied Mechatronics at Alexandria Technical and Community College.

At the core of any automated system is the basic closed-loop paradigm: sense, decide, act. This model reverberates from the lowest machine component all the way to the business enterprise, starting with sensors, logical algorithms, and end effectors.

Sensors

Contact or noncontact, sensors are made up of a mechanical component, which at the very least supports and positions the sensor, and a means of communicating the presence, absence, or intensity of the measured phenomena. Each can be designed as an island of sensing encapsulating everything needed for its function and providing an interface for mechanical, electrical, and information integration into the larger system. Object-oriented control code is uniquely suited to reside on these islands.

Controllers

Controllers range from simple bare-metal processors with embedded control algorithms to complex multicore controllers parsing the logic, motion, visualization, communication, safety, and measurement functions. At the lowest level, modular code allows encapsulation of the limited functionality of these controllers, while at the higher levels this same modularity permits rapid integration and configuration of complex state machines.

Effectors (actuators)

The action portion completes the automation triad and can be as varied as the applications, including servomotors, fluid power, and laser energy to implement action.

Mechanics evolves into informatics

The history of packaging machine technology illustrates the evolution from mechanical to information-based automation. Originally, there were complex, internal mechanical cams and gear arrays energized through central driveshaft timing and driven by a prime mover. These machines are now sleek, compact designs based on distributed servomotor technology, facilitated by the availability of software-based cam and gearing profiles and high-speed closed-loop communication protocols. This “extraction” of mechanical complexity has contributed greatly to increased accuracy, speed, and reliability.

A similar metamorphosis is occurring in all segments of industrial automation as the mechanical-to-informatics ratio has nearly inverted over the past two decades in diverse industries including assembly, energy distribution, and batch processing. It permeates all levels of automation design, as evidenced by the degree to which informatics-based automated response to production-level data is driving supervisory and even corporate behavior.

Everything gets smarter

Early in controller-based automation, all the intelligence was concentrated at the decision-making level of the triad. Today, with modular software design, this capability is much more distributed. At the sensor level, we see photo eyes that can manage their own dwell times, sensitivity, and internal diagnostics. Actuator components such as a servo drive simply need to receive a position command from the central controller and can locally close the velocity, acceleration, and deceleration loops. This local encapsulation of intelligence greatly eases the integration of these components into an automation solution. This efficiency is only fully realized using a modular, reusable code architecture like that of the IEC 61131-3 Programmable Controllers – Part 3: Programming Languages standard.

Architecture of modern automation systems with sensors, actuators, and controllers

Object-oriented programming

Object-oriented programming to the rescue

When you recognize the self-contained, component nature of the mechatronic architecture, it becomes desirable to create an informatics domain that maps directly to it. This is where object-oriented programming excels.

Modular programming architecture

Object-oriented programming begins by defining specific program organizational units, each with key structural advantages. Programs, function blocks, and functions are used to accomplish different goals: programs are intended to be more organizational and coordinative, while functions are a means of encoding basic operations that require few parameters. The real stars of the show are function blocks encapsulating functionality that can be reused over and over and modified on the fly through a parameter interface.
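As a minimal Structured Text sketch of how the three program organizational unit types divide the work (all names are hypothetical and the logic is deliberately trivial): a function performs a small stateless calculation, a function block keeps state between calls and is reused through instances, and a program coordinates them.

    (* FUNCTION: stateless, few parameters *)
    FUNCTION ScaleRaw : REAL
    VAR_INPUT
        Raw  : INT;     (* raw ADC counts *)
        Span : REAL;    (* engineering-unit span *)
    END_VAR
    ScaleRaw := INT_TO_REAL(Raw) * Span / 32767.0;
    END_FUNCTION

    (* FUNCTION BLOCK: retains state between calls, reused via instances *)
    FUNCTION_BLOCK FB_Debounce
    VAR_INPUT
        In      : BOOL;
        Samples : INT := 3;   (* consecutive scans required *)
    END_VAR
    VAR_OUTPUT
        Out : BOOL;
    END_VAR
    VAR
        Count : INT;
    END_VAR
    IF In THEN
        IF Count < Samples THEN
            Count := Count + 1;
        END_IF
    ELSE
        Count := 0;
    END_IF
    Out := (Count >= Samples);
    END_FUNCTION_BLOCK

    (* PROGRAM: the organizational, coordinating layer *)
    PROGRAM PRG_Station
    VAR
        PartSensor  : BOOL;          (* raw digital input *)
        PartPresent : FB_Debounce;   (* one instance of the reusable block *)
        RawLevel    : INT;           (* raw analog input *)
        Level       : REAL;          (* scaled engineering value *)
    END_VAR
    PartPresent(In := PartSensor, Samples := 5);
    Level := ScaleRaw(Raw := RawLevel, Span := 100.0);
    END_PROGRAM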

Generic control through interface definition

The fullest implementation of object-oriented programming takes these function blocks and subdivides them further into a method (what the function block does) and an interface (the data the function block needs to accomplish its method). The key advantage is that the interface can be defined without needing to know how the method will accomplish its objective. It is a bit like being able to drive any car, whether it is gas, diesel, hybrid, or electric: once you know how to “interface” with the car, you have little interest in how it accomplishes its method of getting you from point A to point B. Two examples illustrate this concept (a Structured Text sketch follows the list):

  1. A “generic” sensor interface can be defined with an “enable” signal input, sensitivity and hysteresis parameters, and a presence/absence output. Once this interface is defined, the method written for the function block controlling the sensor can differ depending on whether the application uses a photo eye, a capacitive sensor, or a contact probe.
  2. The same interface can be used to integrate a servomotor or a fluid power actuator for axis motion, even though they are entirely different motion hardware.
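A minimal Structured Text sketch of the first example, assuming a tool that supports the object-oriented extensions of the third edition of IEC 61131-3 (interface, method, and variable names here are hypothetical, and details such as hysteresis handling and I/O mapping are omitted):

    (* The generic sensor contract: callers only know this interface *)
    INTERFACE I_PresenceSensor
    METHOD Detect : BOOL
    VAR_INPUT
        Enable : BOOL;
    END_VAR
    END_METHOD
    END_INTERFACE

    (* One implementation: a photo eye *)
    FUNCTION_BLOCK FB_PhotoEye IMPLEMENTS I_PresenceSensor
    VAR
        Sensitivity : REAL := 0.8;   (* tuning parameter local to this sensor type *)
        RawSignal   : REAL;          (* mapped to the physical analog input *)
    END_VAR
    METHOD Detect : BOOL
    VAR_INPUT
        Enable : BOOL;
    END_VAR
    Detect := Enable AND (RawSignal >= Sensitivity);
    END_METHOD
    END_FUNCTION_BLOCK

    (* Another implementation: a capacitive probe with different internals *)
    FUNCTION_BLOCK FB_CapacitiveProbe IMPLEMENTS I_PresenceSensor
    VAR
        Threshold : REAL := 0.5;
        RawSignal : REAL;
    END_VAR
    METHOD Detect : BOOL
    VAR_INPUT
        Enable : BOOL;
    END_VAR
    Detect := Enable AND (RawSignal > Threshold);
    END_METHOD
    END_FUNCTION_BLOCK

Code written against I_PresenceSensor does not change when the photo eye is swapped for the capacitive probe; only the instance behind the interface changes.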

Communication creates distributed intelligence

After designing a component-oriented mechatronic architecture with mapped object-oriented code, the only remaining requirement for efficient integration is a standardized communication structure. 

Communication protocol

As was the case with the “working” sense-decide-act components of the automation system, the first step is to standardize the hardware and software of the communication system that will integrate these components. Several Ethernet-based industrial communication networks exist to manage the various domains of automation, including EtherCAT, Powerlink, EtherNet/IP, Sercos III, Profinet, and others that leverage commercial off-the-shelf Ethernet to standardize this aspect of the mechatronic structure.

Communication data structures

The structure and syntax of interdevice communication are greatly facilitated by the introduction of OPC UA. OPC UA is a scalable and portable technology that describes and transports information securely, reliably, and flexibly. It is a perfect fit for smart manufacturing, allowing machine-to-machine communication and integration from the plant floor to the management suite. The PLCopen organization (www.PLCopen.org) has simplified the application of OPC UA by defining 24 OPC UA function blocks within the IEC 61131-3 framework for programming controllers, coordinating control, and interacting with enterprise IT, cloud, and Internet of Things applications (e.g., cloud solutions, big data collection, and data analytics tools). These communication standards reflect the modular, standards-based solutions occupying the modern mechatronics and general automation landscape.
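As a rough illustration of what this looks like from the controller side, the sketch below uses the PLCopen-defined UA_Connect client function block to open a session with an OPC UA server. The exact data types and parameter lists should be checked against the PLCopen OPC UA client specification and the vendor's library; treat the names here as indicative, and the endpoint address and variable names as hypothetical.

    PROGRAM PRG_UaClient
    VAR
        Connect     : UA_Connect;             (* PLCopen OPC UA client FB *)
        SessionInfo : UASessionConnectInfo;   (* session settings; fields per the PLCopen spec *)
        ConnHandle  : DWORD;                  (* handle reused by subsequent UA_* blocks *)
        StartComms  : BOOL;
    END_VAR

    Connect(
        Execute            := StartComms,
        ServerEndpointUrl  := 'opc.tcp://mes-server:4840',   (* illustrative address *)
        SessionConnectInfo := SessionInfo,
        Timeout            := T#5S);

    IF Connect.Done THEN
        ConnHandle := Connect.ConnectionHdl;
        (* ConnHandle would now be passed to node-handle, read, and write blocks *)
    END_IF
    END_PROGRAM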

Modular architecture has automation advantages

Now that we have examined some of the available solutions for providing a standards-based, modular architecture, what are the advantages, and why apply them?

Flexibility: Mechatronic components mapped directly to object-oriented code create a “Lego” toolkit that can be assembled into a vast array of end devices. Once the mechatronic components are matched with their encapsulated method and provided with standardized communication interfaces and a network protocol, smart sensors can be coupled to smart actuators with little or no need for additional supervisory control.

The more “granular” the description of the objects becomes, the more flexibility is created. Just like Legos: the more foundational the individual component objects, the less modification is needed to integrate them into a solution. Think of the utility of a Lego block with only one “interface bump”; it can be part of almost any design. Similarly, the more fundamental the mechatronic object, the broader the scope of potential applications.

Reusability: Create it once and use it over and over, across projects and over time. Once a class object is created, each copy in the application simply requires a new instance of the object. For example, once you create code to describe a photo eye, each additional photo eye in the application simply uses a copy (instance) of the original. Because the method and interface have been written to handle all functionality of the sensor within that function block, all that remains is to map the interface to the application and provide the needed parameters to each unique instance.
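Using the hypothetical FB_PhotoEye block sketched earlier, three photo eyes on a line are simply three instances of one object; only the I/O mapping and parameters differ:

    PROGRAM PRG_LineSensors
    VAR
        EyeInfeed  : FB_PhotoEye;   (* each physical photo eye is one instance *)
        EyeReject  : FB_PhotoEye;
        EyeOutfeed : FB_PhotoEye;
        PartAtInfeed  : BOOL;
        PartAtReject  : BOOL;
        PartAtOutfeed : BOOL;
    END_VAR

    (* Mapping of each instance's RawSignal to its physical input is omitted for brevity *)
    PartAtInfeed  := EyeInfeed.Detect(Enable := TRUE);
    PartAtReject  := EyeReject.Detect(Enable := TRUE);
    PartAtOutfeed := EyeOutfeed.Detect(Enable := TRUE);
    END_PROGRAM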

Objects created for one application can easily be maintained in an object library for use in future applications, because they are reusable in any project. The more foundational an object, the greater the likelihood it has an application in other projects. Recognize that many of these library objects are hardware independent and can thus be reused across development tools and controllers.

Transportability: Code is usable across industries. Mechatronic components and their mapped code modules created for discrete applications may be applied to process automation. Similarly, an object created for factory automation may have applications in mobile environments.

Adaptability: The use of methods and interfaces allows the application to be “agnostic” regarding the underlying method used to accomplish a given application goal. Once an interface is defined, the underlying method can change to fit the mechatronic component being used in each specific application. This is another example of allowing “best solution” design, because in one environment servomotors may be appropriate and in another a fluid power solution may be more effective.
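Continuing the hypothetical sensor example, the application can hold nothing more than an interface reference, with the concrete implementation chosen at configuration time; this is a sketch of the pattern, not a prescribed implementation:

    PROGRAM PRG_PickStation
    VAR
        PhotoEye    : FB_PhotoEye;          (* one possible implementation *)
        CapProbe    : FB_CapacitiveProbe;   (* an alternative for a different product *)
        Sensor      : I_PresenceSensor;     (* application logic sees only the interface *)
        UsePhotoEye : BOOL := TRUE;         (* selected at configuration time *)
        PartSeen    : BOOL;
    END_VAR

    (* Choose the concrete mechatronic component; the downstream logic is unchanged *)
    IF UsePhotoEye THEN
        Sensor := PhotoEye;
    ELSE
        Sensor := CapProbe;
    END_IF

    PartSeen := Sensor.Detect(Enable := TRUE);
    END_PROGRAM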

Scalability: This advantage has two aspects. The first, less obvious, is the ability to expand the use of IEC 61131-3 to encode non-hardware-based application functionality, such as communication and visualization; each of these can be realized using function blocks authored in IEC code. The second is that the granular nature of IEC 61131-3 code allows it to be scaled from the lowest-level embedded device to the highest-level multicore industrial process controller.

Validation: This is one of the most underappreciated advantages of modular code architecture. Once the code within an IEC 61131-3 object is validated, the object (function block) can be locked and traced using an electronic signature. Reusing the object does not require repeating the expensive, time-consuming validation process. As regulatory pressure increases on the manufacturing of medical devices, food, pharmaceuticals, personal care products, and other goods, this advantage takes on considerable significance.

IEC 61131-3 code supports all these architectural advantages

Program organizational units are designed to match mechatronics components with modular code.

  • Program-level code can match machine subunits and behaviorally discrete functionality.
  • Function blocks can encapsulate specific behavior and map it to specific mechatronic components.
  • Functions can encode generic operations executed across a broad array of components.

Language options: IEC 61131-3 defines the syntax for Sequential Function Chart, Function Block Diagram, Ladder Diagram, and Structured Text, giving engineers a toolbox from which to select the best-in-class language for each application. Sequential Function Chart is an ideal means of defining top-down state machines. Continuous Function Chart, a graphical extension of Function Block Diagram offered by many development tools, is ideal for visualizing and tracing logic pathways. Structured Text is a compact, powerful text-based language for coding complex automated behavior in a dense format.

Normalization of programming syntax makes training efficient and durable. Perhaps one of the most important achievements of the standard is the normalization of programming languages with a rigid syntax: there is little ambiguity about how each one is used to accomplish a given automation goal. Once the languages are learned, they are applicable across a broad array of compliant controllers from participating companies, which makes it easier to move between application development environments from multiple vendors. In addition, third parties can provide hardware-independent solutions that further expand the range of development and control options.

Once engineers and technicians have been trained in one IEC 61131-3 compliant development and control environment, the threshold for migrating to new environments is minimized, and engineers, technicians, and operators can be effective over a broader range of applications.

Library-based reusability is at the foundation of the IEC 61131-3 automation software standard. Every module created with the IEC 61131-3 software model can be saved for reuse, increasing productivity. Emerging global object libraries offer reliable, tested solutions that can be used instead of reinventing common functions.

Figure 1. Ladder Diagram programming

Figure 2. Two-axis camming function block

Figure 3. Structured Text programming

Standards-based products and solutions enhance market value

In a competitive and innovative environment, the life cycles of existing products are shortened, and time to market for new products is accelerated. Standardization of software technologies and tools helps shorten development time relative to complicated, maintenance-intensive proprietary software tools. In addition, users benefit from access to multiple qualified suppliers, standardized development and maintenance tools, reduced training investments, and increasing independence from single-supplier solutions.

Vendor conformity to the IEC 61131-3 standard leverages previous resource investments. Once engineers gain qualification in a standard like IEC 61131-3, they also gain the flexibility to work with different brands of software development and maintenance tools that are compliant with that standard. This allows users to expand their supplier options, pick the most suitable system for specific tasks, and reduce overall automation costs.

Engineers who want to leverage their intellectual property by writing program code and function blocks that are reusable on different systems increasingly specify standards-compliant hardware and software in the design phase of a new machine, process, or plant to enhance supplier independence. This desire for transportability also extends across development environments: the PLCopen organization has specified the PLCopen XML format, which allows applications developed in one tool to be reused in another environment.

Technicians trained in the IEC 61131-3 software standard can easily learn new applications as automation expands in factory and machine environments. For example, a qualified engineering technician comfortable using IEC 61131-3 to program and maintain packaging machines can easily program a different type of machine or a completely different application, such as off-road vehicle automation, building automation, or even “smart-grid” electrical distribution automation. This provides opportunities for diversification in human resource management, as well as career migration opportunities for individuals.

Retooling and process improvement are also accelerated in standards-based environments. Manufacturers innovating on standards-based programming tools provide user-friendly features and increased application development efficiency. Examples include integration of source code management, static code analysis, automatic test features, profiler functionality, refactoring, and even integration of C code in an IEC application. Having such features completely integrated into a development environment adds value and accelerates adoption of the standard.

The increased performance and lower cost of hardware reduce the number of controllers needed in an installation. Today’s applications might require a programmable logic controller (PLC), motion controller, safety controller, and display. Modern system architecture allows a central, multicore control system to cover these requirements. The PLCopen organization has defined sets of function blocks for motion control and safety, so a conforming control system can take over these tasks. Modern programming environments allow the engineer to develop the PLC code, motion code, safety application, and communication setup all in a single tool, even within a single application.

Standardized application specification, development, and maintenance

Automation solutions are much easier to manage when all parties communicate from a standards-based platform. In many projects, several engineers are involved in specification, design, and maintenance, with tasks divided among different teams. For the largest systems, it is not unusual to involve third parties as outsourced engineering firms or system integrators. Standardization dramatically eases communication between teams and their understanding of the code, and standardized code modularity eases validation, performance acceptance testing, and life-cycle maintenance.

Enhanced mechatronic interoperability

The interoperability of hardware in a standards-compliant, component-based architecture offers options for the selection of best-in-class mechatronics solutions. Upgrading an automation system is easy when standards create predictable component interfaces. When the application scope needs to be extended, the standards-based mechatronic architecture can be easily integrated into the software ecosystem with coding extensions and configuration over standard communication protocols. Machines become modular when the mechanics and automation components are mirrored in a modular application-code architecture.

Industrial automation is advancing inexorably toward flexible, component-oriented design and implementation. Standards-based, modular architecture has significant advantages for industrial automation, and the IEC 61131-3 standard fully supports this approach.

A component-oriented automation approach is financially beneficial for users thanks to the expansion of compatible suppliers, the flexibility of component selection, and the reduction of overhead in the design, integration, and maintenance phases of the system life cycle.

It is no surprise that standards fit comfortably into the nimble architecture of the emerging mechatronic era. Smart manufacturing is here, and innovators, suppliers, and users alike prefer automation platforms based on standard technologies like IEC 61131-3, supporting PLCopen, OPC UA, TSN, and real-time operating system environments. Stakeholders at every level benefit from such standardization.

About the Author
Kenneth J. Ryan, MD, is director of education at BE.services GmbH, which owns BE.educated, an e-learning platform for industrial automation software tools. Ryan is the cofounder of the Center for Applied Mechatronics at Alexandria Technical and Community College. He created the first PLCopen-certified, university-based IEC 61131-3 curriculum and training center in North America. Ryan has been president of SERCOS North America, a member of the PLCopen board of directors, founder and director of the Center for Advanced Manufacturing Automation, and chair of a National Science Foundation – Advanced Technology Education Committee.

Connect with Kenneth
LinkedIn

A version of this article was also published in InTech magazine



Source: ISA News

AutoQuiz: How to Install Thermocouple Extension Wires

The post AutoQuiz: How to Install Thermocouple Extension Wires first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

When connecting a “K” type thermocouple to a control system where extension wires are required, it is important to use only properly installed “KX” extension wires because:

a) this prevents the formation of a second temperature measurement junction
b) the manufacturer’s warranty for the thermocouple would be voided if “KX” extension wire is not used
c) “KX” thermocouple extension wire comes with special connectors for making the connection
d) “KX” thermocouple extension wire is cheaper than “JX” thermocouple extension wire and reduces installation cost
e) none of the above

Answer

The correct answer is A; it prevents the formation of a second temperature measurement junction. A thermocouple measurement junction is formed wherever two dissimilar metals are joined. KX-type thermocouple extension wire is made of the same metals as the K-type thermocouple (chromel and alumel). When extending the thermocouple leads with an extension wire back to the control system input card, KX thermocouple extension wire must be used, and the chromel wire and the alumel wire must be joined to the wire of the same metal in the extension cable. If JX or another type of extension wire is used, another measurement junction is formed. For instance, if JX extension cable is used in the example in this problem, the point where the iron and chromel wires are joined would form another thermocouple. This will negatively affect the intended measurement signal.

Proper installation of thermocouple extension wires also requires special terminal blocks to prevent additional junctions from being formed.

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

Webinar Recording: Lessons Learned During the Migration to a New DCS

The post Webinar Recording: Lessons Learned During the Migration to a New DCS first appeared on the ISA Interchange blog site.

This educational ISA webinar was introduced by Greg McMillan. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient, and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

This ISA webinar is introduced by Greg McMillan and presented by Hector Torres, in conjunction with the ISA Mentor Program. Hector received ISA’s John McCarney Award for his article on the opportunities and challenges of enabling new automation engineers, and he has been a member of the ISA Mentor Program since its inception. In this webinar, he provides a detailed view of how to use key PID controller features that can greatly expand what you can achieve, including the setting of anti-reset windup (ARW) limits, the dynamic reset limit, the eight different controller structures, integral dead band, and the set-point filter. Feedforward and rate limiting are also covered, with some innovative application examples.


Principal ISA Mentor Program mentee Hector Torres shares his extensive knowledge gained from migrating a plant from a 1980s-vintage DCS to a state-of-the-art new DCS. The following important topics are covered: the proper setting of tuning parameters, controller output scales, and anti-reset windup limits, plus the many grounding, wiring, and configuration practices found to be essential in a migration project that exceeded expectations.

ISA Mentor Program Posts & Webinars

Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars.

About the Author
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

Connect with Greg
LinkedIn



Source: ISA News