Thank You Sponsors!

CANCOPPAS.COM

CBAUTOMATION.COM

CONVALPSI.COM

DAVISCONTROLS.COM

ELECTROZAD.COM

EVERESTAUTOMATION.COM

HCS1.COM

MAC-WELD.COM

SWAGELOK.COM

THERMON.COM

VANKO.NET

WESTECH-IND.COM

WIKA.CA

What ISA Can Do for You

The post What ISA Can Do for You first appeared on the ISA Interchange blog site.

This post is authored by Paul Gruhn, president of ISA 2019.

ISA has both led and participated in a number of surveys over the last few years. One thing that really stood out to me is the significant difference between “active” and “passive” (and non-) members: their perceptions of ISA, and what they want from it, differ.

Active members have a strong relationship with the Society, one driven by perceived value. They rate the Society’s products highly, perceive their membership to be a good value, and use many of our products and services. Their most preferred products are a) annual conferences, b) local meetings, and c) accreditation programs. ISA naturally offers all three, and much more.

Passive members (and non-members), on the other hand, have a weak relationship with the Society. They do not perceive membership to be a good value. They have low familiarity with our product range, do not see a benefit to it, and therefore have a low usage rate.

I find that intriguing: some in the automation industry see the value of what we offer and use our products, yet other members of the very same industry don’t, with many not even aware of what we actually offer.

But here’s the really interesting part: passive members state that they have unfulfilled needs. They seek a competitive advantage and want support in achieving both their own and their employers’ objectives. That’s exactly what ISA offers, yet they don’t seem to realize it!

Here’s a similar example. An Executive Board member recently posed a question on the LinkedIn ISA forum. (It’s interesting to note the group has 57,000 members, many more than ISA actually has, so apparently non-members do want to be associated with the Society in some manner!) That board member asked what people “want” from ISA. The vast majority of responses asked for things the Society already offers, and has offered for a long time! How is it that these people are either unaware of what we offer, or aware of it but don’t perceive any benefit?

All the promotional material we’ve produced over the years lists what we offer (i.e., training, certifications, standards, publications, and conferences). Active members are able to connect the “what” to the benefit it offers both to them and to their employer. They use our products and services as a result. Simply put, passive and non-members don’t make the connection, and therefore they don’t use our products and services. Yet our products and services are exactly what they’re asking for to make them and their employers more successful! We simply need to make the connection more obvious.

As an employee, are you looking to increase your technical knowledge and make yourself more valuable and competitive in the marketplace? Are you looking for a way to advance your career more quickly? Are you looking for ways to make your employer more successful? If so, ISA has just what you’re looking for!

As an employer, are you looking for a competitive advantage? Are you looking for a way to increase the competency of your employees, or a place to find competent prospects? Are you looking for a way to increase your operational excellence (e.g., safety, security, efficiency, profitability)? If so, ISA has just what you’re looking for!

And all this fits in perfectly with our new mission statement: Advance technical competence by connecting the automation community to achieve operational excellence. We’re advancing the technical competence of everyone in the industry (not just members) through publications, training, certifications, standards, and conferences. (We give our members extra benefits!) We do it to make people and their employers more successful. Who wouldn’t want to be a part of that!?

Oh, and, like all my predecessors, I’m honored to be your new Society President. I’ve been an active volunteer for 30 years and have served in essentially every area of the Society. I naturally would like to see the Society achieve certain goals over the next year, but those goals will need to be discussed and approved by the Executive Board at our first meeting in January, so I won’t announce them yet. Stay tuned!

About the Author
Paul Gruhn PE, CFSE, and ISA Life Fellow, is a global functional safety consultant with aeSolutions, a process safety, cybersecurity and automation consulting firm. As a globally recognized expert in process safety and safety instrumented systems, Paul has played a pivotal role in developing ISA safety standards, training courses and publications. He serves as a co-chair and long-time member of the ISA84 standard committee (on safety instrumented systems), and continues to develop and teach ISA courses on safety systems. He also developed the first commercial safety system modeling program. Paul has written two ISA textbooks, numerous chapters in other books and dozens of published articles. He is the primary author of the ISA book Safety Instrumented Systems: Design, Analysis, and Justification. He earned a bachelor of science degree in mechanical engineering from Illinois Institute of Technology, is a licensed Professional Engineer (PE) in Texas, and both a Certified Functional Safety Expert (CFSE) and an ISA84 safety instrumented systems expert.

Connect with Paul
LinkedIn | Twitter | Email

 



Source: ISA News

What Are the Benefits and Limitations of Multivariable Control?

The post What Are the Benefits and Limitations of Multivariable Control? first appeared on the ISA Interchange blog site.

This post was written by Allan Kern, PE, who has more than 35 years of process control experience, and has authored numerous papers on topics ranging from field instrumentation, safety systems, and loop tuning to multivariable control, inferential control, and expert systems.

Multivariable control has become an enigma. On one hand, it is logically the centerpiece of process control—veritably “the robotics” of process automation. But on the other hand, even as parallel process control technologies, such as alarm management, safety systems, and cybersecurity, continue to evolve swiftly, multivariable control’s progress stalled sometime in the past decade. This is all the more puzzling because several basic multivariable control issues, such as cost, maintenance, and performance, remain unresolved, and many potential applications remain to be addressed—the “application gap.”

This post does not endeavor to explain exactly where industry now finds itself, but only to put forth some of the features of the multivariable control landscape that have emerged with experience, especially where those features are ambiguous or unexpected relative to original industry expectations. With a more accurate picture and wider shared understanding of the prominent features of the multivariable control landscape, hopefully forward progress may resume.

What is advanced process control?

In recent decades, the term advanced process control (APC) has often been used synonymously with model-based predictive multivariable control (MPC), but for the purposes of this post, and for the purposes of moving industry forward, it helps to maintain distinctions.

Advanced process control is an umbrella term that can refer to any of a number of process control technologies and techniques. Functionally, advanced process control is best defined in contrast to basic process control. Basic (or base-layer) controls are normally part of initial plant design, construction, and commissioning, to facilitate basic plant operation, automation, and reliability. Advanced process controls, on the other hand, are most often added after the plant is up and running, often over the course of many years, to address economic or operational improvement opportunities that become apparent in the course of ongoing operation.

In terms of tools or technologies, APC comprises many, of which multivariable control is one. Other advanced control techniques include advanced regulatory control (ARC), inferential control, and sequential control.

What is multivariable control?

Multivariable control is usually viewed as something new, complex, and powerful. It is usually explained in terms of its technology (its models, control solver, and optimization methods), rather than in terms of the automation role it actually fills in industrial process operation. Consequently, from operators to managers, and even to control engineers themselves in many cases, people have often misunderstood exactly the role and value that multivariable control brings to process operation.

With the benefit of several decades of experience, multivariable control can now be widely understood in a more practical, even obvious way. This will help operators understand and use it more effectively. It will help managers and decision-makers better understand its role, value, and limitations. And it will help control engineers design and build more effective and reliable controllers.

Multivariable control is not new. It is as old as industry itself (much older than computers). Multivariable control has always been a prominent aspect of essentially every industrial process operation. Operators adjust valves and controllers to keep processes within constraint limits, to (locally) optimize economic performance, and—this aspect has been crucially missing from most discussions—to ensure process reliability (i.e., to avoid or minimize the risk of a process upset or abnormal situation).

That is the essence of multivariable control. It is mainly carried out by operators, who may be thought of as “boots on the ground.” But overall strategy and tactics are the product of the greater operating team, especially including production planners and process engineers, but also including nearly every other group whose areas of responsibility may intersect with ongoing operation, such as maintenance, equipment reliability, and process technologists. Constraint limits and operating tactics can be highly dynamic and may change or be updated hourly or daily from throughout the operating team. Multivariable control may or may not be partially automated in the form of an MPC application, but it is a prominent, dynamic aspect that is ongoing 24/7 in essentially every industrial process operation.

An important aspect of this generic functional definition of multivariable control is that it is independent of the technology or methodology being employed—manually by the operating team, automatically by a model-based controller, or automatically by a “model-less” controller, which is an alternative emerging technology. For example, the iconic multivariable control “constraint corner” diagram shown in figure 1 is recast as the difference between automated versus manual multivariable control, with benefits accruing by function, not method.

Figure 1. Automated multivariable control brings the usual automation benefits of greater consistency and timeliness, plus the traditional multivariable control benefits of increased capacity, efficiency, and quality. These benefits derive from reliably closing the constraint control and optimization loops, although not necessarily from using models to do so.
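To make this functional definition concrete, here is a minimal sketch of the constraint-then-optimize cycle that either an operator or an automated (model-based or model-less) controller carries out. The tag names, limits, and step size are invented for illustration; this is not any vendor’s algorithm.

```python
# A minimal sketch of one multivariable control cycle: constraint control
# first, then local optimization with the leftover degree of freedom.
# All tag names, limits, and readings below are hypothetical.

CONSTRAINTS = {                        # constraint variable -> high limit
    "column_dP_bar": 0.80,             # e.g., a flooding limit
    "product_impurity_pct": 1.00,      # e.g., a quality limit
}

def read_cv(name):
    """Placeholder for a live measurement read (dummy values here)."""
    return {"column_dP_bar": 0.72, "product_impurity_pct": 0.95}[name]

def scan(feed_now, feed_max, step=0.5):
    """One control cycle: retreat if any constraint is violated (precaution),
    otherwise push the remaining degree of freedom toward the local optimum
    (here, maximum feed)."""
    if any(read_cv(name) > hi for name, hi in CONSTRAINTS.items()):
        return feed_now - step                 # constraint control first
    return min(feed_now + step, feed_max)      # then local optimization

print(scan(feed_now=98.0, feed_max=100.0))     # -> 98.5, pushing toward limit
```

Whether this logic runs in an operator’s head or in software, the function, and the benefit, is the same; only the consistency and timeliness differ.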

Is multivariable control “optimization”?

Multivariable control is often described as optimization, but this does not help operators, engineers, and managers understand its role or value, or distinguish it from other technologies that claim this term. Possibly the best term for multivariable control is local optimizer. In mathematics, a local optimizer finds an optimum that is geographically nearby or is based on only a subset of relevant inputs, and care is always taken to distinguish a local optimum from a global one.

Local optimization is an important and appropriate part of multivariable control, because the nature of multivariable control mechanics is that it normally has “left over” degrees of freedom (after first tending to constraint control) that can be readily applied toward optimization. Albeit local, this optimization can be operationally and economically important and worthwhile.
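For readers who want to picture the mechanics, the leftover degrees of freedom are often resolved as a small linear program over the controller’s steady-state gains. The sketch below is a hedged illustration with invented gains, limits, and economics; it assumes SciPy is available and is not a full MPC optimizer.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2 CV x 2 MV steady-state gain matrix (rows: CVs, columns: MVs).
K = np.array([[0.8, -0.3],
              [0.2,  0.6]])
cv_now = np.array([70.0, 40.0])        # current constraint variable values
cv_hi  = np.array([75.0, 50.0])        # constraint high limits
cost   = np.array([-1.0, 0.5])         # economics: favor MV1 up, MV2 down

# Find MV moves that improve economics while honoring CV high limits:
# minimize cost . dMV  subject to  K @ dMV <= (cv_hi - cv_now), |dMV| <= 2.
res = linprog(cost, A_ub=K, b_ub=cv_hi - cv_now, bounds=[(-2, 2)] * 2)
print("local-optimum MV moves:", res.x)

# "Local" in the text's sense: the solver sees only these measured CVs and
# MVs, not the wider inputs available to the greater operating team.
```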

That said, multivariable control automation is partial and its optimization is local, so that it is a mistake to conflate it with the much larger job being carried out by the greater operating team. Automated multivariable control is limited to the subset of inputs available to the controller, which are limited to measured variables (those with transmitters), local variables (those available to the local distributed control system), and known variables (the greater operating team fields a continuous stream of asynchronous unplanned inputs from throughout the plant organization).

Traditional multivariable control practice has basically tried to mitigate this limited geographic horizon by adopting the “big matrix” design practice, wherein nearly all identifiable inputs are included in the matrix. But MPC continues to struggle with “degraded performance.” Experience now suggests this is caused in part by including many inappropriate inputs in the matrix, such as those where immediate automatic response to a constraint violation is operationally undesirable (i.e., “managed” rather than “controlled” variables). This suggests that industry might be better off taking a “small matrix” approach, wherein only key variables, well known and proven in existing operation, are included, resulting in more targeted, reliable, and intuitive performance.

Efforts to implement a less partial (i.e., “rigorous”) and less local (i.e., “global”) optimization, often known as real-time optimization (RTO), are ongoing at many plant sites and enterprises across industry. RTO attempts to automate, to the extent possible, the full optimization scope of the greater operating team (although asynchronous, unanticipated, and human-derived inputs will most likely always comprise a large part of the solution). RTO is implemented on the business side of the computer network, but also normally depends on a healthy multivariable controller on the control system side, in order to honor actual dynamic process and equipment constraints. RTO may automatically write to some of the optimization targets and constraint limits of the multivariable controller. This was once expected to become very common, but as yet in industry it remains very rare. More often, today’s RTO results are passed down to operation via the human chain of command as updated operating instructions.

Another way to appreciate the scope and value of multivariable controller optimization is to notice that few people refer to it as optimization when it is being carried out manually by the operator, without the aid of an automated multivariable controller. Yet, an operator has a much wider geographic horizon and greater number of available inputs than an online controller, by virtue of access to business network tools and direct communication with the greater operating team, including the chain of command, process engineers, and field operators. Automated multivariable control has the virtue of responding in a timely and consistent manner to the key constraints and optimization variables that are made available to it. This emphasizes the wisdom of including key variables in an automated multivariable controller, rather than all variables.

Multivariable control performance

Multivariable control performance, maintenance, and cost, which are all closely related, have posed continuing ownership concerns. The majority of industry’s installed MPC applications have “poor” performance, but discussion and understanding of degraded performance have been mostly absent. The wider community seems reluctant to directly confront these issues, which have now persisted for decades, resulting in today’s “enigma.”

Degraded performance is basically a syndrome (a set of incompletely understood causes and symptoms) characterized by “clamped” manipulated variables (MVs), detuned control action, and many variables out of service. The problem is exacerbated by the large number of variables caused by big-matrix design practice, which often render multivariable controllers unintuitive to operators—producing a disconnect between manual and automated multivariable control, rather than a congruence. This has led to yet another emerging concern—operators who have not learned (or have forgotten) how to control and optimize their unit when the multivariable controller is unavailable or turned off.

Development of performance metrics has also stagnated, even as, by contrast, industry has widely adopted an entire suite of alarm management metrics in half the time. The majority of end-user sites still rely solely on “service factor” (simple on/off status), but the community has long realized that this metric is largely meaningless (a controller can be on, but not doing anything). Moreover, the typically high service factor values this metric reports would probably be much lower if abandoned applications were kept in the math. A limited number of end users have forged ahead with additional, more meaningful metrics, such as MV utilization, but few best practices have emerged to help users increase utilization. The author, who first published this metric in 2005, believes the low values for this metric are essentially telling industry that the lion’s share of multivariable control benefits could be captured more reliably by using small-matrix design practice.
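To illustrate the difference between the two metrics, the snippet below computes a service factor (simple on/off status) alongside an MV utilization figure from hypothetical historian samples. Utilization definitions vary by site, so treat this as one plausible formulation for illustration, not the author’s published definition.

```python
import numpy as np

# Hypothetical historian samples for one manipulated variable (MV).
on      = np.array([1, 1, 1, 1, 1, 0, 1, 1], dtype=bool)  # controller on/off
clamped = np.array([0, 1, 1, 1, 0, 0, 1, 1], dtype=bool)  # MV pinned at clamp

service_factor = on.mean()                 # looks healthy: mostly "on"
mv_utilization = (on & ~clamped).mean()    # on AND actually free to move

print(f"service factor: {service_factor:.0%}")   # 88%
print(f"MV utilization: {mv_utilization:.0%}")   # 25%: the telling number
```

A controller can report a high service factor while most of its MVs sit clamped, which is exactly why on/off status alone is largely meaningless.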

The author also believes that performance and maintenance issues are rooted in aspects of model-based technology (such as dynamically changing models) and practice (such as big-matrix practice) that were adopted by industry early on and have not been appropriately revised based on experience. When this eventually takes place, the issue of performance will resolve itself and cease to be a major ownership concern. Basic requirements of any controller or any piece of process automation are reliability, responsiveness, stability, precaution, and intuitiveness, or, as the author has summarized it, “operational” performance (figure 2).

Operational Controller Performance

  • Preselected move rates based on process knowledge, experience, and procedures
  • No overshoot or oscillation (for CVs and MVs)
  • Small-matrix design practice
    • Purpose-built matrix (not derived from plant test)
  • Intuitive to operations personnel
  • Mirrors operation priorities, needs, agility
  • Deploy or modify in days or weeks with little or no cost

Figure 2. “Operational” multivariable control performance, characterized by reliability, responsiveness, stability, precaution, agility, and intuitiveness, is a basic expectation of any process controller or any piece of process automation, although MPC has long struggled with performance issues.

The APC application gap

Figure 3 illustrates the APC application gap. This area is a gap because it is not well served by existing tools or technology. ARC has all but disappeared in the wake of MPC, but MPC did not expand to fill the gap as expected, due to its continuing high cost and complexity of ownership. Consequently, for the past couple of decades, applications in this range have often gone unpursued or have stagnated in the planning and budgeting pipeline. They often represent missed opportunities and unrealized operational improvements.

Figure 3. The APC application gap results because applications in this size range are not well served by any existing tools or technology. ARC all but disappeared in the wake of MPC, but MPC never expanded into this range due to persisting high maintenance and cost of ownership issues. An agile and affordable tool to address the many applications in this range would be a boon to industry.

To capture applications in this gap, and to capture many traditional applications that might be better served by the small-matrix design approach, industry needs a more agile and affordable tool than MPC has so far proven to be. Such a tool would also mirror industry’s modern manufacturing agility paradigm and would give industry the luxury of using automated multivariable control to capture common-sense operational improvements, not just projects with large hard returns on investment. Today, the cost of MPC applications continues to run to hundreds of thousands of dollars, which prices MPC out of many gap applications.

In addition to addressing some of the more obvious applications in this gap, such as stand-alone distillation columns, hydrotreating units, and sulfur units, an affordable and agile tool would open up the many applications that only tend to present themselves to the knowledgeable automation engineer when an appropriate tool or solution is on hand (those who remember the agility, affordability, and flexibility of ARC technology readily appreciate this).

Where are we today?

Multivariable control is a prominent aspect of nearly every industrial process operation; ergo, automated multivariable control must become a core competency in the process industries. Nearly everyone expected this to happen long ago, but it never has, due to complex performance, support, and cost issues that industry has been slow to understand and address.

A more affordable, agile, and reliable tool would be an additional boon to industry. It would provide a potential way out from under the high maintenance and support burden of numerous legacy MPC applications; it would capture the “gap” applications; and it would give industry a tool to match its modern dynamic manufacturing paradigm.

A clear set of best practices and a long-term road map should emerge, so that end users know better what to expect and how to plan. By and large, industry today is still following decades-old original practices by rote, leaving many end users in limbo regarding the technology path forward, and increasingly wondering when, how, and even if, the ongoing cost, performance, and support issues will be resolved.

Addressing the APC application gap and achieving multivariable control core competency is a task for the entire process control community, not just the MPC community. A better understanding of past limitations will help industry move beyond them. If necessary, new concepts, such as model-less multivariable control, small-matrix design, and operational performance, warrant further consideration to end the stagnation and carry automated multivariable control progress forward once again.

About the Author
Allan Kern, PE, has 35 years of process control experience. He has authored numerous papers on topics ranging from field instrumentation, safety systems, and loop tuning to multivariable control, inferential control, and expert systems. From 2001 to 2008, Allan served as automation leader at a major Middle Eastern refinery, where his responsibilities included deployment and performance of multivariable control systems. Since 2005, Kern has published more than a dozen papers on multivariable control performance. In 2012, he became an independent process control consultant serving clients worldwide.

Connect with Allan
LinkedIn | Email

 

A version of this article also was published at InTech magazine.



Source: ISA News

The Not-So-Distant Future of Industrial Automation

The post The Not-So-Distant Future of Industrial Automation first appeared on the ISA Interchange blog site.

This post was written by William Aja, vice president of customer operations at Panacea Technologies, Inc.

It sometimes feels like a common theme in the industry is to simply keep things going by implementing what was done before. If it works, why bother fixing it? New projects are continually built on existing standards, most of which are dog-eared with problems. Under financial or management pressure, there is a tendency to push on regardless of planned fixes and improvements.

The start of every project begins the familiar process of counting I/O, selecting vendors, laying out programmable logic controller chassis, and distributing requests for quotes. In an extended blink of an eye, you are commissioning a system built on less-than-perfect standards, inheriting the same quirks as previous systems.

The same conversations repeat during bid review or kickoff meetings when someone points out inconsistencies or outdated standards. The typical response is “We meant to fix that, but we ran out of time,” or “That’s on the to-do list.” Is the brand-new plant really new, or does it have the functionality of a system from the ’90s, barely utilizing the latest technology?

As previous blog posts have pointed out, these are side effects of aggressively downsizing technical staff. Technology changes seem to be accelerating, and the past few years have brought technical advances that have breathed new life into automation groups and sparked great discussions around what the future holds. There is still a lot of work to be done.

We tend to attract very passionate engineers, and one of them summed up the situation perfectly, pointing out that it can be frustrating to see bigger technological jumps and more advanced systems in grocery stores than in the automation industry. Passengers have been tracked on every commercial airline for 30 years using barcoded boarding passes, yet many manufacturers do not track raw material additions to their multi-million-dollar batches! People continually draw attention to the ever-wider disparity between consumer and industrial technology. There are concerns about the leading edge becoming the “bleeding edge,” but the benefits often outweigh the risks. As with safety systems, risks can be designed out or at least mitigated.

Virtualization is a great example. When it first became a buzzword, there was a great deal of pushback. Every application got its own box, and that was the law, no exceptions. Early adopters saw the benefits, and news spread like wildfire. As supervisory control and data acquisition, manufacturing execution systems, and historian capabilities increased, so did the need for processing power, and virtualization was the perfect answer.

Thin and zero clients soon followed, and the fear of something new was replaced with the realization of the power of virtualized infrastructures. Companies now manage sites globally from central locations and deploy engineering resources via remote connections without travel. The daunting and, frankly, scary tasks of patch deployment and operating system upgrades are now easier and centralized. Replacing broken or failed operator and engineering workstations is as simple as connecting power, video, and network cables.

Virtualization is now commonplace in most specification documents, and many are asking what is next. Some say industrial technology pales in comparison to consumer technology—but why is that? Is it because consumer technology is not robust enough or thoroughly tested for industry, or is it a recursive cycle of accepting the status quo and technology advancements that are less than innovative? Things like faster scan rates, increased memory, tougher security, and unified communication protocols are great, but they are evolutionary, not revolutionary.

The next big thing is waiting to happen. The technology exists, and it is our job as automation professionals to help define what is next, push the industry, innovate, and stay on the leading edge of these advances while pushing automation vendors to reinvent their product portfolios.

Advancements like virtualized controllers and end-to-end mobility solutions are not only requested daily by the industry, but are also readily feasible with existing technology. So push boundaries, adopt new technologies, and never accept the status quo simply because it has always been done that way. The success of our industry depends on it.

About the Author
William Aja is vice president of customer operations at Panacea Technologies, Inc. He has a passion for automation and process control and focuses on delivering automation services ranging from automation philosophy consultation and feasibility studies to turnkey process control solutions and long-term service and support. Panacea Technologies is a member of the Control System Integrators Association.

Connect with William
LinkedIn | Email

 

A version of this article also was published at InTech magazine.



Source: ISA News

AutoQuiz: What Information Is Typically Included in a Loop Diagram?

The post AutoQuiz: What Information Is Typically Included in a Loop Diagram? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

A loop diagram typically includes all of the information indicated below, except for:

a) tag numbers of instruments connected to the loop as shown on the P&IDs and depicted by the loop diagram
b) conduit size and material in which the wires are run for the loop
c) wire numbers for all wires and terminal numbers for all connections depicted in the loop diagram
d) indication of the interrelation to other instrumentation loops, including overrides, interlocks, cascaded set points, shutdowns, and safety circuits
e) none of the above

Click Here to Reveal the Answer

Tag numbers (A), wire numbers and terminal numbers (C), and indication of the interrelation to other instrumentation loops (D) are three of the essential elements of a loop diagram. They define the “blueprint” for wiring of the components in the loop shown on the loop diagram. The size of the conduit and the material of the conduit (B) are not usually shown on the loop diagram because these items are generally not required for design review, installation, or maintenance.

The correct answer is B.

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 

Image Credit: Wikipedia



Source: ISA News

KPI-Based Monitoring and Fault Detection for Large-Scale Industrial Processes [technical]

The post KPI-Based Monitoring and Fault Detection for Large-Scale Industrial Processes [technical] first appeared on the ISA Interchange blog site.

This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.

Abstract: Large-scale processes, consisting of multiple interconnected sub-processes, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each sub-process, an instrument variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods.
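For readers who want a feel for the static case, here is a rough sketch of a least-squares KPI monitor that flags a fault when the prediction residual leaves its normal band. The data, dimensions, and 3-sigma threshold are invented for illustration; this does not reproduce the paper’s actual algorithm or its dynamic (instrument variable) extension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 500 samples, 6 process variables, 1 KPI.
X = rng.normal(size=(500, 6))
kpi = X @ np.array([0.5, -0.2, 0.0, 0.8, 0.1, -0.4]) + 0.05 * rng.normal(size=500)

# Static case: least-squares regression from process variables to the KPI.
theta, *_ = np.linalg.lstsq(X, kpi, rcond=None)
residuals = kpi - X @ theta
threshold = 3.0 * residuals.std()          # simple 3-sigma residual limit

def kpi_fault(x_new, kpi_new):
    """Flag a KPI-relevant fault when the residual exceeds its normal band."""
    return abs(kpi_new - x_new @ theta) > threshold

print(kpi_fault(X[0], kpi[0] + 1.0))       # an injected offset -> True
```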

Free Bonus! To read the full version of this ISA Transactions article, click here.

Enjoy this technical resource article? Join ISA and get free access to all ISA Transactions articles as well as a wealth of other technical content, plus professional networking and discounts on technical training, books, conferences, and professional certification.

Click here to join ISA … learn, advance, succeed!

© 2006–2018 Elsevier Science Ltd. All rights reserved.

 



Source: ISA News

Is Schrödinger’s Cat Alive, Dead, or Having Kittens? How the Power of Observation Improves Accuracy and Reduces Errors in Industrial Operations

The post Is Schrödinger’s Cat Alive, Dead, or Having Kittens? How the Power of Observation Improves Accuracy and Reduces Errors in Industrial Operations first appeared on the ISA Interchange blog site.

This guest blog post was written by Edward J. Farmer, PE, industrial process expert and author of the ISA book Detecting Leaks in Pipelines. To download a free excerpt from Detecting Leaks in Pipelines, click here. If you would like more information on how to obtain a copy of the book, click this link.

Back in 1944, Erwin Schrödinger, one of the progenitors of quantum physics, observed in his essay “What Is Life?” that a great many things are “true” according to the rule of the √n. In a random sample of 100 individuals, for example, perhaps √100 = 10 will not conform to a seemingly universal hypothesis, thus making it true 90 percent of the time. If the sample size is increased to a million, then √n = 1000, so the hypothesis will be found true 99.9 percent of the time.

The most profound implications appear at the other extreme, when the sample size becomes small. If the hypothesis is tested in a room of 10 randomly selected people, it may be found that three of them (about 30 percent) do not conform. When it is applied to a single randomly selected individual, note that √1 = 1, meaning our entire “sample” may well be non-conforming.

This simple observation has everything to do with the validity of rules one might propose on any subject, but particularly where human behavior is involved. It is easy for opponents to attack any proposed rule or theorem with as simple a basis as, “Well, it doesn’t apply to ME,” thus implying that a rule must be “true all the time” to be called a rule, and thereby relegating it to the realm of anecdotal observations.

In fact, where people are concerned no rule applies to every one of them, although there are a number of rules that are true even in broad context. In classical mathematics a “proof” must be absolutely true and comprehensive. In modern science (e.g., quantum physics) many outcomes involve probability and limits on certainty.

Einstein didn’t like the probabilistic aspects of quantum theory. He had a penchant for “true all the time” implications flowing from a minimum number of profound rules, so those who set the bar for efficacy of a rule at that high level are in good company. Unfortunately, very little progress will be made in any field involving people on that basis. Einstein is often characterized as not believing in a “dice-throwing God.” When I studied modern physics, I took that to suggest our quest should be toward explanations supporting universally predictable and repeatable outcomes – that true-and-complete understanding took us through a stochastic “curtain.”

In statistics, we characterize probability experience by evaluating the probability density of a variety of observed or expected outcomes. When my son was seven he found he often won a dice-tossing event by betting the two-die result would add up to his age. He’s now 11 and is far less confident that using his age is a sure-fire way to be the most prolific winner.
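A quick enumeration shows why: of the 36 equally likely outcomes of two dice, six sum to 7, while only two sum to 11. The short tally below (illustrative only) makes the point.

```python
from collections import Counter
from itertools import product

# Tally all 36 equally likely two-die outcomes by their sum.
sums = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in (7, 11):
    print(f"P(sum = {total}) = {sums[total]}/36 = {sums[total] / 36:.1%}")
# P(sum = 7) = 6/36 = 16.7%, while P(sum = 11) = 2/36 = 5.6%
```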

Depending on the number of throws and the exact nature of the exercise, it is common to characterize an expected result by its distance from the average of all of them. This is quantified by the “standard deviation” of a data set, calculated as the square root of the average of the squared differences between each data point and the population mean. Simply:

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(X_i - \mu\right)^2}$$

Where:

  • σ is the standard deviation
  • N is the number of data points
  • Xi is the value of the ith sample
  • μ is the population mean

In a “normal” data set, about 68.27 percent of the observations fall within one standard deviation of the mean (average). That’s about two-thirds of them, leaving the rest above or below this central “consensus.” This is where stochastic concepts begin creeping into measurement (observation) issues, such as “accuracy.” It also portends the “confidence” we can have in the next measurement, or the one after that.

What is our confidence in an analysis for which we have measurements? Put another way, how sure are we of the accuracy of a particular measurement? As previously discussed, if we have but one measurement in our set it might be “right” or it might be “wrong.” If we have enough data to characterize a probability density function from our measurement experience, we can look at the expected variations and the value of a particular measurement.

One standard deviation contains 68.27 percent of the data, two contain 95.45 percent, and three contain 99.73 percent. Any situation can be calculated, but the purpose here is just to develop a feeling for the concept. Note that this discussion is focused on “normal” (also known as Gaussian) distributions. A process may turn out to have a different distribution shape, but that’s a subject for another day.
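Those coverage figures are easy to confirm numerically. The few lines below (an illustrative aside, not part of the original article) sample a normal distribution and measure how much of the data falls within one, two, and three standard deviations:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)  # simulated measurements

mu, sigma = x.mean(), x.std()
for k in (1, 2, 3):
    inside = np.mean(np.abs(x - mu) <= k * sigma)
    print(f"within {k} standard deviation(s): {inside:.2%}")
# prints approximately 68.27%, 95.45%, and 99.73%
```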

In many automation applications, the process can change faster than the measurement system can report values. The number of readings available over what may be a short period of interest can have a profound effect on the confidence one can place in an analysis. When it comes to data, more is usually better, as this excursion into statistics has hopefully demonstrated.

About the Author
Edward Farmer has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. His work has produced three books, numerous articles, and four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. He is the owner and president of EFA Technologies, Inc., manufacturer of the LeakNet family of pipeline leak detection products.

Connect with Ed
LinkedIn | Email

 



Source: ISA News

AutoQuiz: What Is the Purpose of Forward Decoupling?

The post AutoQuiz: What Is the Purpose of Forward Decoupling? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Using forward decoupling, the objective is to:

a) accumulate the interaction between two process variables and their outputs
b) cancel out the interaction between two process variables and their outputs
c) use feedback to eliminate the gain of one process variable and its output
d) use a decoupling algorithm to eliminate all gain between outputs
e) none of the above

Click Here to Reveal the Answer

Answer A is not correct; accumulation of the interactions would be counterproductive to the loop performance, and would simply magnify the coupling between inputs and outputs.

Answer C is not correct. Simple feedback can be used to make adjustments to the output based on a measured or calculated quantity, but simple feedback cannot be used to eliminate the process gain between a process variable and its own output.

Answer D is not correct. Similar to Answer C, a decoupling algorithm does not eliminate “all gains between outputs,” but rather, when used in a forward decoupling method, can be used to cancel the interactions between multiple inputs and their outputs.

The correct answer is B, “Cancel out the interaction between two process variables and their outputs.” In MIMO (multiple input, multiple output) systems, often the process variables and outputs interact with one another, which makes control of the independent variables difficult. A forward decoupling algorithm can be used to cancel out these interactions, making more traditional control methods applicable to these complex systems.
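As a concrete (and hedged) illustration that is not drawn from the quiz reference: for a 2×2 process with steady-state gain matrix K, a static forward decoupler D can be chosen so that K·D is diagonal, meaning each controller output effectively moves only its own process variable. The gains below are invented.

```python
import numpy as np

# Hypothetical 2x2 steady-state process gains:
# row i = response of PV_i to controller outputs (OUT_1, OUT_2).
K = np.array([[2.0, 0.5],
              [0.8, 1.5]])

# Static forward decoupler: D = K^-1 @ diag(K), so K @ D is diagonal and
# each loop keeps (roughly) its original gain on its own PV.
D = np.linalg.inv(K) @ np.diag(np.diag(K))

print(np.round(K @ D, 6))    # off-diagonal terms cancel: interaction removed

# In operation, controller outputs u pass through the decoupler before
# reaching the valves: valve_moves = D @ u
```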

Reference: Greg McMillan and Robert Cameron, Models Unleashed: Virtual Plant and Model Predictive Control Applications

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

Webinar Recording: How to Protect Critical Industrial Control Systems

The post Webinar Recording: How to Protect Critical Industrial Control Systems first appeared on the ISA Interchange blog site.

This ISA webinar on how to protect critical control systems from cyber attack was presented by Wally Magda, an ISA instructor and an internationally recognized cybersecurity and physical security expert.

Note: This is the first eight minutes of the recorded webinar. To watch the entire webinar, click this link.


Legacy industrial devices are “insecure by design” and therefore vulnerable to interruption from cybersecurity threats or unintentional network incidents. Risk is increasing as Ethernet networking becomes more pervasive and more complex. Physical security now has internet protocol (IP)-based cameras and sensors sharing the same network infrastructure. Along with that come the Internet of Things (IoT) and the Industrial Internet of Things (IIoT). Now your control room coffee pot and refrigerator may be connected to the internet, exposing your network to threat actors, ransomware, and bots.

The move to using open standards such as Ethernet, TCP/IP, and web technologies in industrial automation and control systems (IACS), supervisory control and data acquisition (SCADA), and process control networks (PCN) has begun to expose these systems to the same cyberattacks that have wreaked so much havoc on corporate information systems. The introduction of complex Windows 7 and 10 operating systems (OS), deployed alongside legacy Windows XP systems, raises the security risk even higher.

This presentation provides a high-level overview on how the ISA/IEC 62443 standards can be used to protect your critical control systems. It also explores the procedural and technical differences between the security for traditional IT environments and those solutions appropriate for IACS, SCADA, and PCN environments.

As part of ISA’s continued efforts to meet the growing needs of industrial control systems professionals and to expand its global leadership outreach into the security realm, ISA has developed a knowledge-based certificate recognition program designed to increase awareness of the ISA99 committee and the ISA/IEC 62443 standards. The ISA/IEC 62443 Cybersecurity certificate program is designed for professionals in IT and control system security roles who need to develop a command of industrial cybersecurity terminology and an understanding of the material embedded in the ISA/IEC 62443 standards.

Key Takeaways

  • Use the ISA/IEC 62443 standards to secure your control systems
  • Discover the five common myths regarding industrial automation and control system (IACS) security
  • Assess the cybersecurity of new or existing control systems
  • Understand cybersecurity design & implementation & testing of control systems

About the Presenter
Wally Magda is an internationally recognized cyber and physical security expert for industrial control systems (ICS) with many years of practical, hands-on experience. His deep security experience spans military nuclear missile command and control systems, intelligence agencies, and enterprise cyber and physical security. As a regional North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) compliance auditor, Wally set a professional tone, demonstrating for all stakeholders the necessity of adhering to governing rules of procedure. He successfully completed more than 100 on- and off-site audits. Wally is the 2018 Information Systems Security Association (ISSA) International Security Professional of the Year. As an ISSA Fellow, he is recognized for his active contributions to the security community. Wally currently focuses on providing ICS cyber and physical security training courses. He also conducts cyber and physical security assessments for industries such as electric energy, natural gas, chemical, liquefied natural gas (LNG), water, water reclamation, and manufacturing.

Connect with Wally
LinkedIn



Source: ISA News

How to Get Started with Effective Use of OPC

The post How to Get Started with Effective Use of OPC first appeared on the ISA Interchange blog site.

The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

Encouraged to ask general questions that would help share knowledge, Nikki Escamillas provided several questions on OPC. Initially, the OPC standard was restricted to the Windows operating system, and the acronym originally designated OLE (object linking and embedding) for process control. Today OPC stands for open platform communications, and the standard is much more widely used, playing a key role in automation systems. We are fortunate to have answers to Nikki’s questions from a knowledgeable expert in higher-level automation system communications, Tom Freiberger, product manager for industrial Ethernet in R&D engineering for Emerson Automation Solutions.

Nikki Escamillas is a recently added protégé in the ISA Mentor Program. Nikki is an Automation Process Engineer for Republic Cement and Building Materials – Batangas Plant. Nikki specializes in process optimization and automation control, and is committed to minimizing cost and improving product quality through effective time management and efficient use of resources and data analytics. Nikki has excellent knowledge and experience of advanced process control principles and their application to plant processes, specifically cement and building materials manufacturing.

Nikki Escamillas’ First Question

How does OPC work?

Tom Freiberger’s Answer

OPC is a client/server protocol. The server has a list of data points (normally in a tree structure) that it provides. A client can connect to a server and pick a set of data points it wishes to use. The client can then read or write to those data points.  OPC is meant to be a common language for integrating products from multiple vendors. The OPC Foundation has a good introduction of OPC DA and UA at their website.
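To make the client/server flow tangible, here is a minimal sketch using the open-source python-opcua package (my choice of library, not one named by Tom; the endpoint URL and node ID are invented). It connects, browses the tree of data points, and reads and writes one value:

```python
from opcua import Client  # open-source python-opcua package (pip install opcua)

# Hypothetical endpoint; a real address comes from the server's documentation.
client = Client("opc.tcp://localhost:4840/freeopcua/server/")
client.connect()
try:
    root = client.get_root_node()
    print("root children:", root.get_children())  # browse the tree structure

    node = client.get_node("ns=2;i=2")  # pick a data point (invented node id)
    print("value:", node.get_value())   # read...

    node.set_value(42.0)                # ...and write, if the server allows it
finally:
    client.disconnect()
```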

Nikki Escamillas’ Second Question

Does configuration of OPC DA differ from OPC UA?

Tom Freiberger’s Answer

Yes and no. The core concept of a client/server working with a set of data points remains consistent between the two, but the details of how to configure them differ. The security configuration is the primary difference. OPC DA is based on Microsoft’s DCOM technology, which means the security settings of the operating system are used. OPC UA runs on many operating systems, so the security settings are embedded in the configuration of the OPC application. OPC UA applications should use common terminology in their configuration to ease integration between multiple vendors.

Nikki Escamillas’ Third Question

Do we have any guidelines to follow when installing and configuring one OPC based upon its type?

Tom Freiberger’s Answer

Installation and configuration guidelines are going to be specific to the products being used. Some products are going to be limited on the number of data points that can be exchanged by a license or other application limitation. Some products may have performance limits. All of these details should be supplied in the documentation of the product.

ISA Mentor Program

The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

Nikki Escamillas’ Fourth Question

Could I directly make one computer OPC capable?

Tom Freiberger’s Answer

An OPC server or client by itself is just a means to transfer data. OPC is not very interesting without another application behind it to supply information. The computer you are attempting to add OPC to would need some other application to provide data. The vendor of that application would need to build OPC into their product. If the application with the data supports some other protocol to exchange data (like Modbus TCP, Ethernet/IP, or PROFINET) an OPC protocol converter could be used to interface with other OPC applications. If the application with the data has no means of extracting the information, there is nothing an OPC server or client can do.

Nikki Escamillas’ Fifth Question

Is it also possible to create server-to-server communication between two OPC servers?

Tom Freiberger’s Answer

I believe there are options for this in the OPC protocol specification, but the details would be specific to the product being used. If a product allows server-to-server connections, that should be listed in its documentation.

Additional Mentor Program Resources

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

About the Author
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

Connect with Greg
LinkedIn



Source: ISA News

AutoQuiz: What Is the Minimum Course of Action to Take Before Replacing Damaged Wiring?

The post AutoQuiz: What Is the Minimum Course of Action to Take Before Replacing Damaged Wiring? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

A properly designed control panel is fed by one 120VAC circuit. A set of 24VDC redundant power supplies feeds the transmitter and controller for a temperature control loop. The loop is protected by a single removable fuse. It is necessary to replace a damaged wire inside the control panel that is part of the temperature control loop. What is the minimum course of action to take before replacing the damaged wiring?

a) Just go ahead and replace the wire. The largest voltage that could be present is 24VDC, which cannot hurt the technician.
b) Remove the 24VDC power feed wiring to the temperature control loop before replacing the damaged wire.
c) With the proper personal protective equipment, pull the fuse on the temperature control loop circuit and test for “dead circuit” with a multi-meter to ensure no voltage is present on the loop before replacing the wires.
d) Check the temperature transmitter front LCD. If it indicates that no power is present, replace the wiring.
e) None of the above

Click Here to Reveal the Answer

Answer A is not correct. Even exposure to a live 24VDC circuit can cause injury (although not to the degree that 120VAC can).

Answer B is not correct because removing power feed wires from a loop while the circuit is hot is not safe; there is a likelihood of creating a short circuit with this procedure.

Answer D is not correct because indication of zero volts at the device does not guarantee that there is no voltage present in the control panel.

The correct answer is C. Always use the proper PPE for the job. Circuits should be disabled using only the devices provided (removable fuses, disconnect switches, etc.). All circuits to be worked on should be tested with a multi-meter prior to performing the work to ensure that no voltage or current is present. Even safer would be to disconnect the 120VAC feed to the panel, if possible.

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News