
Basic Guidelines for Control Valve Selection and Sizing

The post Basic Guidelines for Control Valve Selection and Sizing first appeared on the ISA Interchange blog site.

The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Hiten Dalal.

Hiten Dalal, PE, PMP, is senior automation engineer for products pipeline at Kinder Morgan, Inc. Hiten has extensive experience in pipeline pressure and flow control.

Hiten Dalal’s Question

Are there basic rule of thumb guidelines for control valve sizing outside of relying on the valve supplier and using the valve manufacturer’s sizing program?

Hunter Vegas’ Answer

Selecting and sizing control valves seems to have become a lost art. Most engineers toss it over the fence to the vendor along with a handful of (mostly wrong) process data values, and a salesperson plugs the values into a vendor program which spits out a result.  Control valves often determine the capability of the control system, and a poorly sized and selected control valve will make tight control impossible regardless of the control strategy or tuning employed. Selecting the right valve matters!

There are several aspects of sizing/selecting a control valve that must be addressed:

Determine what the valve is supposed to do

  • Is this valve used for tight control, or is ‘loose’ control acceptable? (For instance, are you trying to control a flow within a very tight margin across a broad range of process conditions, or are you simply throttling a charge flow down as it approaches setpoint to avoid overshoot?) The requirements for one situation are quite different from the other.
  • Is this valve supposed to provide control or tight shutoff? A valve can almost never do both. If you need both, then add a separate on/off shutoff valve. 

Understand the TRUE process conditions

  • What is the minimum flow that the valve must control?
  • What is the maximum flow that the valve must pass?
  • What are the TRUE upstream/downstream pressures and the differential pressure across the valve in those conditions? (Note that the P1 and DP at low flow rates will usually be much higher than at full flow rates. If you see a valve spec showing the same DP value for high and low flow conditions, it will be wrong 95%+ of the time.)
  • What is the min/max temperature the valve might see? Don’t forget about clean out/steam out conditions or abnormal conditions that might subject a valve to high steam temperatures.
  • What is the process fluid? Is it always the same or could it be a mix of products?

Note that gathering this data is probably the hardest part. It often takes a sketch of the piping, an understanding of the process hydraulics, and an examination of the system pump curves to determine the real pressure drops under various conditions. Note too that the DP may change when you select a valve, since it might require pipe reducers/expanders to be installed in a pipe that is sized larger.

Understand the installed flow characteristic of the valve

This can be another difficult task. Ideally the control valve response should be linear (from the control system’s perspective). If the PID output changes 5%, the process should respond in a similar fashion regardless of where the output is.  (In other words 15% to 20% or 85% to 90% should ideally generate the same process response.) If the valve response is non-linear, control becomes much more difficult. (You can tune for one process condition but if conditions change the dynamics change and now the tuning doesn’t work nearly as well.)  The valve response is determined by a number of items including:

  • The characteristics of the valve itself. (It might be linear, equal percent, quick opening, or something else.)
  • The DP of the process – The differential pressure across the valve is typically a function of the flow (the higher the flow, the lower the DP across the valve). This will generate a non-linear function.
  • System pressure and pump curves – pumps often have non-linear characteristics as well, so the available pressure will vary with the flow.

The user has to understand all of these conditions so he/she can pick the right valve plug. Ideally you pick a valve characteristic that will offset the non-linear effects of the process and make the overall response of the system linear. 
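As a rough sketch of how these pieces combine, the installed characteristic and local valve gain can be computed numerically. The sketch below assumes a constant total system drop with piping friction losses proportional to flow squared (a common textbook simplification, not a statement about any particular system), and the function names and R value are illustrative:

```python
import math

R = 50.0  # rangeability factor for equal-percentage trim (assumed value)

def inherent(x, char="equal_pct"):
    """Inherent characteristic f(x): fraction of rated Cv at fractional travel x."""
    if char == "linear":
        return x
    return R ** (x - 1.0)  # equal percentage

def installed_flow(x, beta, char="equal_pct"):
    """Fraction of max flow, assuming constant total (valve + piping) drop and
    piping friction drop proportional to flow squared.
    beta = valve drop / total system drop at maximum flow."""
    f = inherent(x, char)
    return f / math.sqrt(beta + (1.0 - beta) * f * f)

def local_gain(x, beta, char="equal_pct", dx=1e-4):
    """Installed valve gain (% flow per % travel) via a numerical derivative."""
    return (installed_flow(x + dx, beta, char)
            - installed_flow(x - dx, beta, char)) / (2.0 * dx)

# A linear valve at a low pressure-drop ratio distorts toward quick opening:
# the gain near the closed position is far higher than near full open.
for x in (0.1, 0.5, 0.9):
    print(x, round(local_gain(x, beta=0.1, char="linear"), 2))
```

Running this for different characteristics and beta values shows why the characteristic must be picked to offset the process nonlinearity rather than in isolation.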

If the pressure drop is high, you may have a cavitation, flashing, or choked flow situation

That complicates matters still further, because now you’ll need to know a lot more about the process fluid itself. If you are faced with cavitation or flashing you may need to know the vapor pressure and critical pressure of the fluid. This information may be readily available, or not if the fluid is a mix of products. Choked flow conditions are usually accompanied by noise problems and will also require additional fluid data to perform the calculations. Realize too that the selection of the valve internals will have a big impact on the flow rates, response, etc.  (You’ll be looking at anti-cavitation trim, diffusers, etc.)
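For a first-pass screen of flashing and choked flow, the standard liquid sizing relations can be sketched as follows. The FL value and the example pressures are illustrative assumptions; use your valve's actual pressure recovery factor and real fluid data:

```python
import math

def liquid_flow_checks(p1, p2, pv, pc, FL=0.9):
    """Screen a liquid application for flashing and choked flow using the
    usual ISA liquid sizing relations. All pressures absolute, same units.
    p1/p2: upstream/downstream pressure, pv: vapor pressure,
    pc: critical pressure, FL: liquid pressure recovery factor
    (valve-specific; 0.9 is an assumed placeholder)."""
    ff = 0.96 - 0.28 * math.sqrt(pv / pc)   # liquid critical pressure ratio factor
    dp_choked = FL * FL * (p1 - ff * pv)    # max effective (choked) pressure drop
    dp = p1 - p2
    return {
        "dp": dp,
        "dp_choked": dp_choked,
        "flashing": p2 < pv,     # downstream pressure below vapor pressure
        "choked": dp >= dp_choked,
    }

# Water-like example (psia): a high drop with the outlet near vapor pressure
print(liquid_flow_checks(p1=100.0, p2=20.0, pv=15.0, pc=3206.0))
```

If either flag comes back true, the simple Cv relation no longer applies and the more advanced calculations (and likely special trim) are needed.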

Armed with all of that information (and it is a lot of information) you can finally start sizing/selecting the valve

Usually the vendor’s program is a good place to start, but some programs are much better than others: some have more process data ‘built in’ and include the advanced calculations required to handle cavitation, flashing, choked flow, and noise, while others are very simplistic and may not handle the more advanced conditions. Theoretically you could use any vendor’s program for any valve, but a vendor’s program will typically have only its own valve data built in, so if you use it for a different vendor’s valve you’ll have to enter that data yourself (if you can find it!). One caution: some vendors use different valve constants, which can be difficult to convert.

The procedure for finally choosing the valve (roughly)

  • Run a down and dirty calc to just see what you have. What is the required Cv at min and max flows?  Do I have cavitation/flashing/choking issues? 
  • If there is cavitation/flashing/choking then things get a lot more complicated so I’ll save that for another lesson.
  • Assuming no cavitation/flashing/choking then you can take the result and start to select a particular valve. The selection process includes:
  1. Pick an acceptable valve body type. (Reciprocating control valves with a digital positioner and a good guide design will provide the tightest control. However other body styles might be acceptable depending on the requirements and budget.)
  2. Pick the right valve characteristic to provide an overall linear response.
  3. Now look at the offerings of that valve and trim style and pick a valve with the proper range of Cvs. Usually you want some room above the max flow, and you want to make sure you are able to control at the minimum flow without bumping off the seat. Note that you may have to go to a different valve body (or even manufacturer) to meet your desired characteristic and Cv. 
  4. Make sure the valve body/seals are compatible with your process fluid and the temperature.
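The “down and dirty” calculation in step one can be sketched with the basic liquid sizing relation Cv = Q·sqrt(SG/ΔP) (Q in US gpm, ΔP in psi). The flows and pressure drops below are hypothetical, and note how the DP differs between the min and max flow cases, as discussed above:

```python
import math

def required_cv(q_gpm, sg, dp_psi):
    """Basic liquid Cv (no cavitation/flashing/choking, no pipe reducers):
    Cv = Q * sqrt(SG / dP), with Q in US gpm and dP in psi."""
    return q_gpm * math.sqrt(sg / dp_psi)

# Down-and-dirty check at min and max flow -- the DP usually differs!
cv_min = required_cv(q_gpm=50.0, sg=1.0, dp_psi=40.0)    # low flow, high DP
cv_max = required_cv(q_gpm=300.0, sg=1.0, dp_psi=10.0)   # high flow, low DP
print(round(cv_min, 1), round(cv_max, 1))
```

The spread between the two required Cvs indicates how much rangeability the selected valve and trim must actually deliver.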

Hope this helped. It was probably a bit more than you wanted, but control valve selection and sizing is a lot more complicated than most realize.

ISA Mentor Program

The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

Greg McMillan’s Answer

Hunter did a great job of providing detailed, concise advice. My offering here is to help avoid the common problems from an inappropriate focus on maximizing valve capacity, minimizing valve pressure drop, minimizing valve leakage, and minimizing valve cost. All of these have resulted in “on-off valves” posing as “throttling valves,” creating problems of poor actuator and positioner sensitivity, excessive backlash and stiction, unsuspected nonlinearity, poor rangeability, and smart positioners giving dumb diagnostics.

While certain applications, such as pH control, are particularly sensitive to these valve problems, nearly all loops will suffer from backlash and stiction exceeding 5% (quite common with many “on-off valves”) causing limit cycles that can spread through the process. These “on-off valves” are quite attractive because of the high capacity and low pressure drop, leakage and cost. To address leakage requirements, a separate tight shutoff valve should be used in series with a good throttling valve and coordinated to open and close to enable a good throttling valve to smoothly do its job.

Unfortunately, nothing on a valve specification sheet requires that the valve have a reasonably precise and timely response to signals, or that it not create oscillations simply from a loop being in automatic, which makes us extremely vulnerable to common misconceptions. The most threatening one that comes to mind in selection and sizing is that rangeability is determined by how well a minimum Cv matches the theoretical characteristic. In reality, the minimum Cv cannot be less than the backlash and stiction near the seat. Most valve suppliers will not provide backlash and stiction for positions less than 40% because of the great increase from the sliding stem valve plug riding the seat or the rotary disk or ball rubbing the seal. Also, tests by the supplier are for loose packing. Many think piston actuators are better than diaphragm actuators.

Maybe the physical size and cost are less and the capability for thrust and torque higher, but the sensitivity is an order of magnitude less and the vulnerability to actuator seal problems much greater. Higher pressure diaphragm actuators are now available, enabling use on larger valves and pressure drops. One more major misconception is that boosters should be used instead of positioners on fast loops. This is downright dangerous due to positive feedback between flexure of the diaphragm slightly changing actuator pressure and the extremely high booster outlet port sensitivity. To reduce response time, the booster should be put on the positioner output with a bypass valve opened just enough to stop high frequency oscillations by allowing the positioner to see the much greater actuator and booster volume.

The following excerpt from the Control Talk blog Sizing up valve sizing opportunities provides some more detailed warnings:

We are pretty diligent about making sure the valve can supply the maximum flow. In fact, we can become so diligent that we choose a valve size much greater than needed, thinking bigger is better in case we ever need more. What we often do not realize is that the process engineer has already built in a factor to make sure there is more than enough flow in the given maximum (e.g., 25% more than needed). Since valve size and valve leakage are the prominent requirements on the specification sheet (once the materials of construction requirements are clear), we are set up for a bad scenario of buying a larger valve with higher friction.

The valve supplier is happy to sell a larger valve and the piping designer is happier that not much or any of a pipe reducer is needed for valve installation and the pump size may be smaller. The process is not happy. The operators are not happy looking at trend charts unless the trend chart time and process variable scales are so large the limit cycle looks like noise. Eventually everyone will be unhappy.

The limit cycle amplitude is large because of greater friction near the seat and the higher valve gain. The amplitude in flow units is the percent resolution (e.g., % stick-slip) multiplied by the valve gain (e.g., delta pph per delta % signal). You get a double whammy from a larger resolution limit and a larger valve gain. If you further decide to reduce the pressure drop allocated to the valve as a fraction of total system pressure drop to less than 0.25, a linear characteristic becomes quick opening, greatly increasing the valve gain near the closed position. For a fraction much less than 0.25 and an equal percentage trim, you may be literally and figuratively bottoming out for the given R factor that sets the rangeability of the inherent flow characteristic (e.g., R=50).

What can you do to lead the way and become the “go to” resource for intelligent valve sizing?

You need to compute the installed flow characteristic for various valve and trim sizes as discussed in the Jan 2016 Control Talk post Why and how to establish installed valve flow characteristics. You should take advantage of supplier software and your company’s mechanical engineer’s knowledge of the piping system design and details.

You must choose the right inherent flow characteristic. If the pressure drop available to the control valve is relatively constant, then linear trim is best because the installed flow characteristic is then the inherent flow characteristic. The valve pressure drop can be relatively constant for a variety of reasons, most notably pressure control loops or frictional losses in the rest of the piping system being negligible. For more on this see the 5/06/2015 Control Talk blog Best Control Valve Flow Characteristic Tips.

On the installed flow characteristic you need to make sure the valve gain in percent (% flow per % signal) from minimum to maximum flow does not change by more than a factor of 4 (e.g., 0.5 to 2.0) with the minimum gain greater than 0.25 and the maximum gain less than 4. For sliding stem valves, this valve gain requirement corresponds to minimum and maximum valve positions of 10% and 90%. For many rotary valves, this requirement corresponds to minimum and maximum disk or ball rotations of 20 degrees and 50 degrees.

Furthermore, the limit cycle amplitude being the resolution in percent multiplied by the valve gain in flow units (e.g., pph per %) and by the process gain in engineering units (e.g., pH per pph) must be less than the allowable process variability (e.g., pH). The amplitude and conditions for a limit cycle from backlash is a bit more complicated but still computable. For sliding stem valves, you have more flexibility in that you may be able to change out trim sizes as the process requirements change. Plus, sliding stem valves generally have a much better resolution if you have a sensitive diaphragm actuator with plenty of thrust or torque and a smart positioner.
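As a back-of-the-envelope illustration of that amplitude chain (the numbers below are hypothetical, not from any particular valve or process):

```python
def limit_cycle_amplitude(resolution_pct, valve_gain, process_gain):
    """Rough stick-slip limit cycle amplitude in process engineering units:
    resolution (% signal) * valve gain (flow units per % signal)
    * process gain (engineering units per flow unit)."""
    return resolution_pct * valve_gain * process_gain

amp = limit_cycle_amplitude(resolution_pct=0.5,   # % stick-slip (illustrative)
                            valve_gain=200.0,     # pph per % signal (illustrative)
                            process_gain=0.002)   # pH per pph (illustrative)
print(amp)  # amplitude in pH; compare against the allowable process variability
```

If the computed amplitude exceeds the allowable variability, a valve with better resolution, a smaller valve gain, or both is needed.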

The books Tuning and Control Loop Performance Fourth Edition and Essentials of Modern Measurements and Final Elements have simple equations to compute the installed flow characteristic and the minimum possible Cv for controllability based on the theoretical inherent flow characteristic, valve drop to total system drop pressure ratio and the resolution limit.

Here is some guidance from “Chapter 4 – Best Control Valves and Variable Frequency Drives” of Process/Industrial Instruments and Controls Handbook Sixth Edition that Hunter and I just finished with the contributions of 50 experts in our profession to address nearly all aspects of achieving the best automation project performance.

Use of ISA Standard for Valve Response Testing

The effects of resolution limits from stiction and of dead band from backlash are most noticeable for changes in controller output less than 0.4%, and the effect of rate limiting is greatest for changes greater than 40%. For PID output changes of 2%, a poor valve or VFD design and setup are not very noticeable. An increase in PID gain resulting in changes in PID output greater than 0.4% can reduce oscillations from poor positioner design and dead band.

The requirements in terms of 86% response time and travel gain (change in valve position divided by change in signal) should be specified for small, medium and large signal changes. In general, the travel gain requirement is relaxed for small signal changes due to effect of backlash and stiction, and the 86% response time requirement is relaxed for large signal changes due to the effect of rate limiting. The measurement of actual valve travel is problematic for on-off valves posing as throttling valves because the shaft movement is not disk or ball movement. The resulting difference between shaft position and actual ball or disk position has been observed in several applications to be as large as 8 percent.
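A small sketch of how these two metrics could be computed from recorded step-test data (the helper function and the sample data are illustrative assumptions, not taken from the ISA standard itself):

```python
def response_metrics(t, travel, signal_step):
    """86% response time and travel gain from a recorded valve step test.
    t: sample times (s), travel: valve travel (%), signal_step: output step (%).
    Assumes travel starts at travel[0] and settles at travel[-1]."""
    start, final = travel[0], travel[-1]
    target = start + 0.86 * (final - start)
    # first sample at or beyond 86% of the total travel change
    t86 = next(ti for ti, tr in zip(t, travel)
               if (tr - target) * (final - start) >= 0)
    travel_gain = (final - start) / signal_step   # change in travel / change in signal
    return t86, travel_gain

# Hypothetical response to a 10% output step: sluggish, with some dead band
t = [0, 1, 2, 3, 4, 5, 6]
travel = [10.0, 10.2, 13.0, 16.5, 18.9, 19.8, 20.0]
t86_time, gain = response_metrics(t, travel, signal_step=10.0)
print(t86_time, gain)
```

Comparing these numbers against the specified limits for small, medium, and large steps is exactly the relaxation scheme described above.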

Best Practices

Use sizing software with physical properties for worst case operating conditions. The minimum valve position must be greater than backlash and deadband. For a relatively good installed flow characteristic (valve drop to system pressure drop ratio greater than 0.25), observe minimum and maximum positions during sizing to keep the nonlinearity to less than 4:1. For sliding stem valves, the minimum and maximum valve positions are typically 10% and 90%, respectively. For many rotary valves, the minimum and maximum disk or ball rotations are typically 20 degrees and 50 degrees, respectively. The range between minimum and maximum positions or rotations can be extended by signal characterization to linearize the installed flow characteristic.

  1. Include effect of piping reducer factor on effective flow coefficient
  2. Select valve location and type to eliminate or reduce damage from flashing
  3. Preferably use a sliding stem valve (size permitting) to minimize backlash and stiction, unless crevices and trim cause concerns about erosion, plugging, sanitation, or accumulation of solids (particularly monomers that could polymerize). For single port valves, install “flow to open” to eliminate the bathtub-stopper swirling effect
  4. If a rotary valve is used, select valve with splined shaft to stem connection, integral cast of stem with ball or disk, and minimal seal friction to minimize backlash and stiction
  5. Use Teflon packing and, for higher temperature ranges, Ultra Low Friction (ULF) packing
  6. Compute the installed valve flow characteristic for worst case operating conditions
  7. Size actuator to deliver more than 150% of the maximum torque or thrust required
  8. Select actuator and positioner with threshold sensitivities of 0.1% or better
  9. Ensure total valve assembly dead band is less than 0.4% over the entire throttle range
  10. Ensure total valve assembly resolution is better than 0.2% over the entire throttle range
  11. Choose inherent flow characteristic and valve to system pressure drop ratio that does not cause the product of valve and process gain divided by process time constant to change more than 4:1 over entire process operating point range and flow range
  12. Tune positioner aggressively for application without integral action with readback that indicates actual plug, disk or ball travel instead of just actuator shaft movement
  13. Use volume boosters on positioner output with booster bypass valve opened enough to assure stability to reduce valve 86% response time for large signal changes
  14. Use small (0.2%) as well as large step changes (20%) to test valve 86% response time
  15. Use ISA standard and technical report relaxing expectations on travel gain and 86% response time for small and large signal changes, respectively

For much more on valve response see the Control feature article How to specify valves and positioners that do not compromise control.

The best book I have for understanding the many details of valve design is Control Valves for the Chemical Process Industries written by Bill Fitzgerald and published by McGraw-Hill. The book that is specifically focused on this Q&A topic is Control Valve Selection and Sizing written by Les Driskell and published by ISA.  Most of my books in my office are old like me. Sometimes newer versions do not exist or are not as good.

Additional Mentor Program Resources

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

About the Author
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

Connect with Greg
LinkedIn

Image Credit: Wikipedia



Source: ISA News

Webinar Recording: Calibration and Inspections in Hazardous Areas

The post Webinar Recording: Calibration and Inspections in Hazardous Areas first appeared on the ISA Interchange blog site.

This ISA webinar on calibration considerations in hazardous areas was presented by Cameron Kamrani, PE, PMP, LEED AP, and Ned Espy and Roy Tomalino of Beamex.

.videopopup.video__button:before {
border-left: 10px solid #ffffff !important;
}
a.popup-youtube:hover .videopopup.video__button {background: #8300e9 !important;}

There are many aspects to consider before you enter a hazardous (Ex) area to perform calibrations or inspections, such as the rating of the area (zone or division) and the equipment you use. Learn about hazardous area classifications, important considerations and techniques to ensure compliance and safety, and see demonstrations of a calibration and inspection in a hazardous area.

About the Presenter
Cameron Kamrani, PE, PMP, LEED AP, has more than 40 years of engineering and project management experience. He is an expert in wide-scale control and instrumentation projects and is an ISA instructor. Cameron teaches ISA training courses on working in hazardous areas.

Connect with Cameron
LinkedIn

 

 

About the Presenter
Ned Espy has been promoting calibration management with Beamex for more than 20 years. He has more than 27 years of direct field experience in instrumentation measurement applications. Today, Ned provides technical and application support to Beamex clients and partners throughout North America.

Connect with Ned
LinkedIn

 

 

About the Presenter
Roy Tomalino has been teaching calibration management for 14 years. Throughout his career, he has taught on four different continents to people from over 40 countries. His previous roles include technical marketing engineer and worldwide trainer for Hewlett-Packard and application engineer with Honeywell. Today, Roy is responsible for all Beamex training activities in North America.

Connect with Roy
LinkedIn

 




AutoQuiz: Characteristics of a Fuzzy Logic Controller

The post AutoQuiz: Characteristics of a Fuzzy Logic Controller first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Which of the following statements about fuzzy logic controllers is true?

a) the rules for the fuzzy logic replacement for a proportional-integral (PI) controller have two antecedents and two consequents
b) if-then statements are developed as backup rules in case of system failure
c) a fuzzy logic controller is tuned by adjusting the scale factors
d) a fuzzy logic controller cannot replace a proportional-integral-derivative (PID) controller unless the fuzzy controller is linear
e) none of the above

Click Here to Reveal the Answer

A PI controller works to keep an output from the process, termed the controlled variable (CV), at a desired operating point, called the set point (SP), by adjusting an input to the process, known as the manipulated variable (MV). The control error (E) is the controlled variable minus the set point.

The CV, SP, and E in a PI control algorithm are converted to a percent of measurement scale, and the MV is the percent of the scale of whatever is manipulated, which could be a valve, speed, or set point.

In a fuzzy logic algorithm, these variables are converted to a fractional value from –1 to +1 based on scale factors that the user must enter for each variable. A PI controller is tuned by adjusting the gain or proportional band and integral time settings. A fuzzy logic controller is tuned by adjusting the scale factors.
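A minimal sketch of that scale-factor conversion (illustrative values; the function and variable names are mine, not from any particular fuzzy logic toolkit):

```python
def to_fuzzy(value, scale):
    """Convert an engineering value to the -1..+1 universe using a user-entered
    scale factor; clamp at the limits, since a fuzzy controller's universe of
    discourse is bounded."""
    return max(-1.0, min(1.0, value / scale))

# Tuning a fuzzy controller means adjusting these scale factors, analogous to
# adjusting gain and integral time on a PI controller (illustrative values).
error_scale = 5.0   # % of measurement span that maps to full scale
print(to_fuzzy(2.5, error_scale), to_fuzzy(12.0, error_scale))
```

Shrinking the error scale factor makes the controller respond more aggressively to a given error, which is why the scale factors serve as the tuning parameters.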

The correct answer is C, “A fuzzy logic controller is tuned by adjusting the scale factors.”

Reference: Nicholas Sands, P.E., CAP, and Ian Verhappen, P.Eng., CAP, A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedInTwitterEmail

 




Why You Must Factor Maintenance Into the Cost of Any Industrial System

The post Why You Must Factor Maintenance Into the Cost of Any Industrial System first appeared on the ISA Interchange blog site.

This guest blog post was written by Edward J. Farmer, PE, industrial process expert and author of the ISA book Detecting Leaks in Pipelines. To download a free excerpt from Detecting Leaks in Pipelines, click here. If you would like more information on how to obtain a copy of the book, click this link.

One of the arguments against leak detection has often been, “and the costs go on.” Installing a new system involves some engineering work to figure out what needs to be protected and developing suitable ways of doing so; acquisition of suitable measurement, communications, and processing equipment; installing all this new equipment; training company people or hiring a contractor to take care of it; and periodically monitoring for proper operation. Many of these turn out to be recurring costs.

Over time, what seemed initially to be an expensive system produces a substantial stream of continuous costs, much like a DCS or SCADA system does. A leak detection system, though, is safety equipment, almost always mission-critical, and demonstrably necessary for safe, reliable, as well as legally viable, operation.

Most information technology departments in major companies have well-developed and experience-based policies that establish guidelines, or even rules, for maintaining the currency and integrity of mission-critical systems. Many such policies involve the periodic replacement of some of the hardware, often at five-year intervals. Some require some sort of redundancy or resource-sharing configuration which can increase the amount of mission-critical equipment because of additional communications requirements, more servers in more places, and more network management software.

Even with insightful configurations and lots of redundancy, unusual failures can develop, some of which can be difficult to identify and diagnose without focused testing procedures. This motivates periodic manual testing or automated test systems. Automated systems usually generate alarms and reports that trained people have to see, read, and understand.

None of this is specific to leak detection – it is a fundamental necessity of living with and benefiting from modern process monitoring and control sophistication. While insightful management can optimize the user effort involved to produce a credible result, there is no way to avoid it.

Hardware and software maintenance are often interlinked. Operating system software may assume or require particular hardware features. Application programs may depend on certain operating system features, or hardware performance (usually represented by processing speed or data storage capacity) in order to work properly.

Sometimes upgrading one or more of the system components discloses incompatibilities resulting from the others. Most IT departments are keenly aware of this and have procedures that avoid unpleasant discoveries and outcomes. Detecting Leaks in Pipelines discusses some of these issues.

In many companies, critical people with positive job records get noticed and promoted. When these people work in IT or in the specialties it supports, they can be sorely missed. It can be hard to find and train suitable replacements. Often there are several levels of training. Some operating systems may be sufficiently evolved to require certification-level augmentation to existing training programs. In some cases, new hires may not come trained on the hardware and software systems you are actually using.


In process control, some applications involve features and tools outside the experience of personnel trained on more routine equipment and more usual application support. Depending on what changes, a few days of training may suffice, but a week or more often has to be scheduled. All of this costs something: employee expenses, overtime, and trainer time, plus the budgeting, scheduling, and evaluating that go with them, and usually some sort of remedial work.

These cost streams are never “over.” They are only abated until the next issue forces revisiting them. Most IT departments have developed policies and procedures that establish criteria and processes for this recurrent training. Done insightfully, it can increase performance and thereby control or reduce costs, or at least “surprises.” No matter what, though, it is a cost of doing business in the age of information technology and automatic control.

Programs for ongoing support should be envisioned at the beginning, when the applications are being designed and the equipment to implement them is selected. There can be benefits from commonality and from an effort to remain in the “mainstream” of automation technology’s evolution. Signing on to be the last user of an obsolete technology can turn out to be not only expensive but very frustrating. Anticipating where the technology is heading can keep budgets under control and reliability high as it continues to evolve.

When I was in college there was a story intended to motivate engineering students to learn to think ahead. It involved a long dialog about a factory assembling a new large aircraft design and the angst a supervisor kept feeling when he went to the place where the wings were being assembled, with a huge pattern of carefully arranged attachment bolt locations.

A time later this same person would visit the factory location where the fuselage was being built and his eyes would pass over the huge pattern of carefully arranged fuselage attachment bolt locations. Finally, one day, the wings were brought to the fuselage, hoisted into place and a crew took position to insert the hundred or so bolts involved in fastening it all together.

What do you suppose happened next? Could it have been avoided with a little more organization, planning, care, structure, or any of those ideas management books and plans dwell over?

Over time, support can be as important as the selection and implementation of the original system. It isn’t an extra cost. It’s the inherent price of using this kind of technology.

About the Author
Edward Farmer has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. His work has produced three books, numerous articles, and four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. He is the owner and president of EFA Technologies, Inc., manufacturer of the LeakNet family of pipeline leak detection products.

Connect with Ed
LinkedIn | Email




Source: ISA News

How to Use Asset Performance Management as a Driver for Digital Industrial Transformation

The post How to Use Asset Performance Management as a Driver for Digital Industrial Transformation first appeared on the ISA Interchange blog site.

This post was authored by Jeremiah Stone, chief technology officer at Catasys and previously general manager of GE Digital’s APM business.

Digital transformation will not happen overnight, but it will happen quickly. While the IoT is quickly expanding, many have yet to feel the benefits, as organizations struggle to merge existing operational frameworks with new systems and capabilities.

The global economy is undergoing rapid shifts in resource demand, infrastructure, and desired skill sets in the workforce. As technology and innovation continue to shape industry, a significant opportunity to ignite economic revival is digital productivity through asset performance management (APM).

When organizations digitize power plants and factories by connecting them to the industrial Internet, global productivity improves. Insights that operators derive from data analytics create new energy efficiencies and expand capacity. As industrial companies transition into digital businesses, the next wave of competitiveness will drive up productivity and enhance the global economy by removing waste and creating a more efficient response to customer needs. Industrial organizations must prepare for this change in steps to bring both workers and systems into the digital age successfully.

The journey begins with APM. Today, industrial managers, engineers, and operators can use data from their operating environments to significantly improve asset reliability and maintenance effectiveness while optimizing processes. This approach relies on APM—a system that uses big data and risk-based strategies to identify critical assets and areas of improvement across thousands of assets to eliminate failures, reduce production losses, and address poor performance. When a single unproductive day on a liquefied natural gas platform can cost as much as $25 million, for example, minimizing unplanned downtime due to equipment malfunction and human error is a top priority.

Industrial companies that analyze and optimize their machines are positioned to succeed in the era of the industrial Internet. The steps—consolidating disparate data, executing a risk-based criticality assessment, and empowering the workforce through a culture of reliability—sound simple, but they deliver big results.

Consolidating disparate data

Today many data projects are set up, but only a small percentage of those projects realize a return on investment. Data projects can be overwhelming and ineffective without the right tools and buy-in from users within an organization. When implemented properly, APM programs help organizations reimagine their asset strategy and drive significant performance improvements. By relying on sophisticated algorithms for gathering data, building integrated data modeling, and performing advanced condition monitoring across a variety of techniques, APM systems help industrial companies develop intelligent asset strategies and realize value.

Industry sensor technology gives visibility into machine health and performance across all types of assets. This information accounts for factors, such as temperature and vibration, that help direct maintenance and workflow decisions. Using machine sensors to monitor equipment performance generates a constant stream of in-depth data, which can be consolidated and used to derive meaningful insights. Operators with this capability gain actionable information that increases asset reliability and enables more precise maintenance actions, reducing the cost of ownership. Industrial companies are feeling the urgency to get connected and gain insights from their machine data. In fact, 87 percent of manufacturing and oil and gas executives stated that big data and analytics are in their top three priorities. That number soars to 94 percent for power generation. The urgency to connect and act on machine data is widespread—and the opportunities are well recognized.
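As a minimal illustration of consolidating readings from disparate sources and deriving an insight, the sketch below merges hypothetical temperature and vibration samples by asset and flags readings that drift far from each sensor's norm. The asset names, values, and z-score threshold are illustrative assumptions, not part of any real APM product.

```python
from statistics import mean, stdev

# Hypothetical consolidated readings: (asset_id, sensor, value) tuples
# pulled from several independent monitoring systems.
readings = [
    ("pump-101", "vibration_mm_s", 2.1),
    ("pump-101", "vibration_mm_s", 2.3),
    ("pump-101", "vibration_mm_s", 6.8),  # spike worth investigating
    ("pump-101", "temperature_C", 71.0),
    ("pump-101", "temperature_C", 72.5),
]

def consolidate(readings):
    """Group raw readings from disparate sources by (asset, sensor)."""
    merged = {}
    for asset, sensor, value in readings:
        merged.setdefault((asset, sensor), []).append(value)
    return merged

def flag_outliers(merged, z_limit=1.0):
    """Flag any reading far from that sensor's mean (simple z-score)."""
    alerts = []
    for (asset, sensor), values in merged.items():
        if len(values) < 3:
            continue  # not enough history to judge
        mu, sigma = mean(values), stdev(values)
        for v in values:
            if sigma and abs(v - mu) / sigma > z_limit:
                alerts.append((asset, sensor, v))
    return alerts

alerts = flag_outliers(consolidate(readings))
```

In this toy data, only the 6.8 mm/s vibration reading stands out; a real system would of course use richer models, but the value of a single merged repository is already visible.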

When it comes to asset management and maintenance practices today, however, most organizations are still relying on time-based maintenance strategies, using separate management systems and a host of smaller independent solutions. The majority of these systems were custom built to enable the unique work processes within companies, but they require a huge amount of ownership and create a siloed system with limited communication across reliability and maintenance teams. To become a true digital business, organizations must get rid of aging systems that generate disparate data streams and focus on the value of a single platform to incorporate data from multiple tools across the enterprise and provide a clear view of asset health. A single, secure operating system that gathers data from a wide variety of assets and systems enables operators and leadership to easily access a comprehensive and validated data repository for more informed maintenance decisions.

Criticality assessment of assets

Many companies are only focused on the reactive rather than on the proactive approach to maintenance incidents and reliability. Both are critical components of a reliability-centric culture, but there should be greater emphasis on proactive methodologies. Digitally driven proactive methodologies focus on generating and implementing asset management strategies by optimizing the assets’ total cost, risk, and performance impact. Reliability will become an ingrained practice rather than a function when a business shifts its operating processes and adopts APM technology to proactively address critical assets before they fail.

To strategically roll out an APM framework, organizations should first determine criticality and rank assets according to which ones require the most focus and are a priority for continued operations. Engineers must complete equipment criticality assessments for all assets, as well as reliability-instrumented system studies for safety and instrument-critical functions. Once assets are defined, engineers can conduct reliability-centered maintenance studies for all critical, high-priority systems. As a last step, conduct root-cause analysis on all incidents related to production, environment, health, safety, security, quality, and customer complaints. APM systems manage data on these critical assets to help operators and leadership prioritize failing and poor-performing systems and avoid unnecessary maintenance on healthy assets.
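The criticality ranking described above can be sketched as a simple risk-matrix calculation. The asset names and the 1-to-5 likelihood and consequence scores below are invented for illustration; real assessments would also weigh safety, environmental, and quality impacts.

```python
# A minimal sketch of a risk-based criticality ranking. All names and
# scores are hypothetical assumptions, not data from any real APM system.
assets = {
    "compressor-A": {"failure_likelihood": 4, "consequence": 5},
    "cooling-pump": {"failure_likelihood": 3, "consequence": 2},
    "backup-generator": {"failure_likelihood": 2, "consequence": 4},
    "conveyor-3": {"failure_likelihood": 5, "consequence": 1},
}

def criticality(scores):
    # Classic risk-matrix score: likelihood x consequence (1-5 scales).
    return scores["failure_likelihood"] * scores["consequence"]

# Rank assets so the highest-risk equipment is studied first.
ranked = sorted(assets, key=lambda name: criticality(assets[name]), reverse=True)
```

The ranked list then drives which assets get reliability-centered maintenance studies first.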

As industrial operators and engineers know, even the smallest incident can cause a chain reaction that remains unnoticeable until there are large-scale ramifications. To avoid costly losses over time, organizations should develop daily plans against specific assets and measure performance against plans. When these plans are maintained and automated in an APM system, operators can first identify common failures and issues across sites at a global or regional level, and then narrow in on incremental losses and incidents at individual sites to prevent major problems in the long term.

Reports that are automated, digital, and standardized across the entire organization give full transparency into asset performance and leave little room for failure. A reliability-focused organization should consistently track key performance indicators, recommendations, and performance improvements. This involves standardizing the practices of monitoring production deviation triggers, classifying incidents, identifying performance killers, benchmarking and measuring compliance and data quality, analyzing data and trending procedures, and finally developing recommendations and closure for the loss-accounting process.

One mining company had massive quantities of historical data in its SAP plant maintenance system—about 1.5 million records of functional locations and equipment units, 10 million records of work history, more than 300,000 task lists of repetitive operations, and more than 35,000 measurement points. With such huge volumes of data, a single APM system helped the company seamlessly integrate data from its SAP system, prioritize data from critical assets, and develop strategic recommendations for maintenance. By automating its reliability processes and developing intelligent asset strategies, the company began to achieve major operational efficiencies and cost savings.

APM systems provide information for informed decision making.


Digital workforce and reliability culture

Technology is only as effective as the workforce that uses it. If employees are not inputting data or pulling the right reports, then the system will not provide real value. In a 2014 global survey, 44 percent of oil and gas companies in the Americas said a skills shortage is the biggest threat to their industry, higher than capital costs, labor costs, or even economic stability concerns. This is largely related to training issues, with many companies citing the lack of quality candidates and skilled employees available to train. As a company undergoes full digital transformation, the industrial environment will change dramatically. Training programs need to evolve with it.

Simplifying what have become extremely intricate company cultures across industry will help reliability and digitization truly take root. By implementing a training program for all employees, leadership fosters the values and behaviors necessary for organizational reliability and also helps support talent retention and company growth. Training should review the fundamentals of reliability, detail the relationship to maintenance and operations, and teach how to use APM tools to support sound asset management decision making.

Forward-thinking organizations use lean tools and training to promote a reliability culture, as well as the long-term viability of an organization. New synchronous learning processes, for example, are highly interactive and are a platform to share and develop challenges, solutions, and ideas for employees. This type of training can take the form of virtual instructor-led training, social learning opportunities, and traditional instructor-led classroom training—either on or off site.

Operators and engineers should expect and seek training opportunities that involve participation across multiple disciplines from information technology, operations, and marketing to engineering. This cross-pollination approach to training will drive and ensure the success of corporate change and APM initiatives in an organization.

With the increased availability of these new data sources, engineers and operators are challenged to think beyond a single discipline, and today must consider both operations and competition from more of a marketing perspective. The goal of training, therefore, should be to acquire a big picture perspective of asset and system reliability and the competitive landscape—capabilities encompassed in APM methodologies.

Becoming a digital industrial

The industrial Internet will transform traditional industrial sectors with digital, data-rich services. So far, however, only 5 percent of companies have succeeded in this transformation.

With APM, companies can streamline the process of connecting to smart assets, collecting all the data from those assets, and monitoring equipment for emerging threats. Weaving all that information together lets companies model various scenarios for evaluating the risk and cost of making changes to the asset management strategy. APM can compare the results and effectiveness of those new asset strategies, using machine learning capabilities to constantly learn and improve.

APM enables organizations to accelerate their path to a digital business model. Organizations gain control of asset decisions with broad-reaching data that reflects resource availability, operating impact, and real-time condition reports. When industrial organizations require assets to operate at all times, APM helps focus on the assets that need repair to lower the total cost of ownership and reduce the risk of unplanned downtime of mission-critical assets.

If they want to survive, industrial companies need to recognize that they are in the information business, and change extends beyond the assets themselves. Employees must embrace the digital industry they now work in and adopt APM tools for reliability and safety in today’s highly regulated environments.

About the Author
Jeremiah Stone is the chief technology officer at Catasys, and previously served as general manager of GE Digital’s APM business. Before taking over the APM business, Stone was the chief technology officer, software, for GE’s Energy Management business unit. Before joining GE, he was vice president, natural resource industries and sustainability solutions, at SAP. Stone started his career as a programmer and systems administrator with the National Center for Atmospheric Research, helping to develop systems to predict clear-air turbulence. He is a graduate of the University of Colorado’s mathematics program (summa cum laude), an inventor or co-inventor of multiple U.S. patents and several publications, and a founding member of the NextGen advisory board at the Computer History Museum in Mountain View, Calif.

Connect with Jeremiah
LinkedIn

A version of this article also was published at InTech magazine




AutoQuiz: What Is the Best Method for Measuring the Level of a Highly Corrosive Media?

The post AutoQuiz: What Is the Best Method for Measuring the Level of a Highly Corrosive Media? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

Which of the following measurement methods is most cost effective for measuring the level of a highly corrosive media?

a) radioactive
b) ultrasonic
c) capacitance
d) float
e) none of the above

Click Here to Reveal the Answer

Floats are very seldom used with any corrosive material, and radioactive measurement is very expensive, due to purchase costs and compliance with federal and state regulations.

Capacitance can be adapted to work with a highly corrosive media, but it will not be as cost effective as ultrasonic measurement.

The correct answer is B, “ultrasonic.” The ultrasonic method is the most cost effective.

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email





ISA Values: Live Them

The post ISA Values: Live Them first appeared on the ISA Interchange blog site.

This post is authored by Paul Gruhn, president of ISA 2019.

In case you missed it in one of my earlier columns, ISA modified its vision and mission statements last year. I am very proud of these efforts and firmly believe these new statements will more clearly guide us in all that we do. Our vision is to “create a better world through automation.” Our mission is to “advance technical competence by connecting the automation community to achieve operational excellence.”  

In addition to the rework of our vision and mission statements, we also created five value statements. While ISA obviously had values for the last 74 years, we never actually had anything in writing. Documenting such beliefs formalizes them for all to see. They will help guide us both as a Society and as individuals.

I thought I would take this opportunity to offer my personal comments on each of our value statements, their importance, and how they impact our Society. I would enjoy hearing your thoughts on these values and how best ISA can embody them in all that we do.  

Excellence: “We strive to provide industry-leading resources and unbiased content developed and vetted by our community of experts.” In order to advance people’s technical competence, we need to produce the materials they both want and need, including standards, books, training courses, and conferences. Our members and volunteers create this material. Experts are documenting their knowledge and lessons learned for the benefit of others—all with the goal of making you and your organization more successful! ISA’s material is non-commercial and vetted by experts. You simply won’t find a better or broader range of material when it comes to instrumentation, automation, and control. And this value needs to be nurtured and supported by all. Each of us has knowledge we can impart back to the Society. I encourage you to find ways to get involved in this.  

Integrity: “We act with honesty, integrity, and trust, treating others with respect in all that we do.” I truly believe that the vast majority of our members and staff clearly do act with integrity. There are rare exceptions (as there, unfortunately, are in any organization). However, having such a written statement clarifies the behavior we expect, and the behavior we will not tolerate. We each must hold the other accountable. Integrity is critical to the greatness of our Society.  

Diversity and Inclusion: “We strive to be a global, diverse, and welcoming organization.” When ISA was founded in 1945 our name was the “Instrument Society of America.” We changed our name to the International Society of Automation about 10 years ago, even though we have had international members for many decades. It is important that we know our history while also defining our future.  

I believe we need to be more inclusive and diverse if we are going to flourish and grow around the world. We need diverse backgrounds and opinions on the Executive Board, within our assemblies, departments, and committees. While this may be uncomfortable for some at times, this is necessary to achieve our objectives and global growth.  

Collaboration: “We seek out opportunities to work together for the benefit of the Society, its members, and our profession.” Over a dozen local instrument societies banded together to form ISA in 1945. They knew that together they could accomplish greater objectives through collaboration than any single unit could on its own. For example, the original declaration of policy listed objectives such as “…to advance the arts and sciences related to the theory, design, manufacture, and use of instruments and controls…, to encourage research…, to foster education…, to advance the standards of science and engineering…, and to promote interaction among its members and with allied technological societies.” Volunteers and staff must work together as a team to benefit the Society, its members, and the overall profession.  

Professionalism: “We uphold the highest standards of competence and skill in everything we do.” ISA provides the opportunity to enter the industry as an amateur and increase your level of knowledge and experience. That could include volunteering in some manner at the section or division level and learning teamwork and leadership skills (since no one is born a good leader). At some point in their careers, many people wish to give back, whether by writing a textbook, creating a training class, or participating in a standards committee. We hold the ISA bar rather high and expect the best from our volunteers and leaders.  

I stand behind each of ISA’s values. But it isn’t enough to just say them; we need to live them in everything that we do.  

About the Author
Paul Gruhn PE, CFSE, and ISA Life Fellow, is a global functional safety consultant with aeSolutions, a process safety, cybersecurity and automation consulting firm. As a globally recognized expert in process safety and safety instrumented systems, Paul has played a pivotal role in developing ISA safety standards, training courses and publications. He serves as a co-chair and long-time member of the ISA84 standard committee (on safety instrumented systems), and continues to develop and teach ISA courses on safety systems. He also developed the first commercial safety system modeling program. Paul has written two ISA textbooks, numerous chapters in other books and dozens of published articles. He is the primary author of the ISA book Safety Instrumented Systems: Design, Analysis, and Justification. He earned a bachelor of science degree in mechanical engineering from Illinois Institute of Technology, is a licensed Professional Engineer (PE) in Texas, and both a Certified Functional Safety Expert (CFSE) and an ISA84 safety instrumented systems expert.

Connect with Paul
LinkedIn | Twitter | Email





Cost-Effective Option for Using Cellular Data for SCADA and Data Acquisition at Remote Sites

The post Cost-Effective Option for Using Cellular Data for SCADA and Data Acquisition at Remote Sites first appeared on the ISA Interchange blog site.

This post was authored by Marcia Gadbois, formerly VP and general manager of InduSoft Business Unit of Schneider Electric.

The Internet of Things needs technology to connect people to machines and processes. From the operators on the plant floor to the execs in the C-suites, supervisory control and data acquisition (SCADA) data is expected to be available. Production data, key performance indicators, fault history, and process variable trends are just some of the information that needs to be collected and displayed. The demand for this and other data continues to grow.

When the machines or processes are on a plant floor, tried-and-true methods of connecting their data to human-machine interfaces (HMIs), databases, or the cloud are available. A hardwired Ethernet connection, usually via an industrial Ethernet protocol, is a popular choice for connectivity. As the distance to the machines increases or where the application allows, a local and secure Wi-Fi connection can be used.

When machines and processes move to remote sites, public communication methods, such as phone lines, radio, and satellite, become better choices. Each method has its strengths and weaknesses. Another over-the-air communication method is cellular, which in the past has been too expensive for many applications. But with proper cellular hardware selection, the right protocol, proper network configuration, and data filtering in the HMI or controller, cellular data usage can be limited to allow cost-effective collection and distribution of SCADA system data (figure 1).

This article discusses the advantages of going cellular in remote applications and covers cellular basics. Examples are provided to show how to remotely connect to cellular data, efficiently collect it, and use it in SCADA applications.

Capable cellular networks

Selecting cellular as the communication method in a SCADA data collection application confers many advantages.

Cellular advantages for SCADA:

  • High-speed connection
  • Widespread coverage
  • Lower cost than satellite
  • Reasonable equipment cost
  • Easy to configure and deploy
  • Secure link

Cellular network communication provides a high-speed connection. Download speeds up to 12 Mbps and upload speeds of up to 5 Mbps are common, with even higher peak speeds. Therefore, cellular communication provides a reasonably fast Ethernet connection in remote locations. Internet access can also be provided, but it can consume data quickly.

Cellular service availability in remote locations is increasing every day, making it a viable and lower cost option than using satellite service. Cellular technology is also well developed, so the latest LTE and 4G services and related cellular modems, gateways, and routers are reasonably priced.

Figure 1. Existing worldwide cellular networks can be used to implement communication between remote sites and SCADA system HMIs.

Making the cellular connection

A cellular modem, gateway or router, and data provider are required to implement cellular communication in SCADA or remote data collection applications. The cellular modem and gateway can be quickly configured and deployed as a remote data collection option. The cellular gateway can incorporate a firewall, a virtual private network (VPN), a network address translation (NAT) router, and modem functionality into a single device (figure 2). These cellular gateways are designed to be continuously connected and monitored, and have been simple to configure and deploy for about 20 years. A cellular router is like a gateway, but with additional network management capabilities. A bare modem, by contrast, does little more than make the cellular connection; it needs added functionality to be suitable for remote data exchange.

The selected cellular gateway should be designed for industrial use to handle temperature extremes as well as harsh environments. It should also include a future-proof design with LTE and 4G cellular connectivity and support multiple carriers. The gateway needs 2G and 3G capability, as many remote locations do not have LTE available. Low power consumption is important in remote locations to minimize the impact on power systems.

Cellular carriers, such as Verizon Wireless, Sprint, AT&T, and T-Mobile, offer cellular data service. Many cellular service plans are available, and organizations should contact each of the leading carriers to review the service options available for industrial data plans.

Figure 2. A cellular gateway combines the functions of a firewall, a VPN, a NAT router, and a cellular modem in one device.

Secure mobile communications

Cellular gateways can be well secured, because these devices are commonly used in enterprise and retail applications as secure links for corporate data, sales, and payment transactions. This makes cellular gateways suitable for connecting remote machines or processes via Ethernet or serial links using a variety of industrial protocols. Many of the gateways can also provide connections for Wi-Fi devices.

Cellular gateways can secure SCADA data by using IP networks with cybersecurity features, such as encryption and VPNs. Many gateway devices offer multiple, concurrent VPN sessions, providing connections to multiple control networks or SCADA systems. Remote authentication management can limit access, as can the use of port filtering and trusted IP addresses. Cellular gateways are an option wherever cellular service is available, but data usage must be managed wisely.

Efficient transmission

Cellular data usage must be efficient and limited, lest overuse charges send monthly costs over budget in data collection or SCADA applications. With cellular communication applications, the data cannot just be sent every second as with many in-plant or hard-wired systems. Additionally, communication protocols used with cellular must be efficient.
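As a rough back-of-the-envelope illustration of why per-second polling is untenable on a metered cellular plan, the sketch below compares monthly data volume for naive 1-second polling against change-based reporting. The 200-byte report size and 500 reports per day are assumptions for illustration, not figures from this article.

```python
# Back-of-the-envelope cellular data budget with assumed sizes.
PACKET_BYTES = 200               # assumed payload + protocol overhead per report
SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_mb(reports_per_day=None, poll_every_s=None):
    """Estimate monthly data usage in MB for a single telemetry point."""
    if poll_every_s:
        reports = SECONDS_PER_MONTH / poll_every_s
    else:
        reports = reports_per_day * 30
    return reports * PACKET_BYTES / 1e6

polling = monthly_mb(poll_every_s=1)            # naive 1 s polling
by_exception = monthly_mb(reports_per_day=500)  # only changed values sent
```

Under these assumptions, one point polled every second consumes hundreds of megabytes per month, while reporting only changes stays in the single-digit megabytes, which is why the protocol choices discussed next matter so much.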

Transmission Control Protocol (TCP) is not suitable for SCADA communications on a cellular network due to the read, response, and confirm requirements of the protocol. These acknowledgment requirements, built into the TCP/IP stack, cause retransmission of the data, often several times, depending on packet length and network status. This retransmission causes additional network traffic and increases cellular data usage.

The solution is to use User Datagram Protocol (UDP) instead of TCP. UDP is another transport protocol in the Internet protocol suite. It is less reliable but faster than TCP, because it neither confirms nor guarantees delivery of data packets. Instead, UDP simply sends packets as they are generated, without resending missed ones.
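The cost difference is visible even in a minimal sketch using Python's standard socket library: a UDP send is a single datagram with no handshake, acknowledgment, or retransmission. The host, port, and payload format here are illustrative assumptions, not part of any particular SCADA product.

```python
# Hedged sketch: sending one telemetry reading over UDP. Each reading costs
# exactly one datagram of cellular data; there is no delivery guarantee.
import json
import socket

def send_reading(tag: str, value: float, host: str, port: int) -> int:
    """Encode a reading as JSON and fire it off as a single UDP datagram."""
    payload = json.dumps({"tag": tag, "value": value}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # sendto() returns the number of bytes handed to the network stack;
        # UDP does not confirm that the remote end ever received them.
        return sock.sendto(payload, (host, port))
```

Because delivery is not guaranteed at this layer, an application protocol such as DNP3 (discussed next) takes over the responsibility for confirming that data actually arrived.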

DNP3 protocol reduces data usage

When UDP is used with SCADA or remote data acquisition in cellular applications, reliable delivery of data packets is no longer the responsibility of the transport protocol. The Distributed Network Protocol Version 3 (DNP3) is used instead to guarantee data packet delivery.

The DNP3 protocol sends data between a local server and a remote station over IP or serial connections. It is commonly used in utility industries, such as electric, gas, and water companies, in traffic control systems, and in general SCADA applications. DNP3 ensures data reliability by verifying that the receiving station got and understood the data sent to it, even over serial connections.

An important feature of the DNP3 protocol is that it inherently limits communication channel bandwidth usage. It is also designed for reliable communication resistant to electrical interference. DNP3 combines some of the best features of other protocols, such as select-before-operate, quality alerts, time-stamped data, multiple data format compatibility, and unsolicited reporting.

To reduce data usage, the DNP3 protocol reports only changes in the data, so there is no need to read all of the information every time. Only data that has changed is reported to the SCADA system, along with a time stamp.
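The change-only reporting idea can be illustrated with a small sketch. This is application-level pseudolikeness of the concept, not the DNP3 protocol itself: a point is transmitted only when it moves outside a deadband, and each report carries a time stamp. Class and tag names are assumptions for the example.

```python
# Report-by-exception sketch: suppress readings inside a deadband so only
# meaningful changes (with time stamps) cross the cellular link.
import time

class ExceptionReporter:
    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last_sent: dict[str, float] = {}

    def report(self, tag: str, value: float):
        """Return a (tag, value, timestamp) event if the change exceeds the
        deadband; return None when nothing needs to go over the cell link."""
        last = self.last_sent.get(tag)
        if last is None or abs(value - last) >= self.deadband:
            self.last_sent[tag] = value
            return (tag, value, time.time())
        return None
```

With a sensibly chosen deadband per point, a process that is mostly steady generates almost no cellular traffic between genuine events.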

HMI software is often used to translate DNP3 information to industrial protocols commonly used by programmable logic controllers and other controllers. DNP3 is also one of the few SCADA protocols with built-in security features, adding another layer to a defense-in-depth security policy.

Just send the important data

In addition to using efficient Ethernet and data server protocols, data should be filtered when using cellular communication. In a SCADA or remote data acquisition application, an HMI can be used to filter the data so only the most important information, as opposed to raw data, is sent via the cellular connection.

Although the controller at the remote site can be programmed to limit data transmission by only reporting the minimum and maximum variable data, along with large changes in data points, an HMI can provide more advanced data translation.

With the use of rule-based systems, artificial intelligence, and data analytics, the SCADA/HMI software can transform data into actionable information. For example, remote pumping stations (figure 3) and tank farms collect a large amount of data from instruments and controllers. A local SCADA/HMI data acquisition system can provide automated analysis of the data and communication of only the most important and actionable information over the cellular network. The data acquisition system can also limit how often the data is sent.

In a similar fashion, the SCADA system can also connect to a local historian to compress and efficiently store the information in a database. The compressed data, or only a portion of that data, can then be sent over the cellular connection.
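One simple form of the compression described above is to reduce each window of raw samples to the summary statistics the central SCADA system actually needs before anything crosses the cellular link. The field names below are illustrative assumptions, not a historian's real schema.

```python
# Summarize a window of raw readings so only a compact record, rather than
# every sample, is sent over the cellular connection.
def summarize(samples: list[float]) -> dict:
    """Reduce a list of raw readings to min/max/mean/last plus a count."""
    return {
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
        "last": samples[-1],
        "count": len(samples),
    }
```

A one-minute window of one-second samples collapses from 60 values to a single five-field record, a reduction of more than an order of magnitude before any protocol-level savings are counted.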

Cellular communication is often the best option for SCADA and other remote data applications, but care must be taken in design to limit the amount of data sent from the remote site. The right HMI is a key part of this design. It can act as a hub to collect, store, process, and analyze raw data, and send only the resulting actionable information to users. As cellular coverage continues to spread and service becomes less expensive, expect more industrial data acquisition applications to select the cellular communication option.

Figure 3. Remote sites like this oil pumping station are good candidates for cellular communications, which is often the most cost-effective option.

About the Author
Marcia Gadbois is formerly VP and general manager of the InduSoft Business Unit of Schneider Electric. She has over 30 years of experience in the software industry, with the last nineteen years spent in start-up software companies in roles including president, general manager, and vice president of marketing and business development. Before her work in start-ups, she spent 10 years at Digital Equipment Corporation and a year at the Open Software Foundation (a UNIX consortium). In those 11 years, Marcia held management positions in product management, business management, marketing, and sales. She has worked in technical areas such as AI, UNIX, distributed systems, output management, middleware, PC utility tools, and industrial automation, and is a contributing author to the book Programming with RPC and DCE. Marcia holds an MBA from the Whittemore School of Business (University of New Hampshire) and a bachelor of science in management information systems, with a minor in computer science, from Bowling Green State University.

Connect with Marcia
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

New Control Approach Increases Efficiency and Safety of Boiler-Turbine Systems [technical]

The post New Control Approach Increases Efficiency and Safety of Boiler-Turbine Systems [technical] first appeared on the ISA Interchange blog site.

This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.

Abstract: As requirements for load regulation and efficiency enhancement increase, the control performance of boiler-turbine systems has become much more important. In this paper, a novel robust control approach is proposed to improve the coordinated control performance for subcritical boiler-turbine units. To capture the key features of the boiler-turbine system, a nonlinear control-oriented model is established and validated against historical operation data from a 300 MW unit. To achieve system linearization and decoupling, an adaptive feedback linearization strategy is proposed, which can asymptotically eliminate the linearization error caused by model uncertainties. Based on the linearized boiler-turbine system, a second-order sliding mode controller is designed with the super-twisting algorithm. Moreover, the closed-loop system is proved robustly stable with respect to uncertainties and disturbances. Simulation results illustrate the effectiveness of the proposed control scheme, which achieves excellent tracking performance, strong robustness, and chattering reduction.

Free Bonus! To read the full version of this ISA Transactions article, click here.

 

Enjoy this technical resource article? Join ISA and get free access to all ISA Transactions articles as well as a wealth of other technical content, plus professional networking and discounts on technical training, books, conferences, and professional certification.

Click here to join ISA … learn, advance, succeed!


© 2006–2019 Elsevier Science Ltd. All rights reserved.




Source: ISA News

AutoQuiz: Which Signal Type Indicates Direction and Velocity in a Motion Control Application?

The post AutoQuiz: Which Signal Type Indicates Direction and Velocity in a Motion Control Application? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

In a motion control application, which of the following signal types would be able to indicate both direction and velocity?

a) 4–20 mA
b) 3–15 psi
c) ±10 V
d) 0–10 V
e) none of the above

Click Here to Reveal the Answer

To indicate both direction and velocity, a signal that can take both positive and negative values is required. Motion controls use either ±10 V or ±5 V signals to accomplish this. Direction is indicated by the sign of the signal; the magnitude of the velocity is indicated by the magnitude of the voltage.

The other three choices are typical analog input signal types, but none of them can easily indicate direction, only the magnitude of the velocity. A pneumatic (3–15 psi) signal, in particular, would never be a good choice for motion control applications.

The correct answer is C, “±10 V.”

Reference: Nicholas Sands, P.E., CAP and Ian Verhappen, P.Eng., CAP., A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News