Thank You Sponsors!

CANCOPPAS.COM

CBAUTOMATION.COM

CONVALPSI.COM

DAVISCONTROLS.COM

ELECTROZAD.COM

EVERESTAUTOMATION.COM

HCS1.COM

MAC-WELD.COM

SWAGELOK.COM

THERMON.COM

VANKO.NET

WESTECH-IND.COM

WIKA.CA

AutoQuiz: How to Calculate Steady State Gain for a Standard Pneumatic Instrument Loop

The post AutoQuiz: How to Calculate Steady State Gain for a Standard Pneumatic Instrument Loop first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

You have a standard pneumatic instrument loop with a span of 200 units. What is the steady state gain?  

a) 0.06 psig/unit
b) 0.07 psig/unit
c) 0.08 psig/unit
d) 0.12 psig/unit
e) none of the above


The correct answer is A, 0.06 psig/unit.

Gain is defined in control theory as the change in output divided by the change in input. Here the output is the standard 3–15 psig pneumatic signal and the input spans 200 units, so the gain can be calculated as follows:

Gain = (full range change in output) / (full range change in input)

= (15 psig – 3 psig, for a pneumatic loop) / 200 units of change

= (15 – 3) / 200

= 12 / 200

= 0.06 psig/unit
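The same arithmetic can be sketched in a few lines of Python (the 3–15 psig range is the standard pneumatic output signal; the 200-unit span comes from the question):

```python
# Steady-state gain of a standard pneumatic loop:
# full-range change in output signal divided by full-range change in input.
def steady_state_gain(span_units, out_low_psig=3.0, out_high_psig=15.0):
    """Return the gain in psig per engineering unit."""
    return (out_high_psig - out_low_psig) / span_units

print(steady_state_gain(200))  # 0.06
```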

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

How Connected Buildings and Power Grids Make Cities ‘Smart’

The post How Connected Buildings and Power Grids Make Cities ‘Smart’ first appeared on the ISA Interchange blog site.

This guest blog post was authored by Claudia Vergueiro Massei, CEO of Siemens in Oman.

We are living through interesting times. Name any tech buzzword – be it artificial intelligence, big data, Internet of Things (IoT) – and you will see a ripple effect on different industries. One thing remains certain: We live in a world of cities, and our planet is increasingly urban. By 2050, more than 70 percent of the world’s population will live in cities. Cities are the new engines of growth in the global economy, responsible for 80 percent of global GDP, and they are undergoing constant transformation to prepare for the future.

There is increasing awareness of the benefits that smart cities can bring, including their ability to help meet the United Nations’ Sustainable Development Goals (SDGs). By taking advantage of new and emerging technology trends and data sciences, cities can manage their growing urban environments to become more livable with safer, cleaner, healthier and more convenient communities; more workable with a modern digital infrastructure that attracts companies, jobs and talent; and more sustainable when powered by clean, renewable energy.

What makes infrastructures smart?

But with this tech revolution comes a need to change how our cities are powered! The world of energy is undergoing a massive transformation: it is moving away from fossil fuels and a centralized supply provided by a few power plants and towards renewable energy sources like wind turbines and solar power systems, in conjunction with storage technologies and a distributed structure.

Thanks to its accessibility and eco-friendliness, solar power is increasingly touted as the future of energy for smart cities. In addition, solar-powered microgrids can run off-grid for several days in the event of a power outage caused by natural calamities.

For example, Oman stands to benefit tremendously from solar energy thanks to a conducive climate and a favorable geographical position that ensure sunny days for most of the year. The sultanate has been setting ambitious targets for solar power generation, which, according to the Oman Power and Water Procurement Company, will likely produce 21 percent of the total power needed in Oman by 2030, counting both independent solar power plants and rooftop photovoltaic systems. Under its renewable energy initiative “Sahim,” the Authority for Electricity Regulation (AER) seeks wide-scale deployment of small photovoltaic (PV) systems at residential premises in Oman. With a higher contribution of renewables to the energy mix, compensating for fluctuations in power supply will be increasingly important to maintain the stability and reliability of the electricity grid.

Smart city, smart water

One of a city’s most important pieces of critical infrastructure is its water system. With populations in cities growing, it is inevitable that water consumption will grow as well. The term “smart water” points to water and wastewater infrastructure that ensures this precious resource – and the energy used to transport it – is managed effectively. A smart water system is designed to gather meaningful and actionable data about the flow, pressure and distribution of a city’s water system. Further, it is critical that the forecast and actual measurement of water consumption are accurate.

A city’s water distribution and management system should be equipped with the capacity to be monitored and networked with other critical systems to obtain more sophisticated and granular information on how they are performing and affecting each other. Incorporating smart water technologies allows water providers to minimize non-revenue water (NRW) by finding leaks and bursts quickly, and even predictively, using real-time SCADA (Supervisory Control and Data Acquisition) data and comparing that to network simulation models.
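As a rough illustration of that comparison (all tag names and numbers are hypothetical; a real system would use live SCADA tags and a calibrated hydraulic model), a leak check can be as simple as flagging pipes whose measured flow deviates too far from the model:

```python
# Illustrative sketch (not a real SCADA API): flag a possible leak when
# measured flow deviates from the hydraulic-model prediction by more
# than a set fractional tolerance.
def flag_possible_leaks(scada_flows, model_flows, tolerance=0.10):
    """Return pipe IDs whose measured flow deviates from the model
    prediction by more than `tolerance` - a crude NRW screening check."""
    suspects = []
    for pipe_id, measured in scada_flows.items():
        predicted = model_flows.get(pipe_id)
        if predicted is None or predicted == 0:
            continue
        if abs(measured - predicted) / predicted > tolerance:
            suspects.append(pipe_id)
    return suspects

scada = {"main_01": 120.0, "main_02": 98.0, "branch_07": 45.0}
model = {"main_01": 118.5, "main_02": 80.0, "branch_07": 44.0}
print(flag_possible_leaks(scada, model))  # ['main_02']
```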

Another important element to consider: in both agricultural and commercial building applications, irrigation systems are required to keep the landscape healthy and vibrant. However, there is often a tendency to over- or under-water, which can damage plant life and increase operational and energy costs. Intelligent irrigation systems can help mitigate these issues and generate savings that far exceed the cost of their implementation.

Key technology: building automation

There is further potential for greater sustainability in cities in the merging of buildings and power grids. Buildings are becoming smarter and more networked as they exchange energy and data with the grid. Buildings no longer only consume energy; they also store and distribute it.

At least 30 percent of electricity in buildings is wasted: heating or cooling systems not adjusted to room occupancy levels, leading to over-heating or over-cooling; sprinklers incorrectly aimed and/or activated during the warmest hours of the day; lights kept on all day in spaces where sunlight could provide proper illumination or where partial lighting would be sufficient; and computers left running 24/7 unnecessarily. The intelligent integration of lighting, data, heat, ventilation, air-conditioning, fire safety and security systems with automation platforms could significantly reduce wasted energy and therefore optimize total consumption.
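As a toy illustration of the occupancy-aware control described above (the setpoints are hypothetical, not drawn from any particular building standard):

```python
# Hypothetical building-automation rule: choose an HVAC setpoint from room
# occupancy instead of conditioning empty space at full comfort levels.
def hvac_setpoint_c(occupied, season):
    """Return a target zone temperature in Celsius.
    Unoccupied zones get a relaxed 'setback' setpoint to cut waste."""
    comfort = {"heating": 21.0, "cooling": 24.0}
    setback = {"heating": 16.0, "cooling": 28.0}
    table = comfort if occupied else setback
    return table[season]

print(hvac_setpoint_c(True, "heating"))   # 21.0
print(hvac_setpoint_c(False, "cooling"))  # 28.0
```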

One thing that is uncontestably true in smart cities is that cars and people will exert a heavy burden on our mobility systems and infrastructure. Thankfully, mobile apps, automated parking systems, street-side sensors, and open data all stand to revolutionize parking systems in major cities. We are moving towards a parking paradigm that yields greater convenience for drivers and less congestion for cities.

All in all, emerging and digital technologies offer huge potential to address infrastructure challenges, but ready access to resources and expertise is essential. Today, enormous amounts of data are used in silos, limited to narrowly defined purposes. Much of their value goes unmined, because real benefits are only created when this “big data” is gathered and analyzed into true “smart data.”

The systematic gathering and integration of urban data will only be possible when individual infrastructure units – power grids, trains, traffic systems and intelligent buildings – are connected to the digital world.

Finally, we have to remember that the story of the smart city is a story about innovation. It is a story about people because smart city strategies will always start with people, not with technology. Smartness is not just about installing digital interfaces in traditional infrastructure. It is also about using technology and data purposefully to make better decisions, to deliver a better quality of life to the population.

About the Author
Claudia Vergueiro Massei is an executive manager committed to enabling the digital transformation in Oman and to contributing to its economic development, while building up a skilled local workforce in the Sultanate. As Siemens Oman CEO, Claudia drives business in key growth areas, including power generation, transmission and distribution, oil & gas, automation, manufacturing and smart building technologies. She assumed her role after working with Siemens in Germany, Denmark and China across various business lines. Originally from Brazil, she commenced her career in strategy consulting, undertaking assignments in France, Morocco, South Africa and the United States, besides her home country. She subsequently co-founded a SaaS (software as a service) edtech startup in Brazil, which she ran for a couple of years before joining Siemens. In her free time, Claudia enjoys dancing, traveling and learning about different cultures, as well as mentoring young entrepreneurs. Claudia holds an aeronautical engineering degree from Instituto Tecnologico de Aeronautica, an MBA from the Wharton School, and an MA in international studies from the Lauder Institute, University of Pennsylvania.

Connect with Claudia
LinkedIn

 



Source: ISA News

Does IIoT Live up to the Hype?

The post Does IIoT Live up to the Hype? first appeared on the ISA Interchange blog site.

This article was written by Eric J. Byres, a well-known industry expert in the field of industrial cybersecurity and chair of the ISA99 Security Technologies Working Group.

There sure is an awful lot of hype about the Industrial Internet of Things (IIoT). One cannot attend a trade show without seeing a dozen new IIoT products or services. Every one of those new offerings promises to completely revolutionize your business and bring untold riches to your company.

But is IIoT really a game changer? Or is it just a trendy buzzword? And if it is real, how do you get it to live up to its promise at your company?

I have provided security guidance for a number of IIoT projects. I have also been facilitating teams of IIoT experts in “think tanks” for Fortune 500 companies rolling out IIoT projects. At first I was pretty cynical about IIoT. After all, haven’t we been connecting smart industrial devices for decades? Network-connected remote terminal units, programmable logic controllers, and human-machine interfaces (HMIs) are nothing new.

But the more I got involved in IIoT, the more I saw that it was something new. Integration was not just between systems on the plant floor. It offered corporate, customer, and partner-wide connectivity on a whole new scale. As a result, it had the potential to unlock tremendous value in the manufacturing chain and transform the way a company does business.

I also quickly discovered that, like all new ideas, IIoT is not without its challenges. These include increased security risks, the potential for information overload, a shortage of staff with the needed skill sets and experience, and unexpected effects on corporate culture. Deploy IIoT incorrectly, and you can have a big mess on your hands. So here are three IIoT best practices  I have learned that can make your IIoT project live up to its promise.

#1 Whole company involvement

IIoT is not a “wiring problem.” It is not limited to the information technology (IT) department. And it is not just the concern of the chief information officer. When an IIoT project rolls out, it can affect everyone from the operator on the plant floor to the general manager.

We all know that when people do not understand how something will affect their jobs, they are often scared. When they are scared, they are likely to put up roadblocks. Roadblocks can result in projects that do not achieve their objectives, or that fail outright. IIoT projects are no exception. In fact, because they can affect so many aspects of a business, they absolutely need wide-ranging involvement to be a success.

Ingersoll Rand Residential HVAC group recently rolled out an IoT solution called the Nexia Home Intelligence system. One of the things it does is enable technicians to remotely troubleshoot air conditioners. This both improves customer satisfaction through faster service response times and saves the company money by reducing unneeded or incorrectly provisioned calls.

As good as it is for the customer and the company, this new system affects how service specialists do their jobs. It changes their work day from one of constant service calls on the road to more in-office troubleshooting and preparation. Now, if the technicians felt threatened by the system, it would be easy for them to sabotage it through incorrect diagnoses or rolling out trucks regardless. Fortunately, the project was successful because the Ingersoll Rand IIoT deployment included the opportunity for the service teams to accept the IIoT concept, comment on it, and understand how it would benefit both them and the company in the long run.

This highlights how critical it is for the entire company to be involved and aligned in the success of any IIoT project. Everyone needs to have a stake in helping the project achieve the ultimate win-win scenario in the company’s best interest. Thus, the project team must encompass everyone who might contribute or be affected. And it will need experience in a lot of different areas, including analytics, joint IT/OT operations, communication and management, and security designs and architecture. Everyone’s skill sets and cooperation are needed to seamlessly integrate IIoT into the workspace.

So how can you get whole company buy-in? Instead of specifying IIoT projects from the top down, senior management can ensure that the necessary tools are available to its operational teams for IIoT projects. This way the people with the hands-on experience of the process, products, and customers can help the company derive real value from an IIoT deployment.

Bill Brown, the director of digital innovation at the tool manufacturer Stanley Black & Decker, recently explained how his job is to offer an easy-to-use IIoT platform for the different business units to be able to roll out their IIoT vision. “The system needs to be so easy that people will adopt quickly without being told to do it,” explains Brown. This enables the entrepreneurial fast thinkers and problem solvers to implement IIoT more efficiently, easily, and effectively.

#2 Focus on business value

IIoT is meant to drive business value. It is not just how you are collecting data through interconnectivity; it is why you want to do this in the first place. If someone asks you, “How does IIoT [or the data derived] make your company better?” and you are unable to answer with a specific reason, you should reconsider the project. The more specifically you can define the business value, the more likely you are to attain it.

How do you get everyone onboard so that you can indeed focus on the goal? One strong strategy is to pilot, then scale. Start small and get some clear wins. Celebrating little triumphs goes a long way: not only do you get the naysayers and traditionalists off your back sooner (and there will always be these types), but you can also win them over to your cause. Your opponents can become your allies.

A well-known turbine manufacturer tried this tactic, beginning small before expanding based on its initial success. One of its manufacturing products is the impellers used in centrifugal compressors, both single stage and multi-stage, bound for process gas plants, ethylene plants, mines, and wastewater treatment facilities. These impellers can range in diameter from 16 in to 72 in, either milled or fabricated from expensive alloys. This puts the component value from $100K to $400K. Having them sit around in inventory is very expensive.

Two critical post-processing operations for impellers are balancing and blade frequency testing. Specifically, impeller blade testing requires the blade frequency to be measured for each of the 17-23 blades. Before the IIoT project, it was someone’s job to take these individual measurements and document them in a spreadsheet. After that, a hardcopy of the data was delivered to an engineer for review, where approval or rejection of a given blade was communicated. In the case of a rejected blade, a certain amount of material had to be removed, determined by the standard blade geometry and operational speed. The process took days to weeks to complete, translating to at least $200,000 in (wasted) inventory costs.

The extra days in held inventory were quite a significant cost on its own, but with the addition of the extensive manual processing and paperwork approval, this was a prime area for implementing IIoT technologies with automation and machine intelligence.

By automating the process via IIoT and adding intelligence into the blade-ring test machine, multiple problem areas (transcription errors, man hours, held-up inventory, and consequent expenses) were addressed. All of the tasks are repeatable, and could therefore be completed autonomously. The operator is immediately informed of pass/fail results, and alerts can be sent remotely to the engineer if approval is required. It was not a big project per se, but it delivered clear wins and a huge return on investment (ROI). That helped people within the company understand why they would want to support another, larger IIoT project in the future.
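The automated pass/fail logic might look something like the following sketch (the exclusion band and measured frequencies are illustrative, not the manufacturer's actual criteria):

```python
# Hypothetical sketch of an automated blade-frequency check: each measured
# blade frequency must fall outside a band around the machine's excitation
# frequency; blades inside the band need material removed.
def check_blades(frequencies_hz, exclusion_low=195.0, exclusion_high=205.0):
    """Return (blade_index, frequency) pairs that fall inside the
    excluded band and are therefore rejected."""
    return [(i, f) for i, f in enumerate(frequencies_hz)
            if exclusion_low <= f <= exclusion_high]

measured = [188.0, 201.5, 212.3, 197.0]  # one reading per blade
print(check_blades(measured))  # [(1, 201.5), (3, 197.0)]
```

With results available the moment the test rig finishes, the operator sees pass/fail immediately instead of waiting days for a hardcopy review.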

#3 Design security and robustness in from the start

The best IIoT systems are those designed from the very beginning with security and robustness in mind. They include elements such as automated failback features, an increased tolerance for short-term failures, and security monitoring within the system operations plan. Brown of Stanley Black & Decker explains that his company’s IIoT deployments could not be centered in the cloud; they needed to be able to work on premises. “If the Internet connection goes down, your system still needs to function.”

Experts such as the chief security architect of Polyverse Corporation, Steven C. Venema, recommend reviewing the ISA/IEC 62443 standards (formerly known as the ISA99 standards) as a preliminary road map toward partitioned architectures for the industrial control system (ICS) and supervisory control and data acquisition domain. “Partition your equipment and systems designs,” Venema cautions, “to allow security components to be updated on a faster cycle than other operational components.” As “the complete security life-cycle program for industrial automation and control systems,” ISA/IEC 62443 consists of 11 standards and technical reports. It introduces the concepts of zones (groupings of logical or physical assets that share common security requirements based on criticality, consequence, and other such factors; equipment in a zone should share a strong security level capability) and conduits (paths for information flow between zones). ISA/IEC 62443 standards provide requirements based on a company’s assessment of cyberattack risks and vulnerabilities.

Within the oil industry, a large refinery created a security architecture that effectively protected its operations based on these standards. In its oil refinery process facility, the company had multiple operations (with basic control, safety, and HMI/supervisory systems). It also had considerable wireless and remote communications needs, both for maintenance and for communications to downstream customers.

To secure its operations, the company divided its systems into zones and subzones (depending on operational function, security capabilities and requirements, perceived risk, and process level) to best adjust the security requirements for each particular operation. After analyzing potential threat sources, the company relocated the safety instrumented system in each operational unit to its own zone (instead of being part of a basic control zone). Conduits were defined and documented, breaking down the overall system into manageable chunks. The zones and conduits were then implemented with tried-and-tested industry security appliances, including firewalls and virtual private networks. The technologies introduced into the control systems also made significant improvements to plant performance and productivity, and the company successfully continued to use and maintain the ISA/IEC 62443 standards as a framework for security improvements.

In your IIoT security checklist, implement the following proactive and protective measures:

  • Design security in from the start. Never leave it as an afterthought.
  • Enlist expert help. Form a team of senior management and security specialists who can communicate and work together to design protective measures that work seamlessly with the functionality and features of the plant and its products or services.
  • Compartmentalize IIoT solutions into security zones to prevent the spread of malware throughout the plant. In tandem, integrate security best practices during each phase of the developmental process on the plant floor.
  • Monitor your IIoT system continuously to understand vulnerabilities and manage emerging threats. It is essential to detect issues as early as possible.

IIoT should not be a raw or experimental practice. It must be designed for reliability, with security measures that evolve and are promptly followed and updated. Otherwise, it is no different than installing a burglar alarm system in your house . . . and never bothering to turn it on.

IIoT: A new way of examining an old problem

“IIoT is an evolution . . . it is moving legacy systems into the new age of technology to take advantage of everything [that] new technology and connectivity have to bring.”

– Vimal Kapur, president of Honeywell Process Solutions

At its core, IIoT is not a new technology. It takes advantage of some new technologies, but it is actually a new way of examining an old problem. We have always had the data (test results, analytics, asset management information, maintenance information), but it has often been inaccessible, overlooked, or obscured in our operating procedures. IIoT lets us rethink the way industry integrates the data buried in our manufacturing processes.

Effectively implementing IIoT is a continuous process that demands strategic planning, a focus on company goals, the coordination of teams with diverse skill sets, and an investment in quality security measures. The adoption of IIoT brings immediate benefits, such as improved reliability and reduced downtime. Simultaneously, it also enables long-term benefits by establishing a platform for continuous development and offering a greater ROI by integrating information quantity and quality.

By creating a forward-thinking company culture, by maintaining corporate focus, and by designing IIoT systems with appropriate security measures, your business can overcome obstacles and strategically implement IIoT best practices to gain an immense competitive advantage in the digital future.

About the Author
Eric Byres, CTO and VP Engineering of Tofino Security (part of Hirschmann, a Belden Brand), is a well-known industry expert in the field of industrial cybersecurity and is chair of the ISA99 Security Technologies Working Group, chair of the ISA99 Cyber Threat Gap Analysis Task Group and Canadian representative for IEC TC65/WG13, a standards effort focusing on an international framework for the protection of process facilities from cyberattack. Eric was recognized for his contributions to the automation industry when honored by the International Society of Automation as an ISA Fellow for his outstanding achievements in science and engineering.

Connect with Eric
LinkedIn | Twitter

A version of this article also was published at InTech magazine.



Source: ISA News

AutoQuiz: What Are Common Terms Used to Quantify Dangerous Industrial Failures?

The post AutoQuiz: What Are Common Terms Used to Quantify Dangerous Industrial Failures? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Common terms used to quantify dangerous failures include which of the following?

a) probability of failure on demand (PFD) and nuisance trip rates
b) probability of failure on demand (PFD), risk reduction factor (RRF), and safety availability (SA)
c) mean time between failure, spurious (MTBFsp); nuisance trip rates; and safety availability (SA)
d) mean time between failure, spurious (MTBFsp) and risk reduction factor (RRF)
e) none of the above

Click Here to Reveal the Answer

The correct answer is B, probability of failure on demand (PFD), risk reduction factor (RRF), and safety availability (SA). These three terms are central to quantifying dangerous failures:

  • Probability of failure on demand (PFD): the probability that the system fails to perform its safety function when a demand occurs.
  • Risk reduction factor: RRF = 1/PFD
  • Safety availability: SA = 1 – PFD

Spurious trips and nuisance trips are indicative of “safe” failure modes, not “dangerous” failures, which makes answers A, C, and D incorrect.
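The three quantities are simple algebraic transforms of one another, which a short sketch can make concrete (the PFD value below is purely illustrative):

```python
def rrf(pfd: float) -> float:
    """Risk reduction factor: RRF = 1 / PFD."""
    return 1.0 / pfd

def safety_availability(pfd: float) -> float:
    """Safety availability: SA = 1 - PFD."""
    return 1.0 - pfd

# Illustrative safety function with an average PFD of 0.005:
pfd = 0.005
print(rrf(pfd))                  # 200.0 -> risk is reduced by a factor of 200
print(safety_availability(pfd))  # 0.995 -> responds to 99.5% of demands
```

Note that all three describe the same underlying number, so improving any one of them improves the other two.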

Reference: ANSI/ISA-84.00.01-2004 standard

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

How Often Do Measurements Need to Be Calibrated?

The post How Often Do Measurements Need to Be Calibrated? first appeared on the ISA Interchange blog site.

The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Greg Breitzke.

Greg Breitzke is an E&I reliability specialist – instrumentation/electrical for Stepan. Greg has focused his career on project construction and commissioning as a technician, supervisor, or field engineer. This is his first in-house role, and he is tasked with reviewing and updating plant maintenance procedures for I&E equipment.

Greg Breitzke’s Question

I am working through an issue that can be beneficial to other Mentor Program participants. NFPA 70B provides a detailed description of the prescribed maintenance and frequency based on equipment type, making the electrical portion fairly straightforward. The instrumentation is another matter. We are working to consolidate an abundance of current procedures based on make/model into a reduced list based on technology. The strategy is to “right size” the frequencies for calibration and functional testing, decreasing non-value maintenance so that we can increase value-added activities within the existing head count.

My current plan for the instrumentation consists of: 

  1. Sort through the historical paper files with calibration records to determine how long a device has remained in tolerance before a correction was applied,
  2. Compare data against any work orders written against the asset that may reduce the frequency,
  3. Apply safety factors relative to the device impact on safety, regulatory compliance, quality, custody transfer, basic control, or indication only.

I am trying to provide a reference baseline for review of these frequencies, but I am having little luck with the industry standards I have access to. Is there a standard or RAGAGEP (Recognized and Generally Accepted Good Engineering Practice) for calibration and functional testing frequency min/max by technology that I can reference for a baseline?

Nick Sands’ Answer

The ISA recommended practice is not on the process of calibration but on a calibration management system: ISA-RP105.00.01-2017, Management of a Calibration Program for Industrial Automation and Control Systems. While I contributed, Leo Staples would be a good person for more explanation.

For SIS, there is a requirement to perform calibration (comparison against a standard device) at a documented frequency, with documented limits, and with correction when outside those limits. This is also required by OSHA for critical equipment under the PSM regulation, and the EPA has similar requirements under its RMP rule. Correction when out of limits is considered a failed proof test of the instrument in some cases, potentially affecting the reliability of the safety function. Paul Gruhn would be a good person for more explanation.

Paul Gruhn’s Answer

ISA/IEC 61511 is performance based and does not mandate specific test frequencies. Devices must be tested at some interval to make sure they perform as intended. The required frequency will depend on many factors (e.g., the SIL (performance) target, the failure rate of the device in that service, diagnostic coverage, and any redundancy used).
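Although the standard leaves the interval to the user, the commonly used simplified relationship for a single untested (1oo1) device, PFDavg ≈ λDU × TI / 2, shows how a target PFD and a failure rate imply a maximum proof-test interval. This sketch ignores diagnostics and redundancy, and the numbers are illustrative:

```python
def max_test_interval_years(pfd_target: float, lambda_du_per_year: float) -> float:
    """Simplified 1oo1 relationship PFDavg ~= lambda_DU * TI / 2,
    rearranged for the proof-test interval TI.
    Ignores diagnostic coverage and redundancy."""
    return 2.0 * pfd_target / lambda_du_per_year

# Illustrative numbers: target average PFD of 0.005 and a dangerous
# undetected failure rate of 0.01 per year:
print(max_test_interval_years(0.005, 0.01))  # 1.0 (year)
```

In practice the SIL verification calculations mentioned above, not this one-line approximation, set the frequency.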

Leo Staples’ Answer

Section 5.6 of ISA-RP105.00.01-2017 addresses calibration verification intervals, or frequencies, in detail. Users should establish calibration intervals for a loop/component based on the following:

  • criticality of the loop/component
  • the performance history of the loop/component
  • the ruggedness/stability of the component(s)
  • the operating environment.

Exceptions include SIS-related devices, where calibration intervals are established to meet SIL requirements. Other factors that can drive calibration intervals include contracts and regulatory requirements.

The idea for the technical report came about after years of frustration dealing with ambiguous gas measurement contracts and government regulations. In many cases these simply stated users should follow good industry practices when addressing all aspects of calibrations.

Calibration intervals alone do not address the other major factors that affect measurement accuracy. These include the accuracy of the calibration equipment, knowledge of the calibration personnel, adherence to defined calibration procedures, and knowledge of the personnel responsible for the calibration program. I have lots of war stories if anyone is interested.

One of the last things I did at my company before I retired was develop a Calibration Program Standard Operating Procedure (SOP) based on ISA-RP105.00.01-2017. The SOP was designed for use in the Generation, Transmission & Distribution, and other divisions of the company. Some of you may find this funny, but it was even used to determine the calibration frequency for NERC CIP physical security entry control point devices. Initially, personnel from the Physical Security Department were testing these devices monthly only because that was what they had always done. Although this was before the SOP was established, my team used its concepts in setting the calibration intervals for these devices, and the work was well received by the auditors. As a side note, the review found that the monthly calibration intervals caused more problems than they prevented.


ISA Mentor Program

The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

Greg McMillan’s Answer

Measurement drift can provide considerable guidance: when the number of months between calibrations multiplied by the drift per month approaches the allowable error, it is time for a calibration check. Most transmitters today have a low drift rate, but thermocouples and most electrodes drift much faster than the transmitter. Past calibration records will provide an update on the actual drift for an application. Fouling of sensors, particularly electrodes, is an issue revealed in the 86% response time during calibration tests (often overlooked). The sensing element is the most vulnerable component in nearly all measurements. Calibration checks should be made more frequently at the beginning, to establish a drift rate, and near the end of the sensor life, when drift and failure rates accelerate. Sensor life for pH electrodes can decrease from a year to a few weeks due to high temperature, solids, strong acids and bases (e.g., caustic), and poisonous ions (e.g., cyanide). For every 25°C increase in temperature, the electrode life is cut in half unless a high-temperature glass is used.
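The drift rule and the temperature effect on electrode life can both be sketched numerically; the drift rate, allowable error, and base life below are hypothetical values, not figures from the text:

```python
def months_until_check(drift_per_month: float, allowable_error: float,
                       fraction: float = 0.5) -> float:
    """Months until accumulated drift reaches a chosen fraction of the
    allowable error (calibrate before drift * months approaches the error)."""
    return fraction * allowable_error / drift_per_month

# Hypothetical thermocouple drifting 0.1 degC/month with a 1.5 degC allowable
# error; schedule a check when drift could have consumed half the allowance:
print(months_until_check(0.1, 1.5))  # 7.5 (months)

def electrode_life_days(base_life_days: float, temp_rise_c: float) -> float:
    """pH electrode life is halved for every 25 degC above the base temperature."""
    return base_life_days / 2 ** (temp_rise_c / 25.0)

print(electrode_life_days(365, 50))  # 91.25 (days): two halvings of a one-year life
```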

Accuracy is particularly important for primary loops (e.g., composition, pH, and temperature) to ensure you are at the right operating point. For secondary loops whose setpoint is corrected by a primary loop, accuracy is less of an issue. For all loops, the 5 Rs (reliability, resolution, repeatability, rangeability and response time) are important for measurements and valves.

Drift in a primary loop sensor shows up as a different average controller output for a given production rate, assuming no changes in raw materials, utilities, or equipment. Fouling of a sensor shows up as an increase in dead time and in the loop oscillation period.

Middle signal selection using three separate sensors provides a great deal of additional intelligence and reliability, reducing unnecessary maintenance. Drift shows up as a sensor with a consistently increasing average deviation from the middle value; the resulting offset is obvious. Coating shows up as a sensor lagging changes in the middle value. A decrease in span shows up as a sensor falling short of the middle value for a change in setpoint.
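A minimal sketch of middle signal selection and the drift diagnostic described above (the readings are invented):

```python
def middle_select(a: float, b: float, c: float) -> float:
    """Return the middle of three sensor readings."""
    return sorted((a, b, c))[1]

# One badly drifting electrode is ignored, even if it fails toward the setpoint:
readings = (7.01, 7.03, 9.50)
pv = middle_select(*readings)
print(pv)  # 7.03

# Drift diagnostic: a consistently growing deviation from the middle value
# flags which sensor is drifting.
print([round(abs(r - pv), 3) for r in readings])  # [0.02, 0.0, 2.47]
```

Because the median ignores a single failure of any type, the loop PV stays representative as long as two of the three sensors remain healthy.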

The installed accuracy greatly depends upon installation details and process fluid particularly taking into account sensor location in terms of seeing a representative indication of the process with minimal measurement noise. Changes in phase can be problematic for nearly all sensors. Impulse lines and capillary systems are a major source of poor measurement performance as detailed in the Control Talk columns Prevent pressure transmitter problems and Your DP problems could be a result of improper use of purges, fills, capillaries and seals.

At the end of this post, I give a lot more details on how to minimize drift and maximize accuracy and repeatability by better temperature and pH sensors and through middle signal selection.

Free Calibration Essentials eBook

For an additional educational resource, download Calibration Essentials, an informative eBook produced by ISA and Beamex. The free e-book provides vital information about calibrating process instruments today. To download the eBook, click this link.

Hunter Vegas’ Answer

There is no easy answer to this very complicated question. Unfortunately the answer is ‘it depends’ but I’ll do my best to cover the main points in this short reply.

1) Yes there are some instrument technologies that have a tendency to drift more than others. A partial list of ‘drifters’ might include:

    • pH (drifts for all kinds of reasons – aging of probe, temperature, caustic/acid concentration, fouling, etc. etc.)
    • Thermocouples (tend to drift more than RTDs especially at high temperature or in hydrogen service)
    • Turbine meters in anything other than very clean, lubricating service tend to wear out and read low as they age; however, cavitation can make them intermittently read high.
    • Vortex meters with piezo crystals can age over time, and their low-flow cutoff increases.
    • Any flow/pressure transmitter with a diaphragm seal can drift due to process temperature and/or ambient temperature.
    • Most analyzers (oxygen, CO, chromatographs, LEL)
    • This list could go on and on.

2) Some instrument technologies don’t drift as much. I’ve had good success with Coriolis and radar. (Radar doesn’t usually drift as much as it just cuts out. Coriolis usually works or it doesn’t. Obviously there are situations where either can drift but they are better than most.) DP in clean service with no diaphragm seals is usually pretty trouble free, especially the newer transmitters that are much more stable.

3) The criticality of the service obviously impacts how often one needs to calibrate. Any of these issues could dramatically impact the frequency:

    • Is it a SIS instrument? The proof testing frequency will be decided by the SIS calculations.
    • Is it an environmental instrument? The state/feds may require calibrations on a particular frequency.
    • Is it a custody transfer meter? If you are selling millions of pounds of X a year you certainly want to make sure the meter is accurate or you could be giving away a lot of product!
    • Is it a critical control instrument that directly affects product quality or throughput?

4) Obviously, if a frequency is dictated by the service, then that is the end of that. Once those are out of the way, one can usually look at the service and come up with at least a reasonable calibration frequency as a starting point. Start calibrating at that frequency and then monitor history. If you have checked a meter every six months for two years (four checks) and the drift has remained less than 50% of the tolerance, then dropping back to a 12-month calibration cycle makes perfect sense. Similarly, if you calibrate every six months and find the meter drift exceeds 50% of tolerance at every calibration, then you probably need to calibrate more often. However, if the meter is older, it may be cheaper to replace it with a newer, more stable transmitter.
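The 50%-of-tolerance heuristic in point 4 can be sketched as follows; the doubling/halving step is an assumption for illustration, not a prescribed rule:

```python
def adjust_interval_months(interval: int, worst_drift_fraction: float) -> int:
    """Sketch of the heuristic above: if observed drift stayed under 50% of
    tolerance across several checks, double the interval; if it exceeded 50%,
    halve it; otherwise leave it alone."""
    if worst_drift_fraction < 0.5:
        return interval * 2
    if worst_drift_fraction > 0.5:
        return max(1, interval // 2)
    return interval

print(adjust_interval_months(6, 0.3))  # 12 -- drift well within tolerance
print(adjust_interval_months(6, 0.7))  # 3  -- drifting too fast; check more often
```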

5) The last comment I’ll make is to make sure you are actually calibrating something that matters. I could go on for pages about companies who are diligently filling out calibration paperwork but aren’t actually calibrating their instrumentation. In other words, they go through the motions, fill out the paperwork, and can point to reams of calibration logs, yet they aren’t adequately testing the instrument loop, and it could still be completely wrong. (For instance, shooting a temperature transmitter loop but not actually checking the RTD or thermocouple that feeds it, or using a simulator to shoot a 4-20 mA signal into the DCS to check the DCS reading but not actually testing the instrument itself.) They often check one small part of the loop and, after a successful test, consider the whole loop ‘calibrated’.

Greg McMillan’s Answer

The Process/Industrial Instruments and Controls Handbook, Sixth Edition (2019), edited by me and Hunter Vegas, provides insight on how to maximize accuracy and minimize drift for most types of measurements. The following excerpt, written by me, is for temperature:

Temperature

The repeatability, accuracy, and signal strength are two orders of magnitude better for an RTD than for a TC. The drift for an RTD below 400°C is also two orders of magnitude less than for a TC. The 1 to 20°C drift per year of a TC is of particular concern for biological and chemical reactor and distillation control because of the profound effect on product quality from control at the wrong operating point. The already exceptional accuracy for a Class A RTD of 0.1°C can be improved to 0.02°C by “sensor matching,” where the four constants of a Callendar-Van Dusen (CVD) equation provided by the supplier for the sensor are entered into the transmitter. The main limit to the accuracy of an RTD is the wiring.
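Sensor matching amounts to loading the element's own Callendar-Van Dusen constants into the transmitter instead of the generic IEC 60751 values. A sketch of the T ≥ 0°C branch of the equation (the defaults below are the standard generic constants; a matched sensor would substitute its supplier-measured set):

```python
def pt100_resistance(t_c: float, r0: float = 100.0,
                     a: float = 3.9083e-3, b: float = -5.775e-7) -> float:
    """Callendar-Van Dusen resistance for T >= 0 degC:
    R(T) = R0 * (1 + A*T + B*T^2).
    Defaults are the generic IEC 60751 constants; "sensor matching"
    replaces them with the supplier-measured constants for this element."""
    return r0 * (1.0 + a * t_c + b * t_c ** 2)

print(round(pt100_resistance(0.0), 3))    # 100.0 ohms at 0 degC
print(round(pt100_resistance(100.0), 2))  # 138.51 ohms at 100 degC
```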

The use of three extension lead wires between the sensor and the transmitter or input card enables the measurement to be compensated for changes in lead-wire resistance due to temperature, assuming the change is exactly the same for both lead wires. The use of four extension lead wires enables total compensation that accounts for the inevitable uncertainty in the resistance of lead wires. Standard lead wires have a tolerance of 10% in resistance. For 500 feet of 20-gauge lead wire, the error could be as large as 26°C for a 2-wire RTD and 2.6°C for a 3-wire RTD. The best practice is to use a 4-wire RTD unless the transmitter is located close to the sensor, preferably on the sensor. The transmitter accuracy is about 0.1°C.
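The 26°C and 2.6°C figures can be reproduced with rough constants (roughly 10.15 Ω per 1000 ft for 20 AWG copper and roughly 0.385 Ω/°C sensitivity for a Pt100; both constants are assumptions of this sketch):

```python
def lead_wire_error_c(length_ft: float, wires: int,
                      ohms_per_kft: float = 10.15,
                      tolerance: float = 0.10,
                      pt100_sens_ohm_per_c: float = 0.385) -> float:
    """Worst-case lead-wire temperature error for a Pt100.
    2-wire: the full resistance of both leads reads as sensor resistance.
    3-wire: compensation cancels all but the 10% tolerance mismatch."""
    lead_ohms = 2 * (length_ft / 1000.0) * ohms_per_kft  # both leads in series
    error_ohms = lead_ohms if wires == 2 else lead_ohms * tolerance
    return error_ohms / pt100_sens_ohm_per_c

print(round(lead_wire_error_c(500, 2), 1))  # 26.4 degC for a 2-wire RTD
print(round(lead_wire_error_c(500, 3), 1))  # 2.6 degC for a 3-wire RTD
```

A 4-wire connection measures the element resistance directly, which is why the text calls it the best practice when the transmitter is remote.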

A handheld signal generator of resistance and voltage can be used to simulate the sensor to check or change a transmitter calibration. To test the sensor together with the transmitter’s linearization, the sensor needs to be inserted in a dry block simulator. A bath can be used at low temperatures to test thermowell response time, but a dry block is better for calibration. The reference temperature sensor in the block or bath should be four times more accurate than the sensor being tested, and the block or bath readout resolution must be better than the best possible precision of the sensor. The block or bath calibration system should have accuracy traceable to the national metrology institute of the user’s country (NIST in the USA).

The accuracy at the normal setpoint to ensure the proper process operating point must be confirmed by a temperature test with a block. For factory assembled and calibrated sensor and thermowell with integral temperature transmitter, a single point temperature test in a dry block is usually sufficient with minimal zero or offset adjustment needed. For an RTD with “sensor matching,” adjustment is often not needed. For field calibration, the temperature of the block must be varied to cover the calibration range to set the linearization, span and zero adjustments. For field assembly, it would be wise to check the 63% response time in a bath.

Middle Signal Selection

The best solution in terms of increasing reliability, maintainability, and accuracy for all sensors with different durations of process service is automatic selection of the middle value for the loop process variable (PV). A very large chemical intermediates plant extended middle signal selection to all measurements, which, in combination with a triple-redundant controller, essentially eliminated the one or more spurious trips per year. Middle signal selection was a requirement for all pH loops in Monsanto and Solutia.

The return on investment for the additional electrodes from improved process performance and reduced life cycle costs is typically more than enough to justify the additional capital costs for biological and chemical processes if the electrode life expectancy has been proven to be acceptable in lab tests for harsh conditions. The use of the middle signal inherently ignores a single failure of any type including the most insidious failure that gives a pH value equal to the set point. The middle value reduces noise without the introduction of the lag from damping adjustment or signal filter and facilitates monitoring the relative speed of the response and drift, which are indicative of measurement and reference electrode coatings, respectively. The middle value used as the loop PV for well-tuned loops will reside near the set point regardless of drift.

A drift in one of the other electrodes is indicative of a plugging or poisoning of its reference. If both of the other electrodes are drifting in the same direction, the middle value electrode probably has a reference problem. If the change in pH for a set point change is slower or smaller for one of the other electrodes, it indicates a coating or loss in efficiency, respectively for the subject glass electrode. Loss of pH glass electrode efficiency results from deterioration of glass surface due to chemical attack, dehydration, non-aqueous solvents, and aging accelerated by high process temperatures. Decreases in glass electrode shunt resistance caused by exposure of O-rings and seals to a harsh or hot process can also cause a loss in electrode efficiency.

pH Electrodes

Here is some detailed guidance on pH electrode calibration from the ISA book Essentials of Modern Measurements and Final Control Elements.

Buffer Calibrations

Buffer calibrations use two buffer solutions, usually at least 3 pH units apart, which allow the pH analyzer to calculate a new slope and zero value corresponding to the particular characteristics of the sensor, so that pH is derived more accurately from the millivolt and temperature signals.

  • The slope derived from a buffer calibration indicates the condition of the glass electrode, while the zero value indicates reference poisoning or asymmetry potential, which is an offset within the pH electrode itself.
  • The slope of a pH electrode tends to decrease from an initial value relatively close to the theoretical value of 59.16 mV/pH, in many cases largely due to the development of a high-impedance short within the sensor, which forms a shunt of the electrode potential.
  • Zero offset values will generally lie within ±15 mV due to liquid junction potential; larger deviations are indications of poisoning.
  • Buffer solutions have a stated pH value at 25°C, but the stated value changes with temperature especially for stated values that are 7 pH or above. The buffer value at the calibration temperature should be used or errors will result.
  • The values of a buffer at temperatures other than 25°C are usually listed on the bottle, or better, the temperature behavior of the buffer can be loaded into the pH transmitter allowing it to use the correct buffer value at calibration.
  • Calibration errors can also be caused by buffer calibrations done in haste, which may not allow the pH sensor to fully respond to the buffer solution; a warm pH sensor not given enough time to cool down to the temperature of the buffer solution is a common example.
  • pH transmitters employ a stabilization feature, which prevents the analyzer from accepting a buffer pH reading that has not reached a prescribed level of stabilization, in terms of pH change per time.
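The slope and zero arithmetic behind a two-point buffer calibration can be sketched as follows; the millivolt readings are invented for illustration:

```python
THEORETICAL_SLOPE_MV_PER_PH = 59.16  # at 25 degC

def buffer_calibration(ph1: float, mv1: float, ph2: float, mv2: float):
    """Two-point buffer calibration: returns (slope in mV/pH,
    zero offset in mV at pH 7)."""
    slope = (mv1 - mv2) / (ph2 - ph1)
    zero = mv1 + slope * (ph1 - 7.0)
    return slope, zero

# Invented readings in pH 4.01 and pH 7.00 buffers:
slope, zero = buffer_calibration(4.01, 170.0, 7.00, 5.6)
print(round(slope, 2))  # 54.98 mV/pH
print(round(zero, 2))   # 5.6 mV -- within the +/-15 mV band noted above
print(round(100 * slope / THEORETICAL_SLOPE_MV_PER_PH, 1))  # 92.9 percent efficiency
```

A slope well below the theoretical 59.16 mV/pH, or a zero far outside ±15 mV, would flag the electrode problems the bullets describe.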

pH Standardization

Standardization is a simple zero adjustment of a pH analyzer to match the reading of a sample of the process solution made using a laboratory or portable pH analyzer. Standardization eliminates the removal and handling of electrodes and the upset to the equilibrium of the reference electrode junction. Standardization also takes into account the liquid junction potential from high ionic strength solutions and non-aqueous solvents in chemical reactions that would not be seen in buffer solutions. For greatest accuracy, samples should be immediately measured at the sample point with a portable pH meter.

If a lab sample measurement value is used, it must be time-stamped and the lab value compared to the historical online value for a calibration adjustment. The middle-signal-selected value from three electrodes of different ages can be used instead of a sample pH, provided that a dynamic response to load disturbances or setpoint changes is confirmed for at least two electrodes. If more than one electrode is severely coated, aged, broken, or poisoned, the middle signal is no longer representative of the actual process pH.

  • Standardization is most useful for zeroing out a liquid junction potential, but some caution should be used when using the zero adjustment.
  • A simple standardization does not demonstrate that the pH sensor is responding to pH, as does a buffer calibration, and in some cases, a broken pH electrode can result in a believable pH reading, which may be standardized to a grab sample value.
  • A sample can be prone to contamination from the sample container or even exposure to air; high-purity water is a prime example, where a referee measurement must be exposed to a flowing sample using a flowing reference electrode.
  • A reaction occurring in the sample may not have reached completion when the sample was taken, but will have completed by the time it reaches the lab.
  • Discrepancies between the laboratory measurement and an on-line measurement at an elevated temperature may be due to the solution pH being temperature dependent. Adjusting the analyzer’s solution temperature compensation (not a simple zero adjustment) is the proper course of action.
  • It must be remembered that the laboratory or portable analyzer used to adjust the on-line measurement is not a primary pH standard, as is a buffer solution, and while it is almost always assumed that the laboratory is right, this is not always the case.

The calibration of pH electrodes for non-aqueous solutions is even more challenging as discussed in the Control Talk column The wild side of pH measurement.

Additional Mentor Program Resources

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

About the Author
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

Connect with Greg
LinkedIn



Source: ISA News

Book Excerpt + Q&A: Situation Management for Process Control

The post Book Excerpt + Q&A: Situation Management for Process Control first appeared on the ISA Interchange blog site.

This ISA author Q&A was edited by Joel Don, ISA’s community manager. ISA recently published Situation Management for Process Control: Decision Making for Operators in Industrial Control Rooms and Operation Centers by Douglas H. Rothenberg, Ph.D., a leading expert in alarm management and operator support technology for enterprise-wide industrial automation and control. In this Q&A feature, Rothenberg highlights the focus, importance, and differentiating qualities of the book. To download a free 122-page excerpt from the book, click this link

Q. Could you first define “situation management”?

Situation management is the competency, ability, and willingness of the human operator to properly and successfully manage the enterprise or activity under his or her charge. It is the end-game role for all operators responsible for effective real-time management. Success requires the ability to recognize the current environment of operation, the ability to develop appropriate and accurate assessment of that environment, the ability to transform that assessment into needed action for proper management of abnormal situations, and the ability to validate the effectiveness of the action. Situation management is the sum total of the decisions and actions that the operator makes that determine whether or not the enterprise operates safely and productively.

Q. Please briefly explain the objective of the book as it relates to situation management?

A. The book explains how to deliver real value to control room operations in industrial plants, specifically in improving safety and effectiveness. It advances a firm technical framework that ties together all of the traditional individual aspects (e.g., procedures, the human machine interface, control room design, and more) into a technology to understand and design effective control room management operations for enterprises. It’s a unified approach with explicit tools to deliver situation management to control room operators. An important new contribution is the concepts and technology of “weak signals” and their use to supplement alarm systems and cover situations that alarms are not intended or able to manage.

Q. Why do you believe the book is so beneficial and valuable to read?

A. The book builds on strong concepts and best practices to weave a comprehensive understanding of situation management. It covers the entire discipline, filling in gaps, extending understandings, and describing new competencies. It leverages an extensive body of knowledge in an informed narrative.  Taken as a whole, it enables both novice and seasoned practitioners to grasp the big picture and at the same time acquire core concepts and practical tools.  Every segment of the book is rooted in existing practice and experience. This is clearly evident in the extensive references and explanations of published material.

The content of the book can be categorized into two broad areas:

  • Concepts and technology that should be used and properly integrated into the appropriate enterprise infrastructure (specifications and design, implementation, MOC, training, and all the rest).
  • Concepts and tools that are consistent with existing enterprise infrastructure and could be better used to make things more effective.

Blog Q&A Bonus! To purchase a copy of Situation Management for Process Control, click this link. To download a free 122-page excerpt from the book, click this link

Q. Could you shed some light on the concept of “weak signals”?

A. Weak signals are a very new concept that this book introduces. They provide a tool for operators to detect early or subtle problems in the making. Each weak signal is what we might call a small indicator of something that doesn’t appear quite right. Treating them as weak signals offers an important methodology operators can use to understand and decide what they mean. They are part of a planned activity operators use to see if something odd might bear fruit if explored more carefully. They can be discovered everywhere; processing them will lead to valuable clues and then confirmation of something going amiss. The technology is an important way to ‘fill in the cracks’ of every operator’s tool kit.

Q. Who could benefit most by reading the book?

A. The book is an important read for managers, supervisors, operators, engineers, safety personnel, and technicians in industrial enterprises and operation centers.  It’s also highly pertinent for regulators, specialists, engineers, system designers, and trainers at commercial firms (controls equipment manufacturers, A&E firms, systems integrators) who provide monitoring and controls hardware, software, and technology to end‑users. These professionals have a unique understanding of the needs and requirements of the control room. Without their care and innovation and attention to purpose, effective operator situation management wouldn’t be possible.  They are the enablers, champions, providers, and deliverers of the technology.

Q. How does your book differ from other books written on the topic of situation management?

A. The unique value of this book is how it weaves the myriad individual components of control room design, operator interface design, operational protocols, and operator support technology into a coherent and usable methodology. The book makes all the tools and processes explicit where before they were either implicit, missing from the control room operator tool kit, or not included in the operating culture (qualifications, procedures, training, and the like). The book enables readers to clearly recognize what the operator is responsible for and how support can be provided to help the operator meet the responsibility for successful operation.

Many of the individual tools and methodologies are currently in use in one industry or another.  But their use might be haphazard and fragmented.  Until now, it was difficult to fully understand how each might be used to bolster the other.  This comprehensive treatment exposes a better basic understanding of each tool and methodology.  And, more importantly, it demonstrates how they fit together in ways that significantly improve the ability of operators to successfully manage and execute their responsibilities.

About the Author
Douglas H. Rothenberg, Ph.D., is a leading expert in alarm management and operator support technology, possessing in-depth experience in developing innovative solutions, technologies, and concepts for enterprise-wide industrial automation and control. As a globally recognized consultant and trainer in state-of-the-art alarm management technology, Douglas pioneered the design for industrial distributed control system (DCS) alarm management technology now accepted as international best practice. He has been awarded patents in alarm management, process control, and instrumentation, and his works have been published and presented broadly in the field. He is the author of Alarm Management for Process Control, a best-practice resource for the design, implementation, and operation of industrial alarm systems. Since 1999, Douglas has served as president of D-RoTH, Inc., a consulting firm serving leading industrial manufacturing and technology providers in the areas of alarm management, design innovation, process safety management, process control technology, plant operability, and smart field actuators. Douglas earned a bachelor of science degree in electrical engineering from Virginia Tech, a master of science degree in electrical engineering from Case Institute of Technology, and a doctorate degree in systems engineering from Case Western Reserve University. He has been an active ISA member and contributor for many years. He currently serves as a member of the ISA 18.2 Alarm Management Standards Committee, and is a former president, vice president and secretary of ISA’s Cleveland Section.

Connect with Douglas
LinkedIn


Source: ISA News

AutoQuiz: How to Measure the Resistance in a Circuit

The post AutoQuiz: How to Measure the Resistance in a Circuit first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

A circuit has a 100 Ω resistor, a 50 Ω resistor, and a 200 Ω resistor all in parallel with each other. What is the total resistance of the circuit?

a) 28.6 Ω
b) 35.0 Ω
c) 175 Ω
d) 0.28 MΩ
e) none of the above

Click Here to Reveal the Answer

The correct answer is A, 28.6 Ω. To find the total resistance of resistors in parallel, use the equation:

1/RT = 1/R1 + 1/R2 + 1/R3 = 1/100 + 1/50 + 1/200 = 7/200

so RT = 200/7 ≈ 28.6 Ω.
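As a quick check, the parallel-combination arithmetic can be run in a few lines of Python; the helper name here is illustrative, not part of the quiz material:

```python
def parallel_resistance(*resistances):
    """Total resistance of parallel resistors: 1/R_T = sum of 1/R_i."""
    return 1.0 / sum(1.0 / r for r in resistances)

# The quiz circuit: 100 ohm, 50 ohm, and 200 ohm in parallel.
print(round(parallel_resistance(100, 50, 200), 1))  # 28.6
```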

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedInTwitterEmail

Source: ISA News

Decentralized Control Strategy vs MPC for Industrial Methanol Distillation [technical]

The post Decentralized Control Strategy vs MPC for Industrial Methanol Distillation [technical] first appeared on the ISA Interchange blog site.

This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.

Abstract: In this work we developed a novel, robust, practical control structure to regulate an industrial methanol distillation column. The proposed control scheme is based on an override control framework and can manage a non-key trace ethanol product impurity specification while maintaining high product recovery. For comparison, an MPC with a discrete process model (based on step tests) was also developed and tested. The results from process disturbance testing show that both the MPC and the proposed controller were capable of maintaining both the trace-level ethanol specification in the distillate (XD) and high product recovery (β). Closer analysis revealed that the MPC gives tighter XD control, while the proposed controller gives tighter β control. The tight XD control allowed the MPC to operate at a higher XD set point (closer to the 10 ppm AA-grade methanol standard), allowing for savings in energy usage. Despite the energy savings of the MPC, the proposed control scheme has lower installation and running costs. An economic analysis revealed a multitude of other external economic and plant design factors that should be considered when choosing between the two controllers. In general, we found that relatively high energy costs favor MPC.

Free Bonus! To read the full version of this ISA Transactions article, click here.

Enjoy this technical resource article? Join ISA and get free access to all ISA Transactions articles as well as a wealth of other technical content, plus professional networking and discounts on technical training, books, conferences, and professional certification.

Click here to join ISA … learn, advance, succeed!

© 2006–2019 Elsevier Science Ltd. All rights reserved.

Source: ISA News

How Long Does It Take to Detect a Leak on an Oil or Gas Pipeline?

The post How Long Does It Take to Detect a Leak on an Oil or Gas Pipeline? first appeared on the ISA Interchange blog site.

This guest blog post was written by Edward J. Farmer, PE, industrial process expert and author of the ISA book Detecting Leaks in Pipelines. To download a free excerpt from Detecting Leaks in Pipelines, click here. If you would like more information on how to obtain a copy of the book, click this link.

In Detecting Leaks in Pipelines, I mention and discuss a concept referred to as “coherence” as being important in interpreting the underlying message within a set of observations. (Refer to page 149.) My attraction to the coherence concept may come from my BSEE studies, in which we used it a great deal in assessing the “information” within communication “signals.”

It also showed up in some military intelligence work involving analyzing the likely outcome indicated by multiple sets of observations. In the first case, the usual motivation was assessing whether a communications stream contained particular characteristics. The usual textbook issue is looking for a pulse in an otherwise stochastic stream.

In military intelligence work we were usually trying to discern whether a stream of observations meant anything of consequence, or, if they did, which of several possible “consequences” was most likely. As it turns out, both points of view are useful in thinking one’s way through pipeline leak detection and many similar process management and control issues.

Coherence of a set of observations suggests a logical “fitting together,” implying a common source, a common purpose, the result of common processing, or all of these. The analysis process is establishing the interconnected linkage that unites what appear to be puzzle parts into some discernible picture. It can be tedious and obtuse, and success often seems to be the result of an “aha!” experience.

Observations are the set of things required to discern likely (or sometimes pertinent) outcomes. If the problem were finding the beautiful landscape image in a sea of jigsaw puzzle parts, one might begin with some sort of algorithmic approach, perhaps putting all the pieces with straight edges into a pile, then sorting the other pieces based on some persistent characteristic, such as color. As the observations are processed, categorized, and fit together insofar as possible, elements that could be parts of several pictures emerge.

As more fitting together is done by various methods, the picture improves and begins to show a small set of likely outcomes. Eventually, confidence reaches a comfortable level and fitting the remaining minor pieces together is simple and declines in value – you see and experience the result.

There are almost always “outliers,” perhaps a black piece of the right shape and apparent connection methodology that would come from a discernible area if it were blue. That might invoke thinking about the “blue area” assumption or perhaps the unique shape of the subject piece. Maybe it’s from another region, or maybe the key to confluence of other regions? Who knows at this point? An open mind and a logical process will soon make it all perfectly (or at least statistically adequately) clear.

It should be easy to close one’s eyes at this point and see a collection of Venn diagrams, all leading to the ultimate categorization of the pieces and eventually groups of pieces, and finally conformance into some likely image. Some process work is much easier than puzzles. In real life, especially in intelligence work, there may be apparent reasons why things seem to go one way other than another. The challenge becomes: What do we need to observe in order to establish another categorization criterion?

As I’ve previously discussed, in process control and analysis we usually have but a few possible outcomes. Sometimes it’s easier to discern whether a particular set of conditions suggests that a situation we should worry about is emerging. It quickly becomes practical to focus effort on the things that matter, following the trail marked by the indicators as we observe and analyze them.

If you would like more information on how to purchase Detecting Leaks in Pipelines, click this link. To download a free 37-page excerpt from the book, click here.

Consider a reach of pipe and what we observe about it. If we know upstream and downstream pressure and flow, we can make some assumptions, reading by reading, about what is happening on that piece of pipe.

  • Matched flow indicates the line is stable and free of transients.
  • Inflow greater than outflow at decreasing pressure suggests a leak.
  • A decrease in pressure and flow at the downstream end warns there may be a problem there.
  • An increase in upstream flow with a decrease in pressure suggests a possible leak.
  • In a time-series of data, the onset of a decrease at an end suggests the precipitating event (e.g., the leak) may be closer to that observation than the “other end” of the line.
  • The time between when a change is seen at opposite ends of the line can indicate the precipitating event’s actual location. If the times are the same the event is near the middle of the segment. Otherwise, it is calculably closer to the end where it is first seen.
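As a sketch, the first few rules of thumb above can be expressed as a simple classifier over one pair of readings. Everything here — the function name, the reading structure, and the tolerances — is an illustrative assumption, not a method from the book:

```python
def assess_segment(inflow, outflow, p_up, p_down,
                   prev_p_up, prev_p_down,
                   flow_tol=0.01, press_tol=0.1):
    """Classify one pair of flow/pressure readings for a pipe segment.

    flow_tol is a fraction of inflow; press_tol is an absolute pressure
    change. Both are illustrative values, not recommendations.
    """
    dp_up = p_up - prev_p_up          # upstream pressure trend
    dp_down = p_down - prev_p_down    # downstream pressure trend
    flows_match = abs(inflow - outflow) <= flow_tol * abs(inflow)
    pressures_steady = abs(dp_up) <= press_tol and abs(dp_down) <= press_tol

    if flows_match and pressures_steady:
        return "stable: flows match and the line is free of transients"
    if inflow > outflow and (dp_up < 0 or dp_down < 0):
        return "possible leak: inflow exceeds outflow at decreasing pressure"
    return "indeterminate: gather more readings"

# Inflow exceeds outflow while upstream pressure falls:
print(assess_segment(100.0, 92.0, 49.0, 50.0, 50.0, 50.0))
```

A single pair of readings is weak evidence on its own; as the next paragraph notes, confidence comes from applying such checks over a growing stream of readings.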

There is more, but you get the idea. Also remember that the quality, and thus the dependability, of our conclusions improves as the stream of readings grows larger.

A pipeline hydraulic event (a change in flow or in pressure) propagates along the pipeline at the speed of sound in the fluid in the pipe. The speed of sound can be estimated, or it can be updated automatically as needed. The transit time is easily calculated as the length of the line divided by the acoustic velocity.
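A minimal sketch of that arithmetic, together with the event-location rule from the earlier list (equal arrival times place the event at mid-segment); the names, units, and example numbers are illustrative assumptions:

```python
def transit_time(length_m, sound_speed_m_s):
    """Time for a hydraulic event to traverse the whole segment."""
    return length_m / sound_speed_m_s

def event_location(length_m, sound_speed_m_s, dt_s):
    """Distance of the event from the upstream end.

    dt_s = (arrival time at upstream end) - (arrival time at downstream end).
    With the event x metres from the upstream end:
        dt = x/v - (L - x)/v  =>  x = (L + v*dt) / 2
    so dt = 0 places the event at mid-segment, matching the rule above.
    """
    return (length_m + sound_speed_m_s * dt_s) / 2.0

# A 10 km segment with an assumed acoustic velocity of 1,000 m/s:
print(transit_time(10_000.0, 1_000.0))       # 10.0 (seconds, end to end)
print(event_location(10_000.0, 1_000.0, 0))  # 5000.0 (metres: mid-segment)
```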

I’ll dwell on this a bit more in a future blog post. From this we know that any two events separated by more than the transit time are not coherent with a single event on the segment. There is a lot we can surmise from such a situation by looking into, and sorting out, how such a thing could happen.

  • If the interval is less than or equal to the transit time, the source is either on the segment or at one of its ends.
  • If the interval is exactly the transit time, the source may be at an end but is more likely beyond the end at which it is seen first.
  • If we’re willing to consider more than one leak within the time frame, then there are even more options.

Given a specific situation, what possibilities could be coherent? What do we need to know to separate the possibilities? From this thought process we can discern the requirements for confidently detecting a leak (or some other event of interest) and distinguishing it from unlikely spoofing. Note that understanding coherence frames our problem, and observations provide what is needed to resolve the ambiguity.

About the Author
Edward Farmer has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. He has authored three books, including the ISA book Detecting Leaks in Pipelines, plus numerous articles, and has developed four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. During his long industry career, he established EFA Technologies, Inc., manufacturer of the LeakNet family of pipeline leak detection products.

Connect with Ed
LinkedInEmail

Source: ISA News

AutoQuiz: How to Calculate Uptime for an Automation System

The post AutoQuiz: How to Calculate Uptime for an Automation System first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Consider the following automation system data:

  • Preventive maintenance for 1 hour every month
  • Quarterly preventive maintenance for 2 hours each quarter
  • One failure that results in 6 hours of downtime
  • One failure that results in 4 hours of downtime

What is the uptime for this automation system if it runs 24 hours a day, 365 days a year?

a) 99.66%
b) 99.77%
c) 99.86%
d) 99.89%
e) none of the above

Click Here to Reveal the Answer

The most important measure for production equipment support is operational availability, or “uptime.”

Automation equipment that operates for 365 days x 24 hours per day = 8,760 total possible “up” hours. This equipment gets preventive maintenance for 1 hour every month (12 hours per year), plus additional quarterly preventive maintenance of another 2 hours each quarter (8 more hours per year).

There was one failure that resulted in 6 hours of downtime and a second failure that resulted in 4 hours of downtime. Thus, total downtime for all maintenance and repairs was 12 + 8 + 6 + 4 = 30 hours.

Uptime is then (8,760 – 30) / 8,760 = 8,730 / 8,760 ≈ 99.66 percent. The correct answer is A, “99.66%.”
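The same arithmetic as a short Python sketch (the function name is illustrative):

```python
def uptime_percent(scheduled_hours, downtime_hours):
    """Operational availability as a percentage of scheduled hours."""
    return 100.0 * (scheduled_hours - downtime_hours) / scheduled_hours

hours_per_year = 365 * 24        # 8,760 possible "up" hours
downtime = 12 + 8 + 6 + 4        # monthly PM + quarterly PM + two failures
print(round(uptime_percent(hours_per_year, downtime), 2))  # 99.66
```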

Reference: Nicholas Sands, P.E., CAP and Ian Verhappen, P.Eng., CAP., A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedInTwitterEmail

Source: ISA News