Thank You Sponsors!

CANCOPPAS.COM

CBAUTOMATION.COM

CGIS.CA

CONVALPSI.COM

DAVISCONTROLS.COM

EVERESTAUTOMATION.COM

FRANKLINEMPIRE.COM

HCS1.COM

MAC-WELD.COM

SWAGELOK.COM

THERMO-KINETICS.COM

THERMON.COM

VANKO.NET

VERONICS.COM

WAJAX.COM

WESTECH-IND.COM

WIKA.CA

AutoQuiz: What Device Is Used to Convert an Analog Signal From a Transmitter to the Signal Required by a Digital Controller?


AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

To change a 4–20 mA analog signal from a transmitter to the signal required by a digital controller, a(n) _____ must be between the transmitter and controller in the measurement loop. 

a) I/P transducer
b) P/I transducer
c) DP transmitter 
d) A/D converter
e) none of the above

Click Here to Reveal the Answer

An “I/P transducer” is used to convert an analog current (I) signal to a pneumatic (P) signal, as for actuation of final control elements.

A “P/I transducer” is used to convert a pneumatic signal (P) to an analog current (I) signal, as for a pneumatic transmitter in a programmable logic controller loop.

A “DP transmitter” is a differential pressure transmitter, which can output a pneumatic, an analog, or a digital signal, depending on the model of transmitter used.

The correct answer is D, “A/D converter.” A digital controller requires a digital signal as its input. A 4–20 mA transmitter outputs an analog signal. Therefore, a device to convert an analog (A) signal to a digital (D) signal is required. This class of device is referred to as an “A/D converter.”
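For a rough feel of what that conversion involves, here is a minimal sketch of an analog input channel digitizing a 4–20 mA loop signal and scaling it back to engineering units. The 250 Ω sense resistor, 12-bit converter, 0–5 V input range, and 0–100 °C calibrated span are illustrative assumptions, not values from the quiz or the reference.

```python
# Minimal sketch of the A/D step in a 4-20 mA measurement loop. The 250-ohm sense
# resistor, 12-bit converter with a 0-5 V input range, and the 0-100 degC calibrated
# span are illustrative assumptions, not values from the quiz or the reference.

def adc_counts(current_ma, r_ohms=250.0, vref=5.0, bits=12):
    """Digitize the voltage developed across the loop sense resistor."""
    volts = (current_ma / 1000.0) * r_ohms            # 4-20 mA -> 1-5 V
    counts = round(volts / vref * (2 ** bits - 1))    # quantize to 0..4095
    return max(0, min(2 ** bits - 1, counts))

def engineering_units(counts, lo=0.0, hi=100.0, r_ohms=250.0, vref=5.0, bits=12):
    """Scale the digital counts back to the calibrated range (e.g., 0-100 degC)."""
    volts = counts / (2 ** bits - 1) * vref
    current_ma = volts / r_ohms * 1000.0
    return lo + (current_ma - 4.0) / 16.0 * (hi - lo)

for ma in (4.0, 12.0, 20.0):
    c = adc_counts(ma)
    print(f"{ma:5.1f} mA -> {c:4d} counts -> {engineering_units(c):6.2f} degC")
```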

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

The Ball Is in Your Court Now


This post is authored by Paul Gruhn, president of ISA 2019.

I have used this blog every month to inform you of the various resources and activities going on within the society that are available to benefit you and your employer. Our standards, books, journals, training, certificate and certification programs, conferences, licensure, divisions, sections, leader meetings, new vision/mission/values statements, affiliated organizations, and our new strategic objectives (along with associated goals and tactics) are all intended to increase your technical competence (i.e., your employability) and the operational performance of your company (e.g., safety, security, and profitability).

We have made great strides this year, from a surplus budget and membership growth to getting all the volunteer leaders rowing their collective boats in the same direction to achieve our collectively agreed-upon strategic objectives. It has taken a lot of work from a lot of people, and we are not “done” by any means.

It has been an honor serving as your 2019 society president and seeing the advancements we have made. While there will be various activities for me to remain involved in serving as your 2020 past president, your incoming 2020 president Eric Cosman will be leading the society in the coming year. New volunteers will be serving in various leadership roles. While we do have a professional staff, setting the strategic objectives of the society and leading many of our programs are activities performed by volunteers. If there is something you are not satisfied with, if you think there is something we should offer that we currently do not, or if you think the society could do something better, do not sit back on the sidelines and complain; step up to the plate, get involved, and help improve the situation.

I have used this quote from Teddy Roosevelt before, but it is worth mentioning again:

Every person owes part of their time and money to the business or industry in which they are engaged. No person has a moral right to withhold their support from an organization that is striving to improve conditions within their sphere.

If you are early in your career, get involved to build up your network of connections, learn from mentors, and advance your career faster than you would be able to do on your own. If you are more experienced, get involved to give back to your industry and mentor those entering the field.

Ninety-plus percent of members and volunteers I know of joined ISA and/or became volunteers because someone asked them to. Who have you encouraged lately to join ISA to increase their knowledge and further their career? Who have you asked lately to come to a monthly meeting that you knew would be of interest to them? Who have you recruited lately to become a volunteer and put their career on the fast track by getting involved? This stuff does not just happen on its own; you need to drive it. It’s your society. The ball is in your court now.

About the Author
Paul Gruhn is a global functional safety consultant at AE Solutions and a highly respected and awarded safety expert in the industrial automation and control field. Paul is an ISA Fellow, a member of the ISA84 standards committee (on safety instrumented systems), a developer and instructor of ISA courses on safety systems, and the primary author of the ISA book Safety Instrumented Systems: Design, Analysis, and Justification. He also has contributed to several automation industry book chapters and has written more than two dozen technical articles. He developed the first commercial safety system modeling software. Paul is a licensed Professional Engineer (PE) in Texas, a certified functional safety expert (CFSE), a member of the control system engineer PE exam team, and an ISA84 expert. He earned a bachelor’s degree in mechanical engineering from the Illinois Institute of Technology.

Connect with Paul
LinkedIn | Twitter | Email



Source: ISA News

Will Blockchain Technology Disrupt the Industrial Control System World?


This post was authored by Steve Mustard, an industrial control system and cybersecurity consultant and author of the ISA book Mission Critical Operations Primer, and Mark Davison, a software engineer with more than 30 years of experience and an owner/director of Terzo Digital.

Blockchain is a novel technology that leading industry players predict will cause major disruption to many existing industries, including banking, real estate, supply chain, voting, and energy management.

Large businesses, such as IBM, Samsung, UBS, and Barclays, are already working on blockchain-related projects and services, and hundreds of startup businesses are developing their own killer applications. Blockchain technology could also disrupt the industrial control system (ICS) world, so it is worth looking now to see what might be coming and how it might affect us.

Blockchain technology

Blockchain technology is a decentralized method for recording transactions. These transactions are recorded in a distributed ledger (known as the blockchain) that is stored across thousands of computers worldwide.

Transactions are recorded in the ledger and grouped together in blocks. They are secured using a form of cryptography called “hashing.” Because the ledger is distributed and secured using hashing, it is theoretically impossible to make changes once something is recorded.

Hashing converts the data in a block into a hash, a format that cannot be decrypted to obtain the original data. The hash is effectively unique: any change to the original data yields a different result. Blocks in a blockchain incorporate the hash from the previous block, so tampering with or forging transactions by changing data in a block is easily identified and prevented.

1) Someone wants to send money to someone else.
2) A block is created online to represent the transaction.
3) The new block is broadcast to all blockchain miners in the network.
4) The miners approve the transaction and validate it.
5) The block is then added to the blockchain, providing a permanent record. At this point, the transaction is valid. All miners receive a copy of the updated blockchain, making any discrepancies quickly evident.
6) The recipient receives the payment.

 

Where is blockchain used?

The best-known use of blockchain technology is in Bitcoin, a “cryptocurrency” that allows users to send and receive money electronically. Bitcoin uses blockchain technology to maintain a ledger of every Bitcoin transaction. A growing number of major businesses use Bitcoin, including Microsoft, Subway, and Whole Foods, as well as many small restaurants and traders. The total value of all existing Bitcoins now exceeds $20 billion (up from $2.7 billion in 2015), and millions of dollars are exchanged daily.

New Bitcoins are generated through a process called mining. This process involves individuals called miners, who use special software to “mine” blocks, or create the hash required to update the blockchain. Miners are issued a certain number of Bitcoins in exchange for this processing. Mining requires significant processing power to perform the hashing in order to conform to strict rules known as proof of work. The complex processing required to achieve the proof of work helps manage the rate at which Bitcoins are issued.

Hashing algorithms produce a fixed size output (called a hash code or digest), irrespective of the data being hashed. Bitcoin uses a secure hashing algorithm (SHA) with 256 bits (32 bytes) in its output, or SHA-256 for short. For example, the SHA-256 hash of “International Society of Automation” (35 characters) is:

75b8e883214c8543f22fcf1adb6682666f5308fcb9dcc896846b2d53fba2141e

and the SHA-256 hash for “Automation Federation” (21 characters) is:

8da363f674c49fa3f5b4bbdfac92610d0906ade2d58f38a39c8ee8faa74bad91
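Digests like these are easy to reproduce with Python’s standard hashlib module; a minimal sketch follows. Whether it returns exactly the values printed above depends on the precise strings and encoding used, so treat it as an illustration of the fixed-size property rather than a verification of the article’s examples.

```python
import hashlib

# SHA-256 always yields a 256-bit (64 hex character) digest, regardless of input length.
for text in ("International Society of Automation", "Automation Federation"):
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    print(f"{text!r} ({len(text)} characters) -> {digest}")
```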

The very first block in the Bitcoin ledger (called the genesis block) has the hash:

000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f

In the proof of work process, the miner is presented with a number of pieces of data, including the SHA-256 hash representing the previous block in the chain; details of the current transactions to be processed, such as a timestamp (created by the miner); and information pertinent to the transactions themselves. The miner combines all this data into one hash. This is referred to as the challenge. The miner’s task is to produce what is known as a proof, such that the SHA-256 hash of the challenge and proof results in a hash that has a fixed number of leading zeroes (out of the total 256 bits in the hash).

Due to the unique one-way nature of hashing algorithms, the only way the miner can determine the proof (also known as nonce, a term commonly used in cryptography for a number that is used only once) is to try all possible permutations until an answer is found. The number of leading zeroes in the hash determines the number of possible permutations. For example, if it were necessary to have the first 40 bits of the hash as zero, there would be approximately 1 trillion possible combinations (2^40). Varying the number of zeroes halves or doubles the amount of work (2^39 = 549 billion, 2^41 = 2.2 trillion).

In Bitcoin, the proof of work is designed to take approximately 10 minutes to perform. At present, this results in a hash with 18 leading zeroes, or 262,144 possible permutations (2^18). Once a miner determines the proof, the resulting hash is stored in the transaction block, and this hash will be subsequently used in the processing of the next block.
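A toy version of that nonce search is sketched below. The 16-bit difficulty, the stand-in previous hash, and the sample transaction string are illustrative assumptions chosen so the loop finishes in a moment; real Bitcoin mining hashes a binary block header against a vastly harder target.

```python
import hashlib

def mine(previous_hash, block_data, zero_bits=16):
    """Search for a nonce whose challenge hash has at least `zero_bits` leading zero bits."""
    target = 1 << (256 - zero_bits)       # the hash, read as an integer, must fall below this
    nonce = 0
    while True:
        challenge = f"{previous_hash}{block_data}{nonce}".encode()
        digest = hashlib.sha256(challenge).hexdigest()
        if int(digest, 16) < target:      # enough leading zero bits: proof found
            return nonce, digest
        nonce += 1

previous_hash = "0" * 64                  # stand-in for the prior block's hash
nonce, block_hash = mine(previous_hash, "Alice pays Bob 1 BTC;timestamp=1234567890")
print(f"proof (nonce) = {nonce}, block hash = {block_hash}")
```

Because the challenge embeds the previous block’s hash, a valid proof also commits the new block to everything recorded before it.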

The reasons why blockchain technology can successfully manage $20 billion of currency are also reasons why it can be useful in other transaction management applications:

  • Due to its decentralized nature, blockchain technology does not have a central point of failure and is better able to withstand malicious attacks.
  • Changes to public blockchains are publicly viewable by all parties, creating transparency, and transactions cannot be altered or deleted.

Bitcoin has perhaps gained more notoriety than respect from the general public to date, because hackers have used it to collect their fees from ransomware attacks. Bitcoin transactions involve transfers between anonymous addresses, and the lack of central control makes it difficult, but not impossible, to trace. However, blockchain technology can be a force for good.

Already disruptive

Blockchain technology is already being used in a wide variety of industries. More than $500 million was invested in venture capital-backed blockchain companies in 2016. Some high-profile applications include:

  • The diamond industry to track individual diamonds from mine to consumer. This addresses counterfeiting, loss of revenue, insurance fraud, and conflict diamond detection.
  • The medical industry to maintain a backup of a person’s DNA that can be readily and securely accessed for medical applications.
  • In retail to record every action that happens in a retail supply chain and make all the data searchable in real time for consumers. This allows the consumer to scan a QR code on a can in the supermarket and find out where the food inside was obtained, who certified it, where it was canned, etc.
  • Legal to lock down a video or photograph, so it is impossible to change one pixel without a record of the transaction, allowing uses like recording indisputable insurance claims or police brutality.
  • Energy management to allow customers to buy and sell energy directly, without going through a central provider.

Closer to home, IBM is working in partnership with Samsung to develop a decentralized Internet of Things (IoT). Autonomous decentralized peer-to-peer telemetry (ADEPT) uses blockchain technology to secure transactions between devices. IBM and Samsung are planning networks of devices that can autonomously maintain themselves by broadcasting transactions between peers, as opposed to the current model of all devices communicating only with centralized, or cloud, services.

Central to this concept is the registration of IoT devices in a publicly maintained blockchain, creating a level of trust that cannot be achieved for rogue devices.

Blockchain in the ICS world

Blockchain technology has other potential applications for ICS, such as the protection and verification of device firmware and application software updates. As ICS users have secured their networks, attackers have taken to other methods to infiltrate systems. One such method involves inserting Trojan malware into ICS software that is downloaded by users for installation on their networks. In 2014, a variant of the malware Havex contained code that scanned networks for OPC-aware devices. It then collected information on the tag configuration and uploaded it to an external server. This Trojan was found in downloadable software on ICS vendor websites. Registering firmware and software in a blockchain could provide an immutable record of code, making an attack like the Havex OPC example impossible. Other potential ICS-based applications are:

  • authentication, authorization, and nonrepudiation of device configuration and program changes
  • protection, verification, and nonrepudiation of critical data, such as historian streams or regulatory reports
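A minimal sketch of the firmware-verification idea: before installing a downloaded image, recompute its SHA-256 digest and compare it with the digest registered when the release was published. The file path handling and the source of the registered digest are hypothetical placeholders; in practice the digest would come from a query against the vendor’s or consortium’s ledger.

```python
import hashlib

def file_digest(path, chunk_size=65536):
    """Compute the SHA-256 digest of a downloaded firmware or software image."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha.update(chunk)
    return sha.hexdigest()

def verify_image(path, registered_digest):
    """Accept the image only if it matches the digest recorded at release time."""
    return file_digest(path) == registered_digest

# registered_digest would come from a query against the vendor's or consortium's
# blockchain entry for this product and version (hypothetical lookup, not shown here).
```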

Challenges

One of the challenges for the non-Bitcoin blockchain solutions is that a key benefit of a truly distributed ledger is only achievable if there is some obvious financial gain for miners. The above examples, such as the ICS firmware or software ledger, may only be achievable with a private (e.g., run by one ICS vendor for their products only) or consortium blockchain (perhaps managed by a collective of ICS vendors). In this application, there may still be some concerns about the security of the ledgers, but in the case of ICS firmware and software verification, a private or consortium blockchain would still provide significantly more assurance than existing methods.

As with all disruptive technologies, it is impossible to predict with certainty what will happen. All we can do is watch this space. What is certain is that blockchain technology is not going away anytime soon.

Learn more about industrial security and mission critical operations. Click this link to download a free 48-page excerpt from Mission Critical Operations Primer.

About the Author
Steve Mustard, author of the ISA book, Mission Critical Operations Primer, is an independent automation consultant and subject-matter expert of ISA and its umbrella association, the Automation Federation. He also is an ISA Executive Board member. Backed by nearly 30 years of software development experience, Mustard specializes in the development and management of real-time embedded equipment and automation systems, and the integration of real-time processing, decision-support and other disparate systems to improve business processes. He serves as president of National Automation, Inc. Mustard is a recognized authority on industrial cybersecurity, having developed and delivered cybersecurity management systems, procedures, training and guidance to multiple critical infrastructure organizations. He serves as the chair of the Automation Federation’s Cybersecurity Committee. Mustard is a licensed Professional Engineer, UK registered Chartered Engineer, a European registered Eur Ing, an ISA Certified Automation Professional (CAP) and a certified Global Industrial Cybersecurity Professional (GICSP). He also is a Fellow in the Institution of Engineering and Technology (IET), and a senior member of ISA.

Connect with Steve
LinkedIn | Twitter | Email

About the Author
Mark Davison is a software engineer with more than 30 years of experience. Currently he is an owner/director of Terzo Digital, a software consultancy firm specializing in IoT and telemetry. Davison is a current committee member for the Water Industry Telemetry Standards (WITS) Protocol Standards Association, helping to develop new standards in the IoT arena, aimed at more than just the water industry.

Connect with Mark
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

Why Most Modern Process Plants Fail to Take Full Advantage of Automation System Capabilities


This guest blog post was written by Paul Darnbrough, P.E., CAP, principal at ControlsPR and previously with the Automation Solutions Group at MAVERICK Technologies, a Rockwell Automation company.

The way we have started automobiles through the years is a simple analogy to the progress, or lack thereof, of process automation technology. Car owners have been carrying keys for the better part of a century. At first it took two operations to start a vehicle: the key turned on a switch, and the driver stepped on the starter. Later, starting was performed as a single action with the key. Later still, cars added central locking functions, still performed with a mechanical key.

As technology advanced, functions performed with a key became more complex, until the key itself was made largely obsolete. Basic control functions, such as locking the doors and starting or stopping the engine, became more sophisticated. Now the car is able to sense the owner (or at least the owner’s key fob) approaching, and it unlocks the door as he or she grasps the handle. Once inside, the system provides a secure means of push-button starting the vehicle, and it may even go so far as adjusting the mirrors, seat position, and entertainment settings to meet a specific driver’s preset desires. One could say the basic functions of opening, starting, and adjusting a car have been advanced and elevated into the realm of advanced automation.

Process industries have developed along similar lines with control systems. To answer the question of what process control is exactly, we have to go back to the earliest introductions of control mechanisms, where first-generation electro-pneumatic-mechanical loop controllers replaced people doing tasks such as manually adjusting valves in response to some local indicator like a pressure gauge.

Although a device was used to automate a human function in an effort to control a variable, there was no sense of what the process was doing overall. A basic controller could keep an individual loop on an even keel, more or less, so long as there was not too much disruption. Complex processes might employ dozens or even hundreds of such controllers, each with its performance displayed on a panel board, but keeping an eye on the big picture was still a human process.

Moving to electronic control

When distributed control system (DCS) platforms were introduced in the 1970s, they simplified the mechanics of the panel board, but did not do much to improve its capabilities. Big-picture analysis was still largely a human responsibility. Sure, getting beyond the technical constraints of pneumatic field devices with their troublesome compressed air tubing made it easier to install more instruments and actuators, but the basic control concepts did not really change. Any movement to advanced process control (APC) and other forms of control optimization was still in its infancy. Process automation capable of supporting APC had to encompass many technologies and techniques. It was characterized by incorporating many more input data points into algorithms and orchestrating more complex sequences.

Older systems did have powerful capabilities available to those willing to explore them. Some sophisticated users were operating with fundamental APC concepts even back in the pneumatic era, but those successes required a high degree of internal engineering capability. There were few, if any, tools available commercially to support such efforts. The same applied to early DCS platforms. Few companies ever overtaxed the brute computing power of the processors running a DCS, but creating the kind of programming necessary to drive APC in such an environment was no small task.

The hard work of optimization

The transition to process automation and APC was empowered by being able to create an all-encompassing platform capable of coordinating more than single loops or small cascade groups. One major advantage of newer platforms is the ability to optimize a process to suit the owner’s specific economic goals based on any number of desired outcomes. The process automation system can operate the plant to minimize energy consumption, maximize output, and deliver specific product quality attributes. Companies using these systems effectively swear by their capabilities.

Implementing such systems is challenging, and having an automation solutions provider working with an internal engineering department can make the task much easier. During the initial design phase of a control system upgrade or a new installation, it is far too easy to focus just on process fundamentals, and never get beyond considering desired steady-state conditions. Automation system upgrades and new installations can therefore miss opportunities to engage with process and automation technology experts capable of uncovering better ways of doing things.

Bringing in fresh ideas

An automation solutions provider can bring new eyes and ideas to advance a project beyond what designers conceived initially. While the individuals within a given plant may understand their plant processes intimately, such a group may not have the time to go beyond current capabilities. In some instances, these individuals may also lack broader knowledge of automation systems, particularly as applied to processes in other plants.

One of the major advantages of bringing in outside talent is tapping the collective knowledge of a larger group of engineers who have worked on many projects in many environments. Each new experience adds to the knowledge base, and it can be transferred as part of a planning process. Even a question as simple as, “Why is this control action performed in this manner?” can prompt discussion and cause companies to consider new and better ways of performing routine functions.

Many capabilities of modern process automation systems are still underutilized in most process plants, even among companies most people would consider sophisticated (figure 1). Far fewer companies use APC as effectively as they could, even though basic APC technologies have been around for decades.

 

Figure 1. Even the most modern process plants typically do not take full advantage of the capabilities of their control systems.

Even fewer have developed systems for implementing procedure automation to deal with startups, grade changes, shutdowns, and other disruptions—even though such situations are the primary causes for process upsets and safety incidents due to the high degree of human intervention involved and the infrequency with which they occur. The ISA-106 standard covering procedure automation may be relatively new, but the concepts embodied in the standard have been around for many years.

As a purely practical matter, human capabilities and the skills of experienced operators are indispensable to operating a plant well, but too many plants are overly dependent on unwritten tribal knowledge. A review of reports analyzing process safety incidents will turn up many situations where an inadequately trained operator had to take manual control of a process during a startup or other changeover, and ended up making the wrong decisions. Companies lose huge amounts of money in such situations.

Properly developed process automation systems are always on the lookout for trouble, and are ready to respond and alert operators when a problem is anticipated or detected. More advanced control sequences stand ready to be executed via procedure automation, even if they are only used once per year. Comprehensive process automation systems can not only handle plant operations automatically, but can also supplement operator knowledge and activities by supplying the right amount of information at the right time to the right people.

Capturing operator knowledge

Automating actions through procedure automation is an excellent way to capture tribal knowledge and the understanding of a plant’s best people before they retire or move on (figure 2). The need for operator training remains, but procedure automation reduces dependence on human memories and an individual’s ability to make the right decisions in a crisis. Control systems, even relatively old ones, can perform such functions when programmed properly, but outside assistance may be required to incorporate this functionality.

Figure 2. Procedure automation and other techniques can capture tribal knowledge from a plant’s best operators.

As we deal with the “great shift change” driven by worker demographics, the ability to automate the entire range of process control functions through procedure automation will become even more important. Experienced long-time operators often have a wealth of unwritten knowledge regarding plant operations waiting to be captured and automated. The technologies exist; it is a matter of taking up the challenge and doing it—and automation solutions providers can help.

More devices, smarter devices

Another area where a higher level of sophistication in process automation is critical relates to the increasing numbers of smart devices and systems in process plants, both wired and wireless (figure 3). The quantity of modern field devices offering extensive reporting and diagnostic capabilities has grown by orders of magnitude, as has the information each can deliver. These devices are easily networked via a variety of protocols, which provide a huge pipe for delivering mass quantities of data.

Figure 3. Smart instruments like this wireless guided wave radar level transmitter supply much valuable information beyond the process variable measurement. Source: Emerson Process Management

No longer does each device provide only a single 4–20 mA signal corresponding to the process variable; now there is also status information about a transmitter’s health or a valve’s condition (table 1). In fact, the flood of information can be too much of a good thing if not handled correctly.

Table 1. Smart valve information transmitted to control system

  • Precise position
  • Time spent in a given position
  • Opening and closing force
  • Stiction and binding
  • Process noise
  • Number of actuations 

However, a well-configured process automation system is capable of harvesting what may seem to be an overload of data, then digesting it to make it useful. From a process standpoint, extended process data can be boiled down to established key performance indicators, which in turn feed back to optimize operations.
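As a hypothetical illustration of that boiling-down step, the sketch below collapses a few of the smart valve diagnostics from table 1 into a single health score that could feed a dashboard or maintenance work process. The field names, limits, and weightings are assumptions made up for the example, not values from any particular device or asset-management package.

```python
from dataclasses import dataclass

@dataclass
class ValveDiagnostics:
    stiction_pct: float           # deadband/stiction estimate, % of span
    travel_deviation_pct: float   # demanded vs. actual position, % of span
    actuations: int               # actuations since last overhaul
    actuation_limit: int = 100_000

def valve_health_kpi(d):
    """Return a 0-100 health score; lower scores flag the valve for maintenance."""
    penalty = (
        4.0 * max(0.0, d.stiction_pct - 0.5)           # stiction above 0.5% degrades control
        + 2.0 * d.travel_deviation_pct                 # persistent travel deviation
        + 30.0 * d.actuations / d.actuation_limit      # wear from accumulated actuations
    )
    return max(0.0, 100.0 - penalty)

print(valve_health_kpi(ValveDiagnostics(stiction_pct=1.2, travel_deviation_pct=0.8, actuations=42_000)))
```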

Careful consolidation of this data into control room or mobile dashboards gives operations personnel at-a-glance visibility into the system status.

Modern process automation components also have functionality beyond what is needed to directly control or automate a process, as they now often supply valuable data to maintenance management systems, historians, mobile devices, and so on.

More than the sum of the parts

Making all these elements work together to create a symbiosis of technologies and work processes is a daunting task. Choosing the best approaches from the dozens or hundreds of possibilities in a given situation can seem overwhelming, and may cause some companies to remain in the past for fear of investing too heavily in wrong technologies or applying the right ones ineffectively. An automation solutions provider can help users sort through seemingly endless options and make appropriate choices.

Once those choices are made, all the individual elements have to be networked together to support optimized interaction. This is where the participation of an automation solutions provider is critical, as control systems and components are selected and implemented to connect disparate parts into a seamless whole. These activities depend on the accumulated know-how of engineers and technicians who have worked with a variety of major platforms, countless subsystems, and numerous plant processes.

Companies that have implemented major projects thoughtfully with careful planning and help from a capable automation solutions provider typically realize better performance, reduced costs, improved safety, and other benefits (table 2). Having automation systems capable of controlling plant processes without constant human intervention creates a much safer environment, and allows a company to thrive even in the face of changing and challenging conditions.

Table 2. Benefits of moving from basic control to advanced automation

  • Facilitates process optimization
  • Applicable to normal steady-state operation
  • Can be applied to disruptive operations like startups and shutdowns
  • Enhances worker safety with quick responses to unusual situations
  • Efficiently supplements human intervention
  • Captures knowledge from a retiring workforce
  • Integrates well with advanced smart device information
  • Natural fit with maintenance management systems, historians, and mobile reporting to identify issues     

About the Author
Paul Darnbrough, P.E., CAP, is a principal at ControlsPR and previously worked in the Automation Solutions Group at MAVERICK Technologies, a Rockwell Automation company. He has more than 25 years of experience in engineering, documentation, and construction of automated industrial and process control systems. Paul has worked with clients ranging in size from small single-owner operations up to Fortune 500 companies and government agencies, involving operations in the plastics, food, dairy, chemicals, material handling, discrete manufacturing, water treatment, and pharmaceutical industries.

Connect with Paul
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

AutoQuiz: What Are the NPV and IRR Methods for Evaluating Industrial Automation System Capital Investments?


AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Which statement accurately characterizes both the NPV and IRR methods for evaluating automation system capital investments?

a) both provide a high degree of reliability in decision making to accept/reject a project
b) the final computations of both yield a dollar figure for individual projects
c) multiple projects can be added and averaged to evaluate any combination of capital investments
d) they both adjust cash flows over time for the time value of money
e) none of the above

Click Here to Reveal the Answer

The common concept between NPV and internal rate of return (IRR) is that both of these financial measures adjust cash flows over time to account for the time value of money (interest rate or cost of capital). It is the “time value” of money over the duration of the project that can help engineers determine the best project alternative or the viability of a single project through calculations such as IRR.

The correct answer is D, “They both adjust cash flows over time for the time value of money.” Net present value (NPV) by itself is not a good indicator of the viability of a project or a good differentiator between two competing projects, except to identify clearly nonviable projects (negative NPV).
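A minimal worked sketch of both measures, assuming an illustrative project cash flow and a 10 percent cost of capital (the IRR here is found by simple bisection rather than a financial library):

```python
def npv(rate, cash_flows):
    """Cash flows indexed by year, with the year-0 investment entered as a negative value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Bisection search for the rate that drives NPV to zero (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

flows = [-250_000, 80_000, 80_000, 80_000, 80_000, 80_000]   # illustrative project cash flows
print(f"NPV at 10% cost of capital: ${npv(0.10, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
```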

Reference: Nicholas Sands, P.E., CAP and Ian Verhappen, P.Eng., CAP., A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

How to Use Industrial Simulation to Increase Learning and Innovation


The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Damien Hurley.

Damien Hurley is a control and instrumentation (C&I) engineer for Fluor in the UK. He is currently involved in the detailed design phase of a project to build a new energy plant in an existing refinery in Scotland. His chief responsibility is serving as the C&I interface coordinator with construction, the existing site C&I contractor, and the client.

Damien Hurley’s Question

How can I begin implementing process simulations in my learning? My background is in drone control, where all learning has a significant emphasis on simulation and testing, usually via programs such as MATLAB. Upon starting in the oil and gas engineering, procurement, and construction (EPC) industry, I began getting to grips with the wide array of final elements, and my knowledge of process simulation has suffered as a result.

I’m also not exposed to simulations on a daily basis, as I was previously in the unmanned aerial vehicle (UAV) industry. How can I get started with simulation again? Specifically, is the simulation of processes relevant to our industry? Can you point me in the direction of a good resource to begin getting to grips with this worthwhile subject?

Greg McMillan’s Answer

Dynamic simulation has been the key to most of the deep learning and significant innovation in my 50-year career. Simulation has played a big role in industrial processes, especially in refining and energy plants. There are a lot of basic and advanced modeling objects for the unit operations in these plants. You can learn a lot about which process inputs and parameters are important in the building of first principle models. Even if the simulations are built for you, the practice of changing process inputs and seeing the effect on process outputs is a great learning experience. You are free to experiment and see results, where your desire to learn is the main limit.

You can also learn a lot about what affects process control. Here it is critical to include all of the automation system dynamics that are often ignored in the literature, even though they are most often the biggest source of control loop dead time and contribute significantly to the open loop gain and nonlinearity by way of the installed flow characteristic of control valves and variable frequency drives (VFDs).

You need to add variable filter times to simulate sensor lags (particularly thermowell and electrode lags), transmitter damping, and signal filters. You need to add variable dead time blocks to simulate the transportation delays associated with injection of manipulated fluids into the unit operation and with the sensor for measurement of the controlled variables. The variable dead time block is also needed to simulate the effect of positioners with poor sensitivity, where the response time increases by two orders of magnitude for changes in signal of less than 0.25 percent. You need backlash-stiction blocks to simulate the deadband and resolution limits of control valves, as detailed in the Control article How to specify control valves and positioners that don’t compromise control.

VFDs can have a surprisingly large deadband, introduced in the setup in a misguided attempt to reduce reaction to noise, and a resolution limit caused by an 8-bit signal input card. You also need to add rate-of-change limits to model the slewing rates of large control valves and the rate limits introduced in the VFD setup in a misguided attempt to reduce motor overload instead of properly sizing the motor. You need software that will provide PID tuning settings with proper identification of total loop dead time. Finally, a performance metrics block that identifies the integrated and peak error for load disturbances, plus the rise time, overshoot, undershoot, and settling time for disturbances, is a way of judging how well you are doing.
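A minimal sketch of the kind of loop simulation described above: a first-order-plus-dead-time process, a control valve gated by deadband and resolution limits, a transport delay, and a thermowell-style measurement lag. All numerical values are illustrative assumptions, not data from any plant or simulation package.

```python
from collections import deque

dt = 1.0                                  # simulation step, seconds
Kp, tau, deadtime = 2.0, 60.0, 10.0       # process gain, time constant, transport delay (s)
deadband, resolution = 0.5, 0.25          # valve deadband and resolution limits, % of signal
meas_lag = 8.0                            # thermowell/electrode measurement lag, seconds

delay_line = deque([0.0] * int(deadtime / dt), maxlen=int(deadtime / dt))
pv, pv_measured, valve_pos, last_dir = 0.0, 0.0, 0.0, 0

def valve_response(demand, pos, last_dir):
    """Gate valve movement with a resolution limit plus deadband on direction reversal."""
    error = demand - pos
    direction = 1 if error > 0 else -1
    threshold = resolution + (deadband if last_dir not in (0, direction) else 0.0)
    if abs(error) < threshold:
        return pos, last_dir              # change in demand too small to move the valve
    return demand, direction

for k in range(301):
    controller_out = 50.0 if k >= 10 else 0.0             # step in controller output, %
    valve_pos, last_dir = valve_response(controller_out, valve_pos, last_dir)
    delayed_flow = delay_line[0]                          # manipulated flow from `deadtime` ago
    delay_line.append(valve_pos)
    pv += dt / tau * (Kp * delayed_flow - pv)             # first-order process response
    pv_measured += dt / meas_lag * (pv - pv_measured)     # sensor and thermowell lag
    if k % 60 == 0:
        print(f"t = {k * dt:5.0f} s   valve = {valve_pos:5.1f}%   measured PV = {pv_measured:6.2f}")
```

Stepping a model like this with and without the valve and sensor blocks makes it obvious how much of the apparent loop dead time comes from the automation system rather than the process.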

A couple of years ago I helped develop a dynamic simulation of the control system and the many headers, boilers, and users at a large plant to optimize cogeneration and minimize the disruption to the steam system from large changes in steam use and generation in all the headers for the whole plant. ISA Mentor Program resource James Beall and protégé Syed Misbahuddin were part of the team. Over 30 feedforward and decouple signals were developed and thoroughly tested by dynamic simulation, resulting in a smooth implementation of a much more efficient and safe system. In one case I learned via the simulation that a feedforward I thought was needed for a boiler caused more harm than good, because changes in header pressure preceded the supposedly proactive feedforward to a header letdown valve meant to compensate for the effect of a change in firing rate demand.

First principle process models, built from material and energy balances of volumes in series, can capture many unanticipated changes. I was recently alerted to the fact that a bypass valve around a heat exchanger first provides a fast response, from the change in the flow bypassing and going through the exchanger, which is then followed by a delayed response in the opposite direction, caused by the same utility flow rate heating or cooling a different flow rate through the exchanger. Unless a feedforward changes the utility flow, the tuning of the PID for the temperature of the blended stream must not overreact to the initial temperature change.

Often there are leads as well as lags in the temperature response associated with inline temperature control loops for jackets. For heat exchangers in a recirculation line for a volume, the self-regulating response of the exchanger outlet temperature controller is followed by a slow integrating response from recirculation of the changes in the volume temperature. Also, feedforward signals that arrive too soon can create an inverse response, and those that arrive too late create a second disturbance that makes control worse than the original feedback control. Getting the dynamics right, by including the automation system dynamics in addition to the process dynamics, is critical.

ISA Mentor Program Posts & Webinars

Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars.

We learn the most from our mistakes. To avoid the price of making them in the field, we can use dynamic simulation as a safe way of hands-on learning for exploring and prototyping existing and new systems and finding good and bad effects; it offers much more flexibility and is non-intrusive to the process. Dynamic models using the digital twin enable a deeper process understanding to be gained and used to make much more intelligent automation. See the Control Talk blog Simulation breeds innovation for an insightful history and future of opportunities for a safe sandbox allowing creativity by synergy of process and automation system knowledge.

Often simulation fidelity is simply stated as low, medium or high. I prefer defining at least five levels as seen below in the chapter Tip #98: How to Achieve Process Simulation Fidelity in the ISA book 101 Tips for a Successful Automation Career. Note that the term “virtual plant” I have been using for decades should be replaced with the term “digital twin” in my books and articles prior to 2018 to be in tune with the terminology for digitalization and digital transformation.

  • Fidelity Level 1: measurements can match setpoints and respond in the proper direction to loop outputs; for operator training.
  • Fidelity Level 2: measurements can match setpoints and respond in the proper direction when control and block valves open and close and prime movers (e.g., pumps, fans, and compressors) start and stop; for operator training.
  • Fidelity Level 3: loop dynamics (e.g., process gain, time constant, and deadtime) are sufficiently accurate to tune loops, prototype process control improvements, and see process interactions; for basic process control demonstrations.
  • Fidelity Level 4: measurement dynamics (e.g., response to valves, prime movers, and disturbances) are sufficiently accurate to track down and analyze process variability and quantitatively assess control system capability and improvement opportunities; for rating control system capability, and conducting control system research and development.
  • Fidelity Level 5: process relationships and metrics (e.g., yield, raw material costs, energy costs, product quality, production rate, production revenue) and process optimums are sufficiently accurately modeled for the design and implementation of advanced control, such as model predictive control (MPC) and real time optimization (RTO), and in some cases virtual experimentation.

A lot of learning is possible using Fidelity Level 3 models. Fidelity Level 4 and 5 simulations with advanced modeling objects are generally needed for complex unit operations where components are being separated or formed, such as biological and chemical reactors and distillation columns, or to match the dynamic response of trajectories in order to detail advanced process control, including PID control that involves feedforwards, decouplers, and state-based control. Developing and testing inferential measurements, data analytics, performance metrics, and MPC and RTO applications generally requires Level 5.

In all cases I recommend a digital twin that has the blocks addressing nearly every type of automation system dynamics and metrics often neglected in dynamic simulation packages. The digital twin should have the same PID Form, Structure and options used in the process industry and a tool like the Mimic Rough-n-Ready tuner to get started with reasonable PID tuning settings.

Many software packages that were not developed by automation professionals may unfortunately seriously mess you up by not having the many sources of dead time, lags, and nonlinearities, and by employing a PID with a Parallel (Independent) Form working in engineering units instead of percent signals. A fellow protégé also in the UK who is now an automation engineer at Phillips 66 can relate his experiences in using Mimic software. If you pursue this dynamic simulation opportunity, we can do articles and Control Talk blogs together to share the understanding gained to help advance our profession.

For Additional Reference:

McMillan, Gregory K., and Vegas, Hunter, 101 Tips for a Successful Automation Career.

Additional Mentor Program Resources

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

About the Author
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

Connect with Greg
LinkedIn



Source: ISA News

New Method for Estimating State of Charge of Lithium-Ion Batteries in Electric Vehicles [technical]


This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.

Abstract: This paper presents a state of charge (SOC) estimation method based on fractional order sliding mode observer (SMO) for lithium-ion batteries. A fractional order RC equivalent circuit model (FORCECM) is firstly constructed to describe the charging and discharging dynamic characteristics of the battery. Then, based on the differential equations of the FORCECM, fractional order SMOs for SOC, polarization voltage and terminal voltage estimation are designed. After that, convergence of the proposed observers is analyzed by Lyapunov’s stability theory method. The framework of the designed observer system is simple and easy to implement. The SMOs can overcome the uncertainties of parameters, modeling and measurement errors, and present good robustness. Simulation results show that the presented estimation method is effective, and the designed observers have good performance.

Free Bonus! To read the full version of this ISA Transactions article, click here.

Enjoy this technical resource article? Join ISA and get free access to all ISA Transactions articles as well as a wealth of other technical content, plus professional networking and discounts on technical training, books, conferences, and professional certification.

Click here to join ISA … learn, advance, succeed!

Copyright © 2019 Elsevier Science Ltd. All rights reserved.



Source: ISA News

AutoQuiz: What Is the Logical Analysis Troubleshooting Method for an Industrial Process?


AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

In logical analysis troubleshooting, which step comes immediately after the first step: “Identify and define the problem”?

a) implement a solution or conduct a test
b) if the problem is not resolved, reiterate until the problem is found and resolved
c) gather information about the problem
d) evaluate the information/data
e) none of the above

Click Here to Reveal the Answer

The logical analysis troubleshooting method consists of (in order):

1. Identify and define the problem.
2. Gather information about the problem.
3. Evaluate the information/data.
4. Propose a solution or develop a test.
5. Implement the solution or conduct the test.
6. Evaluate the results of the solution or test.
7. If the problem is not resolved, reiterate until the problem is found and resolved.
8. If the problem is resolved: document, store/file, and send to the appropriate department for follow up if required.

The correct answer is C, “Gather information about the problem.” Once a problem is identified, data must be gathered and analyzed to determine a viable set of potential actions and solutions.

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

Energy in Fluid Mechanics: How to Ensure Physical Line and Operating Data Are Consistent

The post Energy in Fluid Mechanics: How to Ensure Physical Line and Operating Data Are Consistent first appeared on the ISA Interchange blog site.

This guest blog post is part of a series written by Edward J. Farmer, PE, ISA Fellow and author of the new ISA book Detecting Leaks in Pipelines. To download a free excerpt from Detecting Leaks in Pipelines, click here. If you would like more information on how to purchase the book, click this link. To read all the posts in this series, scroll to the bottom of this post for the link archive.

A common issue in a lot of pipeline work is ensuring the physical line data and operating data are consistent. This establishes confidence in the information about a project or situation, helps discern if a hypothetical situation can exist, or suggests that a broader view of a situation is appropriate. It also reminds the analyst of all the factors that pertain to a flow situation on a pipeline.

The Bernoulli equation looks at energy at selected locations along a pipeline. The analyst is free to choose these locations but must be sensitive to observability. Ends are always a good place to start. Often, the highest elevation point will be interesting. In some situations, the lowest points can be interesting. Points of delivery from the pipeline or injection into it may be interesting. Usually, work begins with some pipeline data and some operating data from specific sites along the line. Start the analysis with those, and first make sure the core data the study will rest on is valid and consistent with everything else that is known.

For reasons that become apparent with some experience, Bernoulli and his follower, Euler, normally use a surrogate parameter for energy. This parameter is the “head” at the subject locations, reported in a length unit such as meters. Reported data is normally in typical engineering units such as velocity and pressure. Converting between these and head is fairly easy, albeit a bit tedious and unfamiliar for newcomers. To summarize, the common transformations are listed below (a short worked example follows the list):

Using the SI system:

  • The head of a defined point is determined by its height relative to the project’s elevation datum: H = h - d, where H is the head in meters, h is the elevation of the point, and d is the elevation of the datum.
  • The head of a column of fluid of height y and density ρ is Hy = y * ρ / ρw meters, where ρw is the density of water at standard conditions and y is the height of the fluid column above the point.
  • The head due to pressure is Hp = P / (ρ * g) meters, where g is the acceleration due to gravity.
  • The head due to flow velocity is Hv = V² / (2 * g) meters, where V is the flow velocity in m/s.
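As a quick arithmetic check of these transformations, here is a minimal sketch in Python. The fluid properties and operating values are illustrative assumptions, with g taken as 9.81 m/s² and water at 1,000 kg/m³ at standard conditions.

  # Minimal sketch: convert elevation, a fluid column, pressure, and velocity to head (SI units).
  G = 9.81        # acceleration due to gravity, m/s^2
  RHO_W = 1000.0  # density of water at standard conditions, kg/m^3

  def elevation_head(h, datum):
      # H = h - d, meters above the project's elevation datum
      return h - datum

  def column_head(y, rho):
      # Head of a fluid column of height y (m) and density rho (kg/m^3), in meters of water
      return y * rho / RHO_W

  def pressure_head(p, rho):
      # Hp = P / (rho * g), with P in pascals, result in meters
      return p / (rho * G)

  def velocity_head(v):
      # Hv = V^2 / (2 g), with V in m/s, result in meters
      return v ** 2 / (2.0 * G)

  # Example point: crude oil (rho ~ 850 kg/m^3 assumed) at 1.5 m/s and 2,000 kPa, 12 m above the datum
  total = elevation_head(12.0, 0.0) + pressure_head(2.0e6, 850.0) + velocity_head(1.5)
  print(round(total, 2), "m of head")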

Bernoulli’s (and Euler’s) development of these concepts was based on the idea of an isentropic pipeline, one in which the energy in the fluid itself is constant. This presumes, for example, a constant temperature. Work since then introduces an internal energy term:

  • The head due to internal specific energy is e/g meters, where e is the specific internal energy computed from specific heat and temperature (expressed in J/kg so that e/g comes out in meters). This term is not commonly used explicitly, although the concept is often brought in through calculations based on the fluid’s thermodynamic properties.

Essentially, the Bernoulli equation expresses the energy at the points for which the terms are calculated. The difference in energy between those points goes to the mechanical friction involved in moving the fluid. Fundamentally, the change in specific energy between location 1 and location 2 is dE = E2 - E1 m²/s². This converts to a head difference of dH = dE/g meters.

The commonly used Darcy formula for friction flow loss, in head terms, is:

  • hl = f * (L/D) * v² / (2 * g) meters, where f is the appropriate friction factor, L is the line length, D is the diameter of flow, and v is the flow velocity.

The observed head difference between points along the pipeline should match the computed friction head loss between them. When it doesn’t, there is incentive to understand why.
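To illustrate the consistency check just described, the sketch below compares an observed head difference between two stations with the Darcy friction loss computed for the segment between them. The friction factor, pipe dimensions, velocity, and station heads are all assumptions chosen for the example.

  # Sketch: compare observed head drop between two stations with the computed Darcy friction loss.
  G = 9.81        # m/s^2
  f = 0.018       # Darcy friction factor (assumed)
  L = 25_000.0    # segment length, m (assumed)
  D = 0.30        # inside diameter, m (assumed)
  v = 1.5         # mean flow velocity, m/s (assumed)

  # Darcy-Weisbach friction head loss: hl = f * (L / D) * v^2 / (2 g), meters
  hl = f * (L / D) * v ** 2 / (2.0 * G)

  # Observed heads (elevation + pressure + velocity head) at the two stations, meters (assumed)
  head_upstream = 252.0
  head_downstream = 83.5
  observed_drop = head_upstream - head_downstream

  print(f"computed friction loss: {hl:.1f} m, observed drop: {observed_drop:.1f} m")
  if abs(hl - observed_drop) > 0.05 * observed_drop:   # arbitrary 5 percent tolerance
      print("discrepancy exceeds tolerance: check the data, or look for an offtake or a leak")
  else:
      print("line data and operating data are consistent within tolerance")

If the computed loss and the observed drop diverge beyond measurement tolerance, that is the cue, per the discussion above, to look for data problems, unrecorded deliveries or injections, or a leak.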

If you would like more information on how to purchase Detecting Leaks in Pipelines, click this link. To download a free 37-page excerpt from the book, click here.

There are usually at least three entities involved in obtaining pipeline data for these studies. The engineering department will normally know the characteristics of what was built and its current status. Normally they will have pump or compressor curves, data about the pipe and appurtenances as installed, and data about the fluid as used in the design calculations.

The operators will know the current flows and pressures as well as the characteristics of the fluids in use. The business department will know what came into the system and what came out along with some useful data about energy consumed moving product. Hopefully the data needed to resolve a specific question or inquiry will match across all the sources.

If there are discrepancies, there may be some sort of observability issue with one or more of the involved groups and one or more of the involved places. Fluid mechanics, due to noise and measurement limitations, may not always be as precise as some engineering undertakings in which everything is easily known in real time to several decimal places.

While a general Bernoulli analysis is not always adequate for resolving pipeline issues, it will quickly, understandably, and simply establish where to look for more information or data. Sometimes the point-oriented concept motivates segmenting the analysis to concentrate on specific parts of the overall pipeline.

The nature of the equations makes mathematical analysis, such as comparisons and sensitivity analysis, very straightforward and understandable. More precise analysis may involve continuous monitoring, special equipment, or investigation of special situations. Keep an open mind and always think back toward conditions that would produce or exacerbate the issue motivating the original request.

Learn more about pipeline leak detection and related industry topics

Book Excerpt + Author Q&A: Detecting Leaks in Pipelines
How to Optimize Pipeline Leak Detection: Focus on Design, Equipment and Insightful Operating Practices
What You Can Learn About Pipeline Leaks From Government Statistics
Is Theft the New Frontier for Process Control Equipment?
What Is the Impact of Theft, Accidents, and Natural Losses From Pipelines?
Can Risk Analysis Really Be Reduced to a Simple Procedure?
Do Government Pipeline Regulations Improve Safety?
What Are the Performance Measures for Pipeline Leak Detection?
What Observations Improve Specificity in Pipeline Leak Detection?
Three Decades of Life with Pipeline Leak Detection
How to Test and Validate a Pipeline Leak Detection System
Does Instrument Placement Matter in Dynamic Process Control?
Condition-Dependent Conundrum: How to Obtain Accurate Measurement in the Process Industries
Are Pipeline Leaks Deterministic or Stochastic?
How Differing Conditions Impact the Validity of Industrial Pipeline Monitoring and Leak Detection Assumptions
How Does Heat Transfer Affect Operation of Your Natural Gas or Crude Oil Pipeline?
Why You Must Factor Maintenance Into the Cost of Any Industrial System
Raw Beginnings: The Evolution of Offshore Oil Industry Pipeline Safety
How Long Does It Take to Detect a Leak on an Oil or Gas Pipeline?
Pipeline Leak Size: If We Can’t See It, We Can’t Detect It
An Introduction to Operations Research in the Process Industries
The Enigma of Process Knowledge
Energy in Fluid Mechanics: How to Ensure Physical Line and Operating Data Are Consistent

About the Author
Edward Farmer, author and ISA Fellow, has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. He has authored three books, including the ISA book Detecting Leaks in Pipelines, plus numerous articles, and has developed four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. During his long industry career, he established EFA Technologies, Inc., a manufacturer of pipeline leak detection technology.

Connect with Ed
LinkedIn | Email

 



Source: ISA News

How Standards and New Technology Enable Humans and Robots to Work Safely Together

The post How Standards and New Technology Enable Humans and Robots to Work Safely Together first appeared on the ISA Interchange blog site.

This post was written by Carole Franklin, director of standards development for the Robotic Industries Association.

Not many years ago, the idea of collaborative robotics – with a robot and human worker sharing an active workspace – was met by strong skepticism. To ensure the safety of human workers, a variety of safeguarding systems prevented direct physical contact between the robot and its operator while the system was operational. Safeguarding might be a physical barrier or a light curtain that would shut down the robot system if the operator intruded into the safeguarded space, or a variety of other technologies.

The conversation has changed, however. Today, as long as we can ensure that it will not cause pain or injury, we are comfortable with the idea of a robot or its tooling or workpiece touching a human. And technologies have advanced to the point where we can be more confident of our ability to prevent pain or injury from such contact. The change in available technology and attitude has helped usher in an era of automation that enables humans and robots to work more closely together, while still being safe.

This possibility of safe, close proximity is what we mean by collaborative robotics. Collaborative robotic applications are intended to optimize the use of human workers and robots, using both to their greatest advantage. The capability becomes important when attempting to automate processes that include delicate or compliant materials, for instance, which are difficult for robots to handle. In a collaborative robot system, we gain the benefit of the strength and precision of the robot, together with the creative problem solving, flexibility, and sensitivity of the human operator.

This approach is certainly gaining momentum. An ABI Research study predicts the collaborative robotics market will exceed $1 billion in the 2020s, populating factories and businesses with more than 40,000 collaborative robots. While a key selling point for these robots is their ability to work side by side with humans, typically without fencing or guarding, care must still be taken to ensure safety. What the industry calls a “collaborative robot” (sometimes termed a “cobot”) is simply one that is designed for use in a collaborative workspace. These robots are designed to have safe contact with humans through the implementation of safety features of the robot or the control system, such as power and force limiting (PFL). These types of robots are typically made from lightweight materials, have force and torque sensing in their joints, and may have soft, padded skins or rounded corners.

But despite how the robot was designed or marketed, its actual use might not be safe for collaborative operation if appropriate risk assessments have not been performed, and if the workspace has not been carefully planned and integrated. Some tasks are simply not well suited for collaborative operation, even if the robot performing the task is PFL and is marketed as a collaborative robot.

For example, it is important to remember that the robot arm by itself cannot do any work. The robot system or workstation also includes the end effector, the workpiece, and the presence of other robots or equipment in a cell. All these factors and more must be considered when companies plan for a safe, collaborative robot system. A robot that is operating a welding torch or is moving razor-sharp sheets of metal presents significant opportunities for injury if people are in close proximity. In this example, the robot system as a whole, including the workpiece, end effector, and so on, is not appropriate for collaborative operation, regardless of whether the robot arm itself is a “collaborative” type.

When using a robot designed for collaborative use, safety standards require companies to complete a risk assessment and mitigate any risks identified in the system. The highly anticipated technical specification for collaborative robotics – ISO/TS 15066:2016 Robots and Robotic Devices: Collaborative Robots – provides data-driven guidelines for designers, integrators, and users of human-robot collaborative systems on how to evaluate and mitigate risks. (The full technical specification is available at the Robotic Industries Association [RIA] bookstore.)


Collaborative robot arms on the assembly line. The robots work in tandem with employees, picking up parts at the end of the line for wire cutting and outbound conveyor placement.

 

Four methods of collaborative operation

Under the ANSI/RIA 15.06 and ISO 10218 harmonized robot safety standards and the new TS 15066, there are four methods of collaborative operation that reflect different use scenarios:

  • Safety-rated monitored stop
  • Hand guiding
  • Speed and separation monitoring
  • Power and force limiting

These tend to be the most misunderstood aspects of human-robot collaboration. It is important to gain a thorough understanding of what each collaborative method requires. For instance, a safety-rated monitored stop requires that the robot does not move at all if a person enters the shared space. The benefit is a quicker restart after the human leaves, compared to a noncollaborative system. But in this case, it is not a situation in which human and robot are working together at the same time and in the same space, which is what most people think of when they think of “collaborative robots.”

Hand guiding is very similar to a common method of “teaching” the robot its tasks. When used to describe a type of collaborative operation, however, hand guiding indicates a condition where the robot and person occupy a shared space and the robot moves only when it is under the direct control of the person.

In speed and separation monitoring, both the robot and the person can be present in the space, but if the distance between the robot and the person becomes too small, the robot first slows and then stops. Once stopped, this effectively becomes the first scenario, a safety-rated monitored stop. In power and force limiting, there can be contact between the person and the robot, but the robot is power and force limited and sufficiently padded, so if there is any impact, there is no pain and no injury. It is also possible to mix some or all of these four methods of collaborative operation in one robot system.

The new TS 15066 specification includes formulas for calculating the protective separation distance for speed and separation monitoring. But perhaps the most interesting part of the technical specification is Annex A. It contains guidance on pain threshold limits for various parts of the body, for use when designing power- and force-limiting applications. These pain thresholds were established by a study from the University of Mainz, Germany, using male and female volunteer human test subjects of a variety of ages, sizes, and occupations. The data can be used to set limits on levels of power and force used by the collaborative robot system or application.
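As a rough, non-authoritative illustration of the kind of calculation involved in speed and separation monitoring, the sketch below estimates a minimum protective separation from operator speed, robot speed, and stopping performance. It omits terms in the actual TS 15066 formula, such as the detailed intrusion-distance and position-uncertainty contributions, and every value shown is an assumption; consult the technical specification for the authoritative treatment.

  # Simplified illustration of a speed-and-separation-monitoring distance check.
  # NOT the authoritative ISO/TS 15066 formula; all values are assumed for the example.
  V_HUMAN = 1.6   # assumed operator approach speed, m/s
  V_ROBOT = 0.5   # robot speed toward the operator, m/s (assumed)
  T_REACT = 0.1   # system reaction time, s (assumed)
  T_STOP = 0.3    # robot stopping time, s (assumed)
  S_STOP = 0.10   # robot stopping distance, m (assumed)
  MARGIN = 0.20   # lumped allowance for intrusion distance and sensing uncertainty, m (assumed)

  def protective_separation(v_human=V_HUMAN, v_robot=V_ROBOT):
      # Operator travel during reaction and stopping, plus robot travel during reaction,
      # plus the robot's stopping distance and a safety margin
      s_human = v_human * (T_REACT + T_STOP)
      s_robot = v_robot * T_REACT
      return s_human + s_robot + S_STOP + MARGIN

  measured_separation = 0.9   # from the workcell's safety-rated sensing, m (assumed)
  if measured_separation < protective_separation():
      print("too close: slow or stop the robot")
  else:
      print(f"separation OK (limit {protective_separation():.2f} m)")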

ISO/TS 15066 and TR 606, which explain safety requirements specific to collaborative robots and robot systems, establish pain thresholds to guide appropriate use of safety guards or protective devices.

 

Risk assessment: application, not robot

The most important aspect for any collaborative robot integration is a risk assessment. But it is important to remember that when assessing risk, the application, not the robot, is the main concern. In fact, the standard document rarely uses the term “robot.” Instead, it discusses collaborative work cells or collaborative applications: all the elements involving cables, jigs, clamps, the robot, and the gripper that are inside the cell.

If the application requires somewhat higher force or power than what is stated in the document, it does not mean the application is not safe. The technical specification relates to pain, while what is required from 10218 is that no injury should occur. There is a difference between pain and injury. Tests could show that even if the impact is above the amount stated in 15066, the application may still be safe if it can be proven that the robot cannot hurt or injure the people in those circumstances.

Another common misconception is that if the robot is “inherently safe,” then the operation is safe. The term “inherently safe” is similar to the term “collaborative robot.” It describes built-in safety features of the robot’s design. Again, no matter how “safe” or “collaborative” your robot arm, it needs to be assessed as integrated into a complete robot system, and the system as a whole may not be safe for collaborative use. For instance, if the operation requires your robot to manipulate sharp objects, then it is not safe to have a human beside it, no matter how small, rounded, or padded the robot arm itself might be, without additional protective safety measures. Another case is a robot handling a heavy object, which could cause injury if it were dropped or could become a projectile at higher speeds.

These issues are covered in the ANSI-registered technical report RIA TR R15.306-2016, Task-based Risk Assessment Methodology. TR 306 describes one method of risk assessment that complies with requirements of the 2012 R15.06 standard and was updated in 2016.

This collaborative robot has six-axis articulation and a 35-kg payload. In this palletizing stacking operation, its soft cover and force sensors protect workers who are in direct contact with the robot for training or operation.

Gripper safety guidelines still to come

Although the ISO committee has them in the works, there are currently no specific safety guidelines for robot end effectors or end-of-arm tooling in collaborative applications. In the interim, designers and integrators should follow the guidelines in TS 15066, such as the requirement that an operator must not be trapped under any circumstances by the robot. If there is no power to the robot and a person is trapped, the person must be able to escape by applying minimal force to the robot to free the part of the body that is trapped. This applies to the gripper as well; for instance, if a person’s fingers are stuck between the gripper jaws, he or she must be able to escape from the jaws to avoid danger, such as a fire.

A study of pain thresholds for PFL applications was done at the University of Mainz in Germany. It covered 100 human test subjects of both genders and a wide range of ages and body dimensions. Source: ISO/TS 15066:2016, Annex A.

 

Annex A: “The Body Model” incorporates important data from the study, with maximum permissible pressure values that represent the 75th percentile.

 

What about cybersecurity?

With the rise of Industry 4.0 and the Industrial Internet of Things, robots and other automation equipment are increasingly being connected to each other and to other computer systems, networks, and applications. And with continued news of hackers taking control of financial or industrial systems, medical devices, and vehicles, we are increasingly aware of the tight connection between security and safety, not to mention the need to protect sensitive company data collected by automated systems. But now that robots are no longer isolated devices, serious information technology concerns are arising.

There is an entire body of standards describing requirements for cybersecurity developed through decades of experience with software. For example, a good place to start is IEC 62443, a set of standards describing cybersecurity in an industrial setting. RIA will offer at least one presentation on cybersecurity and industrial robots at this year’s National Robot Safety Conference, set for 10-12 October 2017, in Pittsburgh, Penn.

New standard provides data-driven safety guidance to manage risk

When robots work alongside humans, companies have a responsibility to ensure that the application does not put a human in danger. Until the release of ISO/TS 15066, robot system suppliers and integrators only had general information about requirements for collaborative systems. Now they have the specific, data-driven safety guidance they need to evaluate and control risks.

About the Author
Carole Franklin is the director of standards development for the Robotic Industries Association. She leads RIA’s standards development activities for the ANSI and ISO robot safety standards. Before joining RIA, Franklin was at the management consulting firm Booz Allen Hamilton, where she led projects on business process improvement, internal communications, and executive communications. Before Booz Allen, Franklin worked for Ford Motor Company for 10 years in the market research department, leading consumer research projects, and also served as project manager for the North American customer satisfaction tracking study. Her career has been spent translating the needs of end users into actionable guidance for engineers and leaders, and vice versa. She holds BA and MBA degrees from the University of Michigan.

Connect with Carole
LinkedIn

 

A version of this article also was published at InTech magazine.



Source: ISA News