Thank You Sponsors!

CANCOPPAS.COM

CBAUTOMATION.COM

CGIS.CA

CONVALPSI.COM

DAVISCONTROLS.COM

EVERESTAUTOMATION.COM

FRANKLINEMPIRE.COM

HCS1.COM

MAC-WELD.COM

SWAGELOK.COM

THERMO-KINETICS.COM

THERMON.COM

VANKO.NET

VERONICS.COM

WAJAX.COM

WESTECH-IND.COM

WIKA.CA

AutoQuiz: Which Valve Actuation Method Is Best for a Very Large Force and a Small Actuator?


AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

Which of the following valve actuation methods would be the best choice for an application that requires a very large force and a small actuator?

a) pneumatic actuation
b) hydraulic actuation
c) manual actuation
d) electrical actuation
e) none of the above

Click Here to Reveal the Answer

Although many pneumatic actuators can provide a large force, they require either a large diaphragm area (in the case of a diaphragm actuator) or a large cylinder (in the case of a rack and pinion actuator).

Manual actuation is accomplished by turning a valve handle and is limited to the force that an operator can exert on the lever or handwheel.

Electric actuation delivers high torques for rotary-style valves, but electric actuators tend to be large and heavy compared to hydraulic actuators.

Hydraulic actuators are driven by a high-pressure fluid (up to 4,000 psig) that can be delivered to the actuator by a pump that is remote from the actuator itself. Hydraulic cylinders can deliver up to 25 times more force than a pneumatic cylinder of the same size.
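
The gap is easy to see with a back-of-the-envelope force calculation (force = pressure × area). The bore and supply pressures in this sketch are illustrative numbers, not from the reference:

    # Illustrative actuator force comparison: force = pressure x area.
    # Bore and supply pressures are hypothetical, chosen to show the scale.
    import math

    bore_in = 3.0                           # cylinder bore, inches
    area = math.pi * (bore_in / 2) ** 2     # piston area, ~7.07 square inches

    pneumatic_psi = 100                     # typical instrument-air supply
    hydraulic_psi = 2500                    # modest hydraulic supply (can reach 4,000 psig)

    print(f"pneumatic: {pneumatic_psi * area:,.0f} lbf")   # ~707 lbf
    print(f"hydraulic: {hydraulic_psi * area:,.0f} lbf")   # ~17,671 lbf
    print(f"ratio: {hydraulic_psi / pneumatic_psi:.0f}x")  # 25x at the same size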

The correct answer is B, hydraulic actuation.

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 




The Enigma of Process Knowledge


This guest blog post is part of a series written by Edward J. Farmer, PE, ISA Fellow and author of the new ISA book Detecting Leaks in Pipelines. To download a free excerpt from Detecting Leaks in Pipelines, click here. If you would like more information on how to purchase the book, click this link. To read all the posts in this series, scroll to the bottom of this post for the link archive.

Sometimes when I get wrapped up in automation issues, hordes of observation and analysis equipment flood my mind. New visions promote change, which motivates new ways of seeing and doing things.

In Europe, World War II military operations shifted from the mostly static trench warfare of World War I to high mobility. In World War I the “big thing” was impenetrable static defense, but the “next big thing” was the fully mechanized division that could reach a critical point and prevail at the critical moment.

Communications became dependent on go-anywhere mobile technology, which relied on encryption for security, mostly provided by radio operators using Morse code. That spawned a new field called cryptography which, in retrospect, was shockingly similar in mission and prosecution to concepts we now associate with automation science, both stemming from automation research concepts.

The theory was that even when encrypted, “it” is in there and, just like the secrets of nature, can be found and unraveled with the right methodology. I’ve always loved that quest: the Army taught me cryptography and college taught me process control. Eventually, I was imbued with fascination about how we could learn those things hiding just beyond our ability to perceive them. Keep in mind, the answers to the secrets we hope to unravel are “in there,” installed by humans in one case and by nature in the other.

Cryptography is a good way to enhance understanding of the more esoteric portions of this journey – it is clever, and it raises philosophically discussable differences between what is stochastic and what is deterministic. When we see evidence of the mind and fingerprints of mankind, we are on the road to some deterministic outcome, while “randomicity,” or some impenetrable illusion of it, suggests mystery still beyond our capability.

The Axis powers used an encryption machine called the Enigma machine. It was a letter-substitution approach but the connection between the clear-text letter and the encrypted one was not as simple as the ancient codes that used some hopefully secret algorithm to accomplish the same sort of thing.

The Enigma machine changed the linkage between the input and output letter each time a letter was encrypted. A string of “Zs,” for example, would not produce the same coded letter each time. In fact (and depending on the design of a specific machine) it could take half a million letters before a pattern might emerge.

[Figure: Enigma wiring diagram, with arrows and the numbers 1 to 9 showing how current flows from a key depression to a lamp being lit. Here the A key is encoded to the D lamp. D yields A, but A never yields A; this property, due to a patented feature unique to the Enigmas, could be exploited by cryptanalysts in some situations.]

What’s more, this sequence of redundant encryptions of “Z” would not always begin with the same encrypted letter. Where the sequence began depended on the settings of a three-letter starting code, and it would proceed for perhaps a half-million letter encryptions before any hint of a pattern became evident. Seemingly, if the message length did not exceed a half-million letters, it would appear random and hence undecipherable. If this were one of nature’s secrets, it would appear stochastic – hopelessly indecipherable.
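
The stepping behavior is easy to see in a toy model. This sketch is not the real Enigma wiring, just a single stepping rotor, but it shows why a run of identical input letters does not produce a run of identical ciphertext:

    # Toy single-rotor substitution (not real Enigma internals): the rotor
    # steps after every keypress, so identical inputs yield different outputs.
    import string

    ALPHABET = string.ascii_uppercase
    ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"   # an arbitrary fixed permutation

    def encrypt(text, start=0):
        out, pos = [], start
        for ch in text:
            idx = (ALPHABET.index(ch) + pos) % 26   # offset by rotor position
            out.append(ROTOR[idx])
            pos += 1                                # rotor advances each letter
        return "".join(out)

    print(encrypt("ZZZZZ"))            # JEKMF -- five Zs, five different letters
    print(encrypt("ZZZZZ", start=3))   # MFLGD -- a different start code shifts everything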

Somehow, early in the war, the mystery lifted when a brilliant British fellow named Alan Turing, working with a team of outstanding peers, unraveled the Enigma code to the extent that Allied commanders were able to read Enigma messages more quickly than the intended Axis recipients. The early visions of modern computer theory were developed and manifested in Colossus, the most powerful and versatile programmable computer of its day.

All that, though, stemmed from the realization that portions of a problem may be stochastic and other portions deterministic. There is always a way, with enough time and effort, to unravel the deterministic parts, and once you do, the only obfuscation that remains is in the seemingly stochastic portions. Finding a way through some modest level of pseudo-randomness is a lot easier than confronting the situation as though it were entirely stochastic. Turing’s efforts with Enigma illustrate the analytical power of insightfully dividing a problem and conquering it piece by piece.

One can observe, and many have, that a man-made process intended to appear random was created by the mind of a man according to some set of rules: an algorithm or “recipe.” The product is, in effect, a pseudorandom process and can be unraveled by investigating the “pseudo” part of the “random.”

In this case, near-complete determinism emerged when some Polish mathematicians set out to unravel the Enigma machine during their early encounters with it before World War II. Their work made it to England, and hence to Bletchley Park, before Britain entered the war. Essentially, most of the encryption was done by three (sometimes four) mechanical rotors selected from a small set of choices. They interchanged an “input” letter for a different “output” letter by means of physical wiring within the rotors. Each of the rotors, and the path between them, was deterministic; only the starting position was chosen, presumably at random, by the user.

There are some other interesting features, but for the present the rotors illustrate the core issue. Essentially, once the Poles had the wiring of the rotors, all they needed to know was which rotors were installed, in what order, and what the starting position of each was. That starting position was significant. On a three-rotor Enigma with 26 letters there were nearly 18,000 possibilities. While trying each of them might eventually produce something that looked like clear text, that was a lot of button-pushing in those days.
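
For the arithmetic: each of the three rotors could start at any of 26 positions, so there were 26 × 26 × 26 = 17,576 starting positions, the “nearly 18,000” above. A sketch of what trying each one amounts to (the decryption routine and plausibility test here are stand-ins, not the historical procedure):

    # 26 starting letters per rotor, three rotors: the "nearly 18,000"
    from itertools import product
    import string

    starts = list(product(string.ascii_uppercase, repeat=3))
    print(len(starts))   # 17576

    # Brute force is then just a loop. looks_like_cleartext() stands in
    # for whatever plausibility test the analysts applied to each candidate.
    # for start in starts:
    #     if looks_like_cleartext(decrypt(ciphertext, start)):
    #         break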

If you would like more information on how to purchase Detecting Leaks in Pipelines, click this link. To download a free 37-page excerpt from the book, click here.

This was a motivation for the creation of Colossus. Keeping in mind that there is a longer story (see Gordon Welchman’s book, The Hut Six Story), notice that a series of a half-million pseudo-random encryptions could now be converted to cleartext by knowing three letters in the proper sequence. With the help of some Colossus derivative work, Enigma messages could be converted to cleartext as they were fed to the computer, printed on paper, and hand-carried to the interested parties, all in less time than it took an Axis commander to decrypt the received message with his Enigma machine. In this case, the mysterious and complex secret devolved into three letters in a specific order. Of course, there was a lot of effort, ingenuity, and dedication involved. Think for a moment: a machine with a randomization interval of a half-million operations could be defeated by simply knowing three letters. Automation (operations) research provided an enormous advantage for the Allies.

Even with understanding, the Enigma process had the potential of being somewhat secure – there were still those three letters in sequence. Somehow, regardless of the emphasis, it is hard for many people to perceive the importance of seemingly simple things.

It was common for some operators to use their initials, something like AGB, on all the messages they transmitted. Long before World War II it was common for radio operators to discern who was sending Morse code by subtleties in the way the letters were formed, the pacing and spacing between letters, where delays occurred, and procedural characteristics.

Colloquially, this was referred to as the sender’s “fist,” envisioning a hand huddled over a telegraph key. From the fist, an analyst could know which of the active operators was sending a message and, from records augmented with some detective work, what his initials were. This eliminated the need for repetitive analysis to unlock the three-letter code, and the message could be handled directly by Colossus or a Colossus clone.

Think about this in terms of the passwords you use to restrict access to everything from your health records to an Amazon account. There is no perfect security without adequate randomization, and no convenience in a completely random world. This is one of those areas of practice in which there is room and motivation for fresh thinking, and the need for it has been established by the lessons of history. We have come a long way from Hut 6 in Bletchley Park but remain far short of where we need to go.

Think about this in terms of discovery. Can you find the linkage between what the process does and the parameters about it that you can observe? There must be logic and order (determinism) hiding in there!

Years ago, people would ask me whether cryptography or leak detection (finding a somewhat deterministic event obscured by stochastic disturbances in a nominally deterministic system) was the harder undertaking. Without a doubt, leak detection was a lot easier, buoyed by the work, thinking, and inspiration of Alan Turing and his colleagues. That is partly because of the science developed by, and since, their work, and partly because of their inspiration.

Learn more about pipeline leak detection and related industry topics

About the Author
Edward Farmer, author and ISA Fellow, has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. He has authored three books, including the ISA book Detecting Leaks in Pipelines, plus numerous articles, and has developed four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. During his long industry career, he established EFA Technologies, Inc., a manufacturer of pipeline leak detection technology.

Connect with Ed
LinkedIn | Email

 




How to Achieve Pilot-Scale Industrial Process Control Flexibility and Agility


This post was written by Chris Marinucci, director of advanced manufacturing at O’Brien & Gere.

Pilot-scale process control has posed some of the biggest automation challenges I have faced working in an advanced manufacturing environment. Extreme turndown ratios, process modularity, and rapid and frequent data acquisition, combined with the need for high accuracy and repeatability, are hallmarks of any research and development process. The request from our client was straightforward: make the pilot lab more flexible, accurate, and productive while maintaining the Class 1, Division 2 hazardous area classification.

The existing pilot lab was a combination of rigidly constructed and permanently affixed pumps, valves, heat exchangers, tanks, and instruments. Over the years, a spaghetti-like arrangement of bypasses and spool pieces was added to suit the process testing needs.

Our solution was to break down each unit process operation into single systems and make them portable. Existing pumps, Coriolis flowmeters, and heat exchangers were mounted on wheeled carts, making it easy to mix and match the correct equipment for the pilot run. Cam-lock hoses replaced rigid stainless-steel piping, and 250-gallon totes or 55-gallon drums became the vessels of choice. The programmable logic controller (PLC) control panel itself was mounted in a console-style cabinet with a graphical interface and placed on casters.

Another challenge was how to deal with all the different kinds of instruments, measuring flow, pressure, turbidity, color, and temperature, that could not be permanently affixed to the process equipment. Each trial posed unique challenges for process control and data acquisition, with the need to mimic real constraints at a variety of manufacturing plants in North America. The instruments had to be just as modular as the process equipment and allow the technician to place them anywhere in the process. Further complicating matters were specialty instruments like turbidity and color analyzers that were large, heavy, and expensive.

With dozens of instruments and process elements that could be combined in seemingly infinite combinations, a wireless networking solution could tie all our pieces and parts together. But which wireless solution? Splitting our connectivity needs into real-time and periodic ones, we settled on wireless Ethernet for real-time applications and wireless HART for periodic applications.

Wireless Ethernet provided near real-time control and data feedback for our process equipment. The centrifugal and positive displacement pump carts used variable frequency drives networked to a wireless Ethernet radio. The Coriolis flowmeter had to provide near real-time feedback to operate either pump in a closed-loop mode, so it, too, was fitted with a wireless Ethernet radio. Lastly, our control panel was fitted with a wireless Ethernet radio. To coordinate all the wireless Ethernet devices, a wireless Ethernet access point was mounted in the center of the 2,000-square-foot lab space on a beam 20 feet above the process equipment.

For those devices that required only periodic monitoring, a wireless HART system was used. Various battery-operated pressure and temperature instruments came with HART wireless thumbs, allowing them to broadcast their data back to the HART gateway approximately every 5 seconds. Being battery powered, the periodic pressure and temperature-sensing devices could be placed anywhere in the process by simply selecting the correct hose and pipe fitting.

This left our specialty analytical instruments, as well as our real-time pressure and temperature devices, to be placed. All of these instruments found a home mounted to the back of our PLC/human-machine interface console. This allowed us to use a combination of hardwired network cables and traditional 4-20 mA signals directly to the PLC.

The HART wireless gateway and the wireless access point were hardwired together through a managed switch with CAT 6 Ethernet cable. A Modbus TCP card allowed the PLC to read the HART wireless device data through the HART gateway for the purposes of alarming and graphical display. The hardwired Ethernet network linked the supervisory control and data acquisition (SCADA) workstation to the HART wireless gateway and the wireless Ethernet gateway, giving the SCADA system access to the PLC.
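
As an illustration of that PLC-to-gateway link, here is a minimal sketch of a Modbus TCP register read using the open-source pymodbus library; the gateway address, unit ID, and register map are hypothetical stand-ins for what a gateway vendor's documentation would actually specify:

    # Minimal Modbus TCP read from a wireless HART gateway (pymodbus 3.x).
    # The IP address, unit ID, and register addresses are hypothetical.
    from pymodbus.client import ModbusTcpClient

    client = ModbusTcpClient("192.168.1.50", port=502)
    client.connect()

    # e.g., two registers holding a pressure transmitter's scaled value
    result = client.read_holding_registers(address=0, count=2, slave=1)
    if not result.isError():
        print("raw registers:", result.registers)

    client.close()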

To maintain the area classification, the pump variable frequency drive panels and PLC panels used panel purge units. Hardwired devices used intrinsically safe I/O, while hardwired network devices were explosion-proof and used rigid conduit. The wireless network system has operated without failure of service. It has proven to be every bit as reliable as a wired solution, while providing much-needed simplification and flexibility to the pilot lab process.

About the Author
Chris Marinucci is director of advanced manufacturing at O’Brien & Gere. His career has focused on taking the control and mechanical systems associated with a variety of processes used in industry and designing the control systems and graphical interfaces to make them work as a single purpose-built system. O’Brien & Gere is a certified member of the Control System Integrators Association.

Connect with Chris
LinkedIn

A version of this article also was published at InTech magazine.




AutoQuiz: What Are the Benefits of Self-Study Workbooks for Operator Training?


AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Which of the following statements describes an advantage of a self-study workbook for operator training?

a) promotes reading, which is the most preferred learning style of adult learners
b) provides interactivity through paper-and-pencil self-check questions and tests
c) allows participants to delve deeper in areas of particular interest
d) provides all participants with a common baseline of knowledge
e) none of the above

Click Here to Reveal the Answer

Answer A is incorrect, since the most preferred learning style for adult learners is interactive or hands-on learning. Answer B is important, but the same advantage can be claimed for classroom/lecture and online/computerized study methods.

Answer C is not an advantage particular to self-study workbook training, as all forms of training will spark interest in many areas, which should lead to further study.

The correct answer is D, provides all participants with a common baseline of knowledge. Self-study workbooks ensure that all students have been exposed to the same learning objectives and material, assuming successful completion of the self-study course requirements.

Reference: Nicholas Sands, P.E., CAP, and Ian Verhappen, P.Eng., CAP, A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 




How to Design Industrial Control Panels for Hazardous Locations


This post was written by Jim Dunn, product manager at Carlo Gavazzi.

Certain areas in industrial settings are classified as hazardous due to the presence of flammable gases, vapors, dusts, or fibers. This blog describes various ways to design panels so they do not become potential ignition sources, using the International Electrotechnical Commission (IEC) zone hazardous area classification system for this purpose.

Figure 1 below depicts how the IEC zone system classifies areas based on the ignitable concentrations of flammable gases or vapors. Zone 0 is the most hazardous area, followed by Zone 1, and then Zone 2. It is much more expensive, complex, and time consuming to design, fabricate, and maintain control panels to use in Zone 0 rather than Zone 1 or 2. So, the first step is to locate control panels outside of Zone 0 areas wherever possible. This can often be accomplished by moving panels just a short distance, often as little as a few feet.

Figure 2 shows a typical zone classification scheme for a process plant. Zone 0 areas are only those inside or right next to vessels or pipes, with Zone 1 areas in relatively close proximity to Zone 0, and Zone 2 areas a bit farther away.

Once all panels have been located to the least restrictive zone possible, the three main design methods for compliance can be considered:

  • rated enclosures and components
  • air-purge systems
  • intrinsic safety practices

Figure 1. The IEC zone system classifies areas according to the expected presence of hazardous atmospheres.

 

Figure 2. Control stations and panels should be located outside of Zone 0 whenever possible, which often just requires relocation to a few feet away.

Use Zone 1 and 2 components and enclosures

Using properly rated components and enclosures is the simplest method for compliance in Zone 1 and 2, although it is not suitable for Zone 0 in most cases due to the lack of availability of Zone 0-rated components. Any control panel suitable for installation in Zone 1 will also be suitable for Zone 2, so the focus of the rest of this article will be on Zone 1. The enclosure and the components should all be specified for use in Zone 1, and the panel design should also meet Zone 1 requirements.

This method works well for smaller and simpler control panels, often referred to as control stations, populated with push button, switch, and indicator light devices. However, it often does not work well for enclosures populated with more complex components, such as programmable logic controllers (PLCs), motor drives, and human-machine interfaces (HMIs), because many of these components are not available with proper ratings. For example, your company’s preferred make and model PLC might not be certified for use in Zone 1.

Preassembled Zone 1 standard control stations populated by an assortment of rated push buttons, switches, and lights are available from some suppliers (figure 3). Some suppliers will build customized control stations, so end users and integrators can tailor the panel to their exact requirements, saving time and reducing cost. Buying preassembled Zone 1 control stations, either standard or custom, eliminates design expense.

Once Zone 1 control stations or control panels are specified or designed, and then installed, maintenance is minimal, and years of trouble-free service can be expected. The cost of designing with this method of protection, as well as the cost of the components, is considerably less than the next two methods of protection for most control panels.

Figure 3. Preassembled control stations eliminate design expense for Zone 1 and 2 installations.

 

Figure 4. An air-purge system can allow installation of general-purpose automation devices and components in a Zone 1 area. Source: P+F

Air-purge systems

Another popular protection method is an air-purge system (figure 4), which is suitable for Zone 1, but not Zone 0. These systems supply air or an inert gas to the enclosure to maintain a positive internal pressure with respect to the environment, thereby preventing flammable gases or vapors from entering the enclosure. Because the enclosure is under positive pressure, general-purpose enclosures, devices, and components can be used.

Designing these systems is a bit more complex than simply specifying Zone 1-rated enclosures, devices, and components. Different zones require different types of purge systems, with costs increasing as the zone becomes more hazardous (e.g., a purge system that allows the use of general-purpose components in Zone 1 is more expensive than one that allows the use of Zone 2 components in Zone 1).

These systems are not practical for small, simple control stations. The cost of the purge system does not decrease in linear proportion to the enclosure size, but instead has a relatively high minimum price. On the other hand, purge systems do work well for larger and more complex control panels, particularly those populated by more complex components, such as PLCs, motor drives, and HMIs. This is because they allow nonrated, general-purpose components to be installed in Zone 1, a feature not available with the other two methods of protection.

Maintenance consists of making sure the air-purge system is operating as designed, a task made simpler by systems with a pressure gauge or transmitter. A gauge must be manually monitored to ensure correct pressure, while a transmitter can send a signal proportional to pressure to a remote monitoring and control system, easing maintenance.
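
As a simple illustration of the transmitter option, here is a sketch of the signal scaling and low-pressure check a monitoring system might perform; the 4-20 mA range and alarm setpoint are hypothetical values, not from the article:

    # Illustrative purge-pressure check: scale a 4-20 mA transmitter signal
    # and alarm if enclosure pressure drops below a minimum setpoint.
    SPAN_IN_H2O = (0.0, 2.0)   # 4 mA = 0.0, 20 mA = 2.0 inches of water
    MIN_PURGE = 0.25           # alarm below this enclosure pressure

    def pressure_from_ma(ma):
        lo, hi = SPAN_IN_H2O
        return lo + (ma - 4.0) * (hi - lo) / 16.0

    def purge_ok(ma):
        return pressure_from_ma(ma) >= MIN_PURGE

    print(purge_ok(12.0))   # 12 mA -> 1.0 in H2O -> True
    print(purge_ok(5.0))    # 5 mA -> 0.125 in H2O -> False (alarm)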

If a panel needs to be opened for any reason, such as to repair or replace a component, pressure and protection are lost. Consequently, the entire area must first be made safe with respect to the presence of flammable gases or vapors. This can be problematic, because it often requires a partial or full shutdown of the plant area.

Intrinsic safety

This method of protection limits the amount of electrical energy that can be released to a level insufficient to ignite flammable gases or vapors. This is the only method of protection suitable for Zone 0, and of course also works in any Zone 1 or 2 area.

The design of these systems is much more complex than the previous two protection methods, because every component must be carefully selected to make sure it is intrinsically safe. The electrical energy delivered by wiring to these components from outside Zone 0 must be limited to levels insufficient to ignite flammable gases or vapors. Intrinsic safety barriers are commonly used for this purpose.

With proper precautions regarding tools and work methods, maintenance can be performed on any component without having to ensure the area is free of flammable gases or vapors. This is possible because each component in the system cannot release electrical energy sufficient to ignite flammable gases or vapors.

Zone 1 components

In the past, most Zone 1 installations used either air-purge systems or intrinsic safety protection methods. This has changed in recent years due to the more widespread availability of devices, components, and enclosures rated for use in Zone 1.

Push buttons, switches, lights, panel meters, and other simple devices are widely available with Zone 1 ratings, and the variety of devices available for use in these areas has grown rapidly over the past few years.

The Zone 1 product offerings continue to grow as some suppliers now offer more complex components, such as PLCs, HMIs, and power supplies with this rating. It is now possible to design a complex control panel for use in Zone 1 by simply selecting the right devices and components and by following simple design guidelines.

Preassembled Zone 1 standard and custom control stations populated by an assortment of push buttons, switches, and lights are also available. This expanded array of options makes it simpler to design control stations and panels for Zone 1, while reducing upfront and maintenance costs. 

About the Author
Jim Dunn is product manager at Carlo Gavazzi; previously he worked for IDEC. An experienced industrial automation professional, he has held multiple product marketing/management positions with Japanese and European industrial automation companies, responsible for various sensor and safety products.

Connect with Jim
LinkedIn

A version of this article also was published at InTech magazine.




Does Industrial Control System Cybersecurity Need to Be Complicated?


This post was authored by Lee Neitzel, senior engineer at Wurldtech Security Technologies, a GE company, and Gabe Faifman, global cybersecurity architect & innovation manager at Schneider Electric.

Industrial control system (ICS) cybersecurity can be intimidating. The stakes are high, and the technology is sophisticated. Ask any industrial cybersecurity expert about it, and you are likely to hear that ICS cybersecurity is different from information technology (IT) cybersecurity, that security risk assessments are indispensable, and that cybersecurity discussions are littered with highly specialized terminology. So far, that leaves us where we started, with a seemingly complex problem.

But, does ICS cybersecurity really have to be so complicated? In theory, yes, but in practice, no. From a practical perspective, you can begin building in a layered ICS security approach, often called defense-in-depth, by answering a few basic questions:

  • Where can the attacker gain entry or break into your ICS? Adding security to protect entry points is a first layer of defense.
  • Once an attacker gains entry, what will the attacker do next? Providing barriers to absorb or deflect different types of attacks adds a second layer of defense.
  • What are the ultimate objectives of an attack? Hardening the targeted elements of the system provides a third layer of defense.

This blog explores possible answers to these questions and describes common defense-in-depth mechanisms for ICSs. The selection of appropriate mechanisms for an ICS should be based on a thorough security assessment.

The primary goal of defense-in-depth is to restrict access to the ICS in terms of who has access and what they are permitted to do. Access controls include physical, procedural, and technical means. While perfect security is seldom achieved, the intention is to make it so difficult for attackers that they never try or they abandon an attack after becoming frustrated with attempting to penetrate multiple layers of defense.

Of primary importance for success is that both the organization and its people accept the need for a defense-in-depth strategy. Without conscious efforts to keep systems secure, technical means by themselves will be inadequate. It is much easier to steal a car with its doors unlocked and the keys in the ignition than one that is locked with the keys nowhere to be found.

Gaining entry to an ICS

The first step in developing an effective ICS defense-in-depth strategy is to determine where the attacker may be able to gain entry to the system. Entry points are interfaces of devices in the ICS that are accessible to attackers, such as communications interfaces and USB ports. Attackers can not only use these entry points to break into an ICS, but they can also use them to attempt denial-of-service (DoS) attacks on the ICS. DoS attacks attempt to reduce availability of the system or its components. Common DoS attacks attempt to crash devices or their software applications. More sophisticated DoS attacks may attempt to shut down equipment or affect physical processes.

The most common entry points in an ICS are human-machine interfaces (HMIs), ICS network interfaces, and device interfaces (figure 1). Each is discussed below.

Figure 1. HMI entry points

Human-machine interface entry points are devices used within the ICS that have a user interface. They include operator consoles, engineering workstations, handheld devices, laptops, tablets, and smartphones. They are arguably the most commonly used entry points, because every successful login, even by authorized personnel, provides the potential for an attack. Common examples include engineering, operator, and maintenance HMIs.

For an attack through an HMI to succeed, a login session with the ICS must first be established. For authorized users, logging in is often part of normal operation. If unauthorized users can gain physical access to an HMI, they may be able to look over a user’s shoulder to observe displays and keystrokes, or even record them with a cellphone. The attacker may then use stolen credentials to log in to an unattended workstation, or alternatively, may be able to hijack a user session if it is left unattended.

Defensive measures: The first layer of defense for HMIs is to prevent attackers from viewing HMI displays and to keep them from using HMIs to access the system. Protections should include both physical security controls and user authentication mechanisms.

Physical security controls create restricted access areas for HMIs where physical access is limited to authorized users, and physical actions can be monitored and recorded for suspicious and malicious activity. Physical access controls are best supplemented by an active security awareness program that describes what to do, such as challenging unknown individuals and locking the screen when leaving an HMI unattended, and what not to do, such as using unauthorized USB memory sticks.

User authentication mechanisms verify the user’s identity. In general, users are represented by accounts that associate user identities with roles, privileges, and permissions. User identities are verified through the use of credentials the user provides to authentication mechanisms.

Credentials usually consist of a user identifier, such as a name, number, or email address, and one or more secrets known or possessed by the user, such as a password, personal identification number (PIN), smart card, retinal scan, or fingerprint. All successful and unsuccessful credential validations, such as login attempts, should be logged. Credentials should be carefully managed using industry best practices to protect against disclosure and unauthorized modification.
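
As one example of such a best practice (an illustration, not a description of any particular ICS product), password credentials can be stored as salted, deliberately slow hashes and verified with a constant-time comparison:

    # Salted, slow password hashing with constant-time verification,
    # using only the Python standard library. Parameters are illustrative.
    import hashlib, hmac, os

    def hash_password(password, salt=None):
        salt = salt or os.urandom(16)                  # unique salt per account
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify(password, salt, expected):
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, expected)   # resists timing attacks

    salt, stored = hash_password("correct horse battery staple")
    print(verify("correct horse battery staple", salt, stored))   # True
    print(verify("wrong guess", salt, stored))                    # False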

In an ICS environment, it is not uncommon for the operating system (OS) login performed at startup (boot) of an HMI workstation to persist across operator shift changes. In these cases, all operators use the same OS session to eliminate the need for closing and reopening control system applications and data viewing screens that would occur if an OS logout and login were required.

The identity of the operator, not the identity of the logged-on OS user, should be used when authorizing and logging control system actions. Therefore, a separate login and authorization scheme for control system users should be provided.

It is highly recommended that control system users not be allowed to log in to perform control actions when the logged-on OS user has elevated privileges and permissions, such as those of the administrator or power user. This helps prevent lower-privileged users, such as operators and engineers, as well as desktop applications that have been infected, from installing software, modifying installed software, changing system settings, or otherwise manipulating protected OS resources.

It is also highly recommended that the built-in OS administrator account be removed or renamed if removal is not possible. Instead of using this built-in account, administrators should be given unique user names without any indication of their administrator status and be assigned to administrator groups. This will make it more difficult for attackers to target well-known administrator accounts.

When selecting user-authentication mechanisms, it is important to provide authentication for all logins. Guest and anonymous logins and user logins to service accounts for server applications should not be allowed. When passwords are used, password policies should be configured for adequate password length and complexity, periodic password changes, and rules against password reuse.

Multifactor authentication, such as smart cards and a PIN, should be used for all HMIs that are in open areas or that can connect from a remote location. For remote access, a secure channel, such as a virtual private network, and an intervening perimeter security device (e.g., firewall) should be used.

Mutual authentication should be used for website and server application logins. Mutual authentication requires websites and servers to additionally provide their identity to the user to protect the user from giving login credentials to an attacker pretending to be the desired website or server. Kerberos is used in Microsoft Windows environments for mutual authentication, while websites often rely on public key infrastructure (PKI) technology. Note that although PKI supports the use of self-signed certificates, their use is discouraged because their authenticity cannot be easily verified.
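
A minimal sketch of mutual authentication at the transport layer, assuming Python's standard ssl module; the certificate files and host name are hypothetical. The client verifies the server against a trusted CA, which is why self-signed certificates are discouraged, and presents its own certificate in return:

    # Mutual (two-way) TLS sketch: the client authenticates the server
    # against a trusted CA and supplies its own certificate. File paths
    # and the host name are hypothetical.
    import socket, ssl

    ctx = ssl.create_default_context(cafile="ics-root-ca.pem")   # trust anchor
    ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")

    with socket.create_connection(("historian.ics.local", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="historian.ics.local") as tls:
            print("server identity verified:", tls.getpeercert()["subject"])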

Finally, ICS OS accounts should be defined and managed independently from other OS accounts, such as plant IT accounts. This is typically accomplished by the ICS having its own user account directory, which is more commonly referred to as a domain (after the Microsoft Windows domain). Further, trusts should be prohibited between ICS domains and non-ICS domains, including those used for demilitarized zones (DMZs). Trusts between domains generally allow users logged into one domain to access a second domain without supplying credentials for the second domain. Plant system users who need to access the ICS should be given their own ICS OS account. All other plant system users should not have access to the ICS.

Perimeter security devices

Perimeter security devices are network security devices used to segment ICS networks from external networks. Perimeter security devices include firewalls and routers (e.g., those with access control lists), and mediate communications between external devices and the ICS. Figure 2 illustrates the use of perimeter security devices.

Figure 2. Perimeter security devices

Attackers can use them to gain entry to the ICS by discovering and exploiting weaknesses in the rules that forward authorized messages to the ICS and block unauthorized ones. Entry is also possible if the attacker is able to gain configuration access to a perimeter security device and change its rules.

Defensive measures: Perimeter security devices designed specifically for industrial applications should be used to protect the ICS from external access. Network security devices designed for industrial networks provide visibility and defense against ICS-related protocols and traffic, which are significantly different from traditional IT traffic.

Perimeter security devices should be physically protected from tampering and from having unauthorized devices connected to them. This is often done by placing them in locked closets or locked cabinets and using conduit to protect network cabling.

Perimeter security devices should be configured to allow only communications with devices that are essential to the operation of the ICS. This configuration should explicitly authorize or whitelist inbound and outbound communication paths in terms of their source and destination network addresses, application end points (e.g., TCP/UDP port), protocols, and allowed content. Role-based access controls should be used to allow configuration by authorized network administrators and review of the configuration by authorized maintenance personnel.
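
The whitelisting idea can be sketched in a few lines: deny by default, and allow only explicitly configured paths. The addresses, ports, and protocols below are hypothetical examples, not a recommended rule set:

    # Deny-by-default whitelist matching in miniature. All values hypothetical.
    ALLOWED = {
        # (source, destination, protocol, destination port)
        ("10.1.1.20", "10.2.0.5", "tcp", 44818),   # e.g., EtherNet/IP to a PLC
        ("10.1.1.21", "10.2.0.9", "tcp", 502),     # e.g., Modbus TCP to a gateway
    }

    def permit(src, dst, proto, port):
        allowed = (src, dst, proto, port) in ALLOWED
        if not allowed:
            print(f"rejected and logged: {src} -> {dst} {proto}/{port}")
        return allowed

    permit("10.1.1.20", "10.2.0.5", "tcp", 44818)   # True
    permit("10.9.9.9", "10.2.0.5", "tcp", 502)      # False, logged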

Communication paths that are not authorized by the configuration should be rejected and logged, or at least logged if rejection is not practical. When possible, configure intrusion detection capabilities of perimeter security devices to inspect for known attacks and suspicious traffic. Additionally, traffic flows to and from an ICS are predictable, and deviations often are indicative of an attack. Therefore, traffic flows should also be inspected for potential attacks. Lack of an industrial perimeter security device or deficient configurations can allow unauthorized communications to enter and leave the ICS.

Perimeter security devices should be connected externally only to DMZs that reside between perimeter security devices and plant networks, as shown in figure 3.

 

Figure 3. Perimeter security devices should be connected externally only to DMZs that reside between perimeter security devices and plant networks.

DMZs are buffers that protect ICSs from being directly accessed by external systems. Perimeter security devices should be configured to allow only DMZ devices to communicate with the ICS workstation networks. All external devices, including wireless workstations (e.g., laptops), should have their communications with the ICS mediated by the DMZ by terminating them in the DMZ, validating them, and then reestablishing them between the DMZ and the ICS.

In addition, remote desktop access to the ICS from outside the plant should require a pair of remote access connections, one from the remote device to the DMZ, and the other from the DMZ to the ICS, mediated by an ICS perimeter security device. If the remote device is external to the plant, a VPN connection to the plant network should also be used. The perimeter security device should be configured for each remote access connection and should also specify when and for how long remote access is allowed.

Finally, the ability to configure and maintain perimeter security must be controlled and must be performed only from authorized HMIs within the ICS. External HMI access to perimeter security devices should not be allowed. All configuration and maintenance should be controlled by change management procedures, and they should be performed only by authorized network administrators. In addition, backup copies of the configuration should be stored in a secure location.

Administrative connections for configuration and maintenance should use mutual authentication and role-based access controls that limit network administrators to only those operations that they need, such as viewing, configuring, upgrading software, and installing patches. Encryption should be used for configuration and maintenance sessions, and cryptographic methods, such as encryption or cryptographic hashes, should be used for storing sensitive data, such as user credentials, in the perimeter security device.

Local network devices

Local network devices, as shown in figure 4, are network devices, such as routers, switches, and wireless access points, that connect Ethernet devices, including workstations, servers, and controllers/programmable logic controllers (PLCs), to the ICS network. Although local network devices support connection to a limited number of Ethernet devices, they can be interconnected to each other to expand the size of the network. Routers can be used to divide the ICS network into separate Ethernet segments if necessary.

Figure 4. Local network devices connect Ethernet devices to the ICS network.

Defensive measures: Like perimeter security devices, local network devices and their cabling should be physically protected from tampering and from having unauthorized devices connected to them. Apply the recommendations given above for perimeter security device configuration and maintenance to local network devices. Also, wireless Ethernet access points should only be connected to networks external to the DMZ, and they should be configured to enforce security that protects against eavesdropping and having unauthorized wireless devices connected to them.

ICS networks should have their own IP address spaces that are separate from DMZ and plant networks to further isolate them. Static, private IP addresses should be assigned to devices, because automatic assignment mechanisms, such as DHCP, have been shown to be susceptible to exploit by attackers. The ICS network should be scanned periodically to look for unauthorized IP addresses and unauthorized application end points (e.g., TCP/UDP ports). Because scans have the potential to be disruptive, care must be taken when using them.
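
A minimal sketch of such a periodic check, assuming Python's standard socket module; the host, port range, and authorized baseline are hypothetical, and the deliberate pacing reflects the caution above about disruptive scans:

    # Probe a device's TCP ports slowly and flag anything open that is
    # not in the authorized baseline. Host and baseline are hypothetical.
    import socket, time

    AUTHORIZED = {502}   # e.g., only Modbus TCP is expected on this device

    def open_ports(host, ports, delay=0.5):
        found = set()
        for p in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(1.0)
                if s.connect_ex((host, p)) == 0:   # 0 means the port accepted
                    found.add(p)
            time.sleep(delay)                      # pace the scan deliberately
        return found

    unexpected = open_ports("10.2.0.9", range(1, 1025)) - AUTHORIZED
    if unexpected:
        print("unauthorized end points:", sorted(unexpected))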

In addition, only authorized network administrators should be allowed to connect Ethernet devices to the network. This should be supplemented with role-based access controls that allow authorized network administrators to configure switches to allow connectivity to authorized devices and deny connectivity to all others. Connectivity is often controlled using IEEE 802.1X switch port authentication, or by other mechanisms that support specifying the switch ports that are enabled, those that are disabled, and the devices/users that are authorized to communicate through these ports. In addition, maintenance personnel should periodically verify that only authorized devices are connected, and role-based access controls should be used to allow them to review switch configurations to ensure configurations are correct.

Device interfaces

Device interfaces, as shown in figure 5, provide connectivity to ICS devices through their Ethernet ports and application end points, peripheral interfaces, and field I/O modules. All of these interfaces, except Ethernet ports and application end points, require the attacker to have direct physical access to the device, or indirect access via a legitimate user, for example, by giving the user an infected USB memory device.

Figure 5. Device interfaces provide connectivity to ICS devices through their Ethernet ports and application end points, peripheral interfaces, and field I/O modules.

Ethernet ports

Ethernet ports provide access to Ethernet networks at the transmit/receive level. They manage the transfer of messages between application end points and the network. They include both wired and wireless connections to the network.

Defensive measures: Attacks against Ethernet ports usually try to exhaust the buffer space or processing capabilities of the network interface card or its associated communications software. These attacks may be intentional or unintentional, such as network storms or network scans that are configured to run too rapidly.

To protect against these attacks, physical access to these devices and ports should be controlled, and unused ports should be disabled or locked. For devices with both wired and wireless Ethernet ports, the wireless Ethernet ports should be removed or disabled. In addition, firewalls, either internal or external, should be used and configured to allow only authorized devices to communicate with the device.

In addition, devices should be required to pass a recognized communication robustness certification, such as Achilles Communication Certification. These certifications use a battery of tests to verify that network ports and their communications software have been implemented to withstand high traffic rates and malformed packets.

Application end points

Application end points are addressable access points within a device, such as TCP/UDP ports, that software applications use to communicate over the network. There are two basic types of software applications that communicate over the network: desktop applications and server applications.

Desktop applications are software programs with a user interface. They are typically started by selecting an item on the main menu, such as the Windows start menu, or on the menu of another desktop application. They often operate as clients, such as OPC clients and Web browsers, that issue requests to server applications. They typically run under the account of the logged-on OS user, but they may be able to change their account privileges by supplying the credentials of a different user.

Server applications are standalone programs, often called services or daemons, that do not interact directly with user interface devices (e.g., keyboard, monitor). They typically are configured to start either manually or automatically, and to run under an account configured for their execution. However, some can be configured to start dynamically in response to a client program and to run under their configured account, the account of the client, or an account for which credentials are supplied by the client. And, like desktop applications, they may also be able to dynamically change their account privileges.

Attacks against application end points attempt first to establish communications with the application and then to use its capabilities. Alternatively, attacks may attempt to find vulnerabilities in the application and then take advantage of (exploit) these vulnerabilities to crash the application or to insert code into it. Inserting code allows the attacker to take control of the application and potentially the platform on which it runs.

Defensive measures: Like Ethernet ports, access to application end points should be limited. Only approved end points should be enabled and accessible. Software applications that are installed but not approved should be uninstalled, disabled, or their end points blocked from receiving messages from the network.

Communications sent to an application end point should employ some level of authentication. End points or their applications should verify that received messages are sent by authorized senders. This may be accomplished with user authentication mechanisms or by using firewalls configured to limit access to the application to authorized senders. For example, industrial protocols often do not have authentication built into them, so using a firewall between their end points provides a base level of authentication using the addresses of the communicating end points.

When user authentication mechanisms are used, the credentials passed should identify a user account with permission to access the application and its data or database. Further, for communications with devices external to the ICS or that are in public areas, multifactor authentication is desirable to protect against attackers who have infected these devices or who are attempting to use stolen or disclosed passwords.

In addition to authentication of the user, some means of protecting the communications from disclosure or modification, including man-in-the-middle attacks, should be used. This is usually accomplished using digital signatures or encryption.

Finally, well-known application end point identifiers, such as TCP Port 23 for Telnet, should be changed if possible. Using standard identifiers gives the attacker a significant head start. Attackers who find open TCP/UDP ports on a device will generally not know what protocol is being used for nonstandard ports and will then have to expend additional effort to determine the protocol.

Peripheral ports

Peripheral ports are device interfaces used to connect peripheral devices, such as CDs/DVDs, printers, user interface devices (e.g., keyboard, mouse, displays), and serial devices, such as handheld maintenance and diagnostic devices. They pose the risk of being infected with malicious software, often referred to as malware, that can be used to attack or infect the device to which they connect.

With the advent of the USB protocol, the range of serial devices increased to include data devices, such as USB sticks, and more recently, fully programmable devices, such as smartphones and tablets. Because smartphones and tablets support external communications via their cellphone capabilities, they are an alternate avenue of attack for remote attackers. This further expands the sophistication of attacks that can be conducted through these interfaces, putting them on par with attacks conducted through network interfaces.

Handheld devices historically have been special-purpose serial port devices used for troubleshooting, diagnostics, maintenance, and configuration. As technology has advanced, handheld devices are now using wireless Ethernet and cellphone technology. All of these technologies provide yet another avenue of attack for assailants who have the use of a handheld device at the site or are able to infect one that will be connected to a device.

Defensive measures: Protection of peripheral ports should be a combination of technology, training, and procedures. Personnel should be trained to be aware of the dangers posed by these interfaces and should be instructed accordingly, with the instruction supplemented by written policies and procedures as necessary.

For example, policies and procedures that prohibit phones and tablets from being connected to ICS devices should be enforced. Policies and procedures should also require all removable media and portable devices, such as USB memory sticks and diagnostics/maintenance devices, to be approved before being used. Additionally, policies and procedures should require approved removable media to be used only within the ICS and to be scanned and inspected before use. Finally, supply-chain requirements should be applied to all removable media shipped to the site and include the use of tamperproof seals and digital signatures.

Coupled with these policies and procedures, technical measures should be employed for disabling all peripheral ports when they are not needed. They should be enabled only for the period of time when they are needed.

Ports used for memory devices, such as USB ports and DVD/CD drives, should be configured to show hidden files contained on memory devices. They should not allow files on them to be automatically opened or executed. This will allow for inspection of their contents before use.


I/O ports

I/O ports connect field I/O devices to PLCs and controllers using protocols such as HART, FOUNDATION Fieldbus, Profibus, DeviceNet, and Modbus. The attack surface for I/O ports like these is relatively small, but it does exist.

Defensive measures: Physical access controls should be used for I/O devices and their wiring. Examples include access restrictions for personnel, locked marshalling cabinets, and conduit for wiring that runs through easily accessible areas. For wireless I/O, standards with built-in security measures, such as WirelessHART and ISA-100.11a, should be used. Additionally, surveillance cameras can be used to monitor and record all physical access to these devices.

Once inside, the attack continues

Understanding how various types of attacks are conducted after an attacker gains access is critical to defending an ICS. This understanding will help you to develop customized scenarios of how your ICS can be attacked, often referred to as threat modeling, and add appropriate defense-in-depth safeguards.

Once access has been gained to the system, the attacker generally attempts to misuse, abuse, or corrupt the system through a combination of user interfaces, communications protocols, and untested software (e.g., games and malware). One of the primary objectives of these attacks is to escalate OS privileges to give the attacker control of the device under attack. An attacker who gains control has free rein to carry out the attack.

Misuse or abuse of the system can affect it at two levels: compromising OS resources, such as files, registry entries, and OS user account information, and compromising control system resources, such as set points, alarm limits, historical values, and control system account information. Of considerable concern is an attacker who gains access to OS resources that contain control system data, such as configuration files and the control system's access control lists. In these cases, the attacker may be able to compromise the control system by tampering with data managed by the OS, even without control system privileges and permissions.

An example attack begins with a logged-on user copying a file infected with malware through the network, from a USB memory device, or from a CD or DVD. Alternatively, the attacker may find a vulnerability in a communications protocol and exploit it to infect the application with the malware. The purpose of this malware may be to record keystrokes or to prompt users for passwords, and then send them back to the attacker to use to log in to the system or pivot/hop to another computer in the system.

User interface attacks

User interface attacks generally target command-line interfaces and desktop applications to access OS resources or to manipulate the control system and its data. They generally follow an HMI entry point attack or an attack through a remote desktop communications protocol (see below) that results in a successful login. It is also possible for the attacker to gain access to the user interface through the insertion of malware into command-line interfaces or desktop applications.

Attackers may be authorized or unauthorized users. Attacks by authorized users can be unintentional and are often regarded as “user mistakes,” but they can also be intentional and regarded as malicious. Whether conducted by an authorized or unauthorized user, the result of a user interface attack allows the attacker to manipulate the system through menu items on user interfaces and through commands using a command prompt.

Examples of compromises through these interfaces include command execution, which lets the attacker directly modify the operation of elements of the control system. Data-related compromises include disclosure, modification, addition, or deletion of data and files, including configuration data, calibration data, run-time parameters, alarm limits, and logs. Attacks may also be able to execute or copy untested software to the system or send commands and downloads to controllers and other devices.

Defensive measures: One of the primary defenses against user interface attacks is to enforce least privilege. Least privilege is the principle of giving users only the privileges and permissions they need. This applies to both their OS and their control system accounts. Users who are able to access the control system should have their own control system account configured for least privilege. Without their own account, their control system actions cannot be traced back to them. Where possible, role-based access controls should be used to simplify management of user privileges and permissions, including the reduction of account configuration errors.
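
A role-based scheme can be reduced to a small lookup structure. The Python sketch below, with illustrative role and permission names, shows the core check: a user receives only the permissions of an assigned role, and anything not explicitly granted is denied.

    # Minimal role-based access control sketch (all names are illustrative).
    ROLE_PERMISSIONS = {
        "operator": {"read_values", "ack_alarms"},
        "engineer": {"read_values", "ack_alarms", "modify_setpoints"},
        "admin":    {"read_values", "manage_accounts", "install_patches"},
    }

    USER_ROLES = {"jsmith": "operator", "akhan": "engineer"}

    def authorize(user: str, action: str) -> bool:
        """Grant an action only if the user's assigned role includes it."""
        role = USER_ROLES.get(user)
        return role is not None and action in ROLE_PERMISSIONS.get(role, set())

    assert authorize("jsmith", "ack_alarms")
    assert not authorize("jsmith", "modify_setpoints")  # least privilege in action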

Additionally, OS and control system administrator accounts should be closely controlled. They should be granted only to a limited few who perform administrative tasks, such as account management and the installation of new or replacement devices, software, and patches. When possible, control system administrator accounts should be configured with only the specific OS administrator privileges and permissions that they need, rather than making them OS administrators or assigning them to the OS administrators group.

To further protect administrator privileges in Microsoft Windows systems, enable Microsoft Windows User Account Control (UAC). Of course, this feature should be tested for compatibility with the control system during development and before use at a site. UAC causes all user sessions, including those for administrators, to run as standard users until administrative privileges are actually needed.

UAC can be configured to prompt the user, including administrators, for administrator credentials when this occurs. While this seems like an unnecessary burden, it is recommended for developers, integrators, and end users for two reasons. First, it is much more secure, and second, it raises security awareness by telling the user when privileged functions are being used, something that is sorely missing in today’s operations.

In certain cases, authorization to perform critical control system operations should use an authorization scheme that requires two or more users to work together to complete an action critical to the ICS. Examples include (1) having one user turn a key switch that enables another user to change the configuration of a safety system, (2) having to schedule a remote access connection or secure shell (SSH) session through a security administrator, (3) requiring two operators to approve batch step changes, and (4) requiring a shift supervisor to approve operator changes to sensitive set points or alarm limits.
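
Example (4) above can be sketched in a few lines: a change is staged by one user and takes effect only when a different user approves it. The class and names below are hypothetical; a real system would tie both identities to authenticated accounts and log every step.

    # Two-person rule sketch: a staged change requires a second, distinct
    # approver before it is applied. Identifiers are illustrative.
    class DualApproval:
        def __init__(self):
            self.pending = {}  # change_id -> (requester, description)

        def request(self, change_id, requester, description):
            self.pending[change_id] = (requester, description)

        def approve(self, change_id, approver):
            requester, description = self.pending[change_id]
            if approver == requester:
                raise PermissionError("approver must differ from requester")
            del self.pending[change_id]
            return f"applied: {description} (by {requester}, approved by {approver})"

    wf = DualApproval()
    wf.request("chg-101", "operator1", "raise alarm limit TT-101 to 95 C")
    print(wf.approve("chg-101", "supervisor1"))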

In addition, some display screens are often used in ways for which they were not intended. For example, many applications display a built-in “File Open” dialog box to allow a user to browse and select a file to open. However, this dialog box is often used as the basis for all file operations, including “File Save” and “File Delete.” As a result, it is not uncommon for a “File Open” dialog box to also allow the user to delete files, bypassing normal file delete user interfaces and safeguards. Therefore, the use of such display screen capabilities should be evaluated and approved for the control system.

Finally, documented procedures and training should be available to provide an authoritative source of information for directing user operations. Without them, users may make uninformed decisions that could lead to misuse or abuse of the system.

Communications protocol attacks

Communications protocol attacks target OS and ICS communications protocols and their applications. These attacks can be attempted after the attacker has successfully gained access to a device entry point or to the network or its media. Gaining access to the network or its media (cabling or wireless) may let the attacker listen to and potentially modify communications while in transit (“on the wire”). It may also allow the attacker to construct and send protocol messages to the listening OS or ICS application.

The attacker in a communications protocol attack is often malicious software, which normally attempts to use features of the protocol to manipulate the ICS application, cause it to fail, or break into it. It may use protocol features to send data to the application, to read data from it, and to cause it to perform some specified operation. In a sense, protocol features provide the software equivalent of the user interfaces just described.

A communications protocol break-in occurs when the attacker detects a weakness (vulnerability) in the application or the protocol and then exploits that vulnerability to cause the application to accept and run attacker software. Vulnerabilities are usually caused by software bugs or deficiencies in the system design and implementation. The attacker detects them by learning the version numbers of the various software components running in the ICS.

OPC provides an example of how communications protocols can lead to an attack. OPC Classic (DA, HDA, A&E) is based on Microsoft DCOM, which assigns an application end point address (a TCP port) to a client connection during connection establishment. The TCP port is taken from a range of dynamically assigned ports managed by the OS. This requires this range of ports to be opened in firewalls. Opening access to these ports allows them to be attacked through the firewall, whether or not they were actually assigned to an OPC server.

Conversely, OPC UA uses a single TCP port, which alleviates this DCOM problem. However, its authentication is based on self-signed certificates that have their own security issues. Further, these certificates are often not integrated with Windows authentication. OPC UA servers are configured to recognize certificates assigned to client applications, but they have no means of recognizing or authenticating the OS or control system user, and thus no ability to authorize actions for individual users or trace their actions back to them.

Defensive measures: The principle of least privilege, recommended above for ICS desktop and service applications, should also be applied. As a general rule, ICS desktop applications and services with network access should not have elevated OS privileges and permissions. This will prevent attackers from having elevated privileges and permissions should they gain access to one of these applications.

Privileges and permissions for desktop applications are inherited from the user account used to start them (see “user interface attacks” above). To further protect desktop applications that support communications access, including remote logins, access control lists should be explicitly configured to identify the users who are authorized to connect to them. In addition, remote login capabilities should be disabled and enabled only when needed.

Services, on the other hand, are configured with a specific account that should have only the privileges and permissions needed by the service. Further, service accounts should not be permitted interactive logins. This prevents an attacker from logging in to the system using service credentials.

Also, unlike desktop applications, service applications are not assigned control system privileges when they run. Therefore, it is highly recommended that service applications use the control system privileges of the logged-on control system user when a request originates from a desktop application in the same workstation. When the request comes from another device, the service should default to have a minimum set of control system privileges or it should have some method of obtaining the identity of the requesting control system user. Options include performing a control system user login, explicitly passing the control system user identity in a secure manner, or mapping the OS user to a corresponding control system user. In all cases, the assignment of control system privileges to the service should be logged.

Again, the example of an OPC server can be used to illustrate this point. If the OPC client desktop application is on the same workstation as the OPC server, then the OPC server should use a control system user account that has been mapped to the logged-on control system user to authorize requested operations. When the OPC client is on another workstation, then the OPC server should have a means of determining the control system credentials of the user making the request.
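
One possible shape for the user-mapping option is sketched below in Python: the service resolves the authenticated OS identity to a control system account, falls back to a minimal privilege set when no mapping exists, and logs every assignment. All accounts, names, and privilege sets are illustrative.

    # Resolve an authenticated OS identity to a control system account
    # before authorizing a service request (all names are illustrative).
    OS_TO_CONTROL_USER = {
        r"PLANT\jsmith": "cs_operator_jsmith",
        r"PLANT\akhan":  "cs_engineer_akhan",
    }

    CS_PRIVILEGES = {
        "cs_operator_jsmith": {"read_values", "ack_alarms"},
        "cs_engineer_akhan":  {"read_values", "ack_alarms", "modify_setpoints"},
    }

    DEFAULT_PRIVILEGES = {"read_values"}  # minimum rights for unmapped callers

    def handle_request(os_user, action, audit_log):
        """Authorize one action, defaulting to minimal rights, and log the assignment."""
        cs_user = OS_TO_CONTROL_USER.get(os_user)
        privileges = CS_PRIVILEGES.get(cs_user, DEFAULT_PRIVILEGES)
        allowed = action in privileges
        audit_log.append((os_user, cs_user, action, allowed))
        return allowed

    log = []
    assert handle_request(r"PLANT\jsmith", "ack_alarms", log)
    assert not handle_request(r"CORP\visitor", "modify_setpoints", log)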

Equally important, applications should be developed to validate input parameters for length and content. This is a primary method of preventing or minimizing exploitable vulnerabilities. In addition to normal testing, fuzz testing should be used to verify the comprehensiveness of parameter validation. Fuzz tests send a variety of malformed messages, as well as messages with unexpected parameter values and lengths, to try to cause improper behavior in the end point or the application. For website applications, web page security best practices are well documented and should be used.
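
A toy fuzzing loop in the spirit of that description might look like the following Python sketch. The target address and payload shapes are placeholders; serious fuzzing should be done in a lab against a non-production instance, preferably with a purpose-built, coverage-guided tool rather than random bytes.

    import random
    import socket

    # Send malformed and oversized payloads to a test instance of a
    # service and watch for crashes or hangs. Address is a placeholder.
    TARGET = ("127.0.0.1", 20023)

    def random_payload() -> bytes:
        length = random.choice([0, 1, 255, 256, 65535])
        return bytes(random.randrange(256) for _ in range(length))

    for i in range(100):
        try:
            with socket.create_connection(TARGET, timeout=2) as s:
                s.sendall(random_payload())
                s.recv(1024)  # any reply (or clean close) counts as survival
        except (ConnectionError, socket.timeout) as exc:
            print(f"iteration {i}: target misbehaved or stopped responding: {exc}")
            break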

A second line of defense against vulnerabilities is assessing and resolving them using formal incident and vulnerability handling processes. If the resolution results in a patch, the patch should be developed, tested, and installed with all due diligence. In addition, OS and other third-party software patches should be treated as untested software and thoroughly tested for compatibility with the ICS.

A third line of defense is encryption and digital signatures. Encryption is used to prevent disclosure of information, and should be employed whenever confidential data is being transmitted. In addition, communications used to write data/commands to an application, whether from an external or internal location, should be digitally signed or otherwise protected against tampering.

Untested code attacks

Untested code attacks occur when the attacker loads or inserts software that has not been tested with the system. Untested software is generally transferred to a system through user interfaces and communications protocols, and also by the exploitation of vulnerabilities.

Untested software, whether legitimate software or malware, has the potential to negatively affect the operation of the control system. Legitimate software includes games, graphics and drawing tools, analysis software, and in general, commercially available software. Malware includes a wide variety of malicious software. Its name is usually based on the behavior of the software, such as virus, worm, bot, rootkit, and spyware.

Untested legitimate software is dangerous because it may degrade performance by consuming too much memory or requiring too much processor time. It may also compete for other resources and cause the system to fail. Malicious software, on the other hand, intentionally tries to disrupt operation of the device through a variety of methods: deleting, renaming, encrypting, or changing files, and changing configuration or operational settings. Disruptions can range from crashing the device to simply creating an annoyance for the user.

Sophisticated malware may elevate its privileges, scan the network for other devices, monitor system data, record keystrokes (including user names and passwords), and also attempt to hide itself from detection. Further, it may run continuously or put itself to sleep awaiting a specific event or time to trigger its operation. It may establish a connection back to the attacker, if one does not already exist, and give the attacker the ability to direct its operation. This may include infecting other devices and penetrating deeper into the system.

These are only a few of the consequences of falling victim to untested software attacks. The unfortunate part is that malware attacks can, and often do, happen without involving any of the control system software. That is, the insertion of malware is often accomplished by using commercially available hacker tools that exploit (take advantage of) known weaknesses in the OS or other commonly used software, such as drivers and applications. Of course, attackers may also be able to exploit vulnerabilities in control system applications that have network interfaces or other accessible interfaces, such as dynamic link library (DLL) interfaces and inter-process communications interfaces.

Hacker tool vendors pay hackers a premium for finding vulnerabilities and for providing malicious code to exploit them. They sell these tools to hackers who can use them to infect the device being attacked. The rationale is that software suppliers can buy these tools to harden their software against attack. However, malicious hackers can also discover vulnerabilities and exploit them to attack control systems.

A typical scenario for a malware attack is as follows. Once the attacker discovers a vulnerability, he or she sends a message that contains an infection (software instructions) to the device being attacked, and the vulnerability causes the instructions to be executed. The malware instructions then create a message that is sent back to the attacker asking for more code to be returned to the target program. In this way, the malware increases its capabilities and eventually graduates to a full-blown program that may even save itself to disk and restart itself whenever the computer reboots.

Another approach is through a legitimately established connection to an infected or malicious website. Phishing is one technique used to trick (spoof) a user into connecting to such a site. Others involve changing network settings so that legitimate connection requests (e.g., HTTP or TCP/UDP) are redirected to malicious sites. Once the user is connected, the malicious site can infect the user's computer by, for example, exploiting vulnerabilities in the client application or serving a web page that contains malicious code. In most cases, the user will be unaware that his or her computer has been infected.

A third method is to copy an infected file to a computer, either from portable media or from another computer, or by downloading it from a website. Common examples of this type of attack include malicious users who intentionally copy infected files, and nonmalicious users who copy files without knowing they are infected. This type of attack may also occur when a user inserts an infected USB memory device into a workstation and an infected file is automatically copied. More sophisticated attacks hide these files from view to make the user think they are not present on the USB device.

The types of malware that can infect a computer are limited only by the imagination of the malware programmer. For this reason, it is essential to protect the system from the mechanisms used to deliver malware.

Defensive measures: To help protect against malware, anti-malware mechanisms, including intrusion detection and prevention systems, can be added to the network to look for known malware code signatures (byte patterns) before they reach the target device. Anti-virus software should also be employed on devices that support it, to look for malware signatures and quarantine infected files. Also, application whitelisting can be used to allow only approved software to run; it protects against untested software that has been copied to disk.

However, anti-virus and whitelisting software should be thoroughly tested to ensure they do not interfere with the operation, performance, or safety of the system. This testing should include stress testing and testing of a wide variety of operational scenarios. Operational scenario testing protects against surprises related to the use of this software. Whitelisting, in particular, has shown the need for extensive operational scenario testing: whitelisting software sometimes degrades system performance during unusual circumstances, or fails to authorize execution of an infrequently used application that was never added to the whitelist. Both can be serious problems.

Digital signatures for approved software executable and DLL files should be used to complement anti-virus and whitelisting. Digital signatures are secure checksums that detect when a file has been changed. They can be used not only to notify the user when changes to software and data files are detected, but also to prevent infected software files from being executed and used.
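
A bare-bones allowlist check can be sketched with a hash lookup, as below. Note the simplification: a true digital signature also binds the checksum to a signer's key, so the approved list itself cannot be silently altered. The placeholder digest and file paths are illustrative only.

    import hashlib

    # Simplified stand-in for commercial whitelisting/signature products.
    APPROVED_SHA256 = {
        "a" * 64,  # placeholder digest of an approved executable
    }

    def file_digest(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def may_execute(path: str) -> bool:
        """Allow execution only if the file's digest is on the approved list."""
        return file_digest(path) in APPROVED_SHA256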

If the attacker gains administrative privileges and is familiar with the anti-virus, whitelisting, or digital signature capabilities of the system, he or she may be able to defeat these safeguards, but it would require a very sophisticated attack.

Finally, if a device becomes infected through exploitation of a vulnerability, formal incident and vulnerability handling processes ensure that the vulnerability is adequately assessed and resolved. If the resolution results in a patch, the patch should be developed and installed promptly.

Reaching the target of the attack

Targets of attack are often referred to as assets in traditional cybersecurity risk assessments. Attackers generally attack three types of assets: devices, information, and software.

The previous two sections discussed how attackers gain entry to the ICS and then attempt to use or misuse the system. This section describes elements of the system that are often targeted to achieve the ultimate goal for an attack, such as financial gain/industrial espionage (theft), sabotage (production loss), vandalism (damage), terrorism (destruction), joy riding (thrill seeking), and notoriety (recognition).

Device targets can be any of the devices used within the ICS. Attacks of primary concern against network, control, and safety devices are those that can directly command operational behavior or cause denial of service.

ICS supervisory workstations and servers are also devices that are commonly targets of attack. They include operator workstations, engineering workstations, application servers, virtualization servers, and special-purpose servers commonly referred to as appliances. Operator workstations let the operator view and modify run-time data of the physical process being controlled. Engineering workstations are used for developing and deploying configurations for the control strategy of the process, while application servers run control applications, such as OPC servers and historians. Virtualization servers support virtual machines. Appliances are typically used to monitor the health and safety of the ICS. Denial of service, including loss of view, modification of run-time parameters, and modification or disclosure of information are primary concerns for these targets.

Information targets are numerous, but generally can be classified as confidential data, configuration and administrative data, and run-time data. They normally reside on the targeted devices just discussed. Examples include the file system, OS data (e.g., the registry), network shares, control system data and configurations, and recipes. Attacks may be launched on the stored, backed-up, and redundant versions of these, on their values in executing software, and while they are being transferred between devices and applications.

Software targets primarily include executable files and DLLs. DLLs are files that contain binary code that is shared by executables. Attacks against these generally attempt to infect them or replace them with attacker code.

Defensive measures: The first step in protecting assets is to partition the ICS into security zones. Security zones separate devices, information, and software based on how critical they are to business objectives. The more critical a target is, the more its integrity has to be trusted. For devices and software, integrity refers to their ability to perform as required. For information, integrity refers to the validity of the data.

If trust is low (you are not sure about the integrity), then consequences critical to production can occur, such as safety events, loss of equipment/production, and loss of trade secrets/competitive edge. In the past, trust has been gained through testing and maturity. The more heavily tested a target is and the more it is used, the higher the trust: the more confidence there is that it can be used successfully in critical operations.

From a security perspective, that trust has to be protected from cybersecurity threats. This means that higher trust zones have to be protected from lower trust zones. Protective barriers, such as network segmentation and separate access domains with no trust between them (e.g., Active Directory domains), should be employed to minimize security risks within and between zones. Rather than extending trust across zones, users who need to access resources residing in another zone should be given credentials to access that zone.

Partitioning an ICS is not a new concept. The Purdue Enterprise Reference Architecture model, as standardized by ISA-95 and IEC 62264-1, divides the functions of a plant into levels that are directly applicable to the definition of ICS security zones. Security partitioning begins by segmenting the ICS from external systems using perimeter security devices and a DMZ, as described previously.

Then, within the ICS perimeter, devices involved in controlling physical processes (Purdue Model Level 1), such as field devices, are the most critical to production, followed in criticality by ICS devices in Level 2 and Level 3, respectively. These levels are a starting point for defining security zones within the ICS, with further partitioning within a level becoming necessary if the security risks between its components are deemed significant. For example, controllers in Level 2 will generally have a higher criticality than Level 2 HMI devices; as a result, they are placed in separate zones within Level 2. Additionally, safety instrumented systems and control systems, both of which reside in Levels 1 and 2, should be placed into separate security zones.
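
Conceptually, zones and the conduits between them reduce to an explicit allowlist of permitted flows. The Python sketch below, with illustrative zone names loosely following the Purdue levels, shows the default-deny idea: traffic is allowed only through defined conduits.

    # Illustrative zone model: everything not explicitly allowed is denied.
    ZONES = {
        "safety":      0,  # safety instrumented systems
        "control":     1,  # controllers and field devices (Levels 1/2)
        "supervision": 2,  # HMI, historians (Levels 2/3)
        "dmz":         3,
        "enterprise":  4,
    }

    # Allowed (source, destination) conduits between zones.
    CONDUITS = {
        ("supervision", "control"),
        ("dmz", "supervision"),
        ("enterprise", "dmz"),
    }

    def allowed(src: str, dst: str) -> bool:
        """Permit a flow only if an explicit conduit exists."""
        return (src, dst) in CONDUITS

    assert allowed("enterprise", "dmz")
    assert not allowed("enterprise", "control")  # must traverse the DMZ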

By the time the ICS becomes operational, much of the groundwork for identifying potential targets within zones has already been done. Alarm rationalization activities confirm which control elements are critical to the operation of the ICS. Similarly, safety hazard analysis activities can be used to identify safety-critical targets. In some cases, these targets coincide with those identified for alarms. In addition, devices that contain critical or valuable data are well known and documented, such as user accounts, control strategy configurations, calibration data, logs, and recipes. These should be placed in zones appropriate to their level of criticality.

Once zones are defined, security policies for each zone should be defined, and controls between zones should be identified and implemented. Common controls include firewalls, authentication mechanisms, access control lists, and cryptographic measures. The primary objective of these controls is to restrict access between zones to only those that are necessary.

The second step in protecting assets is to apply additional hardening measures that complement those discussed in previous sections. Complementary hardening measures include disabling or removing all nonessential software and implementing security configuration/hardening benchmarks, such as those published by the Center for Internet Security.

In addition, unused certificates and Certificate Authorities should be removed from the certificate store. Further, self-signed certificates should not be used. The purpose of the certificate is to give the client application confidence that it is connecting to a server whose identity is verified by a trusted Certificate Authority. Self-signed certificates have no such verification, and they allow attackers to insert their own server into the ICS and provide a self-signed certificate for it.

Some information targets, such as SQL databases, may have their own access controls that are separate from those used by the OS and control system. When these access controls are present, they should be configured separately to prevent attackers who have gained access to a system from directly accessing these information targets.

The third step in protecting assets is to apply anti-tampering mechanisms to software, data files, and downloads (e.g., configuration and recipe downloads). Digitally signing these when they are created is a recommended practice to protect their integrity and allow their users to verify that their contents have not changed.

The fourth step in protecting assets is the use of encryption to protect information transfers against unauthorized disclosure. Encryption should be used for transfers between the ICS and external systems, and within and between zones when it does not interfere with diagnostics and troubleshooting.

If a target is compromised, a combination of redundancy and backup/restore capabilities should be available to recover the target if necessary. Redundancy provides for rapid recovery through switchover from the compromised target to its redundant partner. However, there must be assurances that the redundant partner is not also compromised. Backup/restore capabilities are used to recover the target back to a known, uncompromised state, and their use should be integrated into a disaster recovery plan maintained for the system.

To protect against in-memory infections (those that have not been written to disk), devices can be rebooted periodically to clear them of memory-resident malware. While this is not always convenient, it ensures that software processes are restored to their disk images. If that does not work, backups can restore the device to a previous state. Backups of both software and data should be made for this purpose.

Finally, if a vulnerability is discovered in a device, use incident and vulnerability handling processes aligned with those described above for communications protocol and untested software attacks to assess and resolve the issue, and patch the device if necessary. To supplement these processes, regularly monitor for and log suspicious activities, breaches, and compromises. Review these logs, along with control system logs, to better understand the security posture of the ICS and to determine the root causes of identified issues.

About the Author
Lee Neitzel is a senior engineer at Wurldtech Security Technologies, a GE company. He has been involved in security and network standards for more than 30 years and is currently leading the development of the IEC 62443 standards and conformance assessment program. Neitzel holds multiple patents in the area of control system cybersecurity and has a master’s degree in computer science with a focus on computer security from George Washington University in Washington, D.C.

Connect with Lee
LinkedIn

About the Author
Gabe Faifman is a global cybersecurity architect and innovation manager at Schneider Electric. He formerly was director of strategic programs for Wurldtech Security Technologies. Faifman has been involved for 25 years in the design, deployment, and operation of automation and security infrastructure projects in the power generation, power distribution, oil and gas, and food and beverage industries. He holds a BS in electronics engineering from the University of Buenos Aires and a CSS1 InfoSec Professional certification from NSA.

Connect with Gabe
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

AutoQuiz: How are Pressure Changes Measured in Industrial Applications?

The post AutoQuiz: How are Pressure Changes Measured in Industrial Applications? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

Which of the following is true of most pressure measurement methods?

a) They are not able to measure a small differential pressure
b) The sensor matches the digital or analog signal conditioning and transmission
c) They are sensitive to volume but not temperature
d) They measure pressure by sensing the deflection of the diaphragm
e) none of the above

Click Here to Reveal the Answer

The deflection is converted into an electrical signal (voltage) by a piezoelectric or capacitance device. The small electrical current is converted to a standard signal (e.g., 4–20 mA or a digital signal) by a transmitter. Therefore, answer B is not correct.

Answer A is not correct, because pressure sensors can measure very small pressure changes (inches of water) and in some cases, millimeters of water.

Pressure measurement devices are not affected by volume, since they are measuring force over an area only. Many pressure sensors are sensitive to temperature (capillary tubes are filled with fluids that can expand with temperature). Therefore, answer C is not correct.

The correct answer is D, they measure pressure by sensing the deflection of the diaphragm. For most pressure applications, changes in pressure are detected by the change in deflection of a measuring diaphragm.

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedInTwitterEmail

 



Source: ISA News

Why Automation Professionals Need to Go Beyond the Obvious and Create Elegant Solutions

The post Why Automation Professionals Need to Go Beyond the Obvious and Create Elegant Solutions first appeared on the ISA Interchange blog site.

This article was written by Bill Lydon, automation industry authority, journalist and former chief editor at InTech magazine.

I once attended a valuable and thought-provoking presentation by Matthew E. May, “The Elegant Solution,” based upon lessons he learned as a University of Toyota consultant focused on broadening the application of the Toyota Production System to other areas of the company.

The elegant solution is Toyota’s formula for mastering innovation, with a core concept of satisfying needs and creating value, not new gadgetry. A fundamental of the elegant solution concept is that superior solutions are elegant in their simplicity. Creating elegant solutions generally requires a great deal of thinking and design work, with results that appear obvious after they are created. An insightful comment by May is that elegant solutions are found on the far side of complexity.

Some of the obstacles to elegant solutions include the “home run” trap, which invariably destroys a strong batting average over time and carries with it huge risks and high cost. In my experience, it is rare to find a “silver bullet,” that one thing that would solve major problems. More often there are a number of incremental changes that lead to improvements.

May makes the point that framing an issue or problem is critical and a lost art. Properly framing an issue or problem goes far in avoiding the typical pitfalls that limit the ability to reach an elegant solution. Many times, we are impatient, with short attention spans, limiting the time and effort expended to frame issues and problems. The obsession for immediate fixes blocks us from creating optimal solutions. A great problem framer focuses on asking the right question and fights the urge to be prescriptive right away. You gain no insight by jumping to immediate solutions.

Solving problems frivolously can be a brainstorm trap. Another issue is that, without a clear focus on outcomes, too much cleverness in adding bells and whistles can easily get out of control, carrying the danger of complexity creeping into projects.

May cited studies of brainstorming sessions revealing that idea generation generally falls off after about 20 minutes. At that point, most groups stop and turn their attention to evaluating their ideas; however, the research shows the teams with the best ideas do not stop there. They push through a psychological barrier and manage to find more novel and innovative ideas that are widely divergent and enormously creative. This is a fundamental I learned at the Creative Education Foundation when trained as a group facilitator: keep participants working longer on problems and issues, using techniques that take them out of their comfort zone to stimulate the creation of better ideas.

Thinking is hard work, and in general we would rather not do much of it. That is why we satisfice: we accept the first solution that is satisfactory rather than explore many alternatives. The result is that we inhibit problem solving, not so much from the analytical viewpoint, but by not expending the energy to develop a wide variety of creative and novel options to analyze. A solution should never be entertained as final before exploring the question, “What is possible?”

Innovation is trying to figure out a way to do something better than it has ever been done before. Automation professionals may well benefit from taking a chance to develop options beyond the obvious, finding elegant solutions that are superior.

About the Author
Bill Lydon is an automation industry expert, author, journalist and formerly served as chief editor of InTech magazine. Lydon has been active in manufacturing automation for more than 25 years. He started his career as a designer of computer-based machine tool controls; in other positions, he applied programmable logic controllers and process control technology. In addition to experience at various large companies, he co-founded and was president of a venture-capital-funded industrial automation software company. Lydon believes the success factors in manufacturing are changing, making it imperative to apply automation as a strategic tool to compete.

Connect with Bill
LinkedInTwitterEmail

A version of this article also was published at InTech magazine



Source: ISA News

What Is Your Executive Board Doing for You?

The post What Is Your Executive Board Doing for You? first appeared on the ISA Interchange blog site.

This post is authored by Paul Gruhn, president of ISA 2019.

In mid-September, your incoming 2020 Executive Board met for two and a half days in Raleigh, NC. This summit meeting consisted of board orientation training, strategy and visioning discussions and planning, a third-party review of our operational documents, and updates on our finances and IT/web infrastructure, all with a bit of socializing and fun thrown in. The majority of the 2020 board are members of the 2019 board, and some were also members of the 2018 board. Such continuity is helpful and beneficial. Yet I realize many members and volunteer leaders may be wondering, “What are you doing for us?” Let me take this opportunity to tell you!

Two years ago, the 2018 board saw the need to update our vision and mission statements at the summit meeting. I hope you are familiar with these two high-level statements by now. Our vision is to “Create a better world through automation.” Our mission is to “Advance technical competence by connecting the automation community to achieve operational excellence.” Everything your board has been doing for the last two years, and all that we are planning, has been with these statements in mind. And it is all intended to benefit you!

Your 2019 board has been taking this commitment seriously. Since the beginning of the year, we have been meeting for three hours every month to discuss strategic issues. In a previous post, I summarized four high-level objectives that the Executive Board and other volunteer leaders solidified during previous leader meetings. These are:

  1. Establish and advance ISA’s relevance and credibility as the home of automation by anticipating industry needs, collaborating with stakeholders, and developing and delivering pertinent technical content.
  2. Enhance member value and expand engagement opportunities to nurture and grow a more diverse and global community to advance the automation profession.
  3. Become the recognized leader in automation and control education, providing training, certifications, and publications to prepare the workforce to address technology changes and industry challenges in the most flexible and relevant manner.
  4. Create opportunities for members to improve critical leadership skills, to build a network of industry professionals, and to develop the next generation of automation professionals.

The board formed four working groups, which have been meeting monthly since early 2019. These working groups engaged other volunteer leaders to further refine these long-range objectives into more defined and shorter-term goals, tactics, and key performance indicators. A RACI matrix (responsible, accountable, consulted, informed) spreadsheet has been created to document and track our work and accountabilities. The spreadsheet has been shared with others who have been using it within their groups to brainstorm ideas and tactics. This work will continue over the next few years.

In summary, your executive board has worked very hard to get everyone rowing their collective ISA boats in the same direction. We are focused, communicating, and working together for the collective good of all. I am immensely pleased and proud to be part of this! While you may not see the immediate impact of all this work, you deserve to be aware of all that is going on. Your board, and the hundreds of other ISA volunteer leaders, exist to serve you. And we take our commitment seriously.

About the Author
Paul Gruhn is a global functional safety consultant at AE Solutions and a highly respected and awarded safety expert in the industrial automation and control field. Paul is an ISA Fellow, a member of the ISA84 standards committee (on safety instrumented systems), a developer and instructor of ISA courses on safety systems, and the primary author of the ISA book Safety Instrumented Systems: Design, Analysis, and Justification. He also has contributed to several automation industry book chapters and has written more than two dozen technical articles. He developed the first commercial safety system modeling software. Paul is a licensed Professional Engineer (PE) in Texas, a certified functional safety expert (CFSE), a member of the control system engineer PE exam team, and an ISA84 expert. He earned a bachelor’s degree in mechanical engineering from the Illinois Institute of Technology. Paul is the 2018 ISA president-elect/secretary.

Connect with Paul
LinkedInTwitterEmail



Source: ISA News

AutoQuiz: What Are the Cybersecurity Risks to a Geothermal Plant?

The post AutoQuiz: What Are the Cybersecurity Risks to a Geothermal Plant? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Which of the following security risks is LEAST likely to disrupt operations at a geothermal power plant and result in an emergency situation?

a) connections to the Internet
b) inadvertent network failures
c) email viruses
d) remote access to network components
e) none of the above

Click Here to Reveal the Answer

Automation systems are vulnerable to all of these risks, but email viruses pose the least risk to disrupt operations and create an emergency situation. Email is the only item above that is potentially not directly connected to the operating control system. Information technology and other experts should work together to find alternatives that will provide adequate security commensurate with the individual risks identified by the security assessment and security audit processes.

The correct answer is C, email viruses.

Reference: Nicholas Sands, P.E., CAP, and Ian Verhappen, P.Eng., CAP, A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedInTwitterEmail

 



Source: ISA News