Thank You Sponsors!

CANCOPPAS.COM

CBAUTOMATION.COM

CONVALPSI.COM

DAVISCONTROLS.COM

ELECTROZAD.COM

EVERESTAUTOMATION.COM

HCS1.COM

MAC-WELD.COM

SWAGELOK.COM

THERMON.COM

VANKO.NET

WESTECH-IND.COM

WIKA.CA

Licensure, ISA Turning 75, and the 2019 Annual Leadership Conference

The post Licensure, ISA Turning 75, and the 2019 Annual Leadership Conference first appeared on the ISA Interchange blog site.

This post is authored by Paul Gruhn, president of ISA 2019.

On 14-16 August, I attended the National Council of Examiners for Engineering and Surveying (NCEES) meeting in Washington, D.C. Over 550 people from licensing boards and various participating organizations (such as ISA) attended. In many ways, their annual meeting is similar to our Annual Leadership Conference.

The delegate meeting lasted 1.5 days (our leader meeting lasts only a half day), covered many more items, and drew a larger audience. Many proposed motions were included and accepted in a default ballot. Other items were discussed and voted on individually; some passed with little discussion or opposition, while others were debated further and did not pass. During this meeting, I met leaders of professional societies of all sizes. All have concerns regarding licensure and regulation, hence their membership in NCEES and attendance at this meeting.

Concerns over licensure

Many lawmakers are taking steps to weaken or eliminate occupational licensing laws across the country. Some believe that licensure limits business growth and competition. Unfortunately, these bills often make no distinction for highly complex, technical professions, resulting in the licensure of engineers being dragged into the fray. A consumer can verify the qualifications of a doctor or lawyer before obtaining their services. However, the same cannot be said of an engineer. How do you know that the high-rise building you are working in, the bridge you are driving across, or the plane you are flying in was designed by qualified engineers? Consumers often do not have the specialized knowledge needed to evaluate the qualifications of the designers or the performance of these systems. These examples significantly impact the health, safety, and welfare of the public.

Enter the Alliance for Responsible Professional Licensing (ARPL). ARPL is a coalition of national associations representing complex technical professions. Their goal is to lead with responsible licensing models that ensure the education, experience, and testing necessary to protect the public. To learn more about this organization, please visit www.responsiblelicensing.org

ISA turns 75 next year!

In 1945, more than a dozen individual, regional instrumentation societies decided to band together for the greater good into one single organization—the Instrument Society of America (ISA). Civilization, industry, and technology have evolved substantially in the past 74 years, and our Society has changed in response, including changing our name to reflect a broader geographic and technical focus. We will celebrate our 75th anniversary in 2020 by reviewing our heritage and looking forward to the future of the automation profession.

Eric Cosman will be the Society president next year. He has assembled a committee of volunteers and staff to plan commemorative activities and events throughout the year. We will share more detailed information as it becomes available, beginning at the Annual Leadership Conference in San Diego. We encourage all sections, districts, and departments to help us celebrate.

Attend the Annual Leadership Conference in San Diego, Oct. 25-28

The 2019 Annual Leadership Conference will take place at the end of October in San Diego, Calif. If you are currently an ISA volunteer leader in any capacity, or you are thinking of becoming one, please attend this meeting. There will be an orientation for new leaders, leader training sessions, business meetings, an awards gala, and even a party on the beach! The more you know about the Society, and the more leaders you meet and build into your network, the more effective you will become. 

How to Attend the 2019 Annual Leadership Conference

Would you like to attend the 2019 Annual Leadership Conference in San Diego? Click this link for more information about the event, including location and lodging details, special events, and corporate sponsorship opportunities.

About the Author
Paul Gruhn is a global functional safety consultant at AE Solutions and a highly respected and awarded safety expert in the industrial automation and control field. Paul is an ISA Fellow, a member of the ISA84 standards committee (on safety instrumented systems), a developer and instructor of ISA courses on safety systems, and the primary author of the ISA book Safety Instrumented Systems: Design, Analysis, and Justification. He also has contributed to several automation industry book chapters and has written more than two dozen technical articles. He developed the first commercial safety system modeling software. Paul is a licensed Professional Engineer (PE) in Texas, a certified functional safety expert (CFSE), a member of the control system engineer PE exam team, and an ISA84 expert. He earned a bachelor’s degree in mechanical engineering from the Illinois Institute of Technology. Paul is the 2018 ISA president-elect/secretary.

Connect with Paul
LinkedIn | Twitter | Email



Source: ISA News

AutoQuiz: How Are Errors Corrected in Industrial Network Data Communications?

The post AutoQuiz: How Are Errors Corrected in Industrial Network Data Communications? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Which type of communications error protection and correction uses a complex polynomial that results in a frame check sequence being added to a message?

a) checksum
b) cyclic redundancy check
c) error correcting code
d) parity error correction
e) none of the above

Click Here to Reveal the Answer

Ethernet uses a 32-bit CRC, while Modbus uses a 16-bit CRC for error detection and correction.

Checksums and parity bits (a special case of a checksum) are calculated by various methods of binary addition of words within a message, not with more complex polynomials.

Error correcting code is redundant data added to the communications message that allows the correction of errors, within the limits of the code used, without requiring retransmission of data.

The correct answer is B, cyclic redundancy check. Cyclic redundancy checking is a method of checking for errors in data that has been transmitted on a communications link. A sending device applies a 16- or 32-bit polynomial to a block of data that is to be transmitted and appends the resulting cyclic redundancy code (CRC) to the block. The receiving end applies the same polynomial to the data and compares its result with the result appended by the sender. If they agree, the data has been received successfully. If not, the sender can be notified to resend the block of data.
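To make the mechanism concrete, here is a minimal sketch of the reflected CRC-16 commonly used by Modbus RTU. It is illustrative only (not taken from the CAP reference), and the frame bytes are hypothetical: the sender appends the frame check sequence, and the receiver recomputes it over the received data and compares.

    def crc16_modbus(data: bytes) -> int:
        """CRC-16 as used by Modbus RTU: initial value 0xFFFF, reflected polynomial 0xA001."""
        crc = 0xFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return crc

    # The sender appends the frame check sequence; the receiver recomputes and compares.
    frame = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x0A])          # hypothetical Modbus request
    message = frame + crc16_modbus(frame).to_bytes(2, "little")  # Modbus sends the CRC low byte first
    ok = crc16_modbus(message[:-2]) == int.from_bytes(message[-2:], "little")
    print(ok)  # True when no bits were corrupted in transit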

Reference: Nicholas Sands, P.E., CAP, and Ian Verhappen, P.Eng., CAP, A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

Radiometric Level Measurement: When Other Industrial Measuring Techniques Fail

The post Radiometric Level Measurement: When Other Industrial Measuring Techniques Fail first appeared on the ISA Interchange blog site.

This post was written by Gene Henry, who formerly served as senior level business manager for Endress+Hauser.

Radiometric or gamma level/density instruments are most often used in applications where other measuring techniques fail due to extreme temperatures or pressures, toxic media, complex geometries of vessels or pipes with difficult installation requirements, high viscosities, changing fluid behaviors, or abrasive or corrosive properties of the process media.

Because a radiometric measuring system is a noninvasive measuring technique (i.e., the emitter and detector are mounted external to the process), the behavior of a medium inside a vessel can be precisely observed with equipment fitted outside the vessel (figure 1).

A very simple installation is shown in figure 1. Because there is an agitator inside the vessel, installing a level measuring device inside the tank, such as an ultrasonic or guided radar instrument, may not be suitable. Depending on the conditions inside the vessel, the medium may vaporize, or the rotating agitator might cause a vortex at the surface of the fluid. These conditions may interfere with other types of level measuring devices, which are installed inside the vessel walls. With radiometric measurement, it is possible to detect all medium conditions.

Benefits of using radiometric measurement include:

  • noncontact and noninvasive measuring
  • guaranteed process safety due to being outside the process vessel
  • precise and repeatable measurement for level, density, and interface applications
  • safe and easy to install
  • reliable measuring equipment

Radioactivity basics

Radioactivity can be roughly classified into three types, each emitted by the decay of the radioactive isotope:

  • alpha radiation: particle radiation in the form of a helium nucleus (alpha particle)
  • beta radiation: elemental particle radiation in the form of electrons and/or positrons (beta particle)
  • gamma radiation: high-energy electromagnetic waves similar to radio waves and light

With radiometric level and density measurement, only gamma radiation is used. Alpha and beta radiation are not strong enough to penetrate solid material, but the high energy and short wavelength (high frequency) of gamma waves let them radiate through material in the beam’s path. When a gamma ray passes through matter, the absorption rate is proportional to the thickness of the layer, the density of the material, the absorption cross section of the material, and the energy of the wave. Thus, absorption and energy are the main factors that influence the size of the required source and the quality of radiometric measurement. Typical industrial isotopes used in radiometric applications are cesium-137 (Cs-137) and cobalt-60 (Co-60). The two isotopes differ in their physical attributes, with cesium having a longer half-life but lower emitted gamma radiation energy. Cobalt-60 has a shorter half-life with higher energy.

Figure 1. In a radiometric level detection or density measurement system, an external source emits radiation that passes through the vessel and is measured by an external detector.

Half-life is the length of time that it takes for the source to decay until it reaches half of the activity generated by the original isotope. The half-life of Cs-137 is 30.17 years, and Co-60 is 5.2 years. Typically, Cs-137 is used in industrial applications, because it requires less maintenance (i.e., replacing the sources) and its activities or strength are sufficient for most applications. In special cases, Co-60 might be required for radiating through thick material or high-density fluids. A formula determines the source size, taking into account anything in the beam path (vessel walls, insulation, heating coils, and obstructions) and the distance from the source to the detector.
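To make the half-life figures concrete, here is a minimal decay calculation (a sketch only, not any vendor's decay-compensation logic): the activity remaining after t years is A(t) = A0 · (1/2)^(t / T½), with the initial activity below chosen purely for illustration.

    def remaining_activity_mci(initial_mci, years_elapsed, half_life_years):
        """Activity left after 'years_elapsed' years of decay, given the isotope's half-life."""
        return initial_mci * 0.5 ** (years_elapsed / half_life_years)

    # A 30 mCi Cs-137 source (half-life 30.17 years) after 10 years:
    print(remaining_activity_mci(30.0, 10.0, 30.17))  # ~23.8 mCi
    # The same initial activity of Co-60 (half-life ~5.2 years) decays much faster:
    print(remaining_activity_mci(30.0, 10.0, 5.2))    # ~7.9 mCi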

The calculation uses the following equation:

P = (Fa · Fs · Fi) / K

where:

  • P = the required source activity in mCi
  • K = the isotope coefficient (K = 3.55 for Cs-137 and 13.2 for Co-60)
  • Fa = r², where r refers to the distance from the source to the detector
  • Fs = absorption, depending on the density of the material and the thickness of the material in the beam path
  • Fi = the sensitivity of the detector

Today, such calculations are done in a software program, which takes all the guesswork out of the sizing (figure 2). Most manufacturers have a sizing program. The program can calculate the size of the source and the exposure rate at the source holder and the detector, and use these calculations to estimate the accuracy of the application. All sizing is based on “as low as reasonably achievable” (ALARA) guidelines. That is, the size of the source is limited to the smallest size required to make the required measurement.
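As a rough illustration of how the equation is applied, here is a minimal sketch. The distance is assumed to be in meters, and the absorption and sensitivity factors are placeholder values; real sizing should always be done with the manufacturer's ALARA-based program.

    ISOTOPE_COEFFICIENT_K = {"Cs-137": 3.55, "Co-60": 13.2}

    def required_activity_mci(isotope, source_to_detector_m, fs_absorption, fi_detector_sensitivity):
        """P = (Fa * Fs * Fi) / K, with Fa = r**2 (r assumed here to be in meters)."""
        fa = source_to_detector_m ** 2
        return fa * fs_absorption * fi_detector_sensitivity / ISOTOPE_COEFFICIENT_K[isotope]

    # Hypothetical vessel: 2 m source-to-detector path, placeholder absorption/sensitivity factors
    print(required_activity_mci("Cs-137", 2.0, fs_absorption=15.0, fi_detector_sensitivity=1.2))  # ~20 mCi
    print(required_activity_mci("Co-60", 2.0, fs_absorption=15.0, fi_detector_sensitivity=1.2))   # ~5.5 mCi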

Figure 2. Online or downloadable gamma sizing program

Gamma elements

The typical gamma level or density system consists of four parts: the radioactive source, a source holder, a detector, and the brackets to mount the components to the process vessel or pipeline. The function of the source holder (figure 3) is simply to hold the radioactive source in a safe manner. The source holder is a lead container with a slot cut to direct the gamma wave toward the process. The emission angle through the slot will normally be about 6 degrees wide and 5, 20, or 40 degrees tall. This means radiation levels are very low at the source holder unless someone is directly in the beam path.

Figure 3. Source holder

Although not recommended, a person would have to sit on a source holder for two and a half hours to receive the same radiation dose as flying from New York to Miami in an airplane. The beam path must be shielded or screened to prevent someone from accidentally getting a finger or hand in this beam path. Per Nuclear Regulatory Commission guidelines, the source holder must have a lockable shutter mechanism to block the radiation, or a mechanism to rotate the source away from the opening. This renders the source holder safe, allowing maintenance personnel to perform work inside the vessel and to install or remove the source holder.

Detectors (figure 4) have changed much over the past few years and have become much more sensitive and responsive.

The purpose of the detector is to detect and quantify the amount of radiation received. In older gamma systems, an ion chamber was typically used in density applications. Modern detectors use a scintillator tube sensor. The scintillator tube absorbs the gamma photon and converts it into a light pulse. These light pulses create a photoelectron at the photo cathode, where they are multiplied and converted to an electrical pulse. These pulses or counts determine how much radiation is being received by the detector. With a scintillation detector tube made of sodium iodide (NaI) crystal or PVT plastic, the energy required to make an accurate measurement is minimal. For example, an 18-inch slurry pipeline might need a 250-mCi Cs-137 source to have enough activity for the older type of ion detector to work.

With a scintillation tube detector, a 30-mCi Cs-137 source handles the same application. Reduced radiation ensures the safety of people working in the area, and the detectors are much more stable even with large temperature changes. Flexible scintillation detectors offer easy installation but may not be as sensitive as rigid scintillators. In a density application, the higher the count rate, the lower the density. The same applies in a level application: the higher the count rate, the lower the level in the process vessel. The detector contains a transmitter that converts the count rate to a 4-20 mA HART output signal to be sent to the control or monitoring system.

Profibus PA or FOUNDATION Fieldbus outputs may also be available. With today’s more sensitive scintillator detectors, it is often possible to extend the life of a gamma system. Old-style detectors require so much more energy that they tend not to work reliably as the source nears its half-life. A scintillator-style detector extends the life of the source, eliminating the need to purchase a new source and the cost of disposing of the old source. As discussed, radioactive sources decay at a specific rate. In older gamma systems, frequent calibration was required to compensate for the decreased source activity.

Today’s detectors have built-in source decay compensation. They automatically compensate for decay, reducing calibration requirements and maintenance costs. Some detectors use a Geiger-Mueller tube for radiation detection. These units are not as sensitive as scintillator tube detectors, but they work well for point level detection and cost less.

Figure 4. Gamma detector

Outside interference

Radiation from external sources can be a major problem for gamma-based systems. External radiation can come from radioactive material in the process media, other gamma-emitting devices, or radiography testing. Refineries, petrochemical plants, and heavy chemical plants may do routine radiography testing of pipelines and vessels to ensure their integrity. Every time technicians perform an x-ray of a pipe or vessel, there is a huge surge in the background radiation.

The output from a gamma-based detector in the plant will most likely be affected. The increase in the background radiation is picked up by the gamma detector, causing the transmitter to report a much lower level than is actually present. This can cause major upsets in the process and may pose a safety risk. In the past, plants would try to shield the gamma detectors or put the control loop in manual during radiography testing to avoid process upsets. Today, a gamma modulator can eliminate any issue from external radiation.

A gamma modulator is mounted between the source holder and the process. It consists of two absorber rods rotating at a fixed speed directly in the gamma beam path. When the absorber rods are in line with the gamma beam, they attenuate the gamma energy so no energy reaches the detector, and the gamma detector reads the background radiation. As the absorber rods rotate parallel to the gamma beam path, the gamma rays pass between the absorber rods and continue to the process vessel and the detector. The detector is configured to look for this modulated gamma energy.

Internally, it subtracts the background radiation reading (taken while the beam is blocked) from the reading taken when the modulator is in the open position (figure 5). The resultant value is thus not affected by background radiation. Modern gamma systems for level or density measurement are reliable, accurate, and safe, and they often work in level and density applications where other solutions will not. End users do need to make sure their supplier is knowledgeable about gamma licensing requirements and can provide full gamma support for their facility.
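As a minimal sketch of that background subtraction (the count rates below are made-up numbers, not from any particular detector):

    def net_count_rate(open_cps, blocked_cps):
        """Counts attributable to the measurement beam only: 'blocked_cps' is read while the
        modulator rods block the source beam (background only); 'open_cps' is beam plus background."""
        return max(open_cps - blocked_cps, 0.0)

    # During nearby radiography, both readings rise by the same amount, so the net is unchanged:
    print(net_count_rate(open_cps=5200, blocked_cps=400))   # 4800
    print(net_count_rate(open_cps=6200, blocked_cps=1400))  # 4800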

 

Figure 5. Internal view of modulator used when external radiation is present

About the Author
During his automation industry career, Gene Henry served as senior level business manager for Endress+Hauser in the U.S. He worked closely with the sales force on level applications and in the development of marketing strategies for Endress+Hauser level products. Gene started his career as an instrument foreman in the phosphate mining industry, and worked in instrumentation sales, providing consulting services to industrial and municipal plants.

Connect with Gene
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

Lessons Learned From a Forensic Analysis of the Ukrainian Power Grid Cyberattack

The post Lessons Learned From a Forensic Analysis of the Ukrainian Power Grid Cyberattack first appeared on the ISA Interchange blog site.

This post was authored by Patrice Bock, with the participation of Jean-Pierre Hauet, Romain Françoise, and Robert Foley.

Three power distribution companies sustained a cyberattack in western Ukraine on 23 December 2015. As the forensic information is extensive from a technical point of view, it is an opportunity to put ISA/IEC 62443-3-3 Security for industrial automation and control systems Part 3-3: System security requirements and security levels to the test with a real-life example. Several sources were used for this purpose that, overall, provide unusually detailed information. This blog post:

  • reviews the kinematics of the attack using the available reports and reasonable assumptions based on our experience of cyberattack scenarios and of typical operational technology (OT) systems and vulnerabilities
  • introduces a methodology for assessing the Security Level – Achieved (SL-A) by one of the Ukrainian distributors (corresponding to the best documented case)
  • applies this methodology; presents and discusses the estimated SL-A; reviews this SL-A per the foundational requirement (FR); and derives conclusions and takeaways
  • evaluates the security level (SL-T) that should be targeted to detect and prevent similar attacks

Kinematics of the cyberattack

Although the attack itself was triggered on 23 December 2015, it was carefully planned. Networks and systems were compromised as early as eight months before. Keeping this time frame in mind is essential for a proper understanding of the ways and means that should be used to detect, and eventually prevent, a similar attack.

Our analysis of the cyberattack is threefold:

  1. Initial intrusion of the information technology (IT) network using spear phishing
  2. Intelligence gathering on the IT and OT networks and systems using the flexible BlackEnergy malware: network scans, hopping from one system to another, identification of device vulnerabilities, design of the attack, and installation of further malware and backdoors
  3. Attack itself that lasted 10 minutes on 23 December

Step 1: Malware in the mail!

In spring 2015, a variant of the BlackEnergy malware was triggered as an employee of Prykarpattya Oblenergo opened the Excel attachment of an email. BlackEnergy is a malware “suite” that first hit the news in 2014, when it was used extensively to infiltrate energy utilities. Its aim was to gather intelligence about the infrastructure and networks and to help prepare for future cyberattacks.

The diagram in figure 1 is a simplified view of the network architectures (i.e., Internet, IT, OT) and will help depict each step of the cyberattack. The hacker is shown as the “black hat guy” at the top right side. The hacker used the utility’s IT connection to the Internet as the channel to prepare and eventually trigger the cyberattack.

We can see that the company had proper firewalls set up, one between the IT network and the Internet and the second between the IT and OT (industrial) network. The OT network included a distribution management system (DMS, the supervisory control and data acquisition platform) with servers and workstations, and a set of gateways used to send orders from the DMS to remote terminal units that controlled the breakers and other equipment in the electrical substations. Additional devices were connected to the network too (e.g., engineering workstations and historian servers) but are not relevant for the attack kinematics.

At this step, the hacker managed to compromise one office laptop thanks to the BlackEnergy email attachment. This is difficult to prevent as long as people open attachments of legitimate-looking emails.

Figure 1. Simplified diagram of the control system architecture

Figure 2. Step two of the attack

Step 2: Attack preparation, network scans, and advanced persistent threat (APT)

During several months in the summer of 2015, the BlackEnergy malware was remotely controlled to collect data, hop from one host to another, detect vulnerabilities, and even make its way onto the OT network and perform similar “reconnaissance” activities.

Forensic data analysis about this phase is incomplete, because the hacker did some cleaning up and wiped out several disks during the actual attack. Nevertheless, prior analysis of BlackEnergy, as well as reasonable considerations about the standard process used for cyberattacks, makes the following reconstruction probable with reasonable confidence.

As displayed in figure 2, during step two, a large amount of network activity took place. The remote-controlled malware scanned the IT network, detected an open connection from an IT system to an OT supervision platform, performed OT network scans, collected OT component information, and eventually installed ready-to-trigger malware components on both the IT and OT systems.

This phase lasted weeks, maybe months, and allowed for custom exploit development. An exploit is a bit of software designed and developed to exploit a specific vulnerability. It is embedded as a payload on malware that is configured to deliver the payload for execution on a target. Actually, this effort was somewhat limited. The only original piece of malware code developed was the one needed to disable the gateways as part of step three. And this really was not a significant “effort,” as gateways have long been pointed out as vulnerable devices.

Step 3: Triggering the cyberattack

In the afternoon two days before Christmas, as stated by an operator, the mouse moved on the human-machine interface (HMI) and started switching off breakers remotely.

When the local operator attempted to regain control of the supervision interface, he was logged off and could not log in again, because the password had been changed (figure 3).

The whole attack only lasted for a couple of minutes. The hacker used the preinstalled malware to remotely take control of the HMI and switch off most of the switchgear on the grid. Additional malware, in particular the custom-developed exploit, was used to prevent the operator from regaining control of the network by wiping out many disks (using KillDisk) and overwriting the Ethernet-to-serial gateway firmware with random code, thus turning the devices into unrecoverable pieces of scrap.

Additional “bonus” activities included performing a distributed denial-of-service attack on the call center, preventing customers from contacting the distributor, and switching off the uninterruptible power supply to shut down the power on the control center itself (figure 4).

This step was obviously aimed at switching off the power for hundreds of thousands of western Ukrainian subscribers connected to the grid. However, most of the effort was spent making sure that the power would not be switched on again: all the custom malware was developed with that objective. Once triggered, the only way for the operator to prevent that outcome was to stop the attack while it was being performed.

But the attack was too fast to allow any reaction; indeed, in a critical infrastructure environment, operator actions may cause safety issues. Therefore, only predefined actions are allowed, and operators have to follow guidelines for taking any action. In the event of an unforeseen operational situation, they are not trained to make decisions on the spot. This was exactly the situation in the Ukrainian case. “Obvious” actions could have stopped the attack (like pulling the cable connecting the OT to the IT network), but untrained operators cannot be expected to take such disruptive steps on their own initiative in a stressful situation where mistakes are quite possible.

Figure 3. Step three of the attack (1)

Figure 4. Step three of the attack (2)

Takeaways

In retrospect, once we know all the details about the cyberattack, it looks easy to detect, given the quite significant network activity and the level of activity taking place on numerous systems.

But it is actually a challenge to figure out exactly what is happening on a network, especially if you do not have a clue about what is “normal” network activity. Once connections to both the Internet and to the OT network are allowed, detecting signs of cyberattacks is difficult because of the volume of traffic. Continuous monitoring with the capability to identify the few suspect packets in the midst of all of the “good” packets is needed. Multiple proofs of concept of such detection using correlated IT and OT detection have been performed and were presented at the conferences GovWare 2016 in Singapore, Exera Cybersecurity days 2016 in Paris, and SEE Cybersecurity week 2016 in Rennes (France).

Yet other means exist, and using IEC 62443-3-3 to scrutinize the Ukrainian distributor security helps to identify all the controls that were missing and that could have prevented the cyberattack.

Methodology to estimate the SL-A

ISA/IEC 62443-3-3 lists 51 system requirements (SRs) structured in seven foundational requirements (FRs). Each SR may be reinforced by one or more requirement enhancements (REs) that are selected based on the targeted security levels (SL-Ts). Evaluating the achieved security levels (SL-As) can therefore be performed:

  • for each SR, checking whether the basic requirement and possible enhancements are met
  • for each FR, the SL-A being the minimum level achieved on all SRs
  • with the overall SL-A evaluation being the minimum level achieved on all FRs

Table 1 summarizes the result of the evaluation on an FR that has few SRs for the sake of illustration.

The table 1 matrix is directly extracted from the IEC 62443-3-3 appendix that summarizes the requirements. As for the Prykarpattya Oblenergo case and for each requirement (basic or RE), we have identified three possible cases:

  • the available information is sufficient to consider the requirement met: ✔
  • the available information is enough to figure out that the requirement was missed: ✘
  • it is not possible to evaluate whether or not the requirement was met: ?

Table 1. Result of the evaluation of the SL-A for FR5

Once filled, table 1 corresponds to the actual evaluation of the FR5 for the case at hand (Ukraine), leading to an SL-A of 2. This means that network segmentation (“restrict data flow”) was implemented for at least the basic requirements and for a few requirement enhancements.

Application to the Ukrainian case

This analysis was performed on all SRs, and two situations were identified:

  • The SR may not be applicable (e.g., requirements about wireless communication in the absence of such media).
  • We may not have direct evidence that the SR was met or missed, but deduction based on typical similar installations and other inputs allows a reasonable speculation about whether the requirement was met or missed.

For instance, we can consider “backup” missing, because disks could not be restored several weeks after the attack. Considering SR 5.2 RE(1), it is reasonable to consider that the secure shell (SSH) connection through the firewall was an exception and that all other traffic was denied. The hacker would not have gone to the trouble of capturing the password if more direct ways to reach the OT network existed.

Out of the 51 SRs, four were deemed “not applicable” (1.6, 1.8, 1.9, and 2.2), and 25 could not be determined (“?”). This is a large share, which means that only about half of the SRs could actually be evaluated. It also favors a higher SL-A, because only evaluated SRs are taken into account, and because by default we consider that an undetermined SR is potentially met.
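As a sketch of this roll-up logic (assuming, for illustration, that the base requirement corresponds to SL 1 and each successive RE raises the level by one; the verdicts below are placeholders, not the article's full evaluation):

    MET, MISSED, UNKNOWN = "met", "missed", "unknown"

    def sr_level(verdicts):
        """Level achieved for one SR: verdicts for the base requirement and each successive RE.
        Unknown verdicts ("?") are treated optimistically as met, as described above."""
        level = 0
        for verdict in verdicts:
            if verdict == MISSED:
                break
            level += 1
        return level

    def fr_level(srs):
        """SL-A for a foundational requirement: the weakest applicable SR drives it."""
        applicable = [v for v in srs.values() if v is not None]  # None marks a not-applicable SR
        return min(sr_level(v) for v in applicable) if applicable else 0

    # Illustrative FR5 ("restrict data flow") input; verdicts are examples only:
    fr5 = {
        "SR 5.1": [MET, MET, UNKNOWN],
        "SR 5.2": [MET, MET],
        "SR 5.3": [MET, UNKNOWN, MISSED],
        "SR 5.4": None,
    }
    print(fr_level(fr5))  # 2
    # The overall SL-A is then the minimum of the seven FR levels.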

Another decision was made in terms of data presentation. Instead of presenting the information with one requirement (basic and RE) per line, as in table 1, we decided to have one line per SR and list the increasing RE on the various columns. Table 2 illustrates the same FR5 evaluation using this mode of presentation.

Table 2. Estimation of the SL-A (FR5)

 

Table 3. Overall estimation of the seven FRs

Eventually, a more condensed view was used without the RE text in order to present the overall picture for all FRs, which would otherwise span several pages. The overall estimated SLs are grouped in table 3.

The results depicted in table 3 are rather bad. Furthermore, half of the requirements could not be evaluated, and, therefore, this view is probably optimistic.

On the right side, the estimated SL-As are listed for the seven FRs. We can see that the SL-As are zero except for:

  • FR5 (restricted data flow): mainly due to the IT-IACS firewall and strict flow control. Complying with this requirement means that traffic between zones on the OT network should be filtered. The Ukrainian attack example demonstrates that this requirement could be reviewed in future updates of the standard:
    • Complying with SR 5.2 does not require one to define zones. As in the Ukrainian case, all OT systems could interact with each other. Note that recommendations about zone definitions are available in ISA/IEC 62443-3-2 that should be used before applying ISA/IEC 62443-3-3.
    • The requirement about traffic filtering between zones is set for SL=1. The return on investment is questionable: the cost and risk of traffic filtering are high, and its effectiveness is limited, as demonstrated by the Ukrainian case. It may make more sense to require detection as soon as SL-T=1 is targeted, and to require active filtering/prevention for higher SLs.
  • FR6 (timely response to events): The very existence of detailed forensic information is the result of minimal logging being in place.

Table 4 shows a detailed analysis for some of the most significant SRs.

Table 4. Specific analysis for some of the most significant SRs

Takeaways

At first, looking at the reports about the various Ukrainian operator security controls, it looked like they had paid significant attention to cybersecurity issues. Indeed:

  • nonobvious passwords were used
  • a firewall with strict data flow restriction was in place
  • significant logging was performed

But, as demonstrated in the SL-A evaluation, most FR security levels were null, because at least one of the SRs was not addressed at all. There is no point in setting up advanced security controls when some basic ones are missing. The weakest link drives the overall security effectiveness down. The fact that advanced security controls are useless if other basic security controls are missing is best illustrated by the configuration of the firewall with a single SSH link requiring nonobvious password authentication. This is typically a painful operational constraint, as allowing direct remote desktop protocol (RDP) access to several systems, or virtual network computing (VNC) connections, would have been easier to use. Unfortunately, these additional constraints did not lead to increased security, because:

  • The lack of IT network supervision did allow extensive network scans, vulnerability searches, and discovery of the allowed SSH link.
  • The lack of strong authentication (two-factor) or local (OT) approval of remote connections made it possible to frequently connect from the IT to the OT network without detection over several months.
  • The lack of OT network intrusion detection allowed extensive OT network scans, vulnerability detection, and transfers of mobile code (malware, exploits).

When deploying security controls, it is essential to apply requirements in a consistent way across all aspects of security: detection, prevention, and reaction. It is best to use a well-designed standard such as IEC 62443-3-3. Do not aim for SL-T=2 or 3 on some FRs if the SL-A is still zero on other FRs, as this would likely be useless.

Which SL would have been required to prevent the attack?

Looking at the issues listed previously, it appears that raising the SL-A to level 2 would have allowed detection of the activity during step two, thus preventing the cyberattack. Plenty of time was available for the post-detection reaction. Additional SL 2 controls, such as strong/local authentication and anti-malware protection, would actually have prevented the specific attack kinematics.

The fact that setting the SL-T at level 2 would have been enough to detect and prevent the attack with several layers of defense may sound surprising to the reader, as this was (quite certainly) a state-sponsored cyberattack, which normally calls for SL-T=3 or even 4 to prevent.

Actually, it is likely that the hacker could have defeated an SL-A of 2 by developing more advanced exploits and using attack vectors other than the Internet, such as mobile media or mobile equipment introduced by rogue employees or third parties. Nevertheless, those additional steps are more complex and expensive, and, because they were not needed, less advanced means were used.

To summarize the takeaways of this cyberattack using IEC 62443-3-3 guidance:

As a mandatory first step, power distribution utilities should aim for SL-T=2, ensuring at least minimal requirements about detection (SR 6.2) are met.

To have several layers of defense, prevention, detection, and time for reactions in anticipation of the most sophisticated attacks, it is best to aim for SL-T=3.

In any case, it is essential to set up security controls in a consistent way to ensure that all FRs have achieved the same SL-A before aiming for a higher SL-T. Otherwise the efforts are useless, as demonstrated by the example at hand.

About the Author
Patrice Bock of Sentryo is the ISA-France technical leader.

Connect with Patrice
LinkedIn

About the Author
Jean-Pierre Hauet has served as ISA-France president and an ISA99 committee voting member, and is chairman of the scientific board at Equilibre des Energies.

Connect with Jean-Pierre
LinkedIn

About the Author
Romain Françoise is CTO of Sentryo.

Connect with Romain
LinkedIn

About the Author
Robert Foley is regional sales director at Siemens Industry USA.

Connect with Robert
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

AutoQuiz: How to Calibrate a High-Range Electronic Pressure Transmitter

The post AutoQuiz: How to Calibrate a High-Range Electronic Pressure Transmitter first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

Which of the following calibration instruments will be the best choice to calibrate a high-range electronic pressure transmitter for a range of 0–1000 psig?

a) decade box
b) squeeze bulb
c) function generator
d) dead-weight tester
e) none of the above

Click Here to Reveal the Answer

A squeeze bulb can only be used to generate pressures for calibration in the range of 0–500 inH2O (~18 psig).

A decade box is used to calibrate temperature transmitters and instruments and cannot be used for pressure calibration.

A function generator is a piece of electronic test equipment or software used to generate different types of electrical waveforms over a wide range of frequencies, and therefore cannot be used for pressure calibration.

The correct answer is D, dead-weight tester. A dead-weight tester apparatus uses known traceable weights to apply pressure to a fluid for checking the accuracy of readings from a pressure gauge or pressure transmitter. Due to the force-multiplying properties of a dead-weight tester, these primary calibration instruments can typically be used to generate accurate test pressures from 15–10,000 psig.

 

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

How IIoT Helps OEM Machine Builders Support Customers and Improve Equipment Availability

This post was authored by Jeff Brown, senior sales director at Advantech and formerly vice president for global IoT and embedded PC sales at Dell.

Industry analysts have long predicted that process industry leaders would be disrupted by “digitally enabled competitors.” To remain competitive, original equipment manufacturer (OEM) machine builders need to digitally transform themselves by embracing the Industrial Internet of Things (IIoT) and big data predictive analytics.

According to a survey conducted by the Aberdeen Group, “best-in-class” manufacturing companies are increasingly using IoT and big data to address and improve their top operational challenges, including:

  • reducing unplanned downtime
  • improving overall equipment effectiveness
  • reducing maintenance costs
  • increasing the return on assets

As the IIoT continues to evolve, there are more and more opportunities for machine builders to gain a competitive advantage, generate new revenue streams, and improve their product development processes through the availability of real-time data. The vital focus is using this data to provide better service and support to the customer and to improve equipment availability through remote monitoring and predictive maintenance models.

When innovative OEMs embed IIoT technology into machines, remote personnel can troubleshoot issues, change operating parameters, and oversee machine operation with supervisory control to avoid possible problems. OEMs can now advise on-site engineers and operators on how best to solve a problem or improve performance. This type of expertise combined with multi-tenant real-time visibility into daily operating conditions can extend the lifetime of machinery and process equipment.

To achieve these significant outcomes, organizations should follow these six steps when planning their connected machines implementation:

Define the business case for machine connectivity

Focus first on establishing clear business objectives for the way new data will be used to digitally transform the business. It is important to understand customers’ key performance metrics to create a competitive advantage. Clearly defining these unique metrics significantly influences customers’ ability to optimize operations while managing risk.

Determine what data is valuable to gather

Connected machines can generate large amounts of real-time data. Managing high volumes of machine data requires appropriate provisioning to accommodate secure network transport and storage. Therefore, it is important to determine what data is valuable to gather based on the business objectives. Begin by picking a specific business objective and let that determine the data that is captured.

Decide the best way to capture data 

Depending on the machines and the connectivity standards, there may be multiple data protocols being used. An ideal connected machine solution should be flexible enough to access data from any industry protocol and scalable enough to interface with a broad variety of industry protocols and data sources.

Develop a security strategy for connectivity 

Security is a key consideration for any IIoT deployment, and it is important to have a security strategy from the start. The first step is ensuring that the data being moved is the most critical to achieving business objectives. The next step is defining comprehensive security policies that determine how IoT-connected devices will communicate. Best-practice deployment safeguards may include the following: block all inbound wireless traffic to the gateway, lock all physical ports on gateways, partition your network of industrial machines for isolation from all other networks, and establish authentication/authorization access controls.

Give the customer the flexibility to distribute analytics 

As the OEM, it is important to help customers establish an advanced analytics foundation based on their specific operations. Take action immediately by detecting and responding to local events at the edge as they happen. A distributed approach allows simultaneous integration of additional data sources in the cloud, enabling remote access to critical data.

Digital transformation by acting on analytics 

Turn understanding into action by integrating connected machine data into business and customer processes. Use newly available data insights to improve operator visibility and move from reactive or fixed-schedule maintenance models to predictive maintenance. Build contextually relevant user experiences for the people who know the machines best through web, mobile, and embedded applications that scale gracefully from smartphone to desktop.

Integrating IIoT technology into shop floor machines offers many advantages to the OEM and their customers. 

About the Author
Jeff Brown is senior sales director at Advantech. He formerly served as vice president for global IoT and embedded PC sales at Dell, where he headed a global sales specialist team to ensure an aligned global channel strategy for OEM IoT and embedded PC solutions. Brown has spent more than 20 years in sales-related positions in the embedded hardware and software industry. He has a BSEE from Marquette University and an MBA from Roosevelt University.

Connect with Jeff
LinkedIn

A version of this article also was published at InTech magazine



Source: ISA News

PIC 2019: Shedding Light on Critical Process Industry Issues

The post PIC 2019: Shedding Light on Critical Process Industry Issues first appeared on the ISA Interchange blog site.

This post was authored by Carlos Melgarejo, a member of the ISA marketing team.

Stars will shine brightly in the Lone Star State 4-6 November as experts in the energy and process manufacturing industries descend upon the Westin Houston, Memorial City.

ISA’s 2019 Process Industry Conference (PIC) will bring together engineers, automation professionals, and oil and gas professionals for a conversation with leading industry experts to discuss solutions to the most urgent challenges facing our changing world.

This three-day conference will be followed by a full day of ISA technical training courses on 7 November, offering yet another professional development opportunity for attendees to expand their industry knowledge.


This year’s keynote speakers are rock stars in their respective industries. Dr. Dennis Ong is a TEDx speaker, architect, and head of 5G Mobility at Verizon Smart Communities. Previously, he was chief architect and director at Nokia, where he led the systems engineering team for the IoT IMPACT platform, which received the “Best IoT Innovation for Mobile Networks” award at Mobile World Congress. At Alcatel-Lucent, Dr. Ong’s team collaborated with three start-ups to create a highly scalable analytics video optimization platform that served 50 million people worldwide.

Dr. Ong’s keynote speech is on 4 November and is titled “5G – Enabling Industrial Mobile Robots, Industrial IoT, and Autonomous Vehicles.”

Another distinguished keynote speaker is Dr. John Thomas, Executive Director of MIT’s Engineering Systems Laboratory. He has researched engineering mistakes, design flaws, and human error to develop more effective methods to prevent them. He teaches classes on system safety, cybersecurity, system engineering software, digital hardware engineering, human factors, and system architecture. Dr. Thomas’ work has been widely adopted throughout the industry, including techniques for developing requirements, anticipating unexpected or dysfunctional interactions in engineered systems, analyzing software-intensive systems, human-centered design processes, and others.

Dr. Thomas will address the audience on 5 November; his topic is titled “A Systems Approach to Safety.”

As an attendee at this can’t-miss event, you’ll be able to:

  • View state-of-the-art products, technology services, and solutions
  • Gain a competitive edge by participating in ISA training courses
  • Attend thought-provoking keynote sessions delivered by industry leaders
  • Participate in a variety of technical tracks and topical sessions
  • Network with peers and make valuable professional connections

Sponsorship and exhibitor opportunities still available!

PIC 2019 provides sponsors and exhibitors the opportunity to meet with a variety of industry decision-makers searching for solutions, products, services, and partners they can trust.

For questions about sponsoring, exhibiting, or to save your space, contact Elena Pitt at 919-323-4023, Richard Simpson at 919-414-7395, or Chris Nelson at 919-990-9265.

Join the conversation at #ISAPIC2019!

PIC 2019 offers a turnkey experience for gaining important foundational knowledge and training, which helps attendees develop hands-on skills and lasting professional development. The event covers critical areas in instrumentation/control, cybersecurity and safety systems, open architecture and infrastructure, subsea automation, and operational improvement.

Young professionals can also benefit greatly from this event. It’s a one-stop shop for gaining important foundational knowledge and training, which will help them develop hands-on skills and lasting professional development expertise.

You can download the preliminary program with the conference topics here.

About the Author
Carlos Melgarejo is a member of the ISA marketing team. He earned a master’s degree in communications and marketing from Southern New Hampshire University, and a bachelor’s degree in broadcast journalism from City University of New York-Brooklyn College.

Connect with Carlos
Email



Source: ISA News

AutoQuiz: What Is Receipt Verification in Industrial Processes?

The post AutoQuiz: What Is Receipt Verification in Industrial Processes? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

Which of the following is not part of receipt verification?

a) comparison of loop checks to loop diagrams
b) match to purchase order and instrumentation
c) correct quantity of operator manuals received
d) manufacturer and model number
e) none of the above

Click Here to Reveal the Answer

Receipt verification is a systematic process by which a site verifies that what was specified and ordered is actually what is received. To do this effectively, the person who performs this function will:

  1. Verify the received item was what was actually purchased by checking the item’s model number against the purchase order (answer B)
  2. Make sure the correct quantity of each item has been received (answer C)
  3. Verify that the manufacturer and model number match that shown on the specification for the item (answer D)

Loop checks (answer A) are completed in the same phase of the project (deployment phase), but only after receipt verification and installation tasks are completed. Therefore, A is the correct answer, as it is the only choice that is not part of the receipt verification task.

Reference: Nicholas Sands, P.E., CAP, and Ian Verhappen, P.Eng., CAP, A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

Pipeline Leak Size: If We Can’t See It, We Can’t Detect It

The post Pipeline Leak Size: If We Can’t See It, We Can’t Detect It first appeared on the ISA Interchange blog site.

This guest blog post is part of a series written by Edward J. Farmer, PE, ISA Fellow and author of the new ISA book Detecting Leaks in Pipelines. To download a free excerpt from Detecting Leaks in Pipelines, click here. If you would like more information on how to purchase the book, click this link. To read all the posts in this series, scroll to the bottom of this post for the link archive.

Where is the leak detection performance frontier? Clearly, there can always be some event that qualifies as a “leak” but does not produce observable changes in monitored parameters that are adequate for detection. In general, it is possible to quickly detect “large” leaks (let’s say 1 percent or more of mainline flow). Smaller leaks can often be detected over longer periods of time.

There will, however, always be some leak (real or hypothetical) smaller than any detection threshold. Of course, performance depends on observability and the methodology to effectively see and interpret what is observed. Observation depends on the pertinent factors that can be monitored, the equipment used for monitoring (e.g., the pressure, flow, and temperature instruments), the ability to collect observations for processing, and the ability to assess and annunciate the meaning. All these performance-related issues involve design and equipment factors.

Pipelines carry large amounts of fluid at economic speeds over distance. In design, pumps or compressors are sized to produce the pressure and flow rate that provides the best economics for a particular situation. Lots of thought goes into trade-offs. This often involves flow rates (and the pressure differentials required to achieve them) and active storage matched with production and consumption. Often, all these decisions were made years ago for a far different application and the contemporary engineering work involves finding some financially optimal way to adapt to a current situation.

Leaks produce flow at locations and times associated with the leak-precipitating events. Essentially, the occurrence, size, location, stability, and impact are all stochastic. In a perfect world, where everything works when and as intended, the detection of a leak depends on the system devised to observe it.

There is no deterministic method for attaching significance to that which we cannot see. Leak flows depend on the leakage mechanism (typically the shape and environment of the leakage-producing defect), the pressure inside and outside the pipe, and the fluid characteristics at the conditions encountered. An irregular defect, such as something resulting from corrosion, might produce a much smaller flow than a similar orifice-like defect.

Depending on the pressures and the fluid involved, velocity in the pipeline might be a tiny fraction of the sonic velocity that is often produced when gas or some volatile component is escaping. Sometimes environmental conditions (e.g., freezing) change mechanical components in the leakage path (e.g., frozen earth cools and seals the leakage plume). The point is, as pipes become bigger the flow rate through a leak can be limited to a very small percentage of the pipeline flow rate.

All of this discussion pertains to liquid, gas, and multi-component flows. Everything described herein happens in the same way, only the numbers change. For the details, I’ve always liked The Crane Valve Company’s Technical Paper 410: Flow of Fluids through Valves, Fittings and Pipe.

In the case of multi-component hydrocarbon fluids (crude oil, NGL, NG, that sort of thing), various components have different vapor pressures. Among other things, a leak exposes a fluid running at pipeline pressure at an economic velocity toward its destination to a region of markedly lower pressure. First to leave the stream are the more volatile components.

It’s not hard to imagine an NGL or crude oil stream in which the lightest and highest vapor pressure component is methane which, at the lower pressure, flashes to its gaseous state and moves toward the region of ever-lower pressure and eventually outside the pipeline. If the leakage mechanism is small, flow on the pipeline is observed as unchanged, or nearly so. Leakage flow, though, involves a path through the mechanical restrictions of the leak itself and the environment it encounters on its way to “outside.” Flow accelerates toward the lowest pressure until it reaches, at some point along the way, its sonic velocity.

It also expands, reducing its density, and it cools as a result of the expansion. Generally, there is no way to know the effective area through which this sonic flow is occurring, although there are various shortcuts, which we won't dwell on here, for assessing such things. Usually, the question on the table comes the other way around: will a leak of a certain effective size (like an orifice bore) be detectable or not? What isn't going out the leak is going down the pipeline, from the pump or compressor station and its package of instruments to some receiving location and its similar package of instruments. Is this difference large enough to be observed?

Leak flow involves sonic velocity through some effective leakage area, and sonic velocity depends on temperature. The amount of mass flow involved depends on the leaking component itself and its density at the place where sonic velocity occurs. Essentially, the volumetric leak rate is the effective area (a) times sonic velocity. The mass leak rate involves the density at the location where sonic velocity flow passes through the leak.
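
As a rough illustration of the sonic-velocity piece, here is a minimal sketch (my own, not from the book) using the ideal-gas speed of sound, c = sqrt(γRT/M), applied to methane. The temperatures, the 18 mm effective hole, and the gas properties are purely illustrative assumptions; a real choke-point calculation needs proper thermodynamics.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def sonic_velocity(gamma, molar_mass_kg, temp_k):
    """Ideal-gas speed of sound: c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * temp_k / molar_mass_kg)

# Illustrative values for methane (assumptions, not from the book)
gamma_ch4 = 1.31    # ratio of specific heats
m_ch4 = 0.01604     # molar mass, kg/mol
for temp in (288.0, 250.0):  # roughly ambient, and cooled by expansion
    vs = sonic_velocity(gamma_ch4, m_ch4, temp)
    print(f"T = {temp:.0f} K  ->  sonic velocity ~ {vs:.0f} m/s")

# Volumetric leak rate through an assumed 18 mm effective hole: q = a * vs
d_leak = 0.018                    # m, assumed effective diameter
a_leak = math.pi * d_leak**2 / 4  # m^2
vs = sonic_velocity(gamma_ch4, m_ch4, 250.0)
print(f"q ~ {a_leak * vs:.3f} m^3/s at choke conditions (multiply by rho' for mass rate)")
```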

If you would like more information on how to purchase Detecting Leaks in Pipelines, click this link. To download a free 37-page excerpt from the book, click here.

Line flow remains its pedantic “economic” velocity multiplied by the cross-sectional area of the pipe. Mass flow involves, of course, the fluid density at pipe conditions.

The conditions we hope to observe on the main line change by the amount of the leak. Can we estimate the impact of a leak? Let's say the intended flow velocity in the pipeline is V m/s, the diameter is D meters, and the density is ρ kg/m³. The area of flow would be A = πD²/4 m². The volumetric flow rate would be Q = V·A m³/s. The mass flow rate would be M = Q·ρ kg/s. The leak will flow through a hole with an effective (orifice) diameter of d meters, which will have an effective area of a = πd²/4 m².

At sonic velocity (vs) the volumetric flow rate through the leak would be q = vs·a m³/s. The mass flow rate would depend on the density at sonic conditions, which we'll call ρ′; so m = q·ρ′ kg/s. Depending on the instruments involved we may be interested in mass or volumetric flow changes, but it is generally easier to stay with mass flow, since mass is conserved while volume depends on unknown differences in conditions.

The mass flow rate through the leak is m, and m/M is the leak's fraction of the pipeline mass flow rate. In more interesting terms, the ratio works out to (πd²/4 · vs · ρ′) / (πD²/4 · V · ρ). This allows us to see the difference as a series of ratios (a short numeric sketch follows the list):

  • the ratio of the flow areas, essentially the square of the ratio of the diameters: (d/D)²
  • the ratio of the velocities: (vs/V)
  • the ratio of the densities: (ρ′/ρ) …
  • or, multiplied out: (d² vs ρ′) / (D² V ρ)
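
To make the ratio concrete, here is a minimal sketch in Python that evaluates (d/D)² · (vs/V) · (ρ′/ρ) for a hypothetical gas line. The numbers are purely illustrative assumptions of mine, not the author's test data.

```python
def leak_mass_fraction(d, D, v_sonic, v_line, rho_leak, rho_line):
    """Leak mass flow as a fraction of mainline mass flow:
    (d/D)**2 * (v_sonic/v_line) * (rho_leak/rho_line)."""
    return (d / D) ** 2 * (v_sonic / v_line) * (rho_leak / rho_line)

# Purely illustrative numbers (assumptions, not the author's data)
d = 0.018        # m, effective leak diameter
D = 0.40         # m, roughly a 16-inch line
v_sonic = 400.0  # m/s, escaping gas at the choke point
v_line = 16.0    # m/s, "economic" line velocity
rho_leak = 6.0   # kg/m^3, expanded gas near the leak
rho_line = 60.0  # kg/m^3, dense gas at line pressure

frac = leak_mass_fraction(d, D, v_sonic, v_line, rho_leak, rho_line)
print(f"leak ~ {100 * frac:.2f}% of mainline mass flow")  # ~0.51% with these numbers
```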

Note that with fluids like crude oil:

  • the density of what's leaking out (ρ′) is probably a tiny fraction of the density of the main flow in the pipeline.
  • the sonic velocity of the escaping fluid is probably close to an order of magnitude greater than the line flow velocity.
  • the effective diameter of a leak is hopefully very small compared to the pipeline diameter.

The reason this discussion is so long and admittedly tedious is that the parameters that affect sensitivity are hard to generalize. When you work with particular fluids and common situations it all becomes much easier. To provide an illustrative example, I did some work a few years ago involving an 18mm effective leak size in pipelines carrying a heavy natural gas. Testing showed we could easily detect such a leak hole in pipe with diameters up to about 16 to 24 inches. In larger pipe, the mainline flow was greater but flow through the leak mechanism was more or less unchanged, so as the diameter increased, the leak's percentage of mainline flow decreased as the square of the pipe diameter.

Conditions for these tests were that the effective leak area was about 0.08 percent of the flow area of the pipe. Leak flow velocity was about 25 times the velocity of flow in the pipe. Density was much lower in the leaking fluid than in the pipeline. It's hard for the high (sonic) velocity to make up for the lower density and the markedly smaller area of flow.

In one instance we calculated the percentage of leak flow from a 48-inch pipe and found the expected changes to be over an order of magnitude smaller than anything that could be seen by the instruments in use.
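
The square-law trend is easy to see numerically. The sketch below holds the leak mass flow constant and recomputes its share of mainline flow for a few pipe sizes; the leak rate, line velocity, and density are assumed values for illustration only, not the figures from the actual tests.

```python
import math

leak_mass_rate = 0.63  # kg/s, assumed constant leak mass flow (illustrative)
v_line = 16.0          # m/s, assumed line velocity
rho_line = 60.0        # kg/m^3, assumed line density

for d_inches in (16, 24, 48):
    D = d_inches * 0.0254                               # pipe diameter, m
    mainline = math.pi * D**2 / 4 * v_line * rho_line   # mainline mass flow, kg/s
    pct = 100 * leak_mass_rate / mainline
    print(f"{d_inches:2d}-inch pipe: leak is {pct:.3f}% of mainline flow")
# The same leak falls from roughly 0.5% of flow in a 16-inch line to
# well under 0.1% in a 48-inch line, i.e., it scales as 1/D^2.
```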

Think about this. The leak was assumed equivalent to an 18mm hole. Under the conditions specified, pressure outside the pipe would be ambient. As gas exits the much higher-pressure pipeline and enters the leak (and the area immediately beyond it) pressure drops from the pipe pressure down toward and even temporarily below ambient pressure. Expansion of the compressed gas reduces the temperature, and the gas reaches sonic velocity somewhere in the conditions it encounters. That is the essential flow control situation that determines what can go out through the leak.

This can be a tricky situation because the shape of the leak hole has a profound effect on flow through it. If the hole is orifice-like, with nice square edges, the flow coefficient for the hole might be 0.5, meaning half the area is effective for producing flow. If it's a slit or crack, the portion of the hole area useful for conveying leak flow could be much smaller. No matter what, though, the orifice coefficient will not be greater than 1, so let's go with that as the upper bound. Good observation produces sensitive leak detection.
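
As a small worked illustration (hole size and coefficients assumed for the example): a flow coefficient simply scales the geometric hole area down to the area that actually passes flow.

```python
import math

d_hole = 0.018                           # m, geometric hole diameter (assumed)
geometric_area = math.pi * d_hole**2 / 4 # ~254 mm^2

# Illustrative flow coefficients: 1.0 is the upper bound, ~0.5 for a
# square-edged orifice, much less for a tight crack or slit.
for label, cd in (("upper bound", 1.0), ("square-edged orifice", 0.5), ("narrow crack", 0.1)):
    print(f"{label:22s}: effective area = {cd * geometric_area * 1e6:.0f} mm^2")
```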

Suppose the pipe is bigger, let's say 48 inches in diameter. While the leak mechanism will produce the same leak flow rate regardless of the pipe diameter, that rate becomes a smaller percentage of mainline flow. The same leak on bigger pipe becomes undetectable because it becomes impossible to observe the changes it produces using instrument systems designed for the greater flow in the larger pipe.

There may also be issues with noise due to the process, measurement techniques, signal transmission, analysis resolution, and otherwise unavoidable sources of ambiguity and error. Of course, good engineers and careful application engineering can carry us past this impediment, but that may involve inventing hitherto unknown measurement equipment and monitoring techniques, which may or may not be economically or operationally attractive.

Performance specifications come from various points of view: some viable, some not. You can't design a way around "impossible," and you may never get the price required to achieve "extremely difficult." Depending on the market, an abundance of hope might make you a defendant in a damages suit resulting from an undetected accident. The same sort of thing can occur when the plaintiff's "experts" determine that the instruments you used were delicate compared to the bedrock-like equipment other engineers use on pipelines. Of course, those other instruments wouldn't detect the changes you need to see, but that's a subtle point for a liability court jury.

In some contractual situations it is easy to suspect that specifications are written to require impossible-to-achieve performance, discouraging competent vendors and encouraging those confident that a share of the profits delivered to the right person will eliminate blame regardless of how things work out. I've never found it a good idea to count on the situational integrity of a demonstrably dishonest person.

So, where is the frontier? We must start with what we can observe of the things that happen. This involves instrumentation with the necessary performance, appropriate observation points, and good engineering practice to make it all work. We also need a keen understanding of what must and does happen hydraulically, and how it might be observed.

About the Author
Edward Farmer, author and ISA Fellow, has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. He has authored three books, including the ISA book Detecting Leaks in Pipelines, plus numerous articles, and has developed four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. During his long industry career, he established EFA Technologies, Inc., a manufacturer of pipeline leak detection technology.




Source: ISA News

Improved Process Control Strategy for Crude Oil Fractional Distillation [technical]

The post Improved Process Control Strategy for Crude Oil Fractional Distillation [technical] first appeared on the ISA Interchange blog site.

This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.

Abstract: In recent years, interest in petrochemical processes has been increasing, especially in the refining area. However, the high variability in the dynamic characteristics of the atmospheric distillation column makes it challenging to obtain quality products. To improve distillate quality in spite of changes in the input crude oil composition, this paper details a new control strategy for a conventional crude oil distillation plant, defined using formal interaction analysis tools. The process dynamics and the proposed control are simulated in the Aspen HYSYS dynamic environment under realistic operating conditions. The simulation results are compared against a typical control strategy commonly used in crude oil atmospheric distillation columns.

Free Bonus! To read the full version of this ISA Transactions article, click here.

Enjoy this technical resource article? Join ISA and get free access to all ISA Transactions articles as well as a wealth of other technical content, plus professional networking and discounts on technical training, books, conferences, and professional certification.

Click here to join ISA … learn, advance, succeed!

Copyright © 2019 Elsevier Science Ltd. All rights reserved.



Source: ISA News