Thank You Sponsors!

CANCOPPAS.COM

CBAUTOMATION.COM

CONVALPSI.COM

DAVISCONTROLS.COM

ELECTROZAD.COM

EVERESTAUTOMATION.COM

HCS1.COM

MAC-WELD.COM

SWAGELOK.COM

THERMON.COM

VANKO.NET

WESTECH-IND.COM

WIKA.CA

Webinar Recording: Uncertainty in Calibration

The post Webinar Recording: Uncertainty in Calibration first appeared on the ISA Interchange blog site.

This ISA webinar on uncertainty in calibration was presented by Rich Kohl, global accounts team program manager at Honeywell, and by Ned Espy and Roy Tomalino of Beamex.

Did you know you can calculate your confidence level in a measurement? Watch this webinar to learn how to use statistics in measurement, also known as uncertainty analysis, to determine an appropriate uncertainty value, calibration intervals, and tolerances without a PhD in mathematics. The presenters explain the essential components of uncertainty and discuss when traditional error expressions, such as percent of span, are appropriate and what to use with digital technology and smart instruments.
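
For readers who want to try the arithmetic before watching, here is a minimal sketch in Python of the standard root-sum-of-squares combination of uncertainty components. The component names and magnitudes are hypothetical, purely for illustration; real budgets come from your own calibration data and reference standards.

    import math

    # Hypothetical standard uncertainty components for a pressure
    # calibration, all expressed in kPa.
    components = {
        "reference standard": 0.010,
        "resolution":         0.003,
        "repeatability":      0.006,
        "environment":        0.004,
    }

    # Combined standard uncertainty: root-sum-of-squares of the components.
    u_c = math.sqrt(sum(u**2 for u in components.values()))

    # Expanded uncertainty with coverage factor k = 2, roughly 95% confidence
    # assuming a normal distribution and large effective degrees of freedom.
    U = 2 * u_c
    print(f"combined u_c = {u_c:.4f} kPa, expanded U (k=2) = {U:.4f} kPa")

Running this prints a combined standard uncertainty of about 0.0127 kPa and an expanded uncertainty of about 0.0254 kPa; the webinar covers how to build such a budget properly.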

About the Presenter
Rich Kohl, global accounts team program manager at Honeywell, has focused on calibration and measurement for more than 25 years.

Connect with Rich
LinkedIn

About the Presenter
Ned Espy has been promoting calibration management with Beamex for more than 20 years and has over 27 years of direct field experience in instrumentation and measurement applications. Today, Ned provides technical and application support to Beamex clients and partners throughout North America.

Connect with Ned
LinkedIn

About the Presenter
Roy Tomalino has been teaching calibration management for 14 years. Throughout his career, he has taught on four different continents to people from over 40 countries. His previous roles include technical marketing engineer and worldwide trainer for Hewlett-Packard and application engineer with Honeywell. Today, Roy is responsible for all Beamex training activities in North America.

Connect with Roy
LinkedIn | Email

 



Source: ISA News

Process Plants Need New Approaches to Leverage Actionable Information From Big Data

The post Process Plants Need New Approaches to Leverage Actionable Information From Big Data first appeared on the ISA Interchange blog site.

This guest blog post was authored by Michael Risse, vice president at Seeq Corporation

In May 2011, the management consulting firm McKinsey & Company released a research report titled Big data: The next frontier for innovation, competition, and productivity. This was a seminal report, but it certainly did not cause the big data explosion. The concept of big data is as old as computing and has long been used to describe data volumes exceeding the cost or capability of existing systems.

The term “big data” began taking on its more modern meaning in 2004, after Google released two papers on computing models for dealing with large data sets. It just happened that the McKinsey report was well timed to the explosion of awareness and interest in big data. After staying flat for many years, the number of Google searches for “big data” tripled in the 12 months after the report was released and had increased tenfold after 36 months.

In a more recent McKinsey report, the figure 1 chart drills into data sources by industry. Manufacturing is far and away the leader, with 1,812 petabytes of data produced: 1,072 from discrete manufacturing and 740 from process manufacturing. These numbers have grown exponentially over the past five years or so, as the costs of creating, collecting, and storing data have decreased exponentially.

Note that the figure 1 data is presented in petabytes, a volume of data that was considered almost science fiction just a decade ago. Wired magazine wrote an article in 2006 surveying the explosion in data volumes and the innovative techniques available to gain insights from this data, declaring the arrival of the “Petabyte Age.” Less than a decade later a petabyte is, if not trivial, at least an unremarkable volume of data to store and process, with terabytes relegated to memory sticks handed out as trade show trinkets. But enough about the origins of big data, let’s look at how it is affecting manufacturing, specifically on the process side.

Big data in the process industries 

Process industries can reap substantial benefits from intelligent big data implementations. As figure 2 illustrates, McKinsey sees a $50 billion opportunity in upstream oil and gas alone, and other process industries can expect similar outcomes.

In addition to having the largest volume of data compared to other sectors of the economy, manufacturing organizations also have the longest history of generating and storing large volumes of data. The digitization efforts of plants implementing programmable logic controllers, distributed control systems, and supervisory control and data acquisition systems in the 1970s and ’80s gave the industry a head start over later data generators.

This is why some vendors refer to manufacturing sensor data as “the original big data,” or claim they have been supporting big data for years. These claims obscure some important facts about big data. It is not just about data volume, although it is certain that data volume in manufacturing will continue to grow. Pervasive sensor networks, low-cost wireless connectivity, and an insatiable demand for improved performance metrics will all continue to drive increased data volumes as the big data’s partner in hype, the Industrial Internet of Things (IIoT), continues its momentum.

Despite the long history and large volumes of data associated with process manufacturing, the reality is that manufacturing organizations are considered laggards in exploiting big data technologies. Big data solutions in other industries are easy enough to find: credit card companies with fraud-detection algorithms, phone companies with customer churn analytics, and websites with product recommendation engines.

In our lives as consumers, we interact with the implications of these big data solutions every day in experiences as simple as using Google. Yet in manufacturing plants, the experience with big data is a mixed result of slow adoption, limited accessibility, and confusion.

Here are some of the main reasons that process plants have often been slow to exploit the potential of big data:

  • Big data implementations can be expensive and resource intensive, so they are frequently initiatives led by information technology (IT), and typical implementations of big data in manufacturing organizations follow this model. These solutions usually do not fit the needs of front-line engineers or analysts within manufacturing plants, so for ad hoc investigations of assets, yield, and optimization, heavyweight big data systems are a mismatch with front-line requirements.
  • In industries where expected plant and automation system lifetimes are measured in decades, new technologies that require substantial modifications to existing systems will not easily or quickly be introduced into brownfield operations. Instead, new technologies must be engineered to fit existing plants and context, which means vendors need to create bridges from innovative technologies to existing infrastructures. Only now are vendors beginning to offer software solutions bringing big data innovations to the employees as an application experience or to the plant floor with modern predictive analytics.
  • Confusion often blocks broader acceptance of big data innovations, for example, the assumption that “big data equals cloud.” The public cloud (an Amazon, Microsoft Azure, or SAP HANA platform) may offer the best price or performance model for data collection, storage, and processing, but there is nothing about big data that requires a cloud-based model. Other misunderstandings include “I’m already doing big data, because I use statistical process control, advanced process control, principal component analysis [PCA], or some other analytics evaluation,” or the assumption that big data is neither new nor relevant to the needs of process manufacturing plants, harkening back to the “we’ve been doing big data forever” claim noted earlier.

 

Figure 1. When it comes to generating big data, no sector of the economy can match manufacturing.
Source: McKinsey

 

Framing the issues

To overcome the confusion and blocking issues associated with big data in manufacturing, a new approach is needed, as outlined below. First, it is helpful to recognize that “big data” serves as an umbrella term for the whole phenomenon, but is used in three distinct contexts.

  • Big data is the expansion in data volume spurred by ongoing reductions in the cost of data creation, collection, and storage. When data was expensive, less was collected. As generating and storing data has gotten cheaper, the quantity of data has grown. The numbers are staggering, with more data stored in just minutes now than during multiple-year periods in the 1960s.
  • Big data is the application of technologies, particularly the innovation in solutions to manage, store, and analyze data. This includes new hardware architectures like horizontal scaling, new computing models like the cloud, new algorithms like MapReduce (the core of the Hadoop ecosystem), new specialists like data scientists, and new software platforms like the 100-plus NoSQL database offerings. A brief reading of any technical publication will quickly show options and offerings greatly exceeding the grasp of all but the most committed observer. (A toy map/reduce sketch follows this list.)
  • Finally, big data is the promise, namely the expectation, of business executives that value will be created by combining the volume and technologies of big data to produce insights that improve business outcomes. This could also be the pressure of big data, the demand that business leaders “check the box” and show the organization is tapping big data to achieve better results. These expectations and market pressures cause many companies to store vast amounts of data without a clear idea of how to derive value from it, which often leaves them data rich but information poor (figure 3).
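
To make the MapReduce idea concrete, here is a toy map/shuffle/reduce pass written in plain Python over invented historian records. The sensor tags and values are hypothetical; a real Hadoop job would distribute each phase across many machines, which is the point of the model.

    from itertools import groupby
    from operator import itemgetter

    # Toy (tag, value) records standing in for process historian data.
    readings = [("FT101", 3.2), ("TT205", 88.1), ("FT101", 3.4), ("TT205", 87.6)]

    # Map phase: emit key/value pairs (here the records already are pairs).
    mapped = list(readings)

    # Shuffle phase: group intermediate pairs by key, as the framework would.
    mapped.sort(key=itemgetter(0))
    grouped = {tag: [v for _, v in group]
               for tag, group in groupby(mapped, key=itemgetter(0))}

    # Reduce phase: collapse each key's values to one aggregate per sensor.
    means = {tag: sum(vals) / len(vals) for tag, vals in grouped.items()}
    print(means)  # {'FT101': 3.3, 'TT205': 87.85}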

With these three points as a framework, we turn next to the question of what is really different today from the past. If process manufacturing firms have been storing vast amounts of data, what is new now or will be different in the future?

 

Figure 2. Process industries have much to gain from intelligent big data implementations,
with a $50 billion opportunity in upstream oil and gas alone.
Source: McKinsey

 

Big data innovations

The first innovation is economic: growth in data volumes and types is directly correlated with the falling cost of data generation and collection, so new solutions powered by big data will cost less in aggregate than previous generations of solutions. This is not always apparent as the market transitions to this new model, but what is expensive now, whether in cost (such as data expertise) or in impact (such as data movement and architectures), will become less expensive. A partial list of factors driving prices down:

  • the hypercompetitive cloud computing market
  • open source software
  • commodity hardware and storage systems
  • the availability of software and expertise to derive insights from new data
  • the general proliferation of commercial off-the-shelf hardware and software

The second significant big data innovation is the range and depth of algorithms and approaches available to organizations to find meaning in their data sets. Just as big data is a neighbor to the IIoT phenomenon, it is also tied to advances in machine learning and artificial intelligence. Therefore, advances in cognitive computing (a composite term covering machine learning, artificial intelligence, and deep learning) will become available to process manufacturers to accelerate and focus their analytics efforts. And the algorithms and tools available today, including regression, PCA, and multivariate analysis, will be made easier and more accessible to engineers, accelerating their efforts via software that converts big data analytics into easy-to-use features and experiences.
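
As a taste of how accessible these tools have become, the sketch below fits an ordinary regression to synthetic process data with scikit-learn. The variable names and numbers are invented for illustration; the point is that a few lines now stand in for what once required a statistics specialist.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Synthetic historian pull: relate product quality to two process variables.
    rng = np.random.default_rng(1)
    temp = rng.uniform(150, 200, 200)      # reactor temperature, degC
    flow = rng.uniform(10, 30, 200)        # feed flow, m3/h
    quality = 0.8 * temp - 1.5 * flow + rng.normal(0, 2.0, 200)

    X = np.column_stack([temp, flow])
    model = LinearRegression().fit(X, quality)

    print("fitted coefficients:", model.coef_)   # close to [0.8, -1.5]
    print("prediction at 180 degC, 20 m3/h:",
          model.predict([[180.0, 20.0]])[0])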

The third big data innovation is a more flexible model for analytics across data sources. This could be required because the data is consolidated and indexed as a single unit or because disparate data silos need to be connected and accessed more easily. In either case, the desired outcome is the same: unfettered integration and access to disparate data sources and types. “Contextualization” is the typical term for this capability in manufacturing environments; other terms for this flexibility in data types and architectures include data fusion, data harmonization, and data blending.
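
A simple illustration of contextualization, assuming pandas: blend sparse lab results with dense historian readings by timestamp, leaving each silo where it is. The tags, times, and values are hypothetical.

    import pandas as pd

    # Silo 1: dense time-series readings from the historian.
    sensor = pd.DataFrame({
        "time": pd.date_range("2019-06-01 08:00", periods=6, freq="10min"),
        "temp": [181.2, 180.7, 182.3, 183.1, 182.8, 181.9],
    })

    # Silo 2: sparse lab quality results entered at irregular times.
    lab = pd.DataFrame({
        "time": pd.to_datetime(["2019-06-01 08:25", "2019-06-01 08:55"]),
        "purity": [99.1, 98.7],
    })

    # Blend: attach each lab result to the most recent sensor reading,
    # giving the lab value its process context.
    blended = pd.merge_asof(lab, sensor, on="time")
    print(blended)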

 

Figure 3. Many process industry firms find themselves awash in
data but thirsting for information.

 

Data lakes

Big data solutions must include a flexible data model to accelerate and enable analytics across any set of data sources. There are as many models for accomplishing this as there are organizations, but the most basic types are a data lake and a distributed model.

A data lake is the modern instance of a data warehouse, except the data is of many types and typically indexed or architected for use by data scientists or developers. Data lakes are usually the domain of centralized IT departments that can afford the infrastructure and expertise, and that manage the governance, security, and data models of these corporate-wide solutions.

Not every company needs or can afford this top-down approach, however, so an alternative is simply to enable connections across data silos in situ. This enables engineers to tap any resource on demand. The second approach is more bottom-up and user driven, because there is no data lake required.

Data into information

As ever more data is generated, there are often fewer experts and resources available to inform and interpret the data. The retirement of seasoned engineers and the squeezing of budgets mean the big data equation in many industries is “more data with increased demands for analysis and information with fewer resources.” Can the gap between engineers and data be closed, such that executives can start to see real results using the limited personnel resources on hand?

The software innovations required to deliver on this promise are similar to those already in place in numerous commercial software apps and web-based tools:

  • accessibility via a browser or app to provide a web-based interface
  • usable by process experts and manufacturing engineers
  • lightweight deployment that does not require data duplication or extract-transform-load operations
  • designed for time series data analysis in process plant and other manufacturing applications
  • features that apply machine learning and other advanced algorithms to simplify analysis
  • interactive, visual representation of data and results (figure 4)
  • ability to quickly iterate and to combine one result with another
  • ease of collaboration with colleagues within and across companies

 

Figure 4. Data analytics provide engineers with visual representations of data to help
them create actionable information.

 

Analytics in action

Asset optimization, overall equipment effectiveness, and uptime are not new concepts. There have been generations of preventative maintenance, enterprise asset management, asset performance management, computerized maintenance management, and other systems offered to ensure higher availability of critical resources in production facilities. What is changing with big data is that asset expertise is now available as a service from automation and equipment vendors.

There are examples of this already, but now the cost of data collection, storage, and analytics will make these offerings more accessible. In addition, the new services will have more advanced algorithms and be run across more data to improve the accuracy of the system.

For example, who knows the most about the performance of your turbine: the manufacturer, the local sales rep, or you? The easy answer is the manufacturer. A well-established principle in machine learning is that accuracy is driven most by the amount of data available, and the manufacturer sees data from every installed unit rather than from a single site. Instead of relying on an on-site engineer with limited time and capacity to become an expert in an asset class and its history, organizations can tap the specific expertise of vendors to manage their most critical assets worldwide.

And for organizations that do have the capacity and resources to develop in-house expertise, their engineers can take advantage of both vendor expertise and local process context to address asset optimization within their manufacturing facilities. Remote monitoring, predictive analytics, and field management systems will therefore become an increasing part of the budget and operational plans for asset-centric organizations, which of course describes many process industry firms.

Big data represents the present and future of data management for all industries. Given the quantity of data and the long history of data centricity, big data has particular relevance to the process industries. Being able to see through the hype and understand how data and analytics can improve outcomes is a critical step for engineers and plant managers alike to realize actionable insights and improve production.

About the Author
Michael Risse is a vice president at Seeq Corporation, a company building productivity applications for engineers and analysts that accelerate insights into industrial process data. He was formerly a consultant with big data platform and application companies, and prior to that worked with Microsoft for 20 years. Risse is a graduate of the University of Wisconsin at Madison and lives in Seattle.

Connect with Michael
LinkedIn | Email

 

A version of this article also was published in InTech magazine.



Source: ISA News

AutoQuiz: How to Troubleshoot a System Failure Due to a Bad Component

The post AutoQuiz: How to Troubleshoot a System Failure Due to a Bad Component first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

In troubleshooting a system failure, a suspected bad component is replaced with a known good component. This does not correct the problem. What is the next best course of action?

a) build software traps involving additional logic and code to detect the problem
b) further analyze the problem and collect additional data as necessary
c) set additional alarms to pinpoint the problem
d) retain a consultant who specializes in this type of repair
e) none of the above

Click Here to Reveal the Answer

Troubleshooting is often an iterative process. If the proposed solution is not the correct one, further analysis and data collection are warranted.

Answers A and C are not the best answers because not all component failures involve the process control system or its associated program code and alarms. Sensors, actuators, and other field devices all have replaceable components that can fail. Code and alarms cannot pinpoint a transmitter failure to the analog output circuit, for example.

Answer D is not the best answer because this approach would be very expensive, as there are dozens of specific failure types that may occur. Also, basic troubleshooting does not require a specialized consultant; it can be done effectively by the control technician using the ISA logical and analytical approach to troubleshooting.

The correct answer is B, “further analyze the problem and collect additional data as necessary.”

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

What Should a Plant Manager Do During the First 90 Days on the Job?

The post What Should a Plant Manager Do During the First 90 Days on the Job? first appeared on the ISA Interchange blog site.

This guest blog post was written by Bryan Christiansen, founder and CEO at Limble CMMS. Limble is a mobile-first, modern computerized maintenance management system application designed to help managers organize, automate, and streamline their maintenance operations.

It’s often said, “You only get one chance to make a good first impression.” The adage may be a cliché, but it carries a great deal of truth, and it is especially applicable to plant managers during their first three months on the job. The impression a plant manager makes during the critical initial 90-day period can easily spell the difference between success and failure of that person’s tenure.

Below are five “must-do” activities that new plant managers should build into their initial plans to ensure everything kicks off in the right direction. Of course, the specifics of each task will vary according to the particular position, industry, business situation, and plant size. Nevertheless, completing these activities will go a long way toward helping every new plant manager succeed.

1) Introduce yourself and be visible

Get started on the right foot by introducing yourself to as many personnel as possible, and try to remember as many names as you can. One of your first tasks should be to hold a company-wide meeting to introduce yourself to the entire plant, give a brief overview of your background, and present a broad outline of your strategy and objectives as you envision them.

These wide-ranging plans should be consistent with the directions given by your hiring committee or manager. This is also an excellent time to announce any changes in responsibilities that have already been made. Be sure to quell any anxiety about layoffs or personnel changes.

2) Gather first-hand information

During the first ninety days, it is vital to spend 70-80% of your time out on the plant floor. Be visible as much as possible and in all plant areas. This is the time to meet operators, learn about their jobs, listen to their complaints, and begin developing a list of problems and possible solutions.

During this time, it is crucial to project honesty, integrity, and trust, and to set expectations. If you will have an open-door policy, discuss it with employees. Hold meetings with small groups by department or manufacturing line, ask for suggestions, and reinforce the company’s strategy and objectives. If small, easily solved problems are identified, fix them immediately.

As you circulate through the plant production areas and departments, consider creating a SWOT analysis (strengths, weaknesses, opportunities, and threats). Above all, listen to your employees, who will appreciate that you care about their particular situations and concerns. Pay particular attention to equipment utilization lost to equipment failure. In many plants, the lack of adequate maintenance planning and execution offers significant opportunities for both quick and long-term improvements that can significantly lower operating costs.

3) Study and understand plant financial data

Meet with accounting, finance, and sales personnel to review and understand the plant’s financial data. It is likely that during the hiring process you were advised of the plant’s overall economics, but now is an excellent time to look at the data itself, how it is collected, the components used, and overall financial KPI trends.

Data such as margins, costing, overhead, sales, customer satisfaction, WIP, inventory turns, and other vital measures should be reviewed. This is also a good time to delve into the details, as these figures will not only be part of how you are measured but are also critical indicators of plant performance. Understanding their composition and interpretation will help your decision making.

4) Study and understand plant operating data

Ask each employee about their job: how it is done, what their goals are, what their problems are, and what suggestions they have for improvement. Develop an appreciation for the contribution each position makes to the overall objectives of the plant. For support departments such as maintenance, understand the processes used to identify and correct equipment malfunctions, the systems in place to prevent breakdowns, and the kinds of records kept.

In conjunction with operating managers, review operational KPIs to understand their construction, sources of data, how the data is collected, current levels and trends, and history. Is schedule attainment declining or improving? Is equipment downtime getting worse or better? Is quality at an acceptable level? These and other operating characteristics should be understood.

It is likely that during the interview process, the selection committee laid out specific or general goals and objectives for the plant as a whole. Understanding plant operating data (such as OEE) and KPIs will help support your plans to reach the targets set by upper management. Visits to the lines will help you learn where, how, and by whom this data is collected, and will add fresh insight into exactly what these numbers represent.

5) Initiate improvements

Before the ninety days are up, you should take action to address some of the problems, or seize some of the opportunities, identified earlier. It is likely that some small or easy-to-fix problems were brought to your attention. Correct them immediately. Doing so will demonstrate to your employees that you were actually listening and that their suggestions were taken seriously. This is the sort of good first impression you should strive to make.

There are many important plant operating measures that should be analyzed and tracked, such as quality levels, inventory, and equipment capacity utilization, and there are many proven solutions that can be implemented to bring about improvements. For example, computerized maintenance management system (CMMS) tools can be installed to help a plant manager reach company goals. Predictive and preventive maintenance is especially important if the plant maintenance department is being reactive rather than proactive. A predictive maintenance program can make your problem list shorter and your plant financially stronger.

Final Thoughts

Getting off on the right foot as a plant manager is critical to long-term success. Making a good first impression is one of the most important steps a new plant manager can take. Employees may be apprehensive about a new “boss,” so it is critical to get to know people and to listen, listen, listen. Getting a good start means taking the steps necessary to communicate that you’re serious about helping the plant and its employees prosper.

About the Author
Bryan Christiansen is founder and CEO at Limble CMMS. Limble is a mobile-first, modern computerized maintenance management system application designed to help managers organize, automate, and streamline their maintenance operations.

Connect with Bryan
LinkedIn | Email | Twitter

 



Source: ISA News

Raw Beginnings: The Evolution of Offshore Oil Industry Pipeline Safety

The post Raw Beginnings: The Evolution of Offshore Oil Industry Pipeline Safety first appeared on the ISA Interchange blog site.

This guest blog post was written by Edward J. Farmer, PE, industrial process expert and author of the ISA book Detecting Leaks in Pipelines. This post includes a free PDF copy of Edward Farmer’s 188-page book, Advanced Methods in Crude Oil Volume Correction, plus the accompanying software, Volume Correction Factor Routines for Oil Water and Wet Oil. Click this link to download the book and software.

It was a long, long time ago, the early 1980s as I recall. I can still remember my huge smile when I was selected to consult for a consortium led by Chevron Corporation to develop a new project off the California coast in the Santa Barbara Channel. Considering the political obstacles following leakage from a previous subsea blowout and a pervasive anti-oil attitude, it was a risky project in a lot of ways.

It was also technically challenging. This crude oil had a substantial gas component and the liquid came up with a tremendous amount of brine in it. There were issues with the state of the fluid at the wellhead pressure, in the riser as it rose from the wellhead to the processing platforms, and in the pipelines, which began on the outer continental shelf and rose to an on-land processing and trans-shipment facility.

An early block of work involved developing a way to accurately measure the amount of oil in the flowing stream at custody transfer points – crude oil had economic value, but the oil-contaminated brine was a significant liability. I worked with a Chevron team that did the research and analysis that produced the measurement methodology.

From that work I published an ISA book and software package called Advanced Methods in Crude Oil Volume Correction. The U.S. Minerals Management Service had primary regulatory jurisdiction for much of the project and imposed new regulations for, among other things, automatic leak detection. That resulted in another project for me, which ultimately led to the development of a very successful leak detection product.

Blog Bonus: Free Book and Software Download! Click this link to download a free PDF copy of Edward Farmer’s 188-page book Advanced Methods in Crude Oil Volume Correction plus the accompanying software, Volume Correction Factor Routines for Oil Water and Wet Oil.

I have many fond memories of those times, the work, and especially the very fine people. The project manager was a wonderful and accomplished Chevron fellow named Dave Hylton, a dynamic leader who went on to become the chief engineer of Chevron Pipeline Company. It was my first project with a budget over $2 billion, a sobering thought in those days.

Over three decades later, here is the result: years of safe operation, lots of now-tested, then-new ideas, and a starting point for the next vision. Source: City of Santa Barbara Planning & Development Department

My mind was drawn to those days as I sorted some old files, and I was reminded of my feelings on the morning the platform and pipeline design began. As I drove to the project office that morning there were no drawings, specifications, criteria, or standard practices pertinent to much of what we were about to do. We had our research and analysis of the oil and an understanding of offshore conditions but even that wasn’t in an engineering document that fit into our familiar design practice.

I walked into the conference room and noticed a large, completely blank white board at the front of the room. In short order and without much introduction Hylton stepped up and began describing what we were about to begin. In those days I carried a small camera in my briefcase and it suddenly occurred to me this was a seminal moment – the ostensible beginning of the design of what had grown to a multi-partner $2.5 billion venture.

If you would like more information on how to purchase Detecting Leaks in Pipelines, click this link. To download a free 37-page excerpt from the book, click here.

The picture below is the first “engineering drawing” of the Point Arguello Project in its original form. The next morning there were prints of ink-on-velum drawings of these whiteboard creations. Over months they would expand to include all the details necessary to understand and build the project – details of huge issues, such as manifold arrangements, and crucial methodology for accurately measuring flows over the expected range of operating conditions.

It all began a few moments before this picture, at the Pt. Arguello Project Office in Concord, CA with an empty whiteboard and a box of colored pens.

Hours of thought and pages of calculations would support seemingly tiny details. Going to a meeting would involve carrying a roll of drawings or a bundle of reduced-size prints. There would be animated discussion of the information on these drawings, and lots of note taking on them, that would lead to even more intense and detailed discussions as the project moved along. Over a few months the project became sets of drawings of the various facilities and the special features, all spawned from that whiteboard on that amazing morning in the conference room.

By the time we were finished, those lines on the whiteboard had become three off-shore platforms, two pipelines, an on-shore treating facility, and a trans-shipment plant. They had also spawned a very sophisticated (for its day) SCADA system with hot-standby, automatic fail-over capability. It supported the first pipeline leak detection system to meet the new regulations, incorporating three independent methods of detecting leaks within minutes of occurrence anywhere on the pipeline. We had accurate and automatic custody transfer instrumentation and a state-of-the-art control room.

A lot of insight and work goes into these projects, and a lot of amazing things come out. It begins, though, with a dream forming a vision; followed by an idea; the development of opportunities, constraints, and concepts; a tremendous amount of knowledge and effort in figuring out how to build it; and ultimately a real manifestation of the dream. Along the path and into the future there may be good days and bad days – there were both over the history of this project – but the feelings it produces and the results we get to see certainly remind us why we like engineering.

About the Author
Edward Farmer has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. His work has produced three books, numerous articles, and four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. He is the owner and president of EFA Technologies, Inc., manufacturer of the LeakNet family of pipeline leak detection products.

Connect with Ed
LinkedIn | Email

 



Source: ISA News

AutoQuiz: What Is the Purpose of a Markov Model Computation?

The post AutoQuiz: What Is the Purpose of a Markov Model Computation? first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

A Markov model is used to determine successful system operation as a function of operating time interval. The resulting computation indicates system:

a) mission time
b) steady-state availability
c) reliability
d) probability of success
e) none of the above

Click Here to Reveal the Answer

Systems that exhibit the Markov property are ones in which the future depends only on the present state, not on the history of how that state was reached. Instantaneous availability will still vary during the operating time interval, due to changes in failure probabilities and repair situations.

Availability is often calculated as an average over a long operating time interval. The result indicates that availability reaches a “steady state” after some period of time.
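
As an illustration of why the computation settles, here is a minimal two-state Markov availability model in Python. The failure and repair rates are hypothetical; the closed-form solution comes from the Chapman-Kolmogorov equations for a single repairable component that starts in the up state.

    import numpy as np

    # Two-state Markov model: state 0 = up, state 1 = down.
    lam = 1e-4   # failure rate, per hour (hypothetical)
    mu = 1e-1    # repair rate, per hour (hypothetical)

    def availability(t):
        """Instantaneous availability A(t), starting in the up state."""
        ss = mu / (lam + mu)                      # steady-state term
        return ss + (1 - ss) * np.exp(-(lam + mu) * t)

    for t in (0, 10, 100, 1000):
        print(f"A({t:>4} h) = {availability(t):.6f}")

    print("steady-state availability:", mu / (lam + mu))

A(t) starts at 1.0 and settles at mu/(lam + mu), about 0.999 here, which is exactly the “steady state” behavior described above.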

The correct answer is B, “steady-state availability.”

Reference: Nicholas Sands, P.E., CAP and Ian Verhappen, P.Eng., CAP., A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 



Source: ISA News

ISA Leaders Meet to Validate and Advance the Organization’s Strategic Direction

The post ISA Leaders Meet to Validate and Advance the Organization’s Strategic Direction first appeared on the ISA Interchange blog site.

This post is authored by Paul Gruhn, president of ISA 2019.

This past month, ISA leaders convened in North Carolina for the first of two in-person meetings. This was our Strategic Leader Meeting (SLM), which is intended for a relatively small group (around 50) of volunteer leaders who meet to discuss strategic issues and operational details.

Our second in-person meeting, the Annual Leadership Conference, will be in October. This event has a larger and broader audience (around 150 volunteer leaders) and includes the Council of Society Delegates business meeting, professional development training, and the Society’s annual Honors and Awards Gala. Many of our standards committees also meet prior to or after the Annual Conference.

I believe that all members have a voice in our future, and I am excited to share some of the work that happened during the Strategic Leader Meeting. I hope that you will get some sense of the excitement and optimism that I feel for where we are going based on the compelling conversations had by your leaders during this event.

The meeting was held in Charlotte, NC, USA, and was a bit of a departure from past formats. We spent most of the weekend as one large group engaging in dialogue about our strategic direction.

Before I give more details, let’s remember the journey we’ve been on as an organization. During the past year, we have revised our vision and mission statements. Our vision is to create a better world through automation. Our mission is to advance technical competence by connecting the automation community to achieve operational excellence. We have also developed five core values: excellence; integrity; diversity and inclusion; collaboration; and professionalism. This work can be reviewed in my previous post.

Leveraging these concepts, the Executive Board worked to develop strategic objectives that will move our mission forward over the next three to five years:   

  1. Establish and advance ISA’s relevance and credibility as the home of automation by anticipating industry needs, collaborating with stakeholders, and developing and delivering pertinent technical content.
  2. Enhance member value and expand engagement opportunities to nurture and grow a more diverse and global community to advance the automation profession.
  3. Become the recognized leader in automation and control education, providing training, certifications, and publications to prepare the workforce to address technology changes and industry challenges in the most flexible and relevant ways.
  4. Create opportunities for members to improve critical leadership skills, to build a network of industry professionals, and to develop the next generation of automation professionals.

With the long-term focus of the objectives established, your Board also discussed possible goals (9-18 months), tactics (up to 6 months), and key performance indicators.

The Board also knew it was important to tap into the collective wisdom of the Society, and that became the purpose of the Strategic Leader Meeting. After a brief dialogue about each objective, the leaders worked in small groups and brainstormed ideas. They summarized their suggestions for the group, which were captured in an online mind-mapping tool. With all the ideas captured, each leader identified their top two priorities under each objective. There were so many great ideas – you could feel the energy in the room, and we came out of the sessions with great input.

At the conclusion of the event, the Board convened informally to review and discuss the results of the weekend. The Board will continue to meet in small work groups to refine the recommended priorities and work with various society groups on implementation plans.

We are thrilled to report that 100% of attendee survey responses confirmed “the strategic discussions were valuable to me.” Some comments on the overall meeting included:

“I really enjoyed the format, content and people. Definitely a valuable experience.”

“I found the people at the meeting intelligent, passionate and willing to do what it takes to improve the society.”

“I better appreciate the vision and challenges of ISA.”

“ISA is in a much better place, financially and strategically.”

I have personally been attending ISA leader meetings for close to 30 years. The positive vibe at this SLM was apparent to everyone. Many used the words ‘positive,’ ‘exciting,’ and ‘optimistic’ in their feedback. There was more levity and laughter than any other leader meeting I can recall. One leader stated it was the most positive meeting he’s been to in 15 years.

At the close of the meeting, leaders and staff were asked to pledge what they would do differently moving forward. Some of the responses were:

“Think collectively. Let others share ideas and listen carefully.”

“Keep an open mind to new opportunities and ideas.”

“Pitch in to help solve a problem that I have been waiting for others to solve.” (There were several variations of ‘stop complaining.’)

“I will encourage others to join and participate in leadership at my local section.”

If you care about the future direction, success, and health of your society, I strongly encourage you to get involved. If you have ideas on what we can be doing better, we want to hear from you! You’ll be seeing tools and resources soon that will make getting involved much easier. Exciting times are ahead! Thank you for being part of the ISA community.

About the Author
Paul Gruhn, PE, CFSE, ISA Life Fellow, is a global functional safety consultant with aeSolutions, a process safety, cybersecurity, and automation consulting firm. As a globally recognized expert in process safety and safety instrumented systems, Paul has played a pivotal role in developing ISA safety standards, training courses, and publications. He serves as a co-chair and long-time member of the ISA84 standards committee (on safety instrumented systems) and continues to develop and teach ISA courses on safety systems. He also developed the first commercial safety system modeling program. Paul has written two ISA textbooks, numerous chapters in other books, and dozens of published articles. He is the primary author of the ISA book Safety Instrumented Systems: Design, Analysis, and Justification. He earned a bachelor of science degree in mechanical engineering from Illinois Institute of Technology, is a licensed Professional Engineer (PE) in Texas, and is both a Certified Functional Safety Expert (CFSE) and an ISA84 safety instrumented systems expert.

Connect with Paul
LinkedIn | Twitter | Email

 



Source: ISA News

Book Excerpt + Q&A: Security PHA Review for Consequence-Based Cybersecurity

The post Book Excerpt + Q&A: Security PHA Review for Consequence-Based Cybersecurity first appeared on the ISA Interchange blog site.

This ISA author Q&A was edited by Joel Don, ISA’s community manager. ISA recently published Security PHA Review for Consequence-Based Cybersecurity by Edward Marszal, PE, and James McGlone, two globally recognized experts in process safety, industrial cybersecurity, and the ISA/IEC 62443 series of IACS security standards. In this Q&A feature, McGlone highlights the focus, importance, and differentiating qualities of the book. Click this link to download a free 47-page excerpt from Security PHA Review for Consequence-Based Cybersecurity. To purchase a copy of this book, click here.

Q. What is a Security PHA Review and how does it help ensure industrial cybersecurity?

A. The first step is applying a methodology for assessing the potential risks posed by a cyberattack on process plants. In the process industries, the most widely accepted process for identifying hazards and assessing risk is the process hazard analysis (PHA) method, most commonly performed through hazard and operability studies (HAZOPs).

A Security Process Hazards Analysis (PHA) Review is a practical and inexpensive analysis method that can verify if critical industrial automation processes and machinery are protected or if they could be damaged through cyberattack.

By analyzing the causes of and safeguards for each hazard scenario, it is possible to determine which consequences are potentially unaffected by the existing safeguards and thus could be caused by malicious intrusion, such as hacking.

This book reviews the most common methods for PHA of process industry plants and explains how to supplement those methods with an additional Security PHA Review (SPR) study to determine if there are any cyberattack vectors that can cause significant physical damage to the facility. If these attack vectors are present, then the study methodology makes one of two recommendations: (1) modify one or more of the safeguards so that they are not vulnerable to cyberattack or (2) prescribe the appropriate degree of cyberattack safeguarding through the assignment of an appropriate security level. SPR examples provide insight for implementing these recommendations.
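
As a rough illustration of that decision logic (my simplified reading of the SPR rule described above, not code from the book; all names are invented), the Python sketch below walks a HAZOP scenario’s safeguards and applies the two-recommendation rule:

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        """One HAZOP scenario: a consequence and its (name, hackable) safeguards."""
        consequence: str
        safeguards: list

    def spr_recommendation(s: Scenario) -> str:
        # If every safeguard is vulnerable to cyberattack, the consequence can
        # be caused by malicious intrusion: assign a security level target
        # (SL-T) for the zone, or redesign a safeguard. Otherwise at least one
        # inherently secure safeguard already covers it.
        if all(hackable for _, hackable in s.safeguards):
            return f"{s.consequence}: assign SL-T for the zone, or redesign a safeguard"
        return f"{s.consequence}: protected by an inherently secure safeguard"

    overpressure = Scenario("Vessel overpressure",
                            [("SIS high-pressure trip", True),
                             ("Mechanical relief valve", False)])
    print(spr_recommendation(overpressure))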

Any consequence that is not protected by existing safeguards or that can be caused by a cybersecurity attack is assigned an ISA/IEC 62443-based Security Level Target to be implemented or it is assigned an alternative safeguard or redesign to eliminate all or some of the cybersecurity risk.

Blog Author Q&A Free Bonus! Click this link to download a free 47-page excerpt from Security PHA Review for Consequence-Based Cybersecurity.

Q. What makes this book different than other books on cybersecurity? Why were you compelled to write it?

A. We were prompted to write the book because the industry and cybersecurity practitioners are still unsure of what to do and why. The prevailing approach in industrial cybersecurity focuses on network devices such as computers, Level 3 switches, and firewalls instead of on the process and machines that could be damaged or cause damage if control is lost.

By focusing on scenarios identified in hazard and operability studies (HAZOPs), it is possible to identify hackable scenarios, rank them appropriately, and design non-hackable safeguards, such as relief valves and current overload relays, that are not vulnerable to the cybersecurity threat vector. Where inherently secure safeguard design is not feasible, the appropriate cybersecurity countermeasures must be deployed.

Q. What types of automation and process industry professionals would benefit most by reading the book?

A. The book will be useful to a wide range of automation and process industry professionals, including:

  • Instrumentation and control system engineers and technicians
  • Network engineers
  • Process safety, health and safety, cybersecurity, and maintenance personnel
  • Executives focused on risk reduction

Q. Why does the cover of your book depict springs and gears? How are they related to the content of the book?

A. The book shows how to evaluate each cause and safeguard in a “node” to discover if the consequence can be generated by a cyberattack. If a consequence is vulnerable to a cyberattack, then you can select a Security Level Target for the zone where the cause and safeguard reside, or you can modify or redesign the cause and safeguard so they are not vulnerable to the cyberattack. The modification or redesign involves choosing a different type of technology to remove the cyberattack vulnerability. In many cases, it might involve a device with a spring or gear instead of a microprocessor.

About the Author
Simon Lucchini, CFSE, MIEAust CPEng (Australia), serves as a Chief Controls Specialist and Fellow in Safety Systems at Fluor Canada. Through his more than 23 years in the petro-chemical industry, Lucchini has broad expertise and experience in operations/maintenance, corporate engineering, and project engineering. For the past 16 years, he has worked in the Control Systems Department at Fluor Canada. He is the Fluor Fellow in Safety Systems Design and also the chief controls specialist based at Fluor’s Calgary, Alberta Canada office. He has written papers on safety systems for various industry and academic venues, including two chapters in the 2017 Bela Liptak Instrument & Automation Engineers’ Handbook. Lucchini is currently the Safety Systems Committee chair of ISA’s Safety & Security Division, within which he produces web articles on matters of importance for the safety systems industry. He is also an active contributor to local control system networks that include a number of global oil & gas operators.

Connect with Simon
LinkedIn

About the Author
Edward M. Marszal, PE, is president and CEO of Kenexis. He has more than 20 years of experience in the design of instrumented safeguards, such as SIS and fire and gas systems. He is an ISA Fellow, a former director of the ISA Safety Division, and an ISA84 expert. Edward is the co-author of two ISA books, Safety Integrity Level Selection and Security PHA Review for Consequence-Based Cybersecurity.

Connect with Edward
LinkedIn | Twitter | Email

 



Source: ISA News

Fault Detection in the Feed Water Treatment Process for Boiler-Turbine Power Generation [technical]

The post Fault Detection in the Feed Water Treatment Process for Boiler-Turbine Power Generation [technical] first appeared on the ISA Interchange blog site.

This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.

Abstract: The feed water treatment process (FWTP) is an essential part of utility boilers, and fault detection is needed to improve its reliability. Classical principal component analysis (PCA) was applied to FWTPs in our previous work; however, noise in the T2 and SPE statistics leads to false and missed detections. In this paper, wavelet denoising (WD) is combined with PCA to form a new algorithm, PCA-WD, in which WD is employed specifically to suppress that noise. Parameter selection for PCA-WD is then formulated as an optimization problem, and particle swarm optimization (PSO) is employed to solve it. A FWTP sustaining two 1000 MW generating units in a coal-fired power plant is taken as the study case, and its operating data are collected for the verification study. The results show that the optimized WD effectively restrains the noise in the T2 and SPE statistics, improving the performance of the PCA-WD algorithm, and that the parameter optimization enables PCA-WD to obtain its optimal parameters automatically rather than from individual experience. The optimized PCA-WD is further compared with classical PCA and sliding-window PCA (SWPCA) in four cases: a bias fault, a drift fault, a broken-line fault, and normal operation. The results confirm the advantages of the optimized PCA-WD over classical PCA and SWPCA.
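
To ground the terminology, here is a minimal sketch of plain PCA fault detection with T2 and SPE statistics, assuming scikit-learn and synthetic data. It is not the authors’ PCA-WD algorithm (no wavelet denoising or PSO tuning), and the control limits below are crude empirical percentiles rather than the analytical limits used in the paper.

    import numpy as np
    from sklearn.decomposition import PCA

    # Stand-in for normal-operation FWTP data: rows = samples, cols = sensors.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 8))

    pca = PCA(n_components=3).fit(X_train)

    def t2_spe(X):
        """Hotelling's T2 and SPE (Q) statistics for each sample."""
        scores = pca.transform(X)
        t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
        residual = X - pca.inverse_transform(scores)   # part outside the model
        spe = np.sum(residual**2, axis=1)
        return t2, spe

    # Crude control limits: 99th percentile of the training statistics.
    t2_lim, spe_lim = (np.percentile(s, 99) for s in t2_spe(X_train))

    X_new = X_train.copy()
    X_new[-1, 0] += 6.0                                # inject a bias fault
    t2, spe = t2_spe(X_new)
    print("last sample faulty:", t2[-1] > t2_lim or spe[-1] > spe_lim)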

Free Bonus! To read the full version of this ISA Transactions article, click here.

 

Enjoy this technical resource article? Join ISA and get free access to all ISA Transactions articles as well as a wealth of other technical content, plus professional networking and discounts on technical training, books, conferences, and professional certification.

Click here to join ISA … learn, advance, succeed!

© 2006–2019 Elsevier Science Ltd. All rights reserved.



Source: ISA News

AutoQuiz: Characteristics of a Loop Diagram

The post AutoQuiz: Characteristics of a Loop Diagram first appeared on the ISA Interchange blog site.

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

Which of the following is most typical regarding loop diagrams?

a) they are relatively inexpensive to produce
b) they are produced on an as-needed basis after the plant is running
c) they show both the minimum and optional items that are required
d) they are typically developed by a company’s engineering staff
e) none of the above

Click Here to Reveal the Answer

Some plant owners do not believe that loop diagrams are worth their cost (which can be considerable), so loop diagrams are not typically included in a design package. Instead, they are often produced on an as-needed basis after the plant is running.

The correct answer is B, “They are produced on an as-needed basis after the plant is running.”

Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition

About the Editor
Joel Don is the community manager for ISA and is an independent content marketing, social media and public relations consultant. Prior to his work in marketing and PR, Joel served as an editor for regional newspapers and national magazines throughout the U.S. He earned a master’s degree from the Medill School at Northwestern University with a focus on science, engineering and biomedical marketing communications, and a bachelor of science degree from UC San Diego.

Connect with Joel
LinkedIn | Twitter | Email

 

Image Credit: Instrumentation Tools



Source: ISA News