An Introduction to Operations Research in the Process Industries
This guest blog post is part of a series written by Edward J. Farmer, PE, ISA Fellow and author of the new ISA book Detecting Leaks in Pipelines. To download a free excerpt from Detecting Leaks in Pipelines, click here. If you would like more information on how to purchase the book, click this link. To read all the posts in this series, scroll to the bottom of this post for the link archive.
A long time ago (back in 480 BCE), King Leonidas of the Greek city-state of Sparta confronted an invasion launched by the ambitious Persian King Xerxes, whose army, estimated at between 70,000 and 300,000 experienced warriors, was on its way across Greece to capture Sparta.
Limited by political issues in Sparta, Leonidas was forced to confront the Persian force with his personal guard: 300 of Sparta’s very best soldiers. Aided by militia contributed by a few other Greek cities, this small force ended, at least temporarily, Xerxes’ ambitions. How could this happen?
Operations research.
Applying mathematical, scientific, and logical techniques to the management of a problem or process has grown into the field of operations research. In the case of the 300 Spartans, it involved the development of special shields, weapons, and tactics that made them effective against a force two or three orders of magnitude larger. In contemporary terms, operations research is usually credited with transforming the military debacle of World War I into the directed, fast-moving, and effective methods of World War II. It is also the basis of modern automation and process control theory.
Operations research concepts are well known in the field of project management and are evident in Gantt charts and critical path method organization and planning. Both the basis of the critical path method and the tools for applying it flow from operations research.
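At its core, the critical path calculation is just a longest-path computation over task precedences. Here is a minimal sketch in Python; the task table (names, durations, prerequisites) is invented purely for illustration:

```python
from functools import lru_cache

# Hypothetical task table: durations (days) and prerequisites are invented
# to illustrate the critical path calculation.
tasks = {
    "design":     (5,  []),
    "procure":    (10, ["design"]),
    "install":    (4,  ["procure"]),
    "program":    (6,  ["design"]),
    "commission": (3,  ["install", "program"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # Earliest finish = own duration plus the latest prerequisite finish.
    duration, prereqs = tasks[task]
    return duration + max((earliest_finish(p) for p in prereqs), default=0)

project_length = max(earliest_finish(t) for t in tasks)
print(f"Minimum project duration: {project_length} days")
# Tasks whose earliest finish is tight against the total form the critical
# path: design -> procure -> install -> commission (22 days here).
```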
In my early days I learned process control concepts from Benjamin Kuo’s book, Automatic Control Systems. It required a good understanding of differential equations as applied to how processes were structured and operated. For optimization we often studied how things had to be organized versus how they were viewed in the thinking of the day. We wrote equations, such as the transfer function, that mathematically described the relationships between the things that could cause changes and the things that mattered.
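For example, a first-order process has the transfer function G(s) = K / (tau*s + 1), and its response to a unit step input settles exponentially with time constant tau. A quick sketch, with an illustrative gain and time constant (the values here are made up):

```python
import math

# Unit-step response of a hypothetical first-order process with
# transfer function G(s) = K / (tau*s + 1); in the time domain,
# y(t) = K * (1 - exp(-t / tau)).
K, tau = 2.0, 5.0  # illustrative gain and time constant (minutes)

for t in range(0, 26, 5):
    y = K * (1.0 - math.exp(-t / tau))
    print(f"t = {t:2d} min   y = {y:.3f}")
```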
We tested and analyzed to develop a sensitivity function describing what happened to a process output when an input was tweaked. Many processes in those days had not been designed with these concepts in mind, and many improvements in critical performance could be achieved through a better, more complete understanding of cause and effect, along with a quantifiable (mathematical) understanding of how changes affected the issues of importance.
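One way to estimate such a sensitivity numerically is a finite-difference perturbation: nudge one input and observe the output. A rough sketch, in which the steady-state process model is entirely hypothetical and stands in for a real plant:

```python
def process_output(feed_rate, temperature):
    # Hypothetical steady-state model standing in for a real plant.
    return 0.8 * feed_rate + 0.05 * feed_rate * temperature

def sensitivity(model, inputs, name, delta=1e-4):
    # Approximate d(output)/d(input) with a central finite difference.
    hi = dict(inputs); hi[name] += delta
    lo = dict(inputs); lo[name] -= delta
    return (model(**hi) - model(**lo)) / (2 * delta)

base = {"feed_rate": 100.0, "temperature": 350.0}
print("d(output)/d(feed_rate):  ", sensitivity(process_output, base, "feed_rate"))
print("d(output)/d(temperature):", sensitivity(process_output, base, "temperature"))
```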
Automation engineering work usually paired experienced people with little formal engineering education, who knew from years of experience how things worked, with fresh new people like me, who could develop the equations and concepts involved in making the transition to intelligent automation and optimization. In one plant that was very good to me, the team included several veterans of the “valve shop” working with instrumentation and control guys who could usually recognize a valve when they saw one.
If you would like more information on how to purchase Detecting Leaks in Pipelines, click this link. To download a free 37-page excerpt from the book, click here.
This needed more than just a physical understanding of the chemistry or the mechanism – we also needed to assess the effects of time. Much of the help that was developed made it into statistics books – I like Mendenhall and Scheaffer’s Mathematical Statistics with Applications, but there are others. Over the years, ISA Fellow Ronald Dieck’s book Measurement Uncertainty has been useful, especially for problems common in process control.
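As one small example of the kind of calculation that subject involves, independent uncertainty components are commonly combined in quadrature (root-sum-square). A minimal sketch, with invented component values:

```python
import math

# Independent uncertainty components combined in quadrature
# (root-sum-square); the component values are invented for illustration.
components = {
    "sensor calibration": 0.15,  # percent of reading
    "transmitter drift":  0.10,
    "data acquisition":   0.05,
}

combined = math.sqrt(sum(u ** 2 for u in components.values()))
expanded = 2.0 * combined  # coverage factor k = 2, roughly 95% confidence
print(f"Combined standard uncertainty: {combined:.3f}%")
print(f"Expanded uncertainty (k = 2):  {expanded:.3f}%")
```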
Eventually the benefits of automation were exhibited, analyzed, and proven to everyone’s satisfaction. The result was machines (what we would consider primitive computers today) analyzing situations and making decisions that were provably faster and closer to optimal than experienced operators could manage.
The number of control loops an operator could handle increased from “a few” to over a hundred, and process throughput improved because decisions were made more quickly and accurately a higher percentage of the time. Sometimes analysis would expose problem-causing areas, and ways could be designed to circumvent or minimize them. It was often noticed that automatic control kept all variables closer to their proper values, and that the resulting overall stability improved the performance of even non-automated loops, since they had less extreme conditions to manage. Improvement efforts were focused on the issues that sensitivity analysis exposed as most critical to achieving the intended results – improvement money went to the right places.
Knowing that something works is comforting. Knowing how it works is satisfying, and useful. Is there a way to make it work better? Once a process is deeply understood, a range of opportunities can open. Perhaps results would be better with higher quality inputs. On the other hand, perhaps there are ways to adjust processing that allow using less expensive inputs, or that improve the quality of the primary product while also producing a lower value secondary product. Of course, it all depends on what’s involved, but the opportunities can be mind-bending.
Operations research has been useful to me since my first experience with the U.S. Army in 1966, when it helped me develop a methodology for getting out of bed, making the bed, dressing with field gear, and falling into formation, with time to help my squad do the same, all in less than 15 minutes.
Whenever it occurs to you that there may be a completely correct, and perhaps optimal, way to do something that others think through from scratch every time, it is often useful to think through the steps involved, model or diagram the situation, and construct a solution method. It can mean the difference between “an answer” and the optimization everyone was hoping for.
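As a minimal sketch of that “model, then solve” approach, here is a hypothetical feedstock-blending decision posed as a linear program, a classic operations research formulation. Every number (costs, throughput, quality spec) is invented, and the example assumes SciPy is available:

```python
from scipy.optimize import linprog  # assumes SciPy is available

# Hypothetical blending problem: choose amounts x1, x2 of two feedstocks to
# minimize cost while meeting throughput and average-quality requirements.
cost = [3.0, 5.0]  # $/unit of feedstock 1 and 2

# linprog expects A_ub @ x <= b_ub, so ">=" constraints are negated.
A_ub = [
    [-1.0, -1.0],   # x1 + x2 >= 100 units of throughput
    [0.15, -0.15],  # 0.6*x1 + 0.9*x2 >= 0.75*(x1 + x2), an average quality spec
]
b_ub = [-100.0, 0.0]

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("Optimal blend:", result.x, "  cost:", result.fun)  # a 50/50 split here
```

The payoff of the formulation is that the same model answers “what if” questions directly: change a cost or a spec and re-solve, rather than re-reasoning from scratch.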
Think like a Spartan! The 300 Spartans would have been very pleased to observe improvement of several orders of magnitude in a dependable and predictable way.
Book Excerpt + Author Q&A: Detecting Leaks in Pipelines
How to Optimize Pipeline Leak Detection: Focus on Design, Equipment and Insightful Operating Practices
What You Can Learn About Pipeline Leaks From Government Statistics
Is Theft the New Frontier for Process Control Equipment?
What Is the Impact of Theft, Accidents, and Natural Losses From Pipelines?
Can Risk Analysis Really Be Reduced to a Simple Procedure?
Do Government Pipeline Regulations Improve Safety?
What Are the Performance Measures for Pipeline Leak Detection?
What Observations Improve Specificity in Pipeline Leak Detection?
Three Decades of Life with Pipeline Leak Detection
How to Test and Validate a Pipeline Leak Detection System
Does Instrument Placement Matter in Dynamic Process Control?
Condition-Dependent Conundrum: How to Obtain Accurate Measurement in the Process Industries
Are Pipeline Leaks Deterministic or Stochastic?
How Differing Conditions Impact the Validity of Industrial Pipeline Monitoring and Leak Detection Assumptions
How Does Heat Transfer Affect Operation of Your Natural Gas or Crude Oil Pipeline?
Why You Must Factor Maintenance Into the Cost of Any Industrial System
Raw Beginnings: The Evolution of Offshore Oil Industry Pipeline Safety
How Long Does It Take to Detect a Leak on an Oil or Gas Pipeline?
Pipeline Leak Size: If We Can’t See It, We Can’t Detect It
An Introduction to Operations Research in the Process Industries