What Observations Improve Specificity in Pipeline Leak Detection?

The post What Observations Improve Specificity in Pipeline Leak Detection? first appeared on the ISA Interchange blog site.

This guest blog post is part of a series written by Edward J. Farmer, PE, author of the new ISA book Detecting Leaks in Pipelines. To download a free excerpt from Detecting Leaks in Pipelines, click here. If you would like more information on how to purchase the book, click this link. To read all the posts in this series, scroll to the bottom of this post for the links.


Assessing process conditions, such as whether a pipeline is leaking, begins with a core strategy, such as the law of conservation of mass. Whatever it is, the chosen strategy identifies and defines the process parameters that must be observed to assess the process reliably. Each of these parameters must be accounted for, either by assumption or from measurements. Measurements must be sufficiently accurate and frequent to provide observability of the process, including transient events of interest such as a leak developing. These parameters are usually affected by local conditions at the leak site, other conditions in the pipeline, the location of the leak, and, of course, its size.
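To make the conservation-of-mass strategy concrete, here is a minimal sketch of the kind of balance check such a system might perform. The function names, the line-pack term, and the threshold value are all illustrative assumptions, not the book's actual method:

```python
# Illustrative sketch of a conservation-of-mass check.
# A sustained imbalance between mass entering and leaving the line,
# beyond what measurement uncertainty can explain, suggests a possible leak.

def mass_imbalance(inlet_kg_s, outlet_kg_s, packing_kg_s=0.0):
    """Net mass unaccounted for, in kg/s.

    inlet_kg_s / outlet_kg_s: measured mass flow rates at the line ends.
    packing_kg_s: rate of change of line inventory ("line pack"),
                  estimated from pressure and temperature measurements.
    """
    return inlet_kg_s - outlet_kg_s - packing_kg_s

def possible_leak(imbalance_kg_s, threshold_kg_s):
    """Flag when the imbalance exceeds the measurement-uncertainty threshold."""
    return imbalance_kg_s > threshold_kg_s

# Example: 100 kg/s in, 98.5 kg/s out, 0.5 kg/s going into line pack
imb = mass_imbalance(100.0, 98.5, 0.5)          # 1.0 kg/s unaccounted for
print(possible_leak(imb, threshold_kg_s=0.8))   # True
```

The threshold is where accuracy matters: set it below the combined uncertainty of the flow meters and the imbalance flag becomes noise rather than information.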



Generally, the size of the leak affects the relevant parameters, and the observed changes in them help determine the state of the pipeline. The accuracy of those measurements must be adequate to disclose information relevant to the requirements of the monitoring methodology applied.

Timing matters: a reading taken yesterday, or this morning, or an hour ago does not help with assessing an event that comes and goes within a few minutes at most. The set of parameter measurements must be related in time (coherent) to the degree that they can reliably be assessed as resulting from the same initial cause. The inherent timing and accuracy of the observations must not be degraded by transmission, collection, storage, or data management.




Assuming all those conditions are met, the observations can be submitted to an inference engine for analysis, resulting in an assessment of the condition of the process:

  • The first assessment of the quality of the results from this analysis relates to sensitivity. Does the inference engine reliably detect the occurrence of leaks? The size of the leaks that can be reliably detected indicates the sensitivity of the system. Often, this is expressed as a percentage of the line’s mass flow rate.
  • How quickly this is done, the speed of detection, is important. Sensitive systems often detect leaks along with other things that aren’t leaks, such as process disturbances or noise. Detection speed depends on the chosen methodology and the location of the leak relative to observation points.
  • Detecting leaks is one thing; ONLY detecting leaks can be quite another. Alarming on non-leak events is not useful: an alarm should be specific to a leak, not just a leak-like event; hence the specificity of the detection system matters. We hope for very good sensitivity coupled with high specificity, kind of a “Holy Grail” in process monitoring. Achieving specificity may require additional observations and additional analysis in which the various observations are analyzed as a group for their coherence and logical consistency. This is covered in detail in the book Detecting Leaks in Pipelines. Achieving specificity is usually much harder and requires better observability than simply detecting leaks (along with things that look like them in the data).
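Sensitivity and specificity can be quantified from test records. A sketch of the standard definitions, with hypothetical counts that would come from controlled leak tests and normal-operation logs:

```python
# Sensitivity and specificity computed from test outcomes (standard
# definitions; the counts below are hypothetical examples).

def sensitivity(true_positives, false_negatives):
    """Fraction of actual leaks that were detected."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Fraction of non-leak conditions correctly left un-alarmed."""
    return true_negatives / (true_negatives + false_positives)

# Example: 18 of 20 test leaks detected; 5 false alarms in 500 normal intervals
print(sensitivity(18, 2))    # 0.9
print(specificity(495, 5))   # 0.99
```

Note how the two measures pull against each other: lowering the alarm threshold raises sensitivity but tends to lower specificity, which is why the combination of both is the "Holy Grail" described above.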

Some response plans benefit from an assessment of the leak’s location, which the inference engine can estimate. Location speed and accuracy depend on observability. Almost any desired location accuracy is achievable, but the economic value of enhanced precision depends on whether its direct benefit justifies the investment required to achieve it.
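One common way location estimation works, sketched under simplifying assumptions (uniform wave speed, synchronized clocks; the function and its parameters are illustrative, not the author's specific method): a leak launches a pressure transient that travels both ways along the line, and the difference in arrival times at the two ends fixes its position.

```python
# Hedged sketch: locating a leak from pressure-wave arrival times at the
# two ends of a line segment. The transient travels both directions at
# wave speed c, so if t1 = x/c and t2 = (L - x)/c (relative to the event),
# then x = (L + c*(t1 - t2)) / 2.

def leak_location(t_upstream, t_downstream, line_length_m, wave_speed_m_s):
    """Distance of the leak from the upstream sensor, in meters.

    t_upstream / t_downstream: transient arrival times at each end (s).
    wave_speed_m_s: acoustic wave speed in the product; roughly
                    1000 m/s for many liquid lines, but it depends on
                    the fluid and the pipe.
    """
    dt = t_upstream - t_downstream
    return (line_length_m + wave_speed_m_s * dt) / 2.0

# Example: 10 km segment, wave speed 1000 m/s; the transient arrives
# upstream 2 s before downstream, placing the leak 4 km from upstream.
print(leak_location(10.0, 12.0, 10_000.0, 1000.0))  # 4000.0
```

The sketch makes the observability point tangible: location accuracy is limited by how precisely and coherently the two arrival times can be measured.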

Some situations also benefit from an estimate of the size of a detected leak. Rough categorizations such as “tiny” or “huge” can usually be made with training in data interpretation or immediate human inspection. When warranted, the inference engine can be set up to calculate the leakage rate as data from the pipeline accumulates and measurements stabilize in the “leaking” condition.

It’s worth remembering that observability of leak-related parameters requires a measurement frequency that supports seeing the changes that come and go as the event proceeds and stabilizes. This can be as simple as a daily mass balance (a common goal in the 1960s), or it can involve detection in a fraction of a minute with location determination within a few meters. The equipment to meet these disparate goals is very different in design and cost; most of it didn’t even exist in the 1960s.
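The sampling-rate point can be illustrated with a simple rule of thumb (the five-samples figure is an assumption for illustration, not a standard): an event is only observable if the sample interval is much shorter than the event's duration.

```python
# Illustrative rule of thumb: to observe a transient, the sampling must
# place several measurements across the event's duration.

def observable(event_duration_s, sample_interval_s, samples_needed=5):
    """True if sampling yields enough points across the event to see it."""
    return event_duration_s / sample_interval_s >= samples_needed

print(observable(120.0, 86_400.0))  # False: a 2-minute transient vs. daily readings
print(observable(120.0, 1.0))       # True: 1 Hz sampling resolves it
```

This is why a daily mass balance and sub-minute detection with meter-scale location are different instruments, not different settings on the same one.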

In the 1980s I had a four-inch pipeline running down my longest office hallway with three plugged holes in it. An in-line centrifugal fan produced air flow. A potential user was asked to walk down the hallway and pull out one of the plugs, and I would immediately call out the number of the hole he’d opened. This was impressive and confidence-building, but the cost of implementation (measurement and communication equipment) often got in the way of realizing the benefit.

So, the performance characteristics of interest are:

  • Sensitivity
  • Detection speed
  • Specificity
  • Location accuracy
  • Size estimation accuracy

Testing is crucial to assessing performance. It must be relevant, pertinent to the installation, and repeatable. It’s a great subject for a future post.


leak detection, process industries, industrial automation, process control


Want to read all the blogs in this series? Click these links to read the posts:

How to Optimize Pipeline Leak Detection: Focus on Design, Equipment and Insightful Operating Practices
What You Can Learn About Pipeline Leaks From Government Statistics
Is Theft the New Frontier for Process Control Equipment?
What Is the Impact of Theft, Accidents, and Natural Losses From Pipelines?
Can Risk Analysis Really Be Reduced to a Simple Procedure?
Do Government Pipeline Regulations Improve Safety?
What Are the Performance Measures for Pipeline Leak Detection?


About the Author

Edward Farmer has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. His work has produced three books, numerous articles, and four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. He is the owner and president of EFA Technologies, Inc., manufacturer of the LeakNet family of pipeline leak detection products.



Source: ISA News