The post AutoQuiz: What Industrial Troubleshooting Technique Can Help Replace a Bad Component? first appeared on the ISA Interchange blog site.
This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.
a) substitution method
b) consultation method
c) “remove and conquer” method
d) fault insertion method
e) none of the above
Consultation (answer B), also known as the “third head” technique, means you use a third person who has advanced knowledge about the system or the principles (perhaps someone from another department or an outside consultant) to help troubleshoot the problem.
The “remove and conquer” method (answer C) involves removing devices one at a time, which may help find certain types of problems, like overcapacity on an instrument bus.
The fault insertion method (answer D) is usually used during system testing, where faults are inserted into the system, so that system response can be observed.
The correct answer is A, substitution method. The substitution method substitutes a known good component for a suspected bad component. Substitution may reveal the component that is the cause of the problem.
Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition
Source: ISA News
The post Webinar Recording: Small Batch Manufacturing first appeared on the ISA Interchange blog site.
Danaca Jordan, the first protégée of the ISA Mentor Program, provides her experience moving from giant, complex manufacturing to a small plant – and key plant problems she has solved. The problems and solutions provide tips and lessons on the use of anti-reset windup limits, override control, isolation valves, smart alarms, mass flowmeters, and the enforcement of low-flow limits.
Danaca Jordan, CAP, is a Digital Manufacturing Center of Excellence engineer at Eastman Chemical Company.
Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars.
The post Ensuring the Free Flow of Information: Routing Approaches for Complex Industrial Networks first appeared on the ISA Interchange blog site.
Initiatives like smart manufacturing require the free flow of information across a network architecture—from the point where data is first collected, to where that data is analyzed and contextualized into information, and finally to where that information is presented to workers.
Ensuring this free flow of information, however, is no easy task. Every configured device added to a network acts as a barrier to getting information to where it needs to be. And information increasingly needs to be sent across not only one network but multiple networks.
Historically, two types of devices have been used to manage the flow of network traffic: switches and routers. Switches operate at layer 2 of the Open Systems Interconnection (OSI) model and forward traffic based on media access control (MAC) addresses. Routers operate at layer 3. They use IP addresses and subnets to move information from one network to another. But as the boundaries of layer-2 switches and layer-3 routers began to blur, a new solution emerged: the layer-3 switch. With the combined functions of a switch and a router in one device, the layer-3 switch allows end users to logically segment their traffic into virtual local area networks (VLANs). The layer-3 switch not only can operate one or multiple VLANs on layer 2, but it also can route data between those VLANs across layer 3.
This is increasingly important for the process industry, where distributed control systems continue to become more virtualized, with centralized servers, distributed clients, and business logic abstracted from presentation. The routing capabilities that a layer-3 switch can deliver are essential to ensuring security, isolation, and resiliency in plantwide networking.
Inter-VLAN routing can be configured in three different ways: connected, static, and dynamic.
Connected routing involves two VLANs automatically routing traffic between each other using local routes. It can only occur when both VLANs are connected to a layer-3 switch that is configured as the gateway address for both of them. The switch’s configuration looks something like this:
L3switch(config)#ip routing
Think of connected routing as two adjacent hotel rooms with a connecting door between them. The two rooms are separate, and the connecting door can be locked, but ultimately, someone can move from one room to the other via the connecting doorway.
Connected routing is particularly useful if a machine’s I/O adapters always have the same IP addresses across many machines, but the controller has an IP address that is allocated to the production line. In this scenario, the end user can use connected routing to route traffic from the line-level network to any I/O modules on the machine-level network that are connected to the same switch.
The static routing approach is commonly preferred in small networks that have only a very limited number of layer-3 switches with IP routing enabled. It involves manually configuring the exact path that packets will follow as they travel through the network.
A sample command-line configuration is:
L3switch(config)#ip route 10.10.240.0 255.255.255.0 10.10.100.1
In this instance, packets that arrive at the layer-3 switch with a destination address in the 10.10.240.0 255.255.255.0 network would be sent to an adjacent layer-3 switch with the address of 10.10.100.1.
Static routing is similar to planning a commute. There could be several different route options, but drivers will most likely pick the one that avoids impediments like heavy traffic, construction, or frequent stops, so they can reach their destination as quickly as possible.
Static routes are simple to implement, but they are not scalable. Also, the routes must be updated any time the network changes or if more network devices are added.
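As an illustrative sketch (not vendor code), the lookup a layer-3 switch performs when several routes could cover the same destination can be modeled in a few lines of Python. The prefixes and next hop below reuse the article’s example addresses; the default route is a hypothetical addition.

```python
import ipaddress

# Hypothetical routing table mixing a connected route (delivered locally)
# and static routes (manually configured next hops), for illustration only.
ROUTES = [
    (ipaddress.ip_network("10.10.100.0/24"), "connected"),         # local VLAN
    (ipaddress.ip_network("10.10.240.0/24"), "via 10.10.100.1"),   # static route
    (ipaddress.ip_network("0.0.0.0/0"),      "via 10.10.100.254"), # default route
]

def lookup(dest: str) -> str:
    """Return the next hop for a destination using longest-prefix match."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    # The most specific (longest) prefix wins, as in a real routing table.
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(lookup("10.10.240.17"))  # via 10.10.100.1 (static route)
print(lookup("10.10.100.5"))   # connected (delivered locally)
print(lookup("192.168.1.1"))   # via 10.10.100.254 (default route)
```

The longest-prefix rule is why a static route to 10.10.240.0/24 takes precedence over a default route even though both "match" the destination.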
In large networks, manually configuring not only every immediate route, but also all the possible and allowable routes is simply too much work. More than that, it is enormously difficult to manage and maintain in the long term. This is where dynamic routing is used. It automates the process of selecting the paths that data will follow through the networks.
There are two recommended dynamic routing protocols. The first is Open Shortest Path First (OSPF), which operates on the basis that all routers and switches within the same area have an identical map of the network topology. The second is the Enhanced Interior Gateway Routing Protocol (EIGRP), which only shares routing information with immediate neighbors, making it less memory intensive.
When choosing between these two dynamic routing protocols, there is almost no difference in their implementation and outcome. However, information technology (IT) departments often prefer EIGRP when integrating multiple plants into an enterprise-level network, because it offers more efficient route storage.
An example of an EIGRP configuration is:
L3switch(config)#router eigrp 100
L3switch(config-router)#network 10.10.100.0 0.0.0.3
L3switch(config-router)#network 10.10.210.0 0.0.0.255
L3switch(config-router)#network 10.10.220.0 0.0.0.255
L3switch(config-router)#network 10.10.230.0 0.0.0.255
Here, EIGRP is enabled by “router eigrp 100.” The arbitrary “100” is known as the autonomous-system number. It must be consistent across the layer-3 switches and routers that are considered to be in the same autonomous system. The “network” statements are the networks that are advertised to adjacent layer-3 switches or routers within the same autonomous system. Following the 10.10.230.0, for example, is the wildcard mask of 0.0.0.255, which is the inverse of the subnet mask of 255.255.255.0.
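The inverse relationship between a subnet mask and its wildcard mask can be checked with a short Python sketch (octet values are simply subtracted from 255):

```python
def wildcard_mask(subnet_mask: str) -> str:
    """Compute the EIGRP-style wildcard mask: the bitwise inverse of
    the subnet mask, taken octet by octet."""
    return ".".join(str(255 - int(octet)) for octet in subnet_mask.split("."))

print(wildcard_mask("255.255.255.0"))    # 0.0.0.255 (a /24, as in the example)
print(wildcard_mask("255.255.255.252"))  # 0.0.0.3   (a /30, as in the first statement)
```

This also explains the 0.0.0.3 in the first “network” statement above: it is the wildcard form of a /30 (255.255.255.252) subnet.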
When evaluating routing options, it is important to remember that routing does not need to be an either-or decision. Any two or even all three routing approaches can be used in a single, well-designed system. Most commonly, connected routing is used within the cell/area zone. At the site level, static routing is used between automation devices, and dynamic routing is used through the software infrastructure to support the servers, clients, and manufacturing execution software. Another thing to keep in mind is how routing can help create more efficient networks. One of the most common mistakes organizations make, for example, is trying to implement an entire control system in a single, flat layer-2 network. This can lead to several hundreds or even thousands of devices existing on a single network, creating network sprawl.
A control system that contains more than 200 Ethernet devices should be segmented into multiple VLANs. Each VLAN should be limited to a maximum of 253 IP addresses and 200 Ethernet devices. Routing should be considered not only from control systems to software systems, but also from control system to control system.
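One way to arrive at the 253-address figure (on the assumption that each VLAN is a /24 with one address reserved for its gateway, which the article does not state explicitly) is:

```python
import ipaddress

# One VLAN from the EIGRP example above, sized as a /24.
vlan = ipaddress.ip_network("10.10.210.0/24")

total = vlan.num_addresses   # 256 addresses in a /24
usable = total - 2           # minus the network and broadcast addresses
hosts = usable - 1           # minus one address reserved for the gateway (assumed)
print(total, usable, hosts)  # 256 254 253
```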
More information on routing approaches and considerations is available in the free design and implementation guide, Migrating Legacy IACS Networks to a Converged Plantwide Ethernet Architecture. The document, jointly developed by Rockwell Automation and Cisco, covers requirements and solutions for migrating a traditional industrial network architecture to standard Ethernet and IP network technologies. IT and operations personnel also can utilize industry training to learn more about routing in industrial networks.
A version of this article also was published at InTech magazine.
The post AutoQuiz: What is a Database Record? first appeared on the ISA Interchange blog site.
This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.
a) all data related to a particular subject kept in a group
b) a single row of information in a table
c) a single piece of data in a single row of a table
d) an electronic filing system
e) none of the above
The answer is B, a single row of information in a table. Regarding answer C, single pieces of data (cells or fields) that describe related data entities can be combined into a single row, or database record. For example, the values in the single data fields FIRST NAME, MIDDLE NAME, and LAST NAME in a single row could define a record called EMPLOYEE NAME.
Answer A, all data related to a particular subject kept in a group, describes a table: multiple records grouped together, which in this example could define a table called EMPLOYEES.
Answer D, an electronic filing system, describes the database itself, which manages all of the tables, records, and data.
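As an illustrative sketch (the names and values are invented), the table/record/field hierarchy from the quiz can be modeled directly in Python:

```python
# A table modeled as a list of rows: each row (dict) is one record (answer B),
# and each key/value pair in a row is a single field or cell (answer C).
employees = [
    {"FIRST NAME": "Ada",  "MIDDLE NAME": "B.", "LAST NAME": "Lovelace"},
    {"FIRST NAME": "Alan", "MIDDLE NAME": "M.", "LAST NAME": "Turing"},
]

record = employees[0]        # one record: a single row of the table
field = record["LAST NAME"]  # one field: a single cell within that row
print(field)                 # Lovelace
print(len(employees))        # 2 records make up the table (answer A)
```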
Reference: Nicholas Sands, P.E., CAP and Ian Verhappen, P.Eng., CAP., A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.
The post ISA Extends Reach Through Organization Partnerships first appeared on the ISA Interchange blog site.
In their heyday, ISA conferences drew 30,000 attendees. Many changes have occurred since then, resulting in smaller conference sizes. In a concerted effort to reach a broader audience, ISA is actively exploring new partnerships with other organizations and society organizers. Conferences are an essential way that we serve our members and our industry.
These events offer valuable facetime with subject matter experts and provide us with the opportunity to live out our mission statement and further “advance technical competence by connecting the automation community to achieve operational excellence.”
This past June, we held a joint meeting with the Permian Basin Section of the Society of Petroleum Engineers (SPE) in Midland, Texas. Attendance was free but limited to ISA and SPE members. While attendance was originally capped at 50, over 60 participated and almost half of the attendees were ISA members.
The event featured brief remarks about the value of SPE and ISA membership from respective section leaders. There was also a series of informal presentations on automation and cybersecurity on oil and gas production facilities. The event marked an important first step for our two organizations to work together in the Midland market. Based on our success, ISA and the SPE Permian Basin Section plan to co-host a joint technical symposium next year.
Want to attend an ISA conference? Click this link to get the current upcoming schedule.
There are exciting conference presentations scheduled through the remainder of the year. The Society of Underwater Technology (SUT) has agreed to program a session on remotely operated vehicles and automation in deepwater exploration and production at our upcoming Process Industry Conference (PIC) in Houston this November. Eric Cosman (the 2020 ISA President) will deliver the keynote address at their Process Technologies Conference in Sugar Land, Texas this October. In addition to sharing content, we are also working on the cross-promotion of our society and its events.
In October, Steve Mustard (the 2021 ISA President) will present at the Louisiana Oil and Gas Conference and Expo (LAGCOE) in New Orleans, Louisiana on the modernization and transformation of the oil and gas industry. We are also very excited to partner with Hanover Fairs for their Digital Industry USA event in Louisville, Kentucky in September and with Texas A&M University on their Instrumentation and Automation Symposium for the Process Industries in College Station, Texas in January.
ISA has been working with DMS Global (a conference organizer) for conferences outside the United States. Events have been held in Saudi Arabia, the United Arab Emirates, Thailand, Vietnam, and Singapore. Attendance at these events has often been around 300 people.
Automation technology is incredibly transformative with applications across a wide range of industries. We are on the cusp of a new era with exciting growth potential. The diversity of our event partners is only one facet of this ever-expanding role. Opportunities abound in training, certification, standards, and editorial. Partnering with other organizations opens the door to cooperation, connection, and further collaboration. Plus, it sets the stage for future expansion.
Stay tuned—exciting times are ahead!
The post How to Leverage Process Data to Improve Industrial Operations first appeared on the ISA Interchange blog site.
One certainty is that manufacturing processes continue to get better at producing data, primarily due to rapid cost reductions and improvements in data collection, communication, and storage technologies. The challenge, however, is that the ability to exploit this data for meaningful operational benefits is not keeping pace.
There is real potential to use manufacturing data to improve process yields, asset performance, and operational equipment efficiency, and to meet other needs.
The problem is that most current data analysis approaches do not scale well:
Figure 1. A flood of process data
In many circumstances, the flood of data is actually making things worse by overwhelming in-place human-centric analysis mechanisms, or through distracting “big data” projects that do not produce results. The solution is to find new approaches to using this data that fit within the pragmatic constraints of most industrial and manufacturing settings. Approaches that:
Control systems
Control systems are the starting point and center of attention when it comes to process data. Advanced process control approaches, and more specifically multivariate model predictive control approaches, are viable ways to take advantage of expanded process data availability to improve process performance. These techniques require process-specific control system expertise and significant upfront and continuing investments. They are only feasible where they can be applied to specific problems with substantial and predictable payoff.
Ad hoc monitoring capabilities
Outside of the process control systems, a variety of ad hoc approaches are employed to use process data for operational benefit. These approaches center around a process historian or other data store and include:
These ad hoc approaches are essential and support many key operational needs, but they are also severely limited in their ability to scale with growing data volumes. Operational dashboards rely on people to interpret them. The more data presented, the more difficult the interpretations. Writing effective rules and thresholds requires expert understanding of the system, and their applicability is often brittle with respect to the system state. Incremental addition of thresholds and alerts can quickly lead to alarm fatigue. Similarly, creating formulas and theory-based models requires domain expertise in addition to controlled, well-understood environments and systems.
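For illustration, a fixed-threshold rule of the kind described above might look like the following Python sketch (the limit and readings are hypothetical). The rule only works while the fixed limit matches the current system state, which is exactly the brittleness noted above:

```python
# A minimal threshold-style alert rule. The fixed limit is a hypothetical
# value; real limits depend on the process state, so hard-coded rules like
# this tend to be brittle and to accumulate into alarm fatigue.
HIGH_TEMP_LIMIT = 80.0  # degrees C, hypothetical

def check_temperature(readings):
    """Return an alert message for each sample that exceeds the fixed limit."""
    return [
        f"ALERT: sample {i} at {t:.1f} C exceeds {HIGH_TEMP_LIMIT} C"
        for i, t in enumerate(readings)
        if t > HIGH_TEMP_LIMIT
    ]

alerts = check_temperature([72.4, 81.2, 79.9, 85.0])
print(alerts)  # two of the four samples trip the rule
```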
In many situations, retained process data is used primarily in historical analyses. For example, a root-cause analysis is undertaken after an unexpected downtime event or drop in product quality. Periodic process improvement projects examine historical data to benchmark key performance indicators and to identify systemic issues and opportunities for change. This type of data analysis is essential, but it is also expertise and time intensive, and limited in its scope of applicability. Because of the time and human capital required, these types of analyses can only be employed when the benefits are clear, and even then only infrequently. In addition, backward-looking analyses cannot identify arising problems.
Industrial and manufacturing operations are the ultimate producers of big data volumes, far exceeding the e-commerce and search domains that put big data on the map. Machine learning and other technologies associated with big data are clearly applicable to process data analysis, but the “big project” approach used in other application areas has not worked well for industrial and manufacturing operations applications.
A typical project takes several months to complete and requires machine-learning experts, frequent interactions with subject-matter experts, and custom software development. In most cases, periodic follow-up projects are required to keep the models up to date with evolving process and equipment conditions.
Successful big data application examples like speech recognition, fraud detection, or recommendation engines generally provide large, enduring paybacks from vast troves of data that justify the initial investments. Industrial and manufacturing operations applications are extremely numerous, but are much smaller and more context-specific in their applicability. Big data projects have achieved very limited success in industrial and manufacturing operations, and many organizations have learned to distrust the approach altogether.
If current approaches fail to provide a scalable path, what are the viable alternatives? One option that is gaining traction is using prepackaged machine-learning (ML) technologies that extend existing data storage infrastructure, like process historians. By narrowing the capability focus, an embedded machine-learning capability can eliminate the need for data science expertise or for custom software development.
Figure 2. Time series data patterns
A specific example of an embedded ML capability is an engine that performs pattern recognition and classification on multivariate time series data, which includes continuously recorded sensor and parametric data, as well as intermittently collected inspection measurements. Process data is largely time series data, and a real-time pattern-recognition engine is a practical way for operations teams to better understand the state of process machinery or the process itself from process data streams.
Pattern recognition and classification could, for example, be used to:
Figure 3 shows the use of a pattern-recognition engine with a process historian. Key elements of this approach include:
Figure 3. Typical pattern-recognition engine
Use of a pattern-recognition engine can be simple and straightforward. A user needs to:
A properly embedded pattern-recognition engine can be used in the same way that historian features, such as calculated fields or attributes, are used to augment raw data streams. For example, the output from the engine can be:
One advantage of a pattern-recognition-based approach is the flexibility. The models produced are purely data driven, and do not require an understanding of the causal relationships, or a detailed understanding of the signal origin. Large numbers of signals from disparate sources could be speculatively thrown into a pattern-recognition engine to identify conditions. New sensors could be added to attempt to capture phenomena of interest. A simple example is the combination of process execution and quality data with machine trace data from a manufacturing execution system. As long as the data is correlated in time, a pattern-recognition engine can extract useful characterizations of state. This type of data-driven model does not replace the need for theory-based models that offer a more precise characterization of behavior, but they offer a powerful additional tool to the operations team.
If classification is simply a way to characterize the state of some entity at a particular time, how can it ever be predictive? It is true that an individual classification of a condition state is not a prediction. Some conditions, however, are precursors of other states that are yet to come. The classic example is a downtime condition in a machine. In almost every case, a machine will start exhibiting some changes in behavior before it erodes into a condition that requires downtime. Identifying these early states is how classification can be predictive.
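A toy sketch of this idea (all patterns and data below are synthetic, and the nearest-pattern method is one simple choice among many, not the specific engine described in the article): label each sliding window of a signal by its closest labeled reference pattern, so that a drift preceding downtime is flagged before the failure itself.

```python
import math

# Reference patterns: a flat "normal" signature and a rising "precursor"
# signature of the kind that might precede a downtime event. Synthetic values.
PATTERNS = {
    "normal":    [1.0, 1.0, 1.0, 1.0],
    "precursor": [1.0, 1.3, 1.6, 2.0],
}

def classify(window):
    """Label a window by its nearest reference pattern (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PATTERNS, key=lambda name: dist(window, PATTERNS[name]))

# A synthetic data stream that starts flat and then begins to drift upward.
stream = [1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.5, 1.9]
labels = [classify(stream[i:i + 4]) for i in range(len(stream) - 3)]
print(labels)  # the final window flags the precursor condition
```

The early "precursor" label is the prediction: it arrives while the machine is still running, giving the operations team time to act.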
A global leader in mineral production faced a situation like the one described in this article. Investments in instrumentation and data collection were producing large volumes of operational data, but efforts to turn this data into meaningful improvements in operational efficiency were falling short.
The production line experienced frequent, unexpected downtime due to variations in raw material that adversely affected a critical process line machine. These downtime events lasted anywhere from two to 24 hours per occurrence, and they negatively impacted revenue and increased the cost of production.
Data, in the form of motor currents, temperatures, valve settings, and stoichiometric measurements, was collected from the process line, stored in a process historian, and made available to the operations team through dashboards and other means. The thresholds, rules, and engineering-based models in use were, however, unable to reliably identify conditions leading to the downtime events.
To solve this problem, a pattern-recognition engine was installed and integrated with the plant’s process historian. Members of the process operations team completed the following tasks in approximately three weeks:
The condition stream produced by the pattern-recognition engine could provide very early warnings of a previously hidden bad raw material condition. This awareness enabled the operations team to take corrective actions and avoid many of the costly downtime events that had plagued them previously.
Industrial and manufacturing operations data analysis represents a different type of “big data” challenge than those faced in e-commerce, social media, search, or other domains. Process data analysis is a long-tail situation: Data volumes are extremely large, but there are many focused, “small” problems that need to be solved as opposed to a short list of “big” problems. Process data analysis requires a highly scalable approach that puts capabilities in the hands of subject-matter experts and that facilitates quick wins and incremental growth. Pattern recognition proves to be a reliable method of analyzing big data by leveraging existing assets (i.e., tribal knowledge, operational data stores), providing context to events, and uncovering paths for process optimization.
A version of this article also was published at InTech magazine.
The post Why Industrial Cybersecurity Needs to Start at the Top and Be Embraced by All first appeared on the ISA Interchange blog site.
Cybersecurity should be a top-of-mind issue with automation professionals and people throughout their companies. Information technology systems are not the sole targets of cyberattack. Operational technology systems, including supervisory control and data acquisition systems, programmable logic controllers (PLCs), robotics, factory automation, distributed control systems (DCSs), and other manufacturing systems are also at risk for cybersecurity attacks.
The consequences of cyberattacks on automation systems can extend far beyond financial loss to include physical damage. Certainly, the source of threats can be part of the discussion, but more importantly, cybersecurity is an inside job. The only thing companies can control is developing, fortifying, and continually improving cybersecurity protection and programs inside the organization. This can include contracted outside resources as part of an overall cybersecurity protection development strategy, but at the end of the day, the primary responsibility rests on the shoulders of the manufacturing organization. Cybersecurity includes a range of hardware and software measures and the development of a cybersecurity-conscious culture inside the company.
There are similarities and important differences between plant safety and cybersecurity. Plant safety needs to be redefined as equipment and manufacturing processes are added and modified. Cybersecurity, however, requires a continuous effort, since cybersecurity threats change at a much higher rate than production systems and equipment. Some of the same process safety planning principles apply, and both require an ongoing process of continual review, awareness, and updates.
Cybersecurity needs to start at the top and be embraced by everyone. A successful culture is developed by personnel seeing meaningful action to protect systems and information. Without building the culture, it is easy for people to take shortcuts around cybersecurity methods and procedures for expediency to solve production issues. Achieving a cybersecurity culture where everyone understands the value of the program is the goal.
Because cybersecurity threats can directly affect the manufacturing company’s operations, the people on staff need to understand the technologies and processes for protection. This is the case even if the majority of cybersecurity protection is going to be outsourced. This really is not any different from doing an automation project using an in-house project manager and outside contracted resources. In either case, personnel need to become knowledgeable.
An excellent source for training is ISA, which offers a set of industrial cybersecurity certificate programs and aligned training courses in the market covering the complete life cycle of industrial automation and control system (IACS) assessment, design, implementation, operations, and maintenance. Each certificate program and training course is based on ISA/IEC 62443, the world’s only consensus-based series of IACS standards and a key component of the U.S. government’s cybersecurity plan.
Organizations that invest in a cybersecurity culture that proactively identifies vulnerabilities and protects the plant’s critical infrastructure, operational performance, and profitability are unlikely to be a cybersecurity disaster news headline.
ISA offers standards-based industrial cybersecurity training, certificate programs, conformity assessment programs, and technical resources. Please visit the following ISA links for more information:
A version of this article also was published at InTech magazine.
The post AutoQuiz: Characteristics of an Industrial Controller first appeared on the ISA Interchange blog site.
This automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.
a) reverse acting
b) direct acting
c) nonlinear
d) fail safe
e) none of the above
The correct answer is B, direct acting. For a direct-acting controller, the resulting output movement is in the same direction as the movement of the process variable.
Here is an example of a direct-acting controller: if we have an air-to-open tank outlet valve, then as the level (the PV) rises in the tank, we want the outlet valve to open (the controller output increases).
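The direct-acting relationship can be sketched with a minimal proportional-only controller in Python (the gain, bias, and setpoint values are illustrative, not from the quiz):

```python
# Minimal proportional-only sketch of a DIRECT-ACTING controller:
# the output moves in the same direction as the process variable (PV).
# Gain, bias, and setpoint values are illustrative.
def direct_acting_output(pv, setpoint, gain=2.0, bias=50.0):
    """Output rises as PV rises above setpoint (direct action)."""
    out = bias + gain * (pv - setpoint)
    return max(0.0, min(100.0, out))  # clamp to a 0-100% valve signal

# As tank level (PV) rises, the air-to-open outlet valve is driven open.
for level in (40.0, 50.0, 60.0):
    print(level, direct_acting_output(level, setpoint=50.0))
```

A reverse-acting controller would simply negate the gain term, driving the output down as the PV rises.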
Reference: Goettsche, L.D. (Editor), Maintenance of Instruments and Systems, 2nd Edition
The post The Industrial Internet of Things Delivers on the Demand for Manufacturing ROI first appeared on the ISA Interchange blog site.
Now more than ever, manufacturers are under pressure to do more with less to improve financial results. Faced with a global contraction in capital spending, manufacturers in the oil and gas, petrochemical, refining, and other process industries need to boost profitability from existing assets. The good news is that not only is it possible, but that the current opportunity is unprecedented in manufacturing history.
Here is why: Every year process manufacturers lose a whopping $1 trillion due to inefficient operations, according to Refining and Petrochemical Benchmarks, API, Solomon, the Occupational Safety and Health Administration, IHS Markit, and company reports. Ensuring profitability begins with finding a way to improve these operations and recapture those losses. Yet many otherwise successful manufacturers lack the confidence to move away from decades-old work practices. They are uncertain about which approaches will yield the greatest operational improvements.
The answer lies in embracing the Industrial Internet of Things (IIoT). Despite the hype, it is true; the right IIoT strategy can dramatically improve a plant’s operations. But to many in industry, the IIoT is only a promise. One survey revealed that 75 percent of 500 senior industry executives acknowledge the need for IIoT investment, but only 5 percent of those same companies have an in-depth IIoT strategy.
What is holding them back? Many do not understand the business case for investment—which technology investments will move the financial performance needle. And of course, with greater connectivity comes greater security risks for plants to keep data safeguarded. That is why we have been focused on innovation using advanced encryption, private company cloud environments, and advanced new routers with physical, one-way data paths to transform data management practices on site and in the cloud.
With robust security measures in place to protect data and mitigate risk, manufacturers can chart a path to operational excellence through the IIoT. The return on this IIoT investment is measurable and significant. With the technology, tools, and a culture willing to adapt, companies can generate measurable business results across four areas: production, reliability, safety, and energy and emissions. Within these areas, per a benchmark analysis comparing the industry’s top quartile performers to the industry average, top performers experience:
It is proof that companies that want to boost profitability to top quartile levels need to generate real gains in these key areas and use IIoT strategies to get there. But you need to know how to get there. All signs point to automation. Automation empowers us with new ways to solve problems by leveraging expertise across an organization: local experts are more efficient, centralized experts can manage fleets of assets across the globe, and third-party experts can safely and cost-effectively manage critical assets outside of a company’s core competency.
And this does not have to be a “big bang” infrastructure investment or organizational overhaul. Businesses can invest in small pilots, often less than $50,000, ensure return on investment (ROI), and take the next step, knowing that each investment can be leveraged toward an organizational top-quartile program. This is why automation is the highest impact investment organizations can make for ROI on their operational excellence initiatives.
For example, a petrochemical company with hundreds of steam traps employed an IIoT strategy that reduced steam consumption by 7 percent. It uses wireless acoustic steam trap transmitters to monitor noise and temperature, then transfers the data to a Microsoft Azure cloud virtual server, so the analytics software can analyze the data and generate alerts. Then, remote access monitoring by experts provides actionable reports to maintenance for repairing or replacing failing steam traps long before they fail outright.
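The alerting logic in a pipeline like this can be sketched simply. The following is a minimal illustration only, not the vendor's actual analytics: the thresholds, field names, and classification rules are hypothetical, standing in for the baseline models a commercial steam trap monitoring package would apply to the acoustic and temperature data.

```python
from dataclasses import dataclass

@dataclass
class TrapReading:
    trap_id: str
    noise_db: float   # acoustic level reported by the wireless transmitter
    temp_c: float     # temperature at the trap

# Hypothetical thresholds for illustration; real analytics use
# vendor-specific baselines per trap model and service.
NOISE_LIMIT_DB = 80.0   # sustained noise suggests live steam blowing through
COLD_LIMIT_C = 60.0     # a cold trap suggests it is blocked

def classify(reading: TrapReading) -> str:
    """Label one reading as 'leaking', 'blocked', or 'ok'."""
    if reading.noise_db > NOISE_LIMIT_DB:
        return "leaking"    # wasting steam -> energy loss
    if reading.temp_c < COLD_LIMIT_C:
        return "blocked"    # condensate backing up
    return "ok"

def build_alerts(readings: list[TrapReading]) -> list[str]:
    """Produce one alert line per trap that is not healthy."""
    return [f"{r.trap_id}: {state}"
            for r in readings
            if (state := classify(r)) != "ok"]
```

Run against a batch of readings, this yields the short actionable list ("T1: leaking") that remote experts would pass on to maintenance.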
Operational excellence begins with pinpointing the causes of poor performance, prioritizing actions that can yield the greatest improvement, and establishing a scalable work plan. It also takes management leadership to embrace and deploy the right technology and solutions, a commitment to break down silos and encourage expert collaboration across the enterprise, and courage to drive a culture where it is safe to advocate change.
When this happens, process manufacturers experience a return of greater productivity, less downtime, safer operations, and reduced energy costs. The once-muddled path to getting the most from the IIoT becomes clear, and the journey to becoming a top industry performer begins. Ultimately, harnessing the power of the IIoT produces measurable and sustainable benefits that improve a company’s bottom line and justify its future direction. With the right roadmap, IIoT’s promise can be fulfilled.
A version of this article also was published at InTech magazine.
Source: ISA News
The post Benefits of Connecting Manufacturing Process Management to the Product Record first appeared on the ISA Interchange blog site.
It is well known that getting a product to market quickly is critical to a company’s success. There are many steps involved in this process, and in the ongoing challenge to decrease time to market, manufacturers must examine every segment of the product development process for potential improvement.
One area that manufacturers often overlook is product test and assembly. Even automated test and assembly processes can take considerable time and effort to prepare, document, and describe all the required steps and procedures. Because many manufacturers also rely on outsourced partners for test and assembly, inherent problems—such as lack of access to product data, time zone or availability issues, and language barriers—often cause delays in product release schedules.
Without centralized access to information, a product change that goes uncommunicated to a test and assembly partner (due to one of the challenges listed above) can result in costly scrap and rework.
Describing and documenting the procedures involved in a product’s test and assembly process is often referred to as manufacturing process management (MPM)/bill of material (BOM) routing. Manufacturing process management defines how a product is to be manufactured and usually involves the process of segmenting a product BOM into a series of operations and sequences. These sequences/routings describe how a particular assembly process or step is to be performed and the materials each step consumes. Some refer to such descriptions as recipes, because there are many parallels to the culinary world.
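The structure described above, a BOM segmented into operations, each holding ordered steps that consume materials, can be modeled directly. This is a minimal sketch with hypothetical names; any real PLM or MPM system defines its own, far richer schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str
    consumes: dict[str, int] = field(default_factory=dict)  # part number -> quantity

@dataclass
class Operation:
    name: str
    steps: list[Step] = field(default_factory=list)

@dataclass
class Routing:
    product: str
    operations: list[Operation] = field(default_factory=list)

    def total_consumption(self) -> dict[str, int]:
        """Roll up material usage across every operation and step."""
        totals: dict[str, int] = {}
        for op in self.operations:
            for step in op.steps:
                for part, qty in step.consumes.items():
                    totals[part] = totals.get(part, 0) + qty
        return totals
```

The "recipe" analogy holds: each `Operation` is a stage of the recipe, each `Step` an instruction with its list of ingredients.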
Today, many manufacturers create and manage routing information in enterprise resource planning (ERP) or material requirement planning (MRP) systems, while others use custom applications or spreadsheets. The majority of these legacy systems cannot link the routing data to engineering information, such as computer-aided design (CAD) drawings, behavioral parameters, and vendor specifications and datasheets. This information is typically stored in a product lifecycle management (PLM) system.
PLM facilitates the secure sharing of product information among internal and external team members, streamlines the communication of information (i.e., new products, changes, revisions, and configurations), and provides automated alerting and approval tracking processes. PLM is also a central source to access and share information with engineering design environments and manufacturing systems. Therefore, routing information managed within the PLM system can be easily linked to engineering data. With a PLM system, accurate product information is available in real time for all necessary parties, and outsource partners can truly function as a seamless extension of the product development team.
PLM systems have traditionally focused on engineering and product data management processes. PLM automates processes and provides a central location to manage all the information associated with a product, along with tracking capabilities to easily capture and resolve issues. As PLM has evolved and its functionality has grown to encompass more information management across organizations, there is an obvious fit for PLM to support downstream processes, such as manufacturing process management/BOM routing, to further streamline information synchronization and reduce manufacturing costs.
Typically, the PLM system is where BOMs are created and revisions to the BOMs are managed. The natural evolution is for the PLM system to provide the BOM routing functionality to define test and assembly operations and sequences to link these processes back to the engineering data. This lets test and assembly personnel easily view documents, drawings, and pictures directly from the PLM vault. Using PLM as the source for BOM routing also allows them to validate all engineering change orders (ECOs) and new BOM revisions with the routings/work instructions. Because the PLM system manages the BOM, it can provide instantaneous feedback on required and consumed material quantities to eliminate waste and shortages.
Moreover, manufacturers use graphical depictions (photos, images, and drawings) to further describe complex test and assembly procedures and assist with language barrier and translation issues. PLM allows manufacturers to associate documents and images with manufacturing procedures. Being able to view a picture of a particular procedure along with (or in lieu of) written instructions—a capability not commonly available in legacy systems—can help eliminate mistakes and ensure a higher level of quality. Because PLM manages information electronically, error-prone paper-based processes can be eliminated, helping to reduce manufacturing costs.
Shop floor personnel access routing information in PLM software through terminals at their job site.
Even with PLM managing BOM routing, integration between PLM and ERP/MRP is still important to effectively manage routings, because cost and timeline information is driven from the ERP/MRP systems. Most companies with both PLM and ERP/MRP systems have established an integration process that passes new and updated BOMs and revisions from PLM to ERP/MRP. Passing routing information is a simple extension of that integration. Doing so allows both systems to contain synchronized BOM and routing information.
The result is a PLM system that gives the manufacturing group all the necessary data to successfully build and test products and an ERP/MRP system that contains automatically generated, up-to-date, and more accurate routing information to track costs and delivery dates. Sharing data between PLM and ERP/MRP through an automated process avoids errors introduced from manually entering BOMs and ensures manufacturing groups have correct and current information within the ERP/MRP system.
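A one-way sync like the one described can be sketched as a diff over revisioned records. This is a toy illustration under stated assumptions: both systems are represented as plain dictionaries keyed by item and revision, whereas a real PLM/ERP integration would work through each vendor's connector or API.

```python
def sync_routings(plm_records: dict, erp_records: dict) -> dict:
    """Return the ERP-side updates needed to match PLM.

    Keys are (item_number, revision) tuples; values are routing payloads.
    Only new or changed entries are pushed, so re-running the sync after
    applying the updates is a no-op (idempotent).
    """
    return {key: payload
            for key, payload in plm_records.items()
            if erp_records.get(key) != payload}
```

Keeping PLM as the single source of truth and pushing only deltas is what avoids the manual re-entry errors the article warns about.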
A great example of this process and its benefits comes from a supplier of high-power amplifiers for satellite communications. The company uses a PLM system to automate paper processes for faster, more accurate product development. Because the master record for all product data includes BOMs, specifications, ECOs, and quality processes, managing BOM routing within its PLM environment was a natural extension of the BOM management processes.
Managing routings within the PLM system let the company easily connect the engineering BOM with the manufacturing BOM and eliminated the prior disconnect it experienced whenever there was a change. This also let the company link the 2D and 3D CAD drawings and captured images of assembly steps (via a digital camera) with the routing data. Personnel on the shop floor can display the routing information on touch-screen terminals loaded only with a browser and 3D CAD viewer and view all assembly and test information, a graphical depiction (digital images), and a 3D model of the products.
Because the PLM system also provides quality and corrective action/preventive action tracking, assembly and test issues can be raised from these same terminals and automatically linked to the offending product or material. Thanks to the enhanced routing data and streamlined information, the company has had radically fewer manufacturing (assembly and test) issues and reduced overall manufacturing time.
As enterprise applications evolve, it is important to take a step back and evaluate which processes can be improved and where certain data should or could be managed to most effectively support the manufacturer’s needs. In this case, the functionality offered by PLM can create a better environment and improve the overall process for managing BOM routing.
It also has the benefits of creating and maintaining an integrated environment with the systems that traditionally manage this process, giving manufacturers an opportunity to enhance their product design and manufacturing practices even further to help eliminate inefficiencies, drive down manufacturing costs, and maintain a competitive advantage in the marketplace.
A version of this article also was published at InTech magazine.
Source: ISA News