The oil and gas industry is a complex, operations-focused industry spanning upstream, midstream, and downstream. From field operations upstream, to storage and distribution midstream, to refinery operations and retail downstream, the operational landscape is wide and diverse, and it is a melting pot of different technologies covering every business area. These technologies interact wherever business processes connect or overlap. Operational data travels through different applications to the end user, where the business uses it for decision-making. Adding to this complexity, the technology landscape is not standardized; it is full of custom-built point solutions that are difficult to manage.
There is a lag in capturing information and deriving actionable insights from the data. This lag in information processing prevents operational decisions from becoming more efficient and reliable. As a result, the industry remains more reactive than proactive (or predictive) in managing operations.
Post-COVID, the industry is struggling financially and has reacted by cutting CAPEX and reducing the workforce across functions. The focus now is on doing more with less, without compromising operational efficiency and safety. With a smaller workforce and considerable talent lost to the downturn, operational reliability has become an even bigger challenge.
What is Reliability?
Reliability can be defined as providing high-quality data for decision-making with a high degree of certainty, leading to a lower failure rate. It can be measured through indicators such as mean time between failures (MTBF), availability, and mean time to failure (MTTF). Oil and gas companies usually have a long turnaround time for root-cause analysis of operational failures or performance deviations from operational excellence benchmarks. This has a direct impact on effective operational reporting and decision-making.
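As an illustration, these indicators can be computed from outage records. The sketch below is a minimal, hypothetical Python example: the failure log, the 30-day observation window, and the metric names are all assumptions. Mean time to repair (MTTR) is computed alongside, since availability depends both on how often a system fails and on how quickly it recovers.

```python
from datetime import datetime, timedelta

# Hypothetical failure log for one OT system: (failure_start, restored_at) pairs
failures = [
    (datetime(2023, 1, 3, 2, 0), datetime(2023, 1, 3, 6, 0)),      # 4 h outage
    (datetime(2023, 1, 20, 14, 0), datetime(2023, 1, 20, 16, 0)),  # 2 h outage
]
window = timedelta(days=30)  # observation window

downtime = sum((end - start for start, end in failures), timedelta())
uptime = window - downtime
mtbf = uptime / len(failures)    # mean time between failures
mttr = downtime / len(failures)  # mean time to repair
availability = uptime / window   # fraction of the window the system was up

print(f"MTBF: {mtbf}, MTTR: {mttr}, availability: {availability:.2%}")
```

With two failures totaling six hours of downtime in a 720-hour window, availability works out to roughly 99.17%; tracking these numbers per process, rather than per application, is what the rest of this article argues for.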
There is a greater need today to monitor and control business processes as close to real-time as possible in order to decrease the probability of failure and improve reliability. Hence, it is imperative to take a business process view of operations, rather than a siloed view of activities, and manage the processes instead of individual activities.
Why Business Process Monitoring?
Traditionally, organizations tend to monitor only applications and infrastructure, with little thought given to process alignment. Even though IT has some view of the interdependency and flow of information among applications, it often lacks business context. An organization's IT team may proactively manage infrastructure or monitor network utilization, latency, and so on, but these are all IT metrics. This is very different from how the business operates. For the business, an application is only a means to an end; what it wants to understand is the impact on real-life operations. Most organizations cannot answer questions such as: what is the impact on operations if data from a few wells is delayed because of a glitch in the historian system? Or, what is the impact of a failed transaction that was significant from a regulatory or compliance point of view?
Hence, the need of the hour is to switch to ‘business-aware’ operations. It is important to map and roll up IT metrics to the corresponding business metrics and to understand business impact through business KPIs. Plain application monitoring lacks this level of comprehension and hence does not translate into improved operational reliability.
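One simple way to make that roll-up explicit is a lookup table from low-level IT metrics to the business KPIs they feed. The Python sketch below is purely illustrative; the metric and KPI names are invented for this example and are not from any specific product.

```python
# Hypothetical mapping of low-level IT metrics to the business KPIs they affect
METRIC_TO_KPI = {
    "historian_ingest_lag_seconds": ["dpr_release_on_time", "production_data_completeness"],
    "allocation_batch_failures": ["dpr_release_on_time", "allocation_accuracy"],
    "scada_link_uptime_pct": ["production_data_completeness"],
}

def impacted_kpis(breached_metrics):
    """Roll a set of breached IT metrics up to the business KPIs at risk."""
    kpis = set()
    for metric in breached_metrics:
        kpis.update(METRIC_TO_KPI.get(metric, []))
    return sorted(kpis)

print(impacted_kpis(["historian_ingest_lag_seconds", "allocation_batch_failures"]))
```

With a table like this, a historian lag alarm is no longer just an IT ticket: it immediately surfaces as a risk to on-time daily production reporting.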
Improving Reliability of OT Systems
For a long time, OT applications have been managed in silos with a reactive approach. This is a sub-optimal way to achieve operational excellence. Instead, the focus should be on improving the accuracy of the entire business process, identifying weak links, and predicting failures. The process should be monitored through well-defined business and technical KPIs, mapped together to define process outcomes. Such a monitoring setup enables users to assess the impact of a failure at one point on the entire value chain, predict future failures, and reduce the effort needed to manage large processes.
A combination of application-focused service view and business-focused process view provides the ability to move from point monitoring of siloed applications, infrastructure, and processes to continuously monitoring a single, end-to-end process.
Let us look at upstream production reporting as an example. Daily Production Reports (DPRs) are released every morning to the production and management teams to provide a synopsis of the previous day’s performance. The reports carry crucial operations data, such as production at different nodes in the production network (well, platform, and field), which is required for course correction and other operational decisions in the field. But managing production reports in isolation is a mistake. The process starts with gathering data at the field level, running different validations, going through the production allocation process and report building, and finally ends at the inbox of the end user. The process repeats every day, collecting, allocating, and reporting production data both at a granular level and at an aggregated level for management.
Any failure in the value chain at the SCADA, historian, or allocation level will lead to either a delay in the release of these morning reports or a compromise in data accuracy. The failures could be due to manual interventions, application failures, or infrastructure failures. Such incidents are common and are usually managed after the event has occurred. The delay in identifying the root cause of a failure and addressing it ranges from several hours to a few days, with a direct impact on the operational efficiency of the process. Root-cause analysis itself also consumes significant effort.
KPIs such as reporting batch job failures, validations at the historian, availability of different OT systems, production allocation errors, and report building errors can be monitored continuously. Active business process monitoring can reduce the time taken to identify an issue to near real-time, thus reducing resolution time significantly.
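At its core, continuous KPI monitoring of this kind reduces to comparing each reading against its threshold and alerting on breaches. The sketch below is a deliberately simplified illustration; the KPI names and threshold values are assumptions, not part of any specific monitoring product.

```python
# Hypothetical KPI readings from the production-reporting chain and their thresholds
KPI_THRESHOLDS = {
    "batch_job_failures": 0,          # any failed reporting batch job raises an alert
    "historian_validation_errors": 5,
    "allocation_error_pct": 1.0,
}

def check_kpis(readings):
    """Return an alert message for every KPI reading above its threshold."""
    alerts = []
    for kpi, value in readings.items():
        limit = KPI_THRESHOLDS.get(kpi)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {kpi}={value} exceeds threshold {limit}")
    return alerts

print(check_kpis({"batch_job_failures": 2,
                  "historian_validation_errors": 3,
                  "allocation_error_pct": 2.5}))
```

Run on each data refresh rather than once a day, a check like this is what turns next-morning discovery of a broken DPR into a same-shift alert.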
Beneficial Use Cases
Oil and gas companies can adopt business process monitoring to improve their overall efficiency in a number of areas (both OT and enterprise processes), such as:
- Regulatory reporting
- Production reporting
- Partner reporting
- Joint venture accounting
- Production revenue accounting
- Pipeline monitoring
- Trading and risk management
All of the above areas are large processes implemented with a mix of different applications, and they carry significant risk. Any information delay here can have a significant impact on the organization’s operations. Business process monitoring is an effective way of addressing these challenges.
The Way Forward
Following are the four steps required to use business process monitoring effectively:
- Define the process: Document what the process does, what its expected outcome is, and who the actors are. It is important to be able to measure the performance of the process. SIPOC (Suppliers, Inputs, Process, Outputs, Customers) analysis provides a good framework for defining processes in detail.
- Assign a timeline: Assign an expected duration to each activity in the process. Most organizations fail to assign expected completion times to activities.
- Define the risk: Identify the business impact of any deviation in the process. A delay in providing operations data to management is an operational risk. Similarly, there could be financial, compliance, or regulatory risks attached to the process outcome.
- Manage the risk: Introduce checks and balances to manage the risks. Define business and technical KPIs to monitor the process, provide timely alerts to stakeholders, and define SLAs.
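The four steps above can be sketched in code: each activity in a defined process carries an expected duration (the assigned timeline), and an SLA check flags any deviation (the risk being managed). This Python sketch is illustrative only; the activity names and durations are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """One step in a defined business process, with its assigned timeline."""
    name: str
    expected_minutes: int  # assigned timeline for the activity
    actual_minutes: int    # observed duration from monitoring

# Hypothetical daily production-reporting process, defined end to end
process = [
    Activity("collect_field_data", 60, 55),
    Activity("validate_in_historian", 30, 30),
    Activity("run_allocation", 45, 90),      # overran its timeline
    Activity("build_and_send_report", 15, 10),
]

def sla_breaches(activities):
    """Flag every activity that exceeded its expected completion time."""
    return [a.name for a in activities if a.actual_minutes > a.expected_minutes]

print(sla_breaches(process))
```

In practice, each flagged breach would feed an alert to the process stakeholders defined in step one, closing the loop between definition, timeline, risk, and management.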
How HCL can Help Improve Operational Reliability Through Business Process Monitoring
HCL has extensive experience in providing end-to-end business management services to oil and gas clients. HCL’s iControl is an advanced and mature business process monitoring platform with proven industry credentials. iControl lets you define the process flow digitally and integrates with underlying applications to monitor them and raise alerts in case of a threshold breach. It enables real-time monitoring of business processes, significantly reducing the time to identify and resolve incidents. It also learns from past monitoring data and predicts failure points in advance using built-in ML/AI capabilities. iControl has a rich process library for oil- and gas-specific processes across the value chain.
Case in Point
HCL helped an integrated oil and gas company improve the reliability of its retail pricing process while making its quarterly financial and operational reporting processes more robust. The selected processes had a direct impact on regulatory compliance, and using HCL’s iControl, the company was able to lower its compliance risk.
Our deep domain capabilities combined with a strong product portfolio can help oil and gas companies improve operational reporting, efficiency, and reliability at a reduced cost in this post-COVID era.
For more details, please visit https://www.dryice.ai/industry-solution/dryice-icontrol-oil-gas-solution