
Your RTOC Doesn't Need More Screens. It Needs an AI Copilot.

By Dr. Mehrdad Shirangi
Editorial disclosure

This article reflects the independent analysis and professional opinion of the author, informed by published research and hands-on experience building AI tools for upstream oil and gas. No vendor or operator reviewed or influenced this content prior to publication.

It's 2:14 AM in West Texas. The RTOC is running four wells in a zipper configuration — stages 22 through 28 across the pad, spreads staggered, crews rotating. The coffee machine ran out of dark roast two hours ago. Somebody's making do with the medium.

The frac engineer on watch is staring at six monitors when treating pressure on Well B spikes 800 psi in eleven seconds. Slurry rate is nominal. Sand is at 2.0 ppg and climbing toward the design schedule of 2.5. The ISIP from the last stage was 6,100 psi. None of that tells her whether this is a screenout starting, a valve restriction upstream, a sensor hiccup on the high-pressure transducer, or a frac hit propagating from the offset well they've been nervously watching since Tuesday.

She has eight seconds before the decision point — pump to ISIP and call it, flush and evaluate, or hold rate and watch.

She makes the call. She usually makes the right one. She's been doing this for eleven years.

But she's also been awake for six hours, is monitoring 47 active channels across the four wells, got pinged 23 times in the last hour by threshold alarms (nine of which were noise), and is simultaneously trying to remember whether the anomaly on Stage 19 of Well C was the same signature she saw in the Midland Basin last year that turned into a near-miss screenout.

This is the job. It has not materially changed in a decade, despite an estimated $50B+ in frac technology investment since 2015.

The pumps got smarter. The RTOC operator did not.


The Problem Nobody Talks About Directly

There is a widespread and underacknowledged operational reality in completions: real-time frac monitoring is a cognitive endurance sport that the industry is losing by design.

The human watching time-series data at 1-second scan rates for 12-hour shifts is not a system that scales. Alarm fatigue is documented and chronic — when engineers see 200+ threshold breaches per shift, they start pattern-matching to suppress rather than diagnose. Stage-to-stage context gets lost at shift handoff because the knowledge is in someone's head, not the system. Vendor-specific data silos mean that the treating pressure from the pump skid, the downhole tool data from the wireline vendor, and the surface flowback measurements live in three different systems with three different logins and three different timestamp formats.

The data density at a modern simul-frac pad is extraordinary: a two-well simultaneous frac operation running at 120 bpm aggregate rate, 100-mesh and 30/50 proppant stages, with real-time downhole gauges on both wells, is generating somewhere north of 300 channels of 1-second data. That is roughly 18,000 data points per minute, per pad. Run two pads simultaneously from the same RTOC — now common for operators running high-volume completions programs in the Permian or the Eagle Ford — and you have a human monitoring problem, not a technology problem.

The technology problem has been solved. The technology now generates more signal than humans can consume.


Auto-Frac Is Real, But It's the Wrong Half of the Problem

To be clear about what does exist: closed-loop frac automation is not vaporware. Halliburton's OCTIV platform and ZEUS IQ execution system have been in commercial deployment. SLB's StimCommander has demonstrated automated rate and pressure control. Liberty Oilfield Services has been running intelligent frac automation on Permian pads. Halliburton and Coterra Energy reported a 17% efficiency gain from closed-loop frac automation in Permian Basin operations — a number that has been widely cited because it's real and material.

These systems do something genuinely impressive: they close the loop between real-time treating parameters and pump execution. Rate adjustments, pressure targets, flush initiation — tasks that used to require a human to key in a command now happen algorithmically, faster than any operator can react.

But here is the distinction that the industry has glossed over: execution automation is not monitoring automation.

ZEUS IQ can tell the pump to back off rate. It cannot tell you why it backed off rate in a way that gets captured as institutional knowledge, surfaces as a comparable event three stages later, or gets incorporated into the morning report in language a VP of Completions can act on.

The pump is automated. The interpretation layer is not.

The RTOC engineer is still staring at the same six monitors, still carrying the cognitive load of 18,000 data points per minute, still writing shift handoff notes by hand. The automation saved her a keypress. It didn't save her brain.


What an AI Copilot Actually Does

An AI copilot for RTOC operations is not a better alarm system. It's a different category of tool entirely, and it's worth being specific about what that means in practice.

Vendor-neutral data ingestion. A real copilot doesn't care whether your treating pressure comes from a Corva stream, a Well Data Labs ingest, a Peloton WITSML endpoint, or a CSV from the pump skid operator. WITSML 2.0, REST APIs, and direct CSV ingestion should all be first-class citizens. The industry's data fragmentation problem is not going away. The tool has to absorb it.
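
To make "first-class citizen" concrete, here is a minimal sketch of a normalization layer built around a common sample record. The names (ChannelSample, CsvAdapter) and the CSV column layout are hypothetical, not any vendor's actual API; WITSML and REST adapters would sit behind the same interface.

    # Hypothetical sketch: normalize any source into one channel-sample format.
    # ChannelSample and CsvAdapter are illustrative names, not a vendor API.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Iterator
    import csv

    @dataclass
    class ChannelSample:
        well: str            # e.g. "Well B"
        channel: str         # e.g. "treating_pressure_psi"
        timestamp: datetime  # normalized to UTC so mixed-vendor clocks line up
        value: float

    class CsvAdapter:
        """Reads a pump-skid CSV export (assumed to have a 'time' column) and emits samples."""
        def __init__(self, path: str, well: str):
            self.path, self.well = path, well

        def samples(self) -> Iterator[ChannelSample]:
            with open(self.path, newline="") as f:
                for row in csv.DictReader(f):
                    ts = datetime.fromisoformat(row["time"]).astimezone(timezone.utc)
                    for channel, raw in row.items():
                        if channel != "time":
                            yield ChannelSample(self.well, channel, ts, float(raw))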

Automated stage labeling. Stage boundaries are still identified manually or semi-manually in most RTOC workflows. A copilot identifies pad-down, rate ramp, flush, ISIP, and next-stage-start automatically from the treating pressure and slurry rate signals. This is not a hard ML problem — it's a solved sequence classification task — but almost nobody ships it cleanly across operators and completions designs.
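
A deliberately simplified sketch of what rule-based stage-event labeling looks like in practice, assuming 1 Hz pandas Series for slurry rate (bpm) and treating pressure (psi). The thresholds are illustrative, not calibrated values, and a production system would fit the shut-in decline rather than read a single sample.

    # Simplified rule-based sketch of stage-event labeling (illustrative thresholds).
    import pandas as pd

    def label_stage_events(rate: pd.Series, pressure: pd.Series,
                           rate_on: float = 5.0, rate_off: float = 1.0) -> dict:
        """Return rough timestamps for rate ramp start, shutdown, and an ISIP estimate."""
        pumping = rate > rate_on
        if not pumping.any():
            return {}
        ramp_start = pumping.idxmax()               # first sample with pumps meaningfully engaged
        stopped = rate.loc[ramp_start:] < rate_off  # rate back near zero after the ramp
        shutdown = stopped.idxmax() if stopped.any() else None
        isip = pressure.loc[shutdown] if shutdown is not None else None  # crude ISIP proxy
        return {"rate_ramp_start": ramp_start, "shutdown": shutdown, "isip_estimate": isip}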

Design-versus-actual overlay. Every frac stage has a designed rate schedule, a proppant ramp, a designed BHTP, a target net pressure. The copilot renders the actual stage against the design in real time, flags deviation, and quantifies it. "Stage 24 reached peak proppant concentration 4.2 minutes behind schedule; net pressure 210 psi below design at peak rate." That sentence, generated automatically, contains more actionable information than 47 threshold alarms.
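
The deviation sentence above is just arithmetic over aligned series. A minimal sketch, assuming hypothetical design and actual DataFrames indexed by elapsed minutes, with rate, prop_conc, and net_pressure columns sampled on the same grid:

    # Sketch: quantify design-vs-actual deviation for one stage (hypothetical columns).
    import pandas as pd

    def deviation_summary(design: pd.DataFrame, actual: pd.DataFrame, stage: int) -> str:
        # How far behind schedule the actual proppant ramp peaked, in minutes
        lag_min = actual["prop_conc"].idxmax() - design["prop_conc"].idxmax()

        # Net pressure shortfall evaluated at the design's peak-rate point
        peak_rate_t = design["rate"].idxmax()
        dp = design.loc[peak_rate_t, "net_pressure"] - actual.loc[peak_rate_t, "net_pressure"]

        return (f"Stage {stage} reached peak proppant concentration {lag_min:.1f} minutes "
                f"behind schedule; net pressure {dp:.0f} psi below design at peak rate.")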

Anomaly detection with root cause classification. This is the hard part. When treating pressure spikes 800 psi in eleven seconds, the copilot's job is not to fire an alarm. Its job is to output: "High confidence screenout signature — pressure rise rate and shape match 73% of historical screenout events in this formation; recommend evaluate for flush or ISIP call. Alternative hypotheses: upstream valve restriction (15%), sensor spike (12%)." The engineer still makes the call. But she makes it with a structured differential, not a flashing red box.
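
A sketch of what that structured differential could look like as a data object. The labels, scores, and field names below are hypothetical, and a real classifier would be trained on labeled historical events in the same formation rather than hand-set scores.

    # Sketch: anomaly differential with competing hypotheses and confidences (hypothetical).
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        label: str          # e.g. "screenout", "upstream valve restriction", "sensor spike"
        confidence: float   # 0..1, normalized across the competing hypotheses
        evidence: str       # plain-language justification shown to the engineer

    def rank_hypotheses(scores: dict, evidence: dict) -> list:
        """Turn raw model scores into a normalized, sorted differential."""
        total = sum(scores.values()) or 1.0
        ranked = [Hypothesis(k, v / total, evidence.get(k, "")) for k, v in scores.items()]
        return sorted(ranked, key=lambda h: h.confidence, reverse=True)

    # Mirrors the scenario described in the article:
    differential = rank_hypotheses(
        {"screenout": 0.73, "upstream valve restriction": 0.15, "sensor spike": 0.12},
        {"screenout": "pressure rise rate and shape match historical screenout events"},
    )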

Explainable alarms. Every alarm should carry a plain-language explanation and a confidence level. "SPP elevated 12% above baseline for 4 consecutive minutes — similar pattern preceded screenout on Stage 18 of this pad" is an alarm. "ALARM: SPP HIGH" is noise.

Automated shift reports and morning reports. The morning report for a Permian completions manager running 20+ stages per night is currently assembled by a human from notes, screenshots, and memory at 6 AM. A copilot generates it from the stage data: stages pumped, notable events with timestamps and classifications, design adherence by stage, proppant placed versus design, anomalies flagged and resolved. It takes eight minutes to generate manually. It should take eight seconds.
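
Once stage events are structured, the report itself is mostly templating. A minimal sketch, assuming a hypothetical list of per-stage summary dictionaries produced by the labeling and deviation steps above:

    # Sketch: render a morning report from structured stage summaries (hypothetical fields).
    def morning_report(pad: str, stages: list) -> str:
        lines = [f"Morning report for {pad}", f"Stages pumped: {len(stages)}"]
        for s in stages:
            lines.append(
                f"  Stage {s['stage']}: {s['proppant_placed_lb']:,} lb placed "
                f"({s['design_adherence_pct']:.0f}% of design); "
                f"anomalies: {', '.join(s['anomalies']) or 'none'}"
            )
        return "\n".join(lines)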


Surface Data Is Enough (This Is the Controversial Part)

Distributed fiber, DAS, DTS, downhole gauges — the industry loves to cite these as the gold standard for completions intelligence. And they are. In the well. For the wells that have them.

But here is the reality: the majority of stages pumped in North American land completions today have no downhole sensors. Fiber deployment is growing but is still a minority practice. Downhole gauges add cost and operational complexity. On a 50-well Permian program running 3,000 stages at a cost of $200,000 per stage, the budget math on fiber monitoring is not always clean.

Surface data — treating pressure, slurry rate, proppant concentration, surface pump pressure, casing pressure — is available on every stage. And it is not nearly as limited as the fiber evangelists suggest.

Published SPE work from multiple operators has demonstrated that EDR-derived features — mechanical specific energy, ROP, torque, surface pump pressure — carry meaningful signal about lateral heterogeneity. Pressure decline analysis from the ISIP gives net pressure, closure stress, and fracture geometry indicators. Treating pressure curvature during the proppant ramp correlates with near-wellbore complexity in ways that are statistically predictable across analogous wells in the same formation.
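
For readers outside completions, the ISIP arithmetic referenced above is simple. A back-of-envelope sketch using the standard relationships, with illustrative numbers rather than data from any actual well:

    # Back-of-envelope ISIP analysis (standard relationships, illustrative numbers only).
    surface_isip_psi = 6100.0          # instantaneous shut-in pressure read at surface
    tvd_ft = 9500.0                    # true vertical depth at the perforations
    fluid_gradient_psi_per_ft = 0.433  # fresh-water hydrostatic gradient (slickwater is close)
    closure_pressure_psi = 9200.0      # e.g. from G-function decline analysis

    bottomhole_isip = surface_isip_psi + fluid_gradient_psi_per_ft * tvd_ft
    net_pressure = bottomhole_isip - closure_pressure_psi
    frac_gradient = bottomhole_isip / tvd_ft

    print(f"BH ISIP ~ {bottomhole_isip:.0f} psi; net pressure ~ {net_pressure:.0f} psi; "
          f"frac gradient ~ {frac_gradient:.2f} psi/ft")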

This is not a claim that surface data replaces downhole fiber. It does not. DAS gives you perforation cluster efficiency. DTS gives you temperature profiling. Microseismic gives you fracture geometry in three dimensions. These are genuinely irreplaceable.

But surface-only models can deliver 80% of the monitoring value at 10% of the cost. For an operator running 3,000 stages a year without fiber, a credible surface-based anomaly detection and stage characterization system is not a downgrade from fiber — it's an upgrade from nothing, and nothing is what most operations are running today.

The honest limitation: surface models will not tell you which clusters are taking fluid. They will not give you fracture height. They will not catch a near-wellbore screenout that manifests downhole before the surface pressure responds. Those limitations are real and should be stated clearly in any product that makes surface-based claims.


What Exists Today (An Honest Survey)

Several companies are operating in this space. None of them has the complete picture.

Corva is the most widely deployed data platform for completions and drilling. The app marketplace model is genuinely useful, and Corva's data ingestion infrastructure is solid. The limitation is that Corva is a platform, not a copilot — the monitoring intelligence lives in third-party apps of variable quality, and the alarm logic is largely engineer-configured threshold rules.

Peloton (now Quorum) has strong WITSML and EDR data management, particularly on the drilling side. Completions-specific intelligence is not their primary value proposition.

Well Data Labs (since acquired by SLB) built genuinely good stage-labeling and completions analytics capability. The concern post-acquisition is vendor lock-in — WDL inside SLB is increasingly incentivized to plug into SLB's product ecosystem rather than remain neutral.

NOV Max is serious production operations software with good historian capability. Real-time frac monitoring is not the core use case.

ShearFRAC FracBRAIN is specifically positioned for frac monitoring analytics. More sophisticated than pure threshold alarming, and worth watching. Still relatively early in commercial deployment as of this writing.

Seismos uses acoustic pulse technology for near-real-time fracture diagnostics from surface. Genuinely interesting physics approach to the surface-downhole correlation problem. The toolstring adds operational complexity that not all operators want.

Tally has been building toward automated completions reporting and stage classification. The workflow automation angle is credible. The monitoring intelligence layer is less developed.

FracIQ (Ambyint) targets automated parameter optimization and has roots in production optimization. The completions monitoring use case is adjacent to their core strength rather than central.

The consistent gap across all of them: vendor neutrality, explainable alarms, and GenAI-grounded reporting that engineers actually trust.


What's Still Missing

After an honest survey of the landscape, four things are not built well anywhere:

1. Vendor-neutral stage labeling at scale. Most stage labeling tools are calibrated to specific pump skid vendor outputs and fall apart when ingesting mixed-vendor pad configurations. A three-well pad running two different pump contractors with different data formats is a common real-world scenario. No tool handles it gracefully without custom engineering work.

2. Explainable alarms that engineers trust. The alarm fatigue problem will not be solved by better thresholds. It requires a system that explains why an event is being flagged, what it looks like compared to historical events, and what the competing hypotheses are. Every system surveyed above fires alerts. None of them communicate differential diagnosis.

3. Surface-only models credible for completions engineers. The surface-to-downhole correlation literature exists. The modeling techniques exist. No commercial product has packaged this in a way that a skeptical completions engineer at a major will stake their reputation on. The gap is not the science — it's the validation, documentation, and honest communication of model confidence intervals.

4. GenAI reports grounded in actual stage data. Automated report generation is table stakes. What is missing is a report generator that is grounded — meaning every sentence cites the actual stage data that supports it, with timestamps and channel references. A report that says "Stage 23 showed elevated near-wellbore complexity" and links to the treating pressure curve that supports that conclusion is a different category of output than a paragraph generated from a language model hallucinating plausible frac commentary.
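
One way to make "grounded" concrete: every generated sentence carries machine-checkable citations back to the wells, channels, and timestamps that support it. The structure below is a hypothetical sketch, not a description of any shipping product.

    # Sketch: a report sentence that carries its own evidence (hypothetical structure).
    from dataclasses import dataclass, field

    @dataclass
    class Citation:
        well: str
        channel: str   # e.g. "treating_pressure_psi"
        start: str     # ISO-8601 timestamps bounding the supporting interval
        end: str

    @dataclass
    class GroundedSentence:
        text: str
        citations: list = field(default_factory=list)

    sentence = GroundedSentence(
        text="Stage 23 showed elevated near-wellbore complexity during the proppant ramp.",
        citations=[Citation("Well B", "treating_pressure_psi",
                            "2025-01-14T02:08:31Z", "2025-01-14T02:14:02Z")],
    )
    # A validation job can reject any sentence whose citations do not resolve to
    # real intervals in the stage data, which is what keeps the language model honest.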


What We're Building

Groundwork Analytics is building the vendor-neutral RTOC copilot that the completions industry doesn't have yet.

The design premise: the completions engineer at 2 AM deserves better than a wall of Trend 4 plots and 200 threshold alarms. She deserves a system that ingests whatever data she has — from whatever vendors, in whatever format — labels her stages automatically, overlays design versus actual in real time, classifies anomalies with competing hypotheses and confidence levels, and hands off a grounded, cite-able morning report to the day team without anyone opening a spreadsheet.

We are not building another alarm system. We are not building another frac software platform. We are building the interpretation layer that closes the gap between execution automation and monitoring intelligence.

The approach: surface-first, honest about limitations, vendor-neutral by design, and engineered for the completions engineer who is going to stress-test every anomaly classification against their own operational experience. If it doesn't survive that test, it doesn't ship.


The Next Step

If you run an RTOC, manage a completions operations program, or are responsible for the technical stack that your frac engineers work with — we want to talk.

Not to pitch. To understand what the job actually looks like in your shop, on your data, with your vendors and your stage counts and your alarm configuration. The completions industry has been sold a lot of software that looked good in a demo and fell apart at 2 AM. We intend to build the opposite.

Reach out at mehrdad@petropt.co.

The coffee situation at your RTOC is your problem. The monitoring situation is ours.


Mehrdad Shirangi is the founder of Groundwork Analytics and holds a PhD from Stanford. He has worked in upstream data infrastructure, completions analytics, and AI-driven operational intelligence. Groundwork builds vendor-neutral AI tools for completions and drilling operations.


Tags: completions engineering, frac monitoring, RTOC, real-time operations, AI copilot, Permian Basin, frac automation, EDR analytics, well data
