MCP Servers for Oilfield Data: Connecting LLMs to Well Logs, Production Data, and Reservoir Models

Feb 7, 2026 · Dr. Mehrdad Shirangi

Editorial disclosure: This article reflects Groundwork Analytics' technical perspective on the Model Context Protocol and its application to petroleum engineering.


Large language models can write code, summarize research papers, and draft regulatory filings. But ask Claude or GPT-4 to pull the production history for well PAD-7A, read the porosity log from an LAS file, or compare completion designs across your last 20 Wolfcamp wells, and you get nothing useful. The model does not know your wells exist, has no access to your data, and cannot parse the file formats that petroleum engineers work with every day.

This is not a limitation of the models. It is a data connectivity problem. And a protocol called MCP -- the Model Context Protocol -- is how the rest of the software industry is solving it.

The energy industry has not caught up yet. As of February 2026, there is essentially no MCP infrastructure for petroleum engineering data. No way to connect an LLM to LAS files, WITSML streams, production databases, or reservoir simulation decks through a standardized interface.

That gap is about to close. This article explains what MCP is, why it matters for oil and gas, what an oilfield MCP server would look like architecturally, and what Groundwork Analytics is building to fill the void.

What Is the Model Context Protocol?

The Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 for connecting AI models to external data sources and tools. Think of it as a USB port for AI: a universal interface that lets any LLM-powered application -- Claude, ChatGPT, Copilot, or a custom agent -- plug into structured data without custom integration work.

MCP defines three core primitives:

Tools -- functions the model can invoke, such as querying a database or running a calculation.

Resources -- structured data the model can read, such as files, records, or documents.

Prompts -- reusable prompt templates the server exposes to the client.

The protocol is transported over JSON-RPC 2.0, uses a client-server architecture similar to the Language Server Protocol (LSP) that powers code editors, and has SDKs available in Python, TypeScript, Java, and C#. The specification is open, versioned, and maintained on GitHub.
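To make that concrete, an MCP tool invocation is an ordinary JSON-RPC 2.0 request that the SDKs construct for you. The sketch below builds one by hand; the method and params shape follow the MCP specification, while the tool name and arguments are hypothetical oilfield examples from later in this article, not a published server.

```python
import json

# A minimal JSON-RPC 2.0 request for an MCP "tools/call" invocation.
# The tool name and arguments are illustrative, not a real server's API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_production_history",
        "arguments": {"well_id": "PAD-7A", "days": 90},
    },
}

# This is what actually travels between client and server.
wire_message = json.dumps(request)
print(wire_message)

# The server replies with a JSON-RPC response whose "result" carries
# the tool output back to the model as content blocks.
```

The point is that nothing here is exotic: any language that can serialize JSON can speak the protocol, which is why SDKs exist across Python, TypeScript, Java, and C#.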

Since its release, MCP has been adopted by OpenAI, Google DeepMind, and dozens of tooling companies. There are now MCP servers for databases (PostgreSQL, MySQL), file systems, cloud platforms (AWS, GCP), developer tools (GitHub, Jira), and hundreds of SaaS applications.

In plain terms: MCP is how you give an AI agent the ability to actually do things with real data, instead of just generating text about hypothetical data.

Why This Matters for Oil and Gas

Petroleum engineering runs on heterogeneous, specialized data formats that no general-purpose AI tool understands:

LAS files -- the standard format for well log curves (gamma ray, porosity, resistivity).

WITSML -- the Energistics standard for drilling and wellsite data streams.

Production databases -- daily oil, gas, and water volumes in operator-specific SQL schemas.

Reservoir simulation decks -- Eclipse, CMG, and OPM Flow input and output files.

SCADA historians -- time-series operational data from field instrumentation.

Regulatory data -- state commission production reports, permits, and well status records.

None of these data sources have MCP servers. An LLM cannot read an LAS file. It cannot query a WITSML server. It cannot parse an Eclipse simulation deck. It does not know what a gamma ray log is, let alone how to cross-plot porosity versus depth across a wellbore interval.

This means every attempt to use AI in petroleum engineering workflows hits the same wall: you either manually copy-paste data into a chat window (error-prone, slow, limited to small datasets) or you build a custom integration from scratch (expensive, one-off, brittle).

MCP eliminates that wall by providing a standardized way for AI to access domain-specific data programmatically.

The Current State of MCP in the Energy Sector

We searched extensively for existing MCP implementations in oil and gas, energy, and industrial operations. Here is what exists as of February 2026:

What Exists

PowerMCP -- An open-source collection of MCP servers for power systems software (PowerWorld, OpenDSS). Built by the Power-Agent community, it enables LLMs to run power flow analyses, perform contingency evaluations, and generate reports from power system simulations. This is the closest analog to what the petroleum engineering community needs -- but it is for electrical power grids, not oil and gas.

EnergyPlus-MCP -- An MCP server for building energy simulation using EnergyPlus, developed by Lawrence Berkeley National Laboratory. It lets AI models interact with building energy models. Published in SoftwareX with a peer-reviewed paper. Impressive work, but entirely focused on building science, not subsurface energy.

Emporia Energy MCP -- An MCP server for accessing home energy monitoring device data from Emporia Energy. Consumer-grade, residential energy monitoring. Not relevant to upstream oil and gas, but notable as an example of energy device data exposed through MCP.

SkyWork AI's Oil & Gas RAG MCP Server -- A guide published by SkyWork AI describes an MCP server architecture for oil and gas that includes tools like get_production (daily production data retrieval) and get_drilling_events (drilling event logs). This appears to be a proof-of-concept or tutorial rather than a production-ready open-source tool. It is the only content we found specifically addressing MCP for petroleum engineering data.

Inductive Automation's Ignition MCP Module -- Announced at ICC 2025, this module will expose Ignition's SCADA capabilities (tags, UDTs, alarms, scripting engine) through MCP. Currently in proof of concept with a planned release later in 2026. This is significant because Ignition is widely deployed in oil and gas for SCADA and HMI. When this module ships, it will provide a potential bridge between MCP-enabled AI agents and live operational data. However, it is a general industrial automation tool, not petroleum engineering-specific.

WWT's Manufacturing Edge MCP Architecture -- World Wide Technology published a detailed blog post on combining SCADA instrumentation with MCP and agentic design patterns for manufacturing. Conceptually relevant to oilfield operations, but focused on discrete manufacturing, not upstream oil and gas.

What Does Not Exist

The gap is total. Power systems have PowerMCP. Building energy has EnergyPlus-MCP. Even home energy monitors have an MCP server. Upstream oil and gas -- an industry that spends billions on data infrastructure -- has nothing.

What an Oilfield MCP Server Would Enable

Imagine this workflow. A production engineer sits down at 7 AM and opens an AI assistant connected to an oilfield MCP server. The conversation goes like this:

Engineer: "Show me the production history for well PAD-7A over the last 90 days."

The AI agent calls the get_production_history tool, queries the production database, and returns a time series of daily oil, gas, and water volumes, along with a decline trend and any anomaly flags.

Engineer: "What is the average porosity in the Wolfcamp A across our Midland Basin acreage? Use the most recent logs."

The agent calls search_las_files to find LAS files tagged to Wolfcamp A wells, then calls read_las_curve on each file to extract the porosity curve over the target interval, computes the average, and returns the result with statistical context (standard deviation, min, max, well-by-well breakdown).

Engineer: "Compare the completion designs and 12-month cumulative production for our last 20 wells in the Delaware Basin."

The agent calls get_completion_records to pull stage counts, proppant loading, fluid volumes, and cluster spacing for 20 wells, joins the results with get_production_history to get 12-month cumulatives, and presents a comparison table with correlations between design parameters and production outcomes.

Engineer: "Pull the reservoir simulation results for Case 3 and compare predicted versus actual watercut for well group WG-NORTH."

The agent calls read_simulation_output to extract Case 3 results from an Eclipse or CMG output file, calls get_production_history for the WG-NORTH well group, aligns the time series, and plots predicted versus actual watercut with residual analysis.

None of these interactions are possible today without custom code. With an MCP server, they become standard tool calls that any MCP-compatible AI client can execute.

Beyond Individual Queries: Agent Workflows

The real power of MCP is not individual queries -- it is enabling autonomous agent workflows that chain multiple operations together. Consider:

Daily production surveillance agent: Every morning at 5 AM, an AI agent connects via MCP, pulls overnight production data from the SCADA historian, compares it to decline forecasts, flags wells producing below expectation, cross-references flagged wells with recent workover records, and generates a prioritized exception report that is waiting in the production engineer's inbox when they arrive.

Completion design optimization agent: Before a new well is drilled, an agent uses MCP to pull completion records and production outcomes for all analog wells within a specified radius, identifies correlations between design parameters and production performance, and generates a recommended completion design with uncertainty ranges.

Regulatory compliance agent: An agent monitors state commission databases through MCP, pulls newly posted production reports for your leases, compares them against your internal records, flags discrepancies, and generates exception reports before filing deadlines.

Each of these agents is a composition of MCP tool calls -- the same tools, used in different sequences, to solve different problems. Build the MCP server once, and you unlock all of these workflows.
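The surveillance agent described above is, at its core, a loop over tool calls. Here is a stdlib sketch of that composition, with stub functions standing in for real MCP tool invocations; the tool names, sample data, and 10% tolerance threshold are all illustrative assumptions, not a published interface.

```python
# Sketch of a daily surveillance agent composed of MCP-style tool calls.
# The stubs below stand in for calls to an oilfield MCP server.

def get_production_history(well_id: str, days: int) -> list[float]:
    # Stub: would invoke the MCP server's production DB adapter.
    sample = {"PAD-7A": [820.0, 815.0, 640.0], "PAD-7B": [430.0, 428.0, 425.0]}
    return sample[well_id][-days:]

def get_decline_forecast(well_id: str) -> float:
    # Stub: would return today's forecast oil rate in bbl/d.
    return {"PAD-7A": 800.0, "PAD-7B": 420.0}[well_id]

def flag_underperformers(well_ids: list[str], tolerance: float = 0.10) -> list[str]:
    """Flag wells whose latest rate is more than `tolerance` below forecast."""
    flagged = []
    for well_id in well_ids:
        actual = get_production_history(well_id, days=1)[-1]
        forecast = get_decline_forecast(well_id)
        if actual < forecast * (1.0 - tolerance):
            flagged.append(well_id)
    return flagged

# PAD-7A is producing ~20% below forecast, so it gets flagged.
print(flag_underperformers(["PAD-7A", "PAD-7B"]))
```

Swap the stubs for real MCP calls and add the workover cross-reference and report generation steps, and this loop becomes the surveillance agent; the structure of the workflow does not change.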

Technical Architecture

An oilfield MCP server is not a single monolithic application. It is a collection of data adapters behind a standardized MCP interface. Here is how we think about the architecture:

┌─────────────────────────────────────────────────┐
│              MCP Client (AI Agent)              │
│     Claude / GPT / Copilot / Custom Agent       │
└─────────────────┬───────────────────────────────┘
                  │  JSON-RPC 2.0 (MCP Protocol)
                  │
┌─────────────────▼───────────────────────────────┐
│              Oilfield MCP Server                │
│                                                 │
│ ┌───────────┐ ┌───────────┐ ┌───────────────┐   │
│ │   Tools   │ │ Resources │ │    Prompts    │   │
│ └─────┬─────┘ └─────┬─────┘ └───────┬───────┘   │
│       │             │               │           │
│ ┌─────▼─────────────▼───────────────▼───────┐   │
│ │            Data Adapter Layer             │   │
│ │                                           │   │
│ │ ┌──────────┐ ┌──────────┐ ┌───────────┐   │   │
│ │ │   LAS    │ │  WITSML  │ │Production │   │   │
│ │ │ Adapter  │ │ Adapter  │ │DB Adapter │   │   │
│ │ └──────────┘ └──────────┘ └───────────┘   │   │
│ │                                           │   │
│ │ ┌──────────┐ ┌──────────┐ ┌───────────┐   │   │
│ │ │Reservoir │ │Completion│ │Regulatory │   │   │
│ │ │Sim Adapt.│ │ Adapter  │ │ Adapter   │   │   │
│ │ └──────────┘ └──────────┘ └───────────┘   │   │
│ └───────────────────────────────────────────┘   │
│                                                 │
│ ┌───────────────────────────────────────────┐   │
│ │       Unit Conversion & Validation        │   │
│ │  (field/SI units, datum corrections, QC)  │   │
│ └───────────────────────────────────────────┘   │
└─────────────────────────────────────────────────┘
                  │
    ┌─────────────┼─────────────────┐
    │             │                 │
    ▼             ▼                 ▼
┌────────┐   ┌─────────┐   ┌──────────────┐
│LAS     │   │SCADA /  │   │Eclipse / CMG │
│Files   │   │Historian│   │Sim Decks     │
│(.las)  │   │(SQL/API)│   │(.DATA/.dat)  │
└────────┘   └─────────┘   └──────────────┘

Layer 1: Data Adapters

Each adapter handles one data source type:

LAS Adapter -- Wraps the lasio Python library to read LAS 1.2, 2.0, and 3.0 files. Exposes tools like list_las_files, read_las_header, read_las_curve, and search_wells_by_formation. Handles common LAS quality issues: wrapped lines, missing null values, non-standard mnemonics.
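A production adapter would wrap lasio as described. To show what reading a curve actually involves, here is a simplified stdlib sketch that parses the ~Curve and ~ASCII sections of a well-formed, unwrapped LAS 2.0 fragment; the function name matches the tool described above, but everything else is an illustration, not a replacement for lasio.

```python
def read_las_curve(las_text: str, mnemonic: str, null_value: float = -999.25) -> list[float]:
    """Extract one curve from a simplified LAS 2.0 string.

    Real adapters should use lasio, which also handles wrapped lines,
    LAS 3.0, and non-standard mnemonics.
    """
    curves: list[str] = []
    data_rows: list[list[float]] = []
    section = None
    for line in las_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        if stripped.startswith("~"):
            section = stripped[1].upper()  # C = curve section, A = ASCII data
            continue
        if section == "C":
            # Curve definition lines look like "NPHI.V/V : Neutron porosity"
            curves.append(stripped.split(".")[0].strip())
        elif section == "A":
            data_rows.append([float(v) for v in stripped.split()])
    col = curves.index(mnemonic)
    return [row[col] for row in data_rows if row[col] != null_value]

sample = """~Curve
DEPT.FT  : Measured depth
NPHI.V/V : Neutron porosity
~ASCII
5000.0  0.12
5000.5  -999.25
5001.0  0.15
"""
print(read_las_curve(sample, "NPHI"))  # the -999.25 null row is dropped
```

Even this toy version has to know about null-value conventions; the real adapter layers on every other LAS quirk listed above.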

WITSML Adapter -- Connects to WITSML servers using the Energistics Transfer Protocol (ETP) for near-real-time data. Exposes drilling parameters, mud logs, trajectories, and formation evaluation data. Supports WITSML 1.4.1 and 2.0 schemas.

Production Database Adapter -- Connects to SQL databases (PostgreSQL, SQL Server, Oracle) or accepts CSV/Excel uploads. Exposes tools like get_production_history, get_well_header, get_well_test, and get_injection_history. Handles the reality that every operator's schema is different by using a configurable field mapping layer.
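The configurable field mapping layer might look like the sketch below: a per-operator dictionary that translates the tool's canonical parameters into whatever columns the operator's schema actually uses before the query is built. The table and column names here are invented for illustration.

```python
# Hypothetical per-operator field mappings: canonical names on the left,
# the operator's actual schema on the right.
FIELD_MAP = {
    "operator_a": {
        "table": "prod_daily",
        "well_id": "API_NO",
        "date": "PROD_DATE",
        "oil_rate": "OIL_BBL",
    },
    "operator_b": {
        "table": "daily_volumes",
        "well_id": "well_uwi",
        "date": "vol_date",
        "oil_rate": "oil_stb",
    },
}

def build_production_query(operator: str, well_id: str, days: int) -> tuple[str, tuple]:
    """Build a parameterized get_production_history query against
    whatever schema this operator actually uses."""
    m = FIELD_MAP[operator]
    sql = (
        f"SELECT {m['date']}, {m['oil_rate']} FROM {m['table']} "
        f"WHERE {m['well_id']} = %s "
        f"AND {m['date']} >= CURRENT_DATE - %s ORDER BY {m['date']}"
    )
    return sql, (well_id, days)

sql, params = build_production_query("operator_b", "PAD-7A", 90)
print(sql)
```

The AI agent always calls the same `get_production_history` tool; only the mapping file changes between deployments, which is what keeps the server reusable across operators.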

Reservoir Simulation Adapter -- Parses Eclipse (.DATA input, .UNRST/.UNSMRY output), CMG (IMEX/GEM/STARS), and OPM Flow formats. Exposes tools like read_simulation_input, read_simulation_output, compare_simulation_cases, and extract_well_results.

Completion Adapter -- Reads completion records from databases or structured files. Exposes tools like get_completion_design, compare_completions, and search_by_design_parameter. Handles the heterogeneity of completion data formats across operators.

Regulatory Adapter -- Connects to public APIs and data portals for state regulatory commissions (Texas RRC, COGCC, NDIC, OCC). Exposes tools like get_regulatory_production, get_permit_data, and get_well_status.

Layer 2: Unit Conversion and Validation

This is where petroleum engineering domain knowledge becomes essential. An MCP server for oilfield data must understand:

Measured depth (MD) versus true vertical depth (TVD), and the datum each log or survey references.

Field versus SI units -- barrels versus cubic meters, psi versus kPa -- and mixed-unit datasets.

Allocated versus measured volumes -- prorated allocation is not the same as metered production.

Null-value and QC conventions in log and production data.

Without this layer, an AI agent might mix up measured depth and true vertical depth, compare oil rates in different units, or treat prorated allocation volumes as measured production. Domain-aware validation is not optional -- it is the difference between a useful tool and a dangerous one.
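Here is a sketch of the kind of unit handling this layer performs. The conversion factors are standard (1 bbl ≈ 0.158987 m³, 1 psi ≈ 6.894757 kPa); the API shape, with its refusal to silently pass through unknown unit pairs, is our assumption about good design, not a spec.

```python
# Standard oilfield-to-SI conversion factors.
BBL_TO_M3 = 0.158987
PSI_TO_KPA = 6.894757

CONVERSIONS = {
    ("bbl/d", "m3/d"): BBL_TO_M3,
    ("psi", "kPa"): PSI_TO_KPA,
}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert between declared units; fail loudly on unknown pairs
    rather than letting mismatched units through silently."""
    if from_unit == to_unit:
        return value
    try:
        return value * CONVERSIONS[(from_unit, to_unit)]
    except KeyError:
        raise ValueError(f"No conversion defined: {from_unit} -> {to_unit}")

# An agent comparing rates from two sources must normalize first.
rate_field = 800.0                       # bbl/d
rate_si = convert(rate_field, "bbl/d", "m3/d")
print(round(rate_si, 2))                 # 127.19
```

The raise-on-unknown behavior is the important part: a validation layer that guesses defeats its own purpose.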

Layer 3: MCP Interface

The top layer exposes everything through the standard MCP protocol. Any MCP-compatible client -- Claude Desktop, a custom Python agent using the MCP SDK, a web application -- can connect and use the tools without knowing anything about the underlying data formats.

This is the key advantage of MCP over custom API integrations: build once, use everywhere. An MCP server built for Anthropic's Claude works identically with OpenAI's agents, Google's Gemini, or any future LLM platform that supports the protocol.

Implementation Considerations

Building an MCP server for oilfield data raises several practical questions that are worth addressing directly.

Security and Data Access Control

Production data is sensitive. Well logs can reveal competitive intelligence. Simulation models represent millions of dollars in engineering work. An oilfield MCP server must support:

Authentication and role-based access control over which tools, wells, and datasets each user can reach.

Audit logging of every tool call -- who requested what data, and when.

Read-only access by default, with explicit scoping for any write operations.

MCP already includes basic security primitives. The Ignition MCP Module, for example, is being built with access controls and audit trails because industrial environments demand them. An oilfield MCP server should meet the same standard.
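One simple pattern is a guard wrapped around every tool handler: check the caller's role, write an audit entry, then execute. The roles, permission table, and log format below are illustrative assumptions, not part of the MCP specification.

```python
import functools

AUDIT_LOG: list[str] = []

# Hypothetical policy: which roles may call which tools.
TOOL_PERMISSIONS = {
    "get_production_history": {"engineer", "analyst"},
    "read_simulation_output": {"engineer"},
}

def requires_role(tool_name: str):
    """Decorator: enforce role-based access and record an audit entry
    for every call, allowed or denied."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(caller_role: str, *args, **kwargs):
            if caller_role not in TOOL_PERMISSIONS[tool_name]:
                AUDIT_LOG.append(f"DENIED {tool_name} role={caller_role}")
                raise PermissionError(f"{caller_role} may not call {tool_name}")
            AUDIT_LOG.append(f"ALLOWED {tool_name} role={caller_role}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@requires_role("read_simulation_output")
def read_simulation_output(case: str) -> str:
    return f"results for {case}"  # stub for the real adapter call

print(read_simulation_output("engineer", "Case 3"))  # allowed and logged
```

A real deployment would tie the role to the authenticated MCP session rather than passing it as an argument, but the enforcement point is the same: the check lives in the server, not in the agent's prompt.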

Handling Messy Data

Anyone who has worked with oilfield data knows the reality: LAS files have formatting errors, production databases have gaps, completion records are inconsistent, and different wells use different naming conventions. An MCP server cannot pretend the data is clean.

The approach we advocate: report data quality honestly. Every response from the MCP server should include quality metadata -- completeness percentage, flagged anomalies, known gaps, unit verification status. Let the AI agent (and ultimately the engineer) decide how to handle data quality issues rather than silently hiding them.
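Concretely, a tool response could wrap its data in an envelope that reports quality rather than hiding it. The envelope fields below are our suggestion, not part of the MCP spec.

```python
def with_quality_metadata(series: list, expected_count: int, unit: str) -> dict:
    """Wrap a data series with honest quality metadata instead of
    silently interpolating gaps or dropping null records."""
    nulls = [i for i, v in enumerate(series) if v is None]
    completeness = (len(series) - len(nulls)) / expected_count
    return {
        "data": series,
        "unit": unit,
        "quality": {
            "expected_records": expected_count,
            "returned_records": len(series),
            "null_indices": nulls,
            "completeness": round(completeness, 3),
        },
    }

# 90 days requested, 88 rows returned, one of them null.
response = with_quality_metadata([812.0, None, 790.5] + [780.0] * 85, 90, "bbl/d")
print(response["quality"]["completeness"])
```

An agent that sees `completeness: 0.967` can caveat its answer or ask for a different interval; an agent handed a silently patched series cannot.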

Performance at Scale

A mid-size operator with 2,000 wells might have 50,000 LAS files, 10 years of daily production data (7 million rows), and hundreds of simulation cases. The MCP server needs to handle queries efficiently:

Index file metadata (well names, formations, curve mnemonics) ahead of time, so searches do not scan every file.

Cache parsed results for frequently accessed files and tables.

Summarize and paginate large results -- an LLM context window cannot hold 7 million rows, so the server should return aggregates, samples, or statistics by default and raw data only on request.

What the Industry Needs to Build

An oilfield MCP server should start with the most common use cases and the most widely used data formats:

LAS file access -- list, search, and read well log curves.

Production history -- query daily volumes from SQL databases or flat files.

Completion records -- pull and compare design parameters across wells.

Simulation results -- extract well-level output from Eclipse and CMG files.

Regulatory data -- retrieve public production and permit records from state commissions.

This infrastructure should be open source. The petroleum engineering community benefits most when fundamental data access tooling is open -- the value layer is in what you build on top of it, not in the connectors themselves. The power systems community understood this and built PowerMCP as open source. Petroleum engineering should follow the same playbook.

What You Can Do Right Now

You do not need to wait for someone else to build this. Here is how to get involved:

If You Are a Petroleum Engineer Who Codes

Start small: use the MCP Python SDK to wrap a data source you already work with -- a folder of LAS files read with lasio, or a production table -- and connect it to an MCP-compatible client such as Claude Desktop. A working prototype for one data type is the fastest way to understand the protocol.

If You Manage an Engineering or Data Team

Inventory your data sources against the adapter list above and identify where a standardized AI access layer would save the most engineering hours. A small pilot -- one data source, one workflow -- is enough to evaluate the approach before committing to broader integration.

If You Want to Build or Contribute

The petroleum engineering community needs people with expertise in WITSML, reservoir simulation file formats (especially CMG), state regulatory data APIs (Texas RRC, COGCC), SCADA historian integration, and petrophysical data processing. If you are building in this space, we would love to hear from you.

The Bigger Picture

MCP is not just a data access protocol. It is the infrastructure layer that makes agentic AI possible in petroleum engineering.

Without MCP, building an AI agent for oilfield operations means writing custom data integrations for every data source, every operator, every deployment. That is why most "AI in oil and gas" projects take 6-12 months and cost hundreds of thousands of dollars.

With MCP, the data access layer is standardized and reusable. Building an AI agent becomes a matter of composing existing tools into workflows, not re-engineering data pipelines from scratch. The cost and timeline drop by an order of magnitude.

The power systems community understood this and built PowerMCP. The building energy community understood this and built EnergyPlus-MCP. Inductive Automation understood this and is building an MCP module for Ignition.

It is time for petroleum engineering to catch up. The data formats are more complex, the stakes are higher, and the potential impact is larger. An upstream operator with 2,000 wells and a functioning MCP server will be able to deploy AI agents that a competitor without one simply cannot match.

The foundation needs to be built. The question is who builds it first.

Interested in MCP for petroleum engineering? Get in touch.
