Editorial disclosure
This article is published by Groundwork Analytics LLC. The author, Dr. Mehrdad Shirangi, is the founder of Groundwork Analytics and holds a PhD from Stanford University in Energy Systems Optimization. This workshop references Groundwork's open-source tools, including petro-mcp. All code and materials are freely available. No SPE chapter or university has paid for placement in this article.
This article is not about AI in petroleum engineering. This article is the workshop.
What follows is a complete, ready-to-deliver 2-hour workshop that any SPE student chapter can run as a guest lecture, technical seminar, or chapter event. The structure is detailed enough that a presenter can walk through it without additional materials. The code is real and runnable. The career advice is honest.
If you are an SPE student chapter officer looking for a technical event that goes beyond recruiter presentations and company info sessions, this is it. If you are a PE faculty member looking for a guest lecture on applied AI, everything you need is below. And if you are a student who just wants to learn -- read on, open a Jupyter notebook, and follow along.
- Workshop duration: 2 hours
- Audience: Petroleum engineering students (junior/senior undergrad, graduate) and related engineering disciplines
- Prerequisites: Basic Python familiarity (loops, functions, pandas). No ML experience required.
- Materials needed: Laptop with Python, Jupyter, and a few libraries (setup instructions at the end)
Workshop Agenda at a Glance
| Time | Section | Duration |
|---|---|---|
| 0:00 | Opening: The AI Landscape in Oil & Gas | 10 min |
| 0:10 | Part 1: What AI Actually Does in Oil & Gas | 25 min |
| 0:35 | Part 2: Hands-On Demo -- Python for PE | 25 min |
| 1:00 | Break | 10 min |
| 1:10 | Part 3: The MCP Revolution -- Connecting AI to Oilfield Data | 25 min |
| 1:35 | Part 4: Career Panel / Q&A | 15 min |
| 1:50 | Closing: Resources and Next Steps | 10 min |
Opening: The AI Landscape in Oil & Gas (10 minutes)
Start with the numbers. Not hype. Not predictions from consulting firms trying to sell reports. The actual state of AI adoption in upstream oil and gas as of 2026.
The Numbers That Matter
- $4.28 billion -- the size of the AI in oil and gas market in 2026, growing at a 13% CAGR (MarketsandMarkets, 2025)
- 49% of oil and gas organizations plan to deploy some form of agentic AI by end of 2026 (World Economic Forum / Accenture, 2025)
- 13% have already deployed agentic AI in some capacity
- 45% of petroleum engineering professionals report receiving zero formal AI training from their employers (SPE Digital Energy Conference, 2025)
- $130 million -- AI-generated cost savings reported by Equinor in 2025 alone
- $1.8 billion -- value attributed to AI initiatives at Saudi Aramco in 2024
The Framing
Here is the presenter's key message for the opening:
"The oil and gas industry is spending billions on AI. Nearly half the workforce has received zero training. That gap is your opportunity. This workshop will not make you an AI expert in two hours. It will show you exactly what AI does in petroleum engineering, let you write real code that solves real PE problems, and point you toward the skills that will make you the most valuable engineer in the room."
Do not spend time on AI definitions or explaining what a neural network is. These students have heard it. Get to the specifics of what AI does in their discipline.
Part 1: What AI Actually Does in Oil & Gas (25 minutes)
This section covers six real engineering applications. No ChatGPT demos. No image generation gimmicks. Every example here is something that is running in production at an operator or service company right now.
1. Production Surveillance and Anomaly Detection
The problem: A mid-size Permian operator runs 500 wells. Each well produces daily oil, gas, and water volumes, reports wellhead pressures, and generates dozens of SCADA parameters. A production engineer is responsible for 80-120 wells. Something goes wrong on Well PAD-3H at 2 AM on a Tuesday. Nobody notices until the weekly production review on Friday. Three days of deferred production.
What AI does: Anomaly detection models monitor streaming production data and flag deviations from expected behavior in near-real-time. The models learn each well's normal operating envelope and alert when something falls outside it -- a sudden water cut increase, a pressure drop, a gas-oil ratio spike.
The engineering insight: This is not sophisticated ML. Most production anomaly detection runs on statistical methods -- moving averages, z-scores, isolation forests. The hard part is not the algorithm. It is connecting to the data (SCADA systems, production databases) and reducing false alarm rates to something an engineer can actually act on.
Who does it: SLB's Tela platform, Baker Hughes' Leucipa, Cognite's Atlas AI, and dozens of smaller vendors including Groundwork Analytics.
2. Decline Curve Prediction: Physics-Informed vs. Pure ML
The problem: Every producing well needs a production forecast. Traditional Arps decline curves assume constant operating conditions that rarely hold in unconventional plays. Pure ML models overfit to training data and fail on new wells or changing conditions.
What AI does: Physics-informed machine learning models combine the governing equations of reservoir flow with neural network flexibility. The model knows that production must decline according to physical laws, but it learns the specific decline behavior from data. This produces forecasts that are both physically consistent and data-adaptive.
The engineering insight: Pure data-driven models for decline curve analysis frequently underperform in the scenarios that matter most. Research has shown 30-60% error reduction when physics constraints are embedded in the ML model compared to black-box approaches. For a deeper treatment, see our article on physics-informed vs. pure ML approaches to decline curve analysis.
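To make that gap concrete, here is a minimal, self-contained sketch -- synthetic data and illustrative parameters, not results from any cited study -- comparing a black-box polynomial fit against a physics-based Arps fit. Both match a 3-year history; the difference shows up when you extrapolate beyond the training window, which is exactly the scenario a reserves forecast lives in.

```python
import numpy as np
from scipy.optimize import curve_fit

def arps(t, qi, di, b):
    """Arps hyperbolic decline -- the embedded physics."""
    return qi / (1 + b * di * t) ** (1 / b)

rng = np.random.default_rng(0)
t_train = np.arange(1, 37)   # 3 years of monthly history
q_train = arps(t_train, 12000, 0.07, 1.1) * (1 + 0.03 * rng.standard_normal(36))

# "Pure ML" stand-in: a cubic polynomial with no physics in it
poly = np.polynomial.Polynomial.fit(t_train, q_train, deg=3)

# Physics-based alternative: fit the Arps model itself
popt, _ = curve_fit(arps, t_train, q_train, p0=[12000, 0.08, 1.0],
                    bounds=([0, 0, 0], [50000, 1, 2]))

# Extrapolate both far beyond the training window
t_future = 120
print(f"Polynomial forecast at month {t_future}: {poly(t_future):,.0f} bbl/month")
print(f"Arps forecast at month {t_future}:       {arps(t_future, *popt):,.0f} bbl/month")
# The polynomial is free to go negative or blow up outside its training
# range; the Arps fit declines monotonically toward zero, as physics requires.
```

The cubic here is a stand-in for any unconstrained data-driven model; the point is that the Arps form cannot produce an unphysical forecast no matter how noisy the training data.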
3. Drilling Optimization: ROP Prediction and NPT Reduction
The problem: Drilling a horizontal well in the Permian costs $6-10 million. Non-productive time (NPT) -- stuck pipe, lost circulation, equipment failures -- adds 10-20% to well costs. Rate of penetration (ROP) optimization can shave days off a drilling program.
What AI does: ML models trained on offset well data predict ROP as a function of weight on bit, RPM, flow rate, mud weight, and formation properties. More advanced systems predict drilling dysfunctions (stick-slip, whirl) before they occur and recommend parameter adjustments. NPT prediction models identify wells at risk of specific failure modes before spud.
The engineering insight: The value here is not the model itself -- it is getting the model's recommendations into the driller's hands in real time. The data pipeline from surface sensors to the model to the rig floor is where most drilling AI projects fail. WITSML, the industry standard for real-time drilling data transmission, is the backbone of this pipeline.
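As an illustration of the workflow -- not any vendor's actual model -- the sketch below fits a simple linear ROP model to synthetic offset-well data with numpy. Commercial systems use richer models (gradient boosting, neural networks) and add formation properties, but the pattern is the same: learn sensitivities from offset data, then predict ROP for a proposed parameter set.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500   # synthetic offset-well drilling records

# Surface drilling parameters
wob = rng.uniform(10, 40, n)      # weight on bit, klbf
rpm = rng.uniform(60, 180, n)     # rotary speed
flow = rng.uniform(400, 900, n)   # flow rate, gpm

# Hidden "true" relationship plus noise (coefficients are made up)
rop = 2.0 * wob + 0.3 * rpm + 0.05 * flow + rng.normal(0, 5, n)   # ft/hr

# Learn ROP sensitivities from offset data -- a linear stand-in for the
# gradient-boosted or neural models used in commercial systems
X = np.column_stack([wob, rpm, flow, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, rop, rcond=None)
print(f"Learned sensitivities -- WOB: {coef[0]:.2f}, RPM: {coef[1]:.2f}, "
      f"flow: {coef[2]:.3f} (ft/hr per unit)")

# Predict ROP for a proposed parameter set
proposed = np.array([25.0, 120.0, 650.0, 1.0])   # WOB, RPM, flow, intercept
print(f"Predicted ROP at WOB=25, RPM=120, flow=650: {proposed @ coef:.0f} ft/hr")
```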
4. Completion Design Optimization
The problem: Completion costs represent 60-70% of a horizontal well's total cost. Operators pump millions of pounds of proppant and thousands of barrels of fluid into each well. The relationship between completion parameters (cluster spacing, proppant loading, fluid volume, pump rate) and well performance is complex, non-linear, and basin-specific.
What AI does: ML models trained on hundreds to thousands of completed wells learn the relationship between completion design, reservoir properties (net pay, porosity, pressure), well spacing, and EUR. These models identify optimal completion designs for new wells and quantify the expected production uplift from design changes.
The engineering insight: Completion optimization is where AI has arguably delivered the most measurable value in upstream E&P. Multiple operators have published SPE papers documenting 10-20% EUR improvements from ML-optimized completions. The key is that these models augment engineering judgment -- they narrow the design space so engineers can focus on the designs most likely to outperform.
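A toy version of that design-space narrowing, with a hypothetical response surface standing in for a trained ML model (every coefficient below is made up for illustration):

```python
import numpy as np

# Candidate completion designs (illustrative ranges)
cluster_spacing = np.linspace(15, 60, 10)   # ft between perf clusters
proppant = np.linspace(1000, 3000, 10)      # lb of proppant per lateral ft

# Hypothetical EUR response surface standing in for a trained ML model:
# diminishing returns on proppant, an interior optimum on spacing
def predicted_eur(spacing_ft, lb_per_ft):
    return (400
            + 120 * np.log(lb_per_ft / 1000)   # diminishing returns
            - 0.08 * (spacing_ft - 30) ** 2)   # optimum near 30 ft

S, P = np.meshgrid(cluster_spacing, proppant, indexing="ij")
eur = predicted_eur(S, P)   # MBO

# Narrow the design space: rank the grid by predicted EUR
i, j = np.unravel_index(np.argmax(eur), eur.shape)
print(f"Top design: {cluster_spacing[i]:.0f} ft spacing at {proppant[j]:.0f} lb/ft "
      f"-> predicted EUR {eur[i, j]:.0f} MBO")
```

In practice the model is fitted to real wells rather than written down, and the engineer reviews the top handful of designs instead of blindly taking the argmax -- that is what "augmenting engineering judgment" means here.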
5. Artificial Lift Failure Prediction
The problem: Rod pump failures cost $15,000-50,000 per event in workover costs plus lost production. ESP failures are even more expensive. Most operators run artificial lift equipment to failure, then react.
What AI does: Predictive models analyze motor current, pump vibration, dynamometer cards (rod pump), and intake pressure trends (ESP) to identify equipment approaching failure. Lead times of 7-30 days before failure give operations teams time to plan workovers rather than respond to emergencies.
The engineering insight: Rod pump failure prediction using ML analysis of dynamometer card patterns is one of the oldest AI applications in petroleum engineering -- it predates the current AI hype by two decades. Chord Energy reported in 2025 that they run AI-driven optimization on 99% of their rod lift wells. This is a mature, proven application.
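A minimal sketch of the trend-based early-warning idea, using synthetic ESP intake-pressure data and a rolling-slope alarm. The threshold and lead time here are illustrative, not field-calibrated.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)

# Synthetic daily ESP intake pressure: stable operation, then the slow
# decline that often precedes failure (gas interference, pump wear)
days = 120
pressure = 900.0 + rng.normal(0, 5, days)   # psi
pressure[80:] -= np.linspace(0, 120, 40)    # drift starts on day 80

s = pd.Series(pressure)

# Rolling 14-day trend (psi/day) via a linear fit per window
def window_slope(x):
    t = np.arange(len(x))
    return np.polyfit(t, x, 1)[0]

slope = s.rolling(14).apply(window_slope, raw=True)

# Alarm on a sustained decline steeper than 1.5 psi/day
alarm_days = slope[slope < -1.5].index
print(f"First early-warning alarm on day {alarm_days[0]} -- "
      f"well before a run-to-failure event would force an emergency workover")
```

Production systems add dynamometer card pattern recognition, motor current signatures, and vibration data on top of this, but a simple trend alarm already converts a surprise failure into a planned workover.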
6. Reservoir Characterization and History Matching
The problem: History matching a reservoir simulation model to observed production data is one of the most time-consuming tasks in reservoir engineering. It requires running hundreds of simulations, each taking hours, to calibrate uncertain parameters.
What AI does: Surrogate models (trained neural networks or Gaussian processes) approximate the full reservoir simulator at a fraction of the computational cost, enabling rapid exploration of the parameter space. Ensemble-based methods and deep learning approaches can reduce history matching time from weeks to hours.
The engineering insight: This is the application closest to the author's own research background. Physics-based surrogate models for reservoir simulation are an active research area with direct practical value. If you are a graduate student looking for research topics at the intersection of AI and PE, this is fertile ground.
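To show the surrogate idea end-to-end, here is a sketch using scipy's `RBFInterpolator` as the surrogate and a cheap analytic function standing in for the expensive simulator (the "physics" below is purely illustrative): run the simulator at a few dozen sampled parameter combinations, train the surrogate on those runs, then screen thousands of candidates almost for free.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Stand-in for an expensive reservoir simulator: cumulative production
# as a function of two uncertain parameters (permeability multiplier,
# aquifer strength). Purely illustrative.
def expensive_simulator(theta):
    k_mult, aquifer = theta
    return 1e6 * k_mult ** 0.6 * (1 + 0.3 * np.tanh(aquifer))

rng = np.random.default_rng(1)

# Run the "simulator" at only 40 sampled parameter combinations
samples = rng.uniform([0.5, 0.0], [2.0, 5.0], size=(40, 2))
responses = np.array([expensive_simulator(s) for s in samples])

# Train the surrogate on those runs
surrogate = RBFInterpolator(samples, responses, kernel="thin_plate_spline")

# Now the surrogate can screen thousands of candidates almost for free
candidates = rng.uniform([0.5, 0.0], [2.0, 5.0], size=(5000, 2))
approx = surrogate(candidates)
exact = np.array([expensive_simulator(c) for c in candidates])

rel_err = np.abs(approx - exact) / exact
print(f"Median surrogate error over 5,000 evaluations: {np.median(rel_err):.2%}")
```

With a real simulator each training run takes hours, so the 40-run training budget is the whole game; the history-matching loop then optimizes against the surrogate and only re-runs the full simulator to verify the best candidates.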
Part 2: Hands-On Demo -- Python for PE (25 minutes)
This is where the workshop shifts from lecture to hands-on. The presenter shares their screen and live-codes in a Jupyter notebook. Students with laptops open follow along. Students without laptops watch -- the code is all reproduced below and they can run it after the workshop.
Setup
Open a Jupyter notebook. Run the following to verify your environment:
```python
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

print("All imports successful. Let's go.")
```
Exercise 1: Load and Plot Production Data
We will work with synthetic production data that mimics a typical Permian Basin horizontal well. In a real workflow, you would pull this from the Texas RRC, Enverus, or your operator's production database.
```python
# Generate synthetic production data for a horizontal well
# Mimics monthly oil production (bbl/month) over 5 years
np.random.seed(42)

months = np.arange(1, 61)   # 60 months of production
qi = 15000   # Initial production: 15,000 bbl/month
di = 0.08    # Initial decline rate (nominal, per month)
b = 1.2      # Arps b-factor (hyperbolic)

# Arps hyperbolic decline
q_true = qi / (1 + b * di * months) ** (1 / b)

# Add realistic noise (measurement error + operational variation)
noise = np.random.normal(0, 0.05 * q_true)
q_observed = q_true + noise

# Simulate some operational events
q_observed[14:16] *= 0.3   # Months 15-16: well shut in for offset frac
q_observed[30:32] *= 0.6   # Months 31-32: ESP failure and workover
q_observed[q_observed < 0] = 0

# Create a DataFrame
production = pd.DataFrame({
    'month': months,
    'oil_bbl': q_observed.round(0),
    # 'ME' = month-end frequency (pandas >= 2.2; use 'M' on older versions)
    'date': pd.date_range('2021-01-01', periods=60, freq='ME')
})

# Plot it
fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(production['date'], production['oil_bbl'], 'o-', markersize=4,
        color='#2E86AB', label='Monthly Oil Production')
ax.set_xlabel('Date')
ax.set_ylabel('Oil Production (bbl/month)')
ax.set_title('Horizontal Well Production History -- Permian Basin')
ax.legend()
ax.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()

print(f"Peak production: {production['oil_bbl'].max():,.0f} bbl/month")
print(f"Current production: {production['oil_bbl'].iloc[-1]:,.0f} bbl/month")
print(f"Cumulative production: {production['oil_bbl'].sum():,.0f} bbl")
```
Discussion point for the presenter: Ask the students what they notice in the production profile. The shut-in at month 15 and the ESP failure at month 31 should be visible. This is exactly the kind of pattern that anomaly detection models look for.
Exercise 2: Decline Curve Analysis with scipy
Now we fit an Arps hyperbolic decline curve to this data -- the same kind of analysis PE students do manually in their reservoir engineering courses, but automated with Python.
```python
# Define the Arps hyperbolic decline model
def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic decline equation.

    Parameters:
        t: time (months)
        qi: initial production rate
        di: initial nominal decline rate (1/month)
        b: Arps b-factor (0 < b < 2 for hyperbolic)
    """
    return qi / (1 + b * di * t) ** (1 / b)

# Remove the anomalous months before fitting
# In practice, you'd flag these with domain knowledge or anomaly detection
mask = np.ones(len(production), dtype=bool)
mask[14:16] = False   # Shut-in period
mask[30:32] = False   # ESP failure
clean_data = production[mask].copy()

# Fit the decline curve
popt, pcov = curve_fit(
    arps_hyperbolic,
    clean_data['month'].values,
    clean_data['oil_bbl'].values,
    p0=[15000, 0.08, 1.2],             # Initial guess
    bounds=([0, 0, 0], [50000, 1, 2])  # Physical bounds
)
qi_fit, di_fit, b_fit = popt

print("Fitted parameters:")
print(f"  qi = {qi_fit:,.0f} bbl/month")
print(f"  di = {di_fit:.4f} /month ({di_fit*12:.1%} /year)")
print(f"  b  = {b_fit:.2f}")

# Plot the fit
months_forecast = np.arange(1, 121)   # Forecast out to 10 years
q_forecast = arps_hyperbolic(months_forecast, qi_fit, di_fit, b_fit)

fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(production['month'], production['oil_bbl'], 'o',
        markersize=4, color='#2E86AB', label='Observed', alpha=0.7)
ax.plot(months_forecast, q_forecast, '-', color='#E8451E',
        linewidth=2, label='Arps Hyperbolic Fit')
ax.axvline(x=60, color='gray', linestyle='--', alpha=0.5, label='Forecast start')
ax.set_xlabel('Month')
ax.set_ylabel('Oil Production (bbl/month)')
ax.set_title('Decline Curve Analysis -- Arps Hyperbolic Fit')
ax.legend()
ax.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()

# Calculate EUR (10-year)
# np.trapezoid requires NumPy >= 2.0 (use np.trapz on older versions)
eur_10yr = np.trapezoid(q_forecast, months_forecast)
print(f"\nEstimated Ultimate Recovery (10-year): {eur_10yr:,.0f} bbl")
print(f"Remaining reserves (months 61-120): {np.trapezoid(q_forecast[60:], months_forecast[60:]):,.0f} bbl")
```
Discussion point: Compare the fitted parameters to the "true" parameters we used to generate the data. How close did the fit get? What happens if you do not remove the anomalous months?
Exercise 3: Simple Anomaly Detection
Now for the AI part. We build a dead-simple anomaly detector that flags unexpected production drops. This is the core logic behind production surveillance systems that operators pay vendors six figures for.
```python
def detect_anomalies(production_series, window=6, threshold=2.0):
    """Detect production anomalies using rolling statistics.

    Flags months where production drops below the rolling mean
    by more than `threshold` standard deviations.

    Parameters:
        production_series: pandas Series of production values
        window: rolling window size (months)
        threshold: number of standard deviations for flagging
    """
    rolling_mean = production_series.rolling(window=window, min_periods=3).mean()
    rolling_std = production_series.rolling(window=window, min_periods=3).std()
    lower_bound = rolling_mean - threshold * rolling_std
    anomalies = production_series < lower_bound
    return anomalies, rolling_mean, lower_bound

# Run anomaly detection
anomalies, rolling_mean, lower_bound = detect_anomalies(
    production['oil_bbl'], window=6, threshold=2.0
)

# Plot results
fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(production['month'], production['oil_bbl'], 'o-',
        markersize=4, color='#2E86AB', label='Production')
ax.plot(production['month'], rolling_mean, '--',
        color='#4CAF50', label='Rolling Mean (6-month)')
ax.plot(production['month'], lower_bound, ':',
        color='#FF9800', label='Alert Threshold')
ax.scatter(production.loc[anomalies, 'month'],
           production.loc[anomalies, 'oil_bbl'],
           color='red', s=100, zorder=5, label='Anomaly Detected')
ax.set_xlabel('Month')
ax.set_ylabel('Oil Production (bbl/month)')
ax.set_title('Production Anomaly Detection')
ax.legend()
ax.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()

n_anomalies = anomalies.sum()
print(f"\nAnomalies detected: {n_anomalies} months")
for idx in production[anomalies].index:
    row = production.loc[idx]
    print(f"  Month {row['month']:.0f}: {row['oil_bbl']:,.0f} bbl "
          f"(expected ~{rolling_mean.loc[idx]:,.0f} bbl)")
```
Discussion point: This 20-line function is doing what production engineers do mentally when they review production plots. The value of automating it is not that the algorithm is smarter -- it is that the algorithm reviews 500 wells every hour, never takes a day off, and never misses a well because it was busy with a different fire drill.
Connecting to Real Data: petro-mcp
The code above works on synthetic data. In practice, you need to get real production data, well logs, and completion information into your Python environment. That is where petro-mcp comes in, and we will cover it in Part 3 after the break.
Break (10 minutes)
Tell students: "Install petro-mcp during the break if you want to follow the next demo."
```bash
pip install petro-mcp
```
Part 3: The MCP Revolution -- Connecting AI to Oilfield Data (25 minutes)
What Is MCP and Why Should PE Students Care?
The Model Context Protocol (MCP) is an open standard introduced by Anthropic for connecting AI models to external data sources and tools. Think of it as a USB port for AI: a universal interface that lets any AI application -- Claude, ChatGPT, a custom agent -- plug into structured data without custom integration work.
MCP defines three core primitives:
- Tools -- Functions the AI can call. Example: "Query the production database for well X."
- Resources -- Data objects the AI can read. Example: a well header record or a completion summary.
- Prompts -- Pre-built templates that guide the AI through domain-specific workflows. Example: "Analyze this decline curve and flag anomalies."
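The tools primitive is easy to demystify with a dependency-free toy. This is not the real MCP SDK or petro-mcp's actual API -- the real protocol speaks JSON-RPC through an SDK, and the function names below are illustrative -- but the register-discover-call pattern is the same:

```python
# A toy, dependency-free sketch of the MCP "tools" primitive.
# (The real protocol uses JSON-RPC and an SDK; names here are illustrative.)
TOOLS = {}

def tool(fn):
    """Register a function so an agent can discover and call it."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_well_info(well_id: str) -> dict:
    # A real server would query a well-header database here
    return {"well_id": well_id, "basin": "Permian", "lateral_ft": 10000}

@tool
def analyze_decline_curve(well_id: str) -> dict:
    # A real implementation would fit Arps to the production history
    return {"well_id": well_id, "qi": 15000, "di": 0.08, "b": 1.2}

# What the host application does when the model emits a tool call:
call = {"name": "get_well_info", "arguments": {"well_id": "PERMIAN-001"}}
result = TOOLS[call["name"]](**call["arguments"])
print(result)
```

The AI never imports your code; it sees the registered tool names and schemas, emits a call like the `call` dict above, and the host executes it and returns the result.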
As of March 2026, there are MCP servers for databases, cloud platforms, developer tools, and thousands of SaaS applications. For the oil and gas industry? Almost nothing. For a detailed breakdown of this gap, see our article on MCP servers for oilfield data.
Why This Gap Is Your Opportunity
The energy industry generates enormous volumes of specialized data in formats that no general-purpose AI tool understands: LAS files for well logs, WITSML for real-time drilling data, Eclipse/CMG simulation decks, SCADA historian databases, and state regulatory filings. Until someone builds MCP servers that speak these formats, AI agents cannot interact with oilfield data in any meaningful way.
This is not a computer science problem. It is a petroleum engineering problem. You need to understand what a gamma ray log means before you can build a tool that interprets one. You need to know what a dynamometer card looks like before you can build a predictive maintenance agent. Domain expertise is the bottleneck, not coding ability.
Demo: petro-mcp in Action
petro-mcp is an open-source MCP server for petroleum engineering data built by Groundwork Analytics. It gives AI agents the ability to work with well data, production history, and engineering calculations.
Here is what a session looks like when an AI agent has petro-mcp available:
User prompt to AI agent:
"Show me the production history for Well PERMIAN-001 and flag any anomalies in the last 6 months."
What the agent does behind the scenes (via MCP tools):
- Calls `get_well_info(well_id="PERMIAN-001")` -- retrieves well header, location, completion date
- Calls `get_production_data(well_id="PERMIAN-001", start_date="2025-09-01")` -- pulls monthly production volumes
- Calls `analyze_decline_curve(well_id="PERMIAN-001")` -- fits an Arps decline model and calculates EUR
- Compares recent production to the fitted decline trend
- Returns a natural-language summary with charts
The agent did not need custom code or API integration. It used standardized MCP tools that any AI application can call.
```python
# What a petro-mcp tool looks like under the hood
# (simplified for illustration)
from petro_mcp.tools import get_production_data, analyze_decline_curve

# An AI agent would call these tools automatically
# Here we call them directly to show what happens
production = get_production_data(well_id="PERMIAN-001")
analysis = analyze_decline_curve(well_id="PERMIAN-001")

print(f"Well: {analysis['well_id']}")
print(f"Qi: {analysis['qi']:,.0f} bbl/month")
print(f"Di: {analysis['di']:.4f} /month")
print(f"b-factor: {analysis['b']:.2f}")
print(f"EUR (10-year): {analysis['eur_10yr']:,.0f} bbl")
print(f"Anomalies detected: {len(analysis['anomalies'])}")
```
How Students Can Contribute to petro-mcp
petro-mcp is open source, hosted at github.com/petropt/petro-mcp, and actively looking for contributors. Here are specific things PE students can build:
- LAS file parser tool -- Build a tool that reads LAS files (using lasio) and makes well log data available to AI agents
- State regulatory data tools -- Build tools that pull public production data from the Texas RRC, Colorado COGCC, North Dakota NDIC, or Oklahoma OCC
- Completion analytics -- Build tools that analyze completion designs from FracFocus data
- Material balance calculator -- Wrap classic PE calculations (OOIP, OGIP, recovery factor) as MCP tools
- Type curve generator -- Build tools that generate type curves from public production data
Contributing to petro-mcp gives you three things: a GitHub portfolio that demonstrates PE domain knowledge, experience with a protocol that is becoming industry-standard, and the ability to tell a future employer: "I built open-source AI tooling for petroleum engineering."
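If the material balance idea appeals to you, the classic volumetric OOIP formula is a natural first calculation to wrap as a tool. This is a sketch of the engineering function itself, not petro-mcp's actual implementation:

```python
def ooip_volumetric(area_acres, thickness_ft, porosity, sw, bo):
    """Volumetric original oil in place (STB).

    OOIP = 7758 * A * h * phi * (1 - Sw) / Bo
    where 7758 converts acre-ft to reservoir barrels.
    """
    return 7758 * area_acres * thickness_ft * porosity * (1 - sw) / bo

# Example: 640-acre unit, 50 ft net pay, 8% porosity, 30% Sw, Bo = 1.2
print(f"OOIP: {ooip_volumetric(640, 50, 0.08, 0.30, 1.2):,.0f} STB")
# -> OOIP: 11,585,280 STB
```

Register a function like this as an MCP tool and an AI agent can answer "what's the OOIP for this unit?" with an auditable calculation instead of a guess.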
The Career Opportunity
Here is the blunt version: the oil and gas industry needs engineers who can build AI agents for oilfield data. Not data scientists who learned what a wellbore is last week. Not software engineers who think petroleum engineering is "like, fracking?" Engineers who understand both the domain and the tools.
As of 2026, that intersection is sparsely populated. The engineers who position themselves there -- who can write Python, understand MCP, and know what a production anomaly actually means -- will have career options that their classmates who only learned to run Eclipse do not.
Part 4: Career Panel / Q&A (15 minutes)
This section works best as an open conversation, but the presenter should hit these key points:
Skills That Actually Matter
Ranked by practical value for a PE student entering the workforce in 2026-2027:
- Domain knowledge -- This is your competitive advantage over computer science graduates. Understanding what production data means, how wells are completed, why artificial lift systems fail, and what drives drilling costs is the hardest thing to teach. You already have it or are building it.
- Python -- Not R. Not MATLAB (unless your advisor insists). Python is the language of production data science in oil and gas. Learn pandas, numpy, scipy, and matplotlib. That covers 80% of what you need.
- Data engineering basics -- SQL, data pipelines, API integrations. The ability to get data from Point A to Point B in a clean, automated way is more valuable than knowing the latest deep learning architecture.
- ML fundamentals -- Understand regression, classification, clustering, and time series analysis. You do not need to build transformers from scratch. You need to know when a random forest is the right tool and when it is not.
- Communication -- The engineer who can build an ML model and explain the results to a VP of Operations in plain English will advance faster than the engineer who can only build the model.
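To make the data-engineering point concrete, here is a tiny end-to-end example using Python's built-in `sqlite3` (synthetic data, illustrative column names): load production rows into a database, then answer an engineering question with one SQL query.

```python
import sqlite3

# A tiny "pipeline": load production rows into SQLite, then query them
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE production (well_id TEXT, month INTEGER, oil_bbl REAL)")
rows = [("PERMIAN-001", m, 15000 / (1 + 0.1 * m)) for m in range(1, 13)] + \
       [("PERMIAN-002", m, 9000 / (1 + 0.1 * m)) for m in range(1, 13)]
conn.executemany("INSERT INTO production VALUES (?, ?, ?)", rows)

# Which wells averaged more than 8,000 bbl/month in their first year?
query = """
    SELECT well_id, AVG(oil_bbl) AS avg_oil
    FROM production
    GROUP BY well_id
    HAVING avg_oil > 8000
    ORDER BY avg_oil DESC
"""
for well_id, avg_oil in conn.execute(query):
    print(f"{well_id}: {avg_oil:,.0f} bbl/month average")
```

Swap `:memory:` for a real database connection and the same three steps -- load, aggregate, filter -- are most of what "data pipeline" means in practice.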
Building a Portfolio
The hiring market rewards demonstrated ability over credentials. Here is how to build a portfolio:
- Contribute to open-source PE projects -- petro-mcp, lasio, welly, striplog, and the Open Porous Media (OPM) project all accept contributions
- Publish Jupyter notebooks -- Solve a real PE problem with data and code, put it on GitHub, and write a brief explanation. Five good notebooks are worth more than a polished resume
- Write about what you learn -- A LinkedIn post explaining how you used Python to analyze decline curves reaches more hiring managers than a line on your resume that says "proficient in Python"
- Participate in SPE data science competitions -- The SPE Student Data Analytics Competition and similar events are excellent resume builders
The Hiring Landscape
For a comprehensive, data-driven look at who is hiring, what they are paying, and where the opportunities are, read our State of Oil & Gas Hiring 2026 report. Key takeaways for students:
- Traditional PE hiring has been flat. Data engineering, AI/ML, and digital roles are growing.
- The great crew change is real: 50% of the upstream technical workforce is over 50. Retirements are accelerating.
- Mid-size operators (Permian Resources, Crescent Energy, Chord Energy) are often better bets for early-career engineers than majors, because you get broader exposure and more responsibility faster.
- Starting salaries for PE roles range from $85,000-$110,000. PE roles with strong data/AI skills command a 15-25% premium.
For additional career guidance, see our article on breaking into oil and gas in 2026.
Q&A
Open the floor. Common questions from students:
- "Do I need a master's degree?" -- Not for most industry roles. A master's helps for research-oriented positions and can differentiate you in a tight market, but experience and demonstrated skills matter more.
- "Should I learn petroleum engineering or data science?" -- Both. The hybrid skillset is more valuable than either alone. A PE degree with Python/ML skills is a stronger combination than a CS degree with no domain knowledge.
- "Is the oil and gas industry dying?" -- No, but it is changing. Global oil demand is still growing. The workforce is aging. The engineers who thrive will be those who can do more with less -- which is exactly what AI enables.
Closing: Resources and Next Steps (10 minutes)
Essential Resources
Open-source tools:
- petro-mcp -- MCP server for petroleum engineering data
- lasio -- LAS file parsing in Python
- welly -- Well log analysis in Python
- OPM Flow -- Open-source reservoir simulator
Learning paths:
- SPE PetroWiki -- Free, comprehensive PE reference
- Agile Scientific tutorials -- Geocomputing and subsurface Python
- Kaggle -- ML competitions and datasets (search for "oil gas" or "well log")
Groundwork Analytics articles (free, no paywall):
- Agentic AI in Upstream Oil & Gas -- What agentic AI means for E&P
- AI Agent for Production Reporting -- Step-by-step deployment guide
- Physics-Informed vs. Pure ML for Decline Curves -- Why black-box models fail
- MCP Servers for Oilfield Data -- The data connectivity gap
- State of O&G Hiring 2026 -- Data-driven hiring report
Stay Connected
- Follow Groundwork Analytics on LinkedIn and X/Twitter
- Star petro-mcp on GitHub and consider contributing
- Email questions to info@petropt.com
How to Request This Workshop
This workshop is available for any SPE student chapter, university petroleum engineering department, or student engineering organization. It can be delivered in-person or virtually.
To book:
- Email info@petropt.com with your chapter name, university, preferred date, and expected attendance
- Lead time: 2-3 weeks minimum
- Cost: Free for SPE student chapters
- Format: Virtual (Zoom/Teams) or in-person (travel costs may apply for in-person events outside of the presenter's region)
The presenter is Dr. Mehrdad Shirangi, founder of Groundwork Analytics and a Stanford PhD in Energy Systems Optimization who has been building AI solutions for the oil and gas industry since 2018.
Pre-Workshop Setup Guide
Distribute this to students 1 week before the workshop.
Required Software
- Python 3.10+ -- install via python.org or Anaconda
- Jupyter Notebook or JupyterLab:

```bash
pip install jupyterlab
```

- Required Python libraries:

```bash
pip install numpy pandas scipy matplotlib lasio petro-mcp
```
- Verify your installation -- Run this in a Jupyter cell:
```python
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
import lasio

print(f"NumPy: {np.__version__}")
print(f"Pandas: {pd.__version__}")
print("All good. See you at the workshop.")
```
Optional (for Part 3 deep dive)
- Claude Desktop or VS Code with Claude extension -- to see petro-mcp in action with a real AI agent
- A GitHub account -- if you want to contribute to petro-mcp during or after the workshop
Troubleshooting
- Windows users: If `pip` fails, try `python -m pip install` instead
- Mac users with Apple Silicon (M1/M2): Use `conda` instead of `pip` if you hit architecture issues with scipy
- Cannot install anything? No problem -- you can follow along on the presenter's screen and run everything later. All code is in this article.
Adaptations
Shortened Version: 1-Hour Workshop
If your chapter only has a 1-hour slot, cut the workshop as follows:
| Time | Section | Duration |
|---|---|---|
| 0:00 | Opening (condensed) | 5 min |
| 0:05 | Part 1: AI Applications (top 3 only) | 15 min |
| 0:20 | Part 2: Hands-On (Exercise 1 + 2 only) | 20 min |
| 0:40 | Part 3: MCP Overview (no live demo) | 10 min |
| 0:50 | Q&A and Resources | 10 min |
Skip Exercise 3 (anomaly detection) and the live petro-mcp demo. Focus on the decline curve analysis code and the career guidance.
Extended Version: Half-Day Workshop (4 hours)
For chapters that want deeper coverage, expand the workshop to include:
- Extended hands-on session (60 min): Students work in pairs on one of three assignments:
- Assignment A: Download real Texas RRC production data and fit decline curves to three wells
- Assignment B: Build an anomaly detector with configurable sensitivity and test it on messy data
- Assignment C: Read an LAS file with lasio, plot the gamma ray log, and identify formation tops
- MCP contribution sprint (45 min): Fork petro-mcp, pick an issue from the GitHub issues list, and make a first contribution. Presenters help students through their first pull request.
- Panel discussion (30 min): Invite 2-3 industry professionals (virtually, if needed) to discuss AI adoption at their organizations and what they look for when hiring.
Dr. Mehrdad Shirangi is the founder of Groundwork Analytics and holds a PhD from Stanford University in Energy Systems Optimization. He has been building AI solutions for the energy industry since 2018. He is the creator of petro-mcp, the first open-source MCP server for petroleum engineering data. Connect on X/Twitter and LinkedIn, or reach out at info@petropt.com.
Related Articles
- The Petroleum Engineering Skills Gap -- The structural gap this workshop helps close.
- 5 Open-Source Projects Every PE Student Should Contribute To -- Next steps for students who want to keep building after the workshop.
- PE Department Advisory Board Pitch: AI/Data Science Track -- For faculty wanting to build this content into formal curriculum.