Editorial disclosure
This article reflects the independent analysis and professional opinion of the author, informed by published research, industry data, and practitioner experience. No company or institution reviewed or influenced this content prior to publication.
Petroleum engineering enrollment has dropped roughly 75% from its peak. The average age of an oil and gas field worker is 56. And the industry is facing a set of technical problems that are more complex, more interdisciplinary, and more consequential than anything the previous generation encountered.
That is not a warning. It is an invitation.
Every challenge on this list represents unsolved technical work that operators will pay to solve. The engineers who understand these problems -- and who bring modern skills in data science, optimization, and systems thinking -- will find themselves in extraordinary demand. Not someday. Right now.
This article maps the ten biggest technical challenges facing upstream oil and gas in 2026, ordered by industry-wide impact. For each one, we cover what the problem actually is, why it matters in dollar terms, what skills solve it, and how someone early in their career can start contributing immediately.
1. Produced Water Management
The problem
The Permian Basin alone generates more than 20 million barrels of produced water per day. That is roughly four barrels of water for every barrel of oil. This water contains dissolved solids, hydrocarbons, naturally occurring radioactive material (NORM), and a complex chemistry that varies by formation and even by well.
For decades, produced water was a disposal problem: pump it into a saltwater disposal (SWD) well and move on. That approach is hitting a wall. Seismicity concerns have led to disposal volume restrictions across west Texas and New Mexico. Disposal well capacity is tightening. Trucking costs are escalating. And the sheer volume keeps growing as operators complete more wells in water-intensive formations.
Why it matters
Water handling is now the second-largest operating expense for many Permian operators, often running $2-4 per barrel of oil equivalent. For a mid-size operator producing 50,000 BOE/day, that translates to $40-70 million per year spent managing water. Disposal well shutdowns due to seismicity can force production curtailments, turning a waste-management issue into a revenue problem.
Recycling and reuse are gaining traction, but the economics depend on treatment technology, logistics, and regulatory frameworks that are still evolving. Operators need engineers who can optimize water networks -- sourcing, treatment, storage, transportation, and disposal -- as integrated systems rather than isolated line items.
Skills that solve it
- Process engineering and water chemistry -- understanding treatment technologies (electrocoagulation, membrane filtration, evaporation) and matching them to specific water compositions
- Network optimization -- modeling water logistics across dozens of well pads, SWD wells, recycling facilities, and frac operations
- Geomechanics -- understanding induced seismicity risk and disposal well capacity limits
- Data analytics -- water quality monitoring, predictive maintenance for treatment systems, forecasting produced water volumes from new completions
How a new engineer can contribute
Water management is one of the few areas in oil and gas where the established playbook is actively failing. That means there is less institutional inertia and more willingness to try new approaches. Entry-level engineers who understand optimization algorithms, logistics modeling, or environmental engineering bring skills that most veteran petroleum engineers simply do not have. Start by learning the basics of produced water chemistry (the PTTC and SPE have excellent primers) and build a water network optimization model -- even a simplified one -- to demonstrate the concept.
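To make that concrete, here is a deliberately simplified sketch of a water network model in Python: a greedy allocation of daily pad volumes to disposal and recycling options by per-barrel cost. Every name, volume, capacity, and cost below is invented for illustration; a real model would add trucking-versus-pipeline routing, time-varying volumes, and a proper optimization solver (LP or MIP) instead of a greedy pass.

```python
# Hypothetical daily produced-water volumes by well pad, bbl/day
pads = {"Pad_A": 30_000, "Pad_B": 45_000}

# Hypothetical destinations: cost in $/bbl (hauling + tariff), capacity in bbl/day
options = {
    "SWD_1":     {"cost": 0.85, "capacity": 40_000},
    "Recycle_1": {"cost": 0.60, "capacity": 25_000},
    "SWD_2":     {"cost": 1.40, "capacity": 60_000},
}

def allocate(pads, options):
    """Greedy allocation: fill the cheapest available destination first."""
    remaining = {name: opt["capacity"] for name, opt in options.items()}
    plan, total_cost = [], 0.0
    for pad, volume in pads.items():
        for name in sorted(options, key=lambda n: options[n]["cost"]):
            if volume == 0:
                break
            take = min(volume, remaining[name])
            if take > 0:
                plan.append((pad, name, take))
                total_cost += take * options[name]["cost"]
                remaining[name] -= take
                volume -= take
        if volume > 0:
            raise ValueError(f"Insufficient capacity for {pad}")
    return plan, total_cost

plan, cost = allocate(pads, options)
print(f"Daily water cost: ${cost:,.0f}")
```

Even this toy version surfaces the right questions: which destinations bind first, what a disposal shutdown does to cost, and where new recycling capacity would pay for itself.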
Resources
- SPE papers on produced water recycling economics
- EPA Underground Injection Control (UIC) program documentation
- PTTC produced water management workshops
- TexNet seismic monitoring data (publicly available)
2. Data Fragmentation and Integration
The problem
Ask any operator what their number-one barrier to AI adoption is, and the answer is the same: data. Not the absence of data -- operators are drowning in it. The problem is that data lives in dozens of disconnected systems. SCADA data in one platform. Drilling data in another. Production accounting in a third. Completions data in spreadsheets. Geology in yet another application. Land and regulatory in a legacy system that predates the smartphone.
Every operator has this problem. The variation is only in degree. Some have invested in data lakes and integration layers. Most have not. The result is that an engineer who wants to answer a simple question -- "How did our completion design affect production in wells with similar reservoir properties?" -- must manually extract data from four or five systems, clean it, align it on time and space, and merge it into an analysis-ready dataset. That process takes days or weeks. By the time the analysis is done, the decision it was meant to inform has already been made.
Why it matters
Data fragmentation is not just an IT inconvenience. It is the bottleneck that prevents almost every other improvement on this list. AI models cannot be trained on data that does not exist in a unified form. Optimization algorithms cannot optimize systems they cannot see. Cross-functional analysis -- the kind that connects geology, drilling, completions, and production into a single learning loop -- is nearly impossible when each discipline owns its own data silo.
Operators estimate that engineers spend 30-60% of their time on data wrangling rather than analysis. For a company with 20 engineers, that is the equivalent of 6-12 full-time employees doing nothing but reformatting spreadsheets.
Skills that solve it
- Data engineering -- ETL pipelines, data modeling, database design, API integration
- Domain knowledge -- understanding what drilling, completions, and production data actually means (units, conventions, edge cases)
- Cloud platforms -- AWS, Azure, GCP, Databricks, Snowflake
- Standards and protocols -- WITSML, PRODML, OSDU, PPDM
- MCP and AI tooling -- building tool layers that let AI agents access and reason over petroleum data (see the open-source petro-mcp project for an example of this approach)
How a new engineer can contribute
This is arguably the single highest-leverage area for a new engineer with data skills. You do not need ten years of experience to build a data pipeline. You need to understand the domain well enough to know which fields matter, what the edge cases are, and how the data connects across systems. If you can write Python, understand SQL, and have enough petroleum engineering knowledge to know that a "completion date" in one system might mean something different than a "first production date" in another, you are already ahead of most candidates. Build a portfolio project that integrates public well data from state regulatory databases (Texas RRC, New Mexico OCD) with public production data. That project alone will demonstrate more practical value than most coursework.
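A minimal sketch of the integration task, assuming two invented source systems: join well headers on a normalized API number, the canonical well identifier in US regulatory data. The field names and records are hypothetical, and real exports from state databases need far more cleaning than this, but the pattern (normalize the key, then merge with explicit handling of missing matches) is the core of the job.

```python
drilling_db = [  # e.g. exported from a drilling system (hypothetical)
    {"api": "42-123-45678", "spud_date": "2025-03-01"},
    {"api": "42-123-99999", "spud_date": "2025-05-12"},
]
production_db = [  # e.g. exported from production accounting (hypothetical)
    {"API_NO": "4212345678", "first_prod": "2025-06-15"},
]

def normalize_api(raw):
    """Strip punctuation and right-pad to a 14-digit API number."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    return digits.ljust(14, "0")

def merge(drilling, production):
    prod_by_api = {normalize_api(r["API_NO"]): r for r in production}
    merged = []
    for rec in drilling:
        api = normalize_api(rec["api"])
        prod = prod_by_api.get(api, {})
        merged.append({
            "api14": api,
            "spud_date": rec["spud_date"],
            # None flags wells not yet on production; a real pipeline
            # would distinguish "missing data" from "not applicable"
            "first_prod": prod.get("first_prod"),
        })
    return merged

rows = merge(drilling_db, production_db)
```

Notice that the two systems disagree on both the field name and the key format. That disagreement, multiplied across dozens of fields and systems, is the whole problem.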
For a deeper look at how drilling data management works in practice, see our article on drilling data management, WITSML, and cloud platforms.
Resources
- OSDU (Open Subsurface Data Universe) documentation
- WITSML and PRODML standards (Energistics)
- State regulatory databases (Texas RRC, NDIC, OCC, COGCC)
- petro-mcp -- open-source MCP server for petroleum engineering data
3. The AI Adoption Gap
The problem
The numbers tell a contradictory story. The AI in oil and gas market is valued at $4.28 billion in 2026, growing at 13% CAGR. Forty-nine percent of operators say they plan to deploy agentic AI this year. And yet: 45% of oil and gas companies offer zero AI training to their workforce.
That gap -- between AI ambition and AI readiness -- is where the industry is stuck. The technology exists. The use cases are proven. But the organizational capacity to adopt, deploy, and sustain AI systems is profoundly lacking. Companies that have succeeded with AI (Equinor claims $130 million in AI-driven savings in 2025; Aramco reported $1.8 billion in AI value in 2024) have done so by building internal AI teams and investing in data infrastructure over years. Most operators, especially mid-size and smaller companies, have done neither.
The result is a two-tier industry: a handful of large companies with mature AI capabilities, and a long tail of operators who know they need AI but have no idea where to start.
Why it matters
With WTI forecast around $51 for 2026, margins are thin. Operators cannot afford to drill their way to growth. They need to extract more value from existing assets -- through optimization, predictive maintenance, automated surveillance, and smarter decision-making. AI is the enabling technology for all of that. Companies that fail to adopt it will be structurally disadvantaged.
But the barrier is not technology. It is people. There are not enough engineers in the industry who can bridge the gap between petroleum engineering domain knowledge and modern AI/ML capabilities.
Skills that solve it
- Machine learning fundamentals -- supervised learning, time-series modeling, anomaly detection
- Domain translation -- the ability to frame an oil and gas problem as a machine learning problem (and to know when ML is not the right tool)
- MLOps -- deploying models in production, monitoring drift, maintaining data pipelines
- Change management -- helping experienced engineers trust and use AI tools effectively
- Agentic AI and tool integration -- building AI agents that can take actions, not just make predictions
How a new engineer can contribute
You do not need to build a novel deep learning architecture. Most high-value AI applications in oil and gas use straightforward techniques: gradient-boosted trees for production forecasting, time-series anomaly detection for equipment monitoring, clustering for well analogue selection. The hard part is not the algorithm. It is understanding the data, the physics, and the operational context well enough to build something that works reliably in the field. An engineer who can build a simple decline curve model in Python, compare it to a physics-based forecast, and explain the tradeoffs to a room of completion engineers is worth more than a data scientist who can build a transformer model but does not know what a frac hit is.
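As a starting point, here is a sketch of exactly that kind of model: an exponential Arps decline (q = qi·e^(-Dt)) fit by least squares on log-rate. The monthly rates are synthetic; real lease data is far noisier and usually calls for hyperbolic declines (b > 0), outlier handling, and segment detection.

```python
import math

# Synthetic monthly rates following a known decline, so the fit is checkable
t = list(range(12))                              # months on production
q = [1000.0 * math.exp(-0.08 * m) for m in t]    # bbl/day (invented)

def fit_exponential(t, q):
    """Return (qi, D) from log-linear least squares on ln(q) = ln(qi) - D*t."""
    y = [math.log(v) for v in q]
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    return math.exp(ybar - slope * tbar), -slope  # qi, nominal decline D

qi, D = fit_exponential(t, q)

def forecast(qi, D, month):
    """Forecast rate at a future month from the fitted parameters."""
    return qi * math.exp(-D * month)
```

Comparing a fit like this against a physics-based forecast, and explaining where each breaks down, is precisely the domain-translation skill described above.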
Resources
- SPE papers on machine learning applications in petroleum engineering
- Coursera/edX courses on machine learning (Andrew Ng's course remains excellent)
- Our article on digital platforms and AI in upstream O&G
- Open-source tools: scikit-learn, XGBoost, Prophet (time-series), TensorFlow
4. The Great Crew Change
The problem
The oil and gas industry has talked about the "great crew change" for two decades. It is no longer theoretical. The average age of an oil and gas worker is 56. Petroleum engineering enrollment at US universities has dropped roughly 75% from its 2014 peak. The engineers who know where the valves are, why the well was drilled that way, and what happened the last time someone tried a particular completion design are retiring. And in many cases, their knowledge is walking out the door with them because it was never documented in any system.
This is not just a headcount problem. It is a knowledge preservation problem. Decades of operational experience -- the kind of tacit knowledge that distinguishes a great production engineer from a mediocre one -- is stored in people's heads, in handwritten notes, in Excel files on local drives, and in institutional memory that evaporates when someone retires.
Why it matters
Knowledge loss drives real costs. When a veteran engineer retires and the new engineer does not know that Well Pad 7 has a history of corrosion in the annulus, the consequence is a failure that could have been prevented. When completions knowledge is lost, companies repeat expensive mistakes. When field operations expertise disappears, safety risk increases.
Operators are simultaneously trying to run more wells with fewer people and losing the people who know how to run them. The math does not work without technology that preserves, augments, and transfers knowledge at scale.
Skills that solve it
- Knowledge management systems -- building searchable, structured repositories of operational knowledge
- AI-assisted knowledge capture -- using NLP and LLMs to extract insights from unstructured data (well reports, morning reports, email threads, meeting notes)
- Digital twins and simulation -- encoding physical and operational knowledge in models that persist beyond any individual's tenure
- Training and mentoring -- the human side of knowledge transfer remains irreplaceable
How a new engineer can contribute
Paradoxically, the great crew change is one of the best things that could happen to your career. The industry needs you. Urgently. But you should be strategic about it. Your first year at an operator or service company, spend time with the veteran engineers. Ask them what they know that is not written down anywhere. Document it. Build tools to capture it. If you can create a system that captures even 20% of a retiring engineer's tacit knowledge in a queryable, reusable format, you will have delivered more lasting value than most capital projects.
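Even a crude tool helps here. The sketch below builds a searchable inverted index over field notes, with invented well names and note text; a production system would use a search engine or embedding-based retrieval, but the point is that tribal knowledge becomes queryable the moment it is captured as text.

```python
# Hypothetical operational notes captured from veteran engineers
notes = {
    "note_001": "Pad 7 annulus shows recurring corrosion; inhibitor batch monthly",
    "note_002": "Well 14H gas lift valve sticks after winter shut-ins",
    "note_003": "Pad 7 SWD line freezes below 20F; heat trace installed 2023",
}

def build_index(docs):
    """Map each lowercase token to the set of notes containing it."""
    index = {}
    for doc_id, text in docs.items():
        for token in set(text.lower().replace(";", " ").split()):
            index.setdefault(token, set()).add(doc_id)
    return index

def search(index, query):
    """Return note ids containing every query token (AND semantics)."""
    token_sets = [index.get(tok, set()) for tok in query.lower().split()]
    return set.intersection(*token_sets) if token_sets else set()

index = build_index(notes)
```

Asking `search(index, "pad 7")` surfaces both notes about that pad, which is exactly the question a new engineer inherits when the person who wrote them retires.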
Resources
- SPE Career Management resources
- Society of Petroleum Engineers mentoring programs
- Industry oral history projects (several operators have started these)
5. Artificial Lift Optimization
The problem
More than 90% of oil wells in the United States require some form of artificial lift -- rod pumps, electric submersible pumps (ESPs), gas lift, plunger lift, or progressive cavity pumps. These systems are the mechanical heartbeat of production. And they fail. Frequently.
ESP failures in particular are expensive: $50,000 to $200,000 per event when you factor in the cost of the replacement pump, the workover rig, and the lost production during downtime. In the Eagle Ford and Permian, operators running ESP-heavy portfolios can spend tens of millions of dollars per year on artificial lift failures alone.
The optimization challenge is multidimensional. It involves selecting the right lift method for each well, sizing equipment correctly, setting operating parameters to maximize run life, detecting incipient failures before they happen, and managing the transition between lift methods as wells decline.
Why it matters
Artificial lift costs are one of the largest components of lease operating expense (LOE). For a typical Permian operator, artificial lift accounts for 15-25% of LOE. Reducing failure frequency by even 10-15% through better monitoring and optimization translates directly to bottom-line savings. In a $51 WTI environment, those savings can be the difference between a well that is economic and one that is not.
Skills that solve it
- Mechanical engineering and systems dynamics -- understanding pump performance curves, motor loading, fluid dynamics in the wellbore
- Predictive analytics -- time-series anomaly detection, vibration analysis, current signature analysis for ESPs
- Control systems -- variable frequency drives (VFDs), automated set-point optimization, closed-loop control
- Physics-based + data-driven modeling -- combining nodal analysis with ML models for more robust predictions
How a new engineer can contribute
Artificial lift optimization is one of the most accessible areas for a new engineer who wants to combine petroleum engineering fundamentals with data science. The data is readily available (SCADA systems generate continuous streams of pump cards, motor current, tubing pressure, and casing pressure). The physics is well understood. And the business case is clear. Build a dynamometer card classification model using publicly available data. Develop a pump-off detection algorithm. Create a dashboard that tracks ESP health indicators over time. These are portfolio-worthy projects that directly address a problem operators spend real money on.
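A minimal version of the health-monitoring idea, under stated assumptions: flag anomalous motor-current readings with a rolling z-score. The data stream below is synthetic and the threshold is arbitrary; real SCADA data arrives with gaps, unit changes, and sensor faults that need handling before any detector runs.

```python
import statistics

def rolling_zscore_alarms(series, window=20, threshold=3.0):
    """Return indices where a reading deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    alarms = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            alarms.append(i)
    return alarms

# Synthetic ESP motor current (amps): steady near 60 A, one excursion at t=50
current = [60.0 + 0.1 * (i % 5) for i in range(100)]
current[50] = 75.0  # simulated gas-interference-style spike (invented)

alarms = rolling_zscore_alarms(current)
```

The same pattern extends to tubing pressure, intake pressure, and vibration channels; the engineering judgment is in choosing windows and thresholds that catch incipient failures without drowning operators in false alarms.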
For a detailed survey of production software platforms, see our article on production operations software, surveillance, and AI.
Resources
- SPE Artificial Lift Workshop proceedings
- Ambyint, OspreyData, and WellAware documentation (publicly available white papers)
- Echometer dynamometer card datasets
- Our article on production operations software and AI
6. Parent-Child Well Interference (Frac Hits)
The problem
As operators develop unconventional plays, they drill infill wells (children) near existing producers (parents). When the child well is hydraulically fractured, the frac treatment can communicate with the parent well's depleted reservoir and fracture network. The result: pressure surges, fluid displacement, and sometimes catastrophic production losses in the parent well. This phenomenon -- commonly called a "frac hit" -- has cost the industry billions of dollars in lost production and remediation.
The problem is particularly acute in the Permian Basin, where well spacing has tightened aggressively. Operators who drilled single-well sections a decade ago are now infilling with four, six, or eight wells per section. Every infill well is a potential frac hit on every existing producer nearby.
Why it matters
Frac hits can permanently reduce parent well production by 10-40%. For a well producing 500 BOE/day, a 20% permanent production loss represents $1-2 million in lost reserves value. Multiply that across dozens of parent wells in a development program, and the cumulative impact can exceed the cost of the infill wells themselves.
The industry has tried several mitigation strategies -- pressure management (injecting into offset wells during frac), timing protocols, modified frac designs, wider spacing -- but none is universally effective. The optimal approach depends on local geology, completion design, depletion state, and stress conditions, which means it requires sophisticated modeling and real-time monitoring.
Skills that solve it
- Geomechanics and fracture modeling -- understanding stress shadows, fracture propagation, and pressure communication
- Reservoir simulation -- coupled flow-geomechanics models, depletion modeling, pressure management design
- Data science -- statistical analysis of parent-child performance data, identifying which variables drive interference severity
- Real-time monitoring -- fiber optic sensing (DAS/DTS), microseismic monitoring, pressure gauges in offset wells
How a new engineer can contribute
Parent-child interference is a frontier problem. No one has fully solved it. That means there is no established playbook that only veterans can execute. If you can build a statistical model that predicts frac hit severity based on spacing, completion parameters, and depletion state -- even using publicly available data -- you are working on a problem that every Permian operator cares about. The best starting point is analyzing public well data to identify parent-child pairs and correlating infill well completion dates with parent well production changes.
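A first-pass screen can be very simple. The sketch below pairs a parent with a nearby infill well and flags a candidate frac hit when the parent's rate falls past a threshold after the child's completion date. The wells, offsets, rates, and the 30% threshold are all hypothetical; a real study would use monthly production streams, multiple offsets, and depletion-normalized baselines.

```python
from datetime import date

# Hypothetical parent/child pair: x is lateral-to-lateral offset in feet
wells = [
    {"name": "Parent_1H", "completed": date(2019, 4, 1), "x": 0},
    {"name": "Child_2H",  "completed": date(2025, 8, 1), "x": 660},
]
# Parent's average oil rate before/after the child frac (synthetic, bbl/day)
parent_rate = {"before": 420.0, "after": 250.0}

def frac_hit_candidate(parent, child, rate, max_offset_ft=1000, drop_frac=0.30):
    """True if the child is close enough, completed after the parent,
    and the parent's rate fell more than drop_frac afterward."""
    if abs(parent["x"] - child["x"]) > max_offset_ft:
        return False
    if child["completed"] <= parent["completed"]:
        return False
    drop = 1.0 - rate["after"] / rate["before"]
    return drop > drop_frac

hit = frac_hit_candidate(wells[0], wells[1], parent_rate)
```

Run across a basin's worth of public data, even a screen this crude produces a labeled dataset of candidate events that a severity model can then be trained on.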
Resources
- SPE papers on parent-child well interference (search "frac hit" or "well spacing optimization")
- Completion design data from FracFocus (publicly available)
- State production data (Texas RRC, New Mexico OCD)
7. Declining Tier 1 Inventory
The problem
Every shale basin has a finite number of premium drilling locations -- the Tier 1 inventory. These are the wells with the best rock quality, optimal thickness, favorable pressure, and lowest breakeven costs. In the Permian, Eagle Ford, and Bakken, operators have been high-grading their portfolios for years, drilling the best locations first. The inevitable result: Tier 1 inventory is declining.
This does not mean the basins are "running out of oil." It means the average new well is getting slightly worse. Breakeven costs are rising. Recovery factors need to improve. And operators must extract more value from Tier 2 and Tier 3 locations that would not have been economic five years ago.
Why it matters
At $51 WTI, the margin between a profitable well and an unprofitable one is thin. A Tier 1 Permian well might break even at $35-40 WTI. A Tier 2 location might break even at $48-55. The difference between "drill it" and "defer it" often comes down to completion optimization, artificial lift design, and operational efficiency -- all engineering problems. As operators move into lower-quality rock, every marginal improvement in well performance matters more.
Skills that solve it
- Reservoir characterization -- geostatistics, petrophysics, seismic interpretation to identify the best remaining locations
- Completion optimization -- stage spacing, cluster spacing, proppant loading, fluid chemistry designed for specific rock properties
- Economic modeling -- integrating subsurface uncertainty with cost assumptions and commodity price scenarios
- Enhanced recovery techniques -- huff-and-puff gas injection, surfactant-assisted recovery, and other EOR methods adapted for unconventionals
How a new engineer can contribute
Inventory management is fundamentally an optimization problem under uncertainty. If you understand reservoir engineering basics and can build probabilistic economic models (Monte Carlo simulation, decision trees), you bring a skill set that complements the geological judgment of experienced teams. The engineers who can quantify "how much better does this well need to perform to be economic at $51 WTI?" and then design completions to achieve that target will be highly valued.
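A toy version of that probabilistic model is shown below: Monte Carlo NPV for a single well at $51 WTI. Every input (the lognormal EUR distribution, capex, opex, and the flat present-value factor) is invented for illustration; a real model would use type curves, monthly cash flows, price differentials, taxes, and detailed opex.

```python
import random

random.seed(7)  # reproducible draws for this sketch

def simulate_npvs(n=10_000, price=51.0, capex=8.0e6,
                  opex_per_bbl=12.0, pv_factor=0.8):
    """Crude single-period NPV: EUR drawn lognormal (barrels of oil),
    margin discounted by a flat pv_factor. All parameters hypothetical."""
    npvs = []
    for _ in range(n):
        eur = random.lognormvariate(12.6, 0.35)  # median EUR ~300,000 bbl
        npvs.append(eur * (price - opex_per_bbl) * pv_factor - capex)
    return npvs

npvs = simulate_npvs()
p_economic = sum(v > 0 for v in npvs) / len(npvs)
s = sorted(npvs)
p10, p50, p90 = (s[int(len(s) * f)] for f in (0.10, 0.50, 0.90))
```

The deliverable is not the number; it is the framing. "This location is economic in roughly X% of scenarios at $51 WTI, and here is what completion uplift would move it to Y%" is the sentence that gets a Tier 2 well drilled or deferred.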
See our guide on reservoir management software for the tools used in this space.
Resources
- EIA Drilling Productivity Report (monthly, publicly available)
- Enverus / ShaleProfile well-level production data
- Our article on reservoir management software
- SPE papers on unconventional EOR
8. ESG and Emissions Compliance
The problem
Methane monitoring and emissions reduction have moved from voluntary commitments to regulatory requirements. The EPA's methane fee under the Inflation Reduction Act imposes charges on facilities exceeding emissions thresholds -- starting at $900 per ton in 2024 and rising to $1,500 per ton by 2026. State-level regulations (Colorado, New Mexico, California) add additional requirements. And investor pressure from ESG-focused funds means even operators in less regulated states face scrutiny.
The technical challenge is substantial. Methane emissions from oil and gas operations come from hundreds of potential sources: pneumatic controllers, tank vents, compressor seals, flares, fugitive leaks, and unintentional releases during completions and workovers. Detecting, quantifying, attributing, and reducing these emissions across a portfolio of thousands of wells and facilities requires a monitoring and data infrastructure that most operators do not yet have.
Why it matters
For a mid-size Permian operator, methane fees alone could amount to $5-15 million per year if emissions are not brought under control. Beyond direct regulatory costs, emissions performance increasingly affects access to capital, lease bonus negotiations, and the ability to market gas as "responsibly sourced." The operators who solve this problem gain a real competitive advantage.
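The exposure math is simple enough to sketch. The fee rates below follow the schedule described above ($900/ton in 2024 rising to $1,500/ton by 2026; the $1,200 interim rate for 2025 follows the statutory schedule but is not stated in the text). The facility emissions and threshold are hypothetical, and the actual charge is computed against methane-intensity thresholds by facility segment, not a flat tonnage cap.

```python
# Waste emissions charge rates, $/metric ton of methane
FEE_PER_TON = {2024: 900, 2025: 1200, 2026: 1500}  # 2025 per IRA schedule

def methane_fee(year, emissions_tons, threshold_tons):
    """Charge applies only to tons above the applicable threshold
    (simplified: real thresholds are intensity-based per segment)."""
    excess = max(0.0, emissions_tons - threshold_tons)
    return excess * FEE_PER_TON[year]

# Hypothetical facility: 12,000 t emitted against a 5,000 t threshold
fee_2026 = methane_fee(2026, emissions_tons=12_000, threshold_tons=5_000)
```

Seven thousand excess tons at the 2026 rate is $10.5 million for one hypothetical facility, which is why measurement uncertainty of even a few percent is now a finance problem, not just an environmental one.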
Skills that solve it
- Environmental engineering and atmospheric science -- understanding emission sources, dispersion modeling, measurement uncertainty
- Sensor technology and IoT -- continuous monitoring systems, OGI (optical gas imaging), satellite-based methane detection
- Data analytics and reporting -- emissions inventory management, regulatory reporting, reconciliation of bottom-up and top-down measurements
- Process engineering -- emissions reduction through equipment upgrades, electrification, vapor recovery, and operational changes
How a new engineer can contribute
ESG compliance is a growth area with few established experts. The regulatory landscape is new enough that no one has decades of experience to fall back on. If you understand environmental engineering, can work with sensor data, and can build emissions inventories, you are qualified for work that operators are actively hiring for. Several startups (Project Canary, Qube Technologies, Kuva Systems) have emerged specifically to address this need, and they are all hiring.
Resources
- EPA Greenhouse Gas Reporting Program (GHGRP) data
- Stanford Methane Research Group publications
- OGMP 2.0 framework (Oil and Gas Methane Partnership)
- EDF Permian Methane Analysis Project (PermianMAP)
9. Post-M&A Data Integration
The problem
The upstream oil and gas industry is in the middle of a historic consolidation wave. In the past two years alone: Diamondback acquired Endeavor ($26B), Civitas merged into SM Energy, Matador acquired Ameredev ($1.9B), Franklin Mountain was acquired by Coterra ($3.95B), and Aethon is being acquired by Mitsubishi ($5.2B+). Each of these deals creates a massive data integration challenge.
The acquiring company inherits a second set of SCADA systems, production databases, drilling records, land systems, financial systems, and operational workflows. These systems use different schemas, different naming conventions, different units, and different data quality standards. Integrating them into a unified operational platform is a multi-year project that touches every department in the organization.
Why it matters
Until data integration is complete, the acquiring company cannot fully realize the operational synergies that justified the deal. They cannot run portfolio-level analytics across both asset bases. They cannot standardize workflows. They cannot deploy enterprise-wide AI models. In many cases, they are running two parallel operations centers, two IT departments, and two engineering workflows -- doubling costs instead of capturing synergies.
The companies doing this integration work right now are desperate for engineers who understand both the data engineering and the operational context. This is not a job for a pure IT team. It requires people who know what the data means.
Skills that solve it
- Data engineering -- schema mapping, ETL development, master data management, data quality assurance
- Domain expertise -- understanding naming conventions, measurement standards, and operational workflows well enough to correctly map data between systems
- Project management -- M&A integration is a large, cross-functional project with hard deadlines and organizational complexity
- Systems architecture -- designing the target-state data platform that both legacy systems will migrate into
How a new engineer can contribute
Post-M&A integration work is intense, deadline-driven, and often unglamorous. It is also some of the highest-impact work available. If you can write Python, understand SQL, know enough about oil and gas data to recognize when a "well" in one system maps to a "wellbore" in another, and can communicate with both IT teams and operations teams, you can contribute immediately. Several of the companies listed above are actively staffing integration teams right now.
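The well-versus-wellbore mapping problem looks like this in miniature. The field names, record, and system names below are all invented, but they illustrate the core task: an explicit, auditable field map from the acquired schema into the target schema, with lineage preserved.

```python
# Hypothetical record exported from an acquired company's system
acquired_records = [
    {"WELL_ID": "EN-0042", "WELL_NAME": "STATE 42 #1H", "TD_FT": 21500},
]

# An explicit mapping table beats ad-hoc renames scattered through code
FIELD_MAP = {
    "WELL_ID": "legacy_id",
    "WELL_NAME": "wellbore_name",   # acquired "well" == target "wellbore"
    "TD_FT": "total_depth_ft",
}

def translate(record, field_map, source_system):
    """Rename fields per the map and tag each row with its source."""
    out = {target: record[src] for src, target in field_map.items()}
    out["source_system"] = source_system  # lineage for audits and rollback
    return out

migrated = [translate(r, FIELD_MAP, "acquired_ops_db") for r in acquired_records]
```

In practice the hard part is not the rename; it is knowing that the map is correct, which is exactly where domain knowledge separates an integration engineer from a pure IT resource.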
For background on data management challenges and standards, see our article on drilling data management, WITSML, and cloud platforms.
Resources
- PPDM (Professional Petroleum Data Management) association
- OSDU data platform standards
- Our article on drilling data management and WITSML
10. Extended Lateral Complexity
The problem
Horizontal laterals in unconventional plays have gotten dramatically longer. Ten years ago, a 5,000-foot lateral was standard. Today, 10,000-foot laterals are routine, 15,000-foot laterals are common, and operators are drilling 3-mile and even 4-mile laterals (roughly 16,000-21,000 feet) in the Permian Basin and the Eagle Ford.
Longer laterals improve capital efficiency -- you access more reservoir with a single vertical wellbore and surface location. But they introduce a cascade of technical challenges. Friction pressure in the horizontal section increases dramatically with length, making it harder to place proppant uniformly across all frac stages. Heel-toe production imbalance means the stages nearest the vertical section dominate production while the toe stages underperform. Drilling challenges multiply: increased torque and drag, wellbore stability in longer openhole sections, and cementing quality across thousands of feet of horizontal.
Why it matters
A 3-mile lateral costs 30-50% more than a 2-mile lateral but does not reliably deliver 50% more production. The capital efficiency advantage of extended laterals depends entirely on engineering execution: how well the lateral is drilled (smooth trajectory, good cement), how effectively it is completed (uniform stimulation across all stages), and how it is produced (managed drawdown, artificial lift selection). Getting these details right is worth millions of dollars per well.
Skills that solve it
- Drilling engineering -- torque and drag modeling, wellbore stability analysis, drilling fluid design for extended reach
- Completions engineering -- limited-entry design, diverter technology, stage and cluster spacing optimization for long laterals
- Production engineering -- managed drawdown strategies, artificial lift design for wells with heel-toe pressure differentials
- Fiber optic diagnostics -- DAS/DTS interpretation to evaluate stimulation effectiveness and production contribution across the lateral
How a new engineer can contribute
Extended lateral design and optimization are at the frontier of completions engineering. The industry is still learning what works. If you can model frac fluid placement in a 20,000-foot lateral, analyze fiber optic data to quantify stage-by-stage contribution, or build an economic model that determines the optimal lateral length for a given well location, you are working on problems that directly drive capital allocation decisions. This is high-visibility, high-impact work.
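The lateral-length economics can be framed as a one-line optimization. In the sketch below, production scales sub-linearly with length (toe stages underperform) while cost scales super-linearly (torque and drag, cement, frac horsepower) — both exponents and every coefficient are invented to match the qualitative pattern described above, not fitted to data; real curves come from offset well performance.

```python
def well_value(lateral_ft, margin_per_bbl=30.0):
    """Crude value proxy in dollars; every coefficient is hypothetical."""
    eur_bbl = 174.0 * lateral_ft ** 0.85   # sub-linear: heel-toe imbalance
    capex = 450.0 * lateral_ft ** 1.08     # super-linear: drilling/completion cost
    return eur_bbl * margin_per_bbl - capex

# Candidate lateral lengths for a hypothetical location, in feet
candidates = [7_500, 10_000, 12_500, 15_000, 17_500, 20_000]
best = max(candidates, key=well_value)
```

With these made-up coefficients the value curve peaks at an intermediate length rather than the longest lateral, which is the whole point: "longer" is only better until the cost exponent overtakes the production exponent, and finding that crossover for a specific acreage position is real engineering work.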
For an overview of completions software and optimization platforms, see our article on completions and frac software platforms.
Resources
- SPE papers on extended lateral completions optimization
- Liberty Energy, Halliburton, and SLB completions technology white papers
- Our article on completions and frac software platforms
The Common Thread: Interdisciplinary Skills Win
If you read through all ten challenges, a pattern emerges. The technical skills that solve these problems are not purely petroleum engineering, nor purely computer science, nor purely environmental engineering. They are interdisciplinary. The most valuable engineer in 2026 is someone who can:
- Understand the physics -- reservoir flow, geomechanics, thermodynamics, fluid dynamics
- Work with data -- Python, SQL, API integration, data cleaning, visualization
- Build models -- machine learning, optimization, simulation, statistical analysis
- Communicate across disciplines -- translate between geologists, engineers, data scientists, and business teams
- Learn continuously -- the tools and techniques are evolving faster than any curriculum can capture
You do not need to be world-class in all five areas. But you need competence in at least three, and depth in at least one. The days when a petroleum engineer could build a career on decline curve analysis and reserve reports alone are over. The engineers who thrive in 2026 and beyond will be the ones who treat every challenge on this list as a learning opportunity.
Where to Start
If you are a student, entry-level engineer, or career changer evaluating oil and gas, here is a practical starting path:
Build technical foundations. Take a reservoir engineering course, a machine learning course, and a data engineering course. Understand enough petroleum engineering to speak the language. Understand enough computer science to build tools.
Work with real data. State regulatory databases (Texas RRC, New Mexico OCD, NDIC) provide free access to well-level drilling, completion, and production data. Use it. Build analysis projects. The petro-mcp open-source project provides tools for accessing and working with petroleum engineering data programmatically.
Pick a challenge. Do not try to solve all ten problems. Pick one that matches your interests and skills. Go deep. Read the SPE literature. Attend a workshop. Build a prototype solution. The engineers who can demonstrate that they have worked on real industry problems -- even with public data and open-source tools -- stand out immediately.
Connect with the community. SPE student chapters, AAPG, industry conferences (URTeC, SPE ATCE), LinkedIn groups, and X/Twitter communities are all accessible. The oil and gas community is smaller and more connected than most people realize.
The industry's challenges are real. The talent gap is real. And the opportunity for engineers who show up with the right skills and the willingness to work on hard problems has never been larger.
Dr. Mehrdad Shirangi is the founder of Groundwork Analytics and holds a PhD from Stanford University in Energy Systems Optimization. He has been building AI solutions for the energy industry since 2018. Connect on X/Twitter and LinkedIn, or reach out at info@petropt.com.
Related Articles
- The Petroleum Engineering Skills Gap -- The gap between what PE programs teach and the skills needed to solve these challenges.
- Agentic AI for Upstream Oil & Gas -- How agentic AI addresses the AI adoption gap discussed in Challenge #3.
- Breaking Into Oil & Gas in 2026 -- Practical guidance for pursuing the career opportunities each challenge represents.
Have questions about this topic? Get in touch.