Editorial disclosure
This article reflects the independent analysis and professional opinion of the author, informed by published research, vendor documentation, and practitioner experience. No vendor reviewed or influenced this content prior to publication.
Every mid-size E&P operator has the same story. Someone -- usually IT, sometimes a data analyst hired for a different purpose -- builds a production dashboard. It has company colors. It has gauges that look like speedometers. It has a map with dots that change color. It took four months and a six-figure software license.
Nobody uses it.
The production engineers glance at it during a management presentation and then go back to their spreadsheets. The foremen never open it. The VP of Operations sees it in a quarterly review and asks why the numbers do not match the morning report. The dashboard quietly becomes shelf-ware, joining the ranks of previous BI initiatives that died the same death.
This is not a technology problem. Power BI, Tableau, Spotfire, and custom web dashboards are all capable tools. The failure is in design -- specifically, in designing dashboards that reflect what IT thinks production engineers want rather than what production engineers actually do all day.
This article breaks down why most production dashboards fail, what production engineers actually need from a monitoring interface, and how to build one that becomes the default tool for daily operations rather than a forgotten link in someone's browser bookmarks.
Why Most Production Dashboards Fail
The failure modes are consistent enough across operators to be cataloged. In our experience working with mid-size E&P companies, the same five problems appear again and again.
1. Built by IT, Not by Engineers
The most common failure starts at project kickoff. IT departments or data teams build dashboards based on what they think production operations looks like. They attend a couple of meetings, collect a list of KPIs, and start building. The result is a dashboard that displays correct data in an unusable format.
A production engineer does not think in KPIs. They think in exceptions. "Which wells are down? Which wells are underperforming? What changed overnight?" A dashboard that leads with company-wide production totals and average uptime percentages -- the metrics that management cares about -- is useless for the person who has to decide which wellsite to drive to at 7:30 AM.
2. Too Many Metrics, Not Enough Context
Dashboard sprawl is real. Once stakeholders learn that a dashboard exists, every department wants their metrics on it. The result is a 15-tab monstrosity that tries to serve production engineers, the completions team, the finance group, and the CEO from a single interface. It serves none of them well.
A production engineer managing 200 wells in the Permian needs to see maybe 30 data points in the first five minutes of the day. Adding lease operating expense trends, investor-facing production benchmarks, and ESG emissions metrics to the same dashboard does not add value -- it adds cognitive load.
3. No Exception-Based Design
This is the single biggest design mistake. Most dashboards are built to display the status of everything. A production engineer needs to see only what is wrong.
Exception-based design means the dashboard opens to a list of problems, not a list of wells. Wells producing within normal parameters should be invisible by default. The engineer's attention should be directed to: wells that went down, wells that are producing below their forecast, wells with anomalous pressure or temperature readings, wells with equipment alarms, and wells approaching workover thresholds.
If an engineer has to scan through 200 green dots to find the three red ones, the dashboard has failed.
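The exception-first filter described above can be sketched in a few lines. The field names and the 80%-of-forecast threshold below are illustrative assumptions, not a real schema:

```python
# Exception-based filtering: hide normal wells, surface only problems,
# worst production impact first. Field names and thresholds are illustrative.

wells = [
    {"name": "14-3H", "status": "down",      "rate_bopd": 0,   "forecast_bopd": 180},
    {"name": "22-1H", "status": "producing", "rate_bopd": 95,  "forecast_bopd": 160},
    {"name": "31-4H", "status": "producing", "rate_bopd": 210, "forecast_bopd": 205},
    {"name": "07-2H", "status": "producing", "rate_bopd": 88,  "forecast_bopd": 90},
]

UNDERPERFORM_THRESHOLD = 0.80  # flag wells below 80% of forecast

def exceptions(wells):
    """Return only wells that need attention, sorted by estimated BOPD at risk."""
    flagged = []
    for w in wells:
        if w["status"] == "down":
            flagged.append({**w, "reason": "well down",
                            "impact_bopd": w["forecast_bopd"]})
        elif w["rate_bopd"] < UNDERPERFORM_THRESHOLD * w["forecast_bopd"]:
            flagged.append({**w, "reason": "below forecast",
                            "impact_bopd": w["forecast_bopd"] - w["rate_bopd"]})
    return sorted(flagged, key=lambda w: w["impact_bopd"], reverse=True)

for e in exceptions(wells):
    print(f'{e["name"]:6s} {e["reason"]:15s} ~{e["impact_bopd"]:.0f} BOPD at risk')
```

The wells producing within normal parameters never appear; the engineer's first screen contains only the two wells that matter.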
4. Slow Refresh, Stale Data
Production engineers make time-sensitive decisions. If the dashboard shows data from yesterday afternoon, the engineer has already gotten more current information by calling the field foreman. A dashboard that refreshes once per day is a reporting tool, not an operational tool. That is an important distinction.
SCADA data is available in near-real-time -- typically every 1 to 15 minutes depending on polling frequency. If the dashboard cannot surface that data within a reasonable lag (under 30 minutes for most operational decisions), it loses its value proposition against simply pulling up the SCADA host directly.
5. Wrong Aggregation Level
Management dashboards aggregate. Operational dashboards disaggregate. A production engineer does not need to know that the company is producing 45,000 BOE/d. They need to know that Well 14-3H in the North pad of the Bronco lease dropped from 180 BOPD to 95 BOPD overnight and that the downhole pressure sensor shows a 200 psi decline.
Dashboards built for board presentations get recycled as operational tools. They never work. The aggregation level is wrong, and drilling down from a company-level chart to well-level detail takes too many clicks and too much patience.
What Production Engineers Actually Need
Before designing a dashboard, spend a day -- a real day, not a meeting -- watching a production engineer work. Here is what you will observe.
Exception Alerts
The first thing an engineer checks every morning: what went wrong overnight. Which wells went down. Which wells had alarms. Which compressor stations tripped. This is not optional context -- it is the starting point for the entire day.
An effective dashboard surfaces these exceptions immediately. Not buried in a sub-tab. Not requiring a filter selection. The default view, when the dashboard opens, should be a prioritized list of exceptions.
Well-Level Detail
Production engineers think at the well level. They need to see individual well production rates (oil, gas, water), tubing and casing pressures, artificial lift parameters (pump speed, motor amps, injection rate), current versus expected production, and recent well history (interventions, workovers, choke changes).
This detail must be accessible quickly -- ideally within two clicks from the exception list.
Trend Comparison
A single data point is rarely actionable. Engineers need trends: how has this well's production changed over the last 7, 30, and 90 days? How does it compare to offset wells? How does it compare to its own type curve? Trend charts with overlays are essential.
Anomaly Flags
Beyond hard alarms (equipment failures, communication losses), engineers need soft anomalies: gradual production declines, slow pressure buildups, increasing water cut, gas-oil ratio changes. These are the early warnings that prevent failures -- but they require analytics, not just data display.
Workover and Intervention Tracking
Which wells are scheduled for workovers? Which are currently shut in for interventions? What is the backlog? A dashboard that shows production data without workover context creates confusion. An engineer seeing a well producing zero will check whether it is down or intentionally shut in. If the dashboard cannot answer that question, the engineer will switch to a spreadsheet that can.
The Morning Report Workflow
Understanding the morning report workflow is essential for dashboard design because this is the single most important use case. If the dashboard does not improve this workflow, it will not get adopted.
What a Production Engineer Does at 7 AM
Step 1: Check overnight exceptions (5 minutes). The engineer reviews which wells went down, which alarms fired, and which wells are producing outside normal parameters. In most operations today, this involves checking the SCADA host, scanning email or text alerts, and calling the night pumper.
Step 2: Prioritize responses (10 minutes). Based on the exceptions, the engineer decides what needs immediate attention. A high-rate well that went down gets priority over a marginal stripper well. An ESP that tripped due to a power outage (and may restart on its own) gets lower priority than an ESP showing motor overtemperature (which indicates a potential failure).
Step 3: Dispatch field crews (10 minutes). The engineer communicates priorities to field foremen and pumpers, coordinating who goes where and what they check when they arrive.
Step 4: Review production performance (15 minutes). With emergencies addressed, the engineer reviews overall field performance: yesterday's total production, wells trending down, wells that recently came back online after workovers, and wells approaching artificial lift optimization thresholds.
Step 5: Update the morning report (15 minutes). The engineer compiles the production summary, exception list, and field activity plan into a morning report for management. At many operators, this is still done in Excel.
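The triage heuristic in Step 2 can be encoded as a simple priority score. The risk multipliers and duration cap below are illustrative assumptions, not field-proven constants:

```python
# Priority scoring for overnight exceptions, mirroring Step 2's logic:
# production at risk drives the score, modified by failure-risk multipliers.
# Multiplier values are illustrative assumptions.

RISK_MULTIPLIER = {
    "motor_overtemp": 2.0,  # likely impending ESP failure -> escalate
    "power_outage": 0.5,    # may restart on its own -> deprioritize
    "unknown": 1.0,
}

def priority_score(rate_bopd, alarm_type, hours_down):
    risk = RISK_MULTIPLIER.get(alarm_type, 1.0)
    # Longer downtime compounds lost barrels; cap the factor so a stale
    # stripper-well exception cannot outrank a fresh high-rate failure.
    duration_factor = min(1.0 + hours_down / 24.0, 2.0)
    return rate_bopd * risk * duration_factor

overnight = [
    ("14-3H", 180, "motor_overtemp", 6),
    ("41-2H", 220, "power_outage", 2),
    ("09-1",   12, "unknown", 30),
]

ranked = sorted(overnight, key=lambda w: priority_score(*w[1:]), reverse=True)
for name, rate, alarm, hrs in ranked:
    print(name, round(priority_score(rate, alarm, hrs), 1))
```

Note that the 180 BOPD well with a motor-overtemperature alarm outranks the 220 BOPD well that tripped on a power outage -- exactly the judgment call described in Step 2, made explicit.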
What the Dashboard Must Do
An effective production dashboard should reduce this entire workflow from 55 minutes to 15 minutes. Specifically:
- Step 1 should be automated: the dashboard opens to an exception list, sorted by priority, with relevant context (well type, production rate, alarm type, duration of issue).
- Step 2 should be supported: clicking an exception shows well-level detail, trend history, and offset well context, giving the engineer enough information to prioritize without pulling up separate systems.
- Step 3 should be facilitated: the dashboard should support assigning or noting dispatches (even if this is just a comment field), so the record of who was sent where lives in the same system.
- Step 4 should be one tab away: a field-level production summary with yesterday's volumes, deviation from forecast, and trend indicators.
- Step 5 should be exportable: the morning report should be auto-generated from the dashboard data, not manually assembled from screenshots and copy-paste operations.
Dashboard Design Principles for E&P
These principles emerge from observing what works in practice, not from BI best-practice guides written for retail analytics.
Start with Exceptions, Not Averages
The landing page should be an exception list. Not a map. Not a production chart. Not a KPI summary. An ordered list of wells that need attention, sorted by estimated production impact or severity.
Every other view (maps, charts, summaries) should be secondary navigation from this starting point. If your dashboard opens to a pretty map with green and red dots, you have already made a design choice that prioritizes aesthetics over workflow.
Well-Level Detail in Two Clicks
From the exception list, clicking a well name should open a well detail view. From a map or chart, clicking a well marker should do the same. No intermediate screens. No filter selections. Two clicks maximum.
The well detail view should include: a production history chart (oil, gas, water, with scale options), current and recent operating parameters, alarm history, workover history, and a link to the well file or completion record.
Real-Time (or Near-Real-Time) SCADA Integration
The dashboard must connect to SCADA data, not just production accounting data. Production accounting data is clean but delayed (often by days or weeks due to allocation and reconciliation processes). SCADA data is messy but current.
The practical architecture is:
- SCADA polling every 1-15 minutes from RTUs
- Data landing in a historian or time-series database (OSIsoft PI, Canary, InfluxDB)
- Dashboard querying the historian directly or via a data warehouse layer
- Dashboard refresh every 5-15 minutes for operational views
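The query pattern behind that refresh loop -- latest value per well, plus a staleness flag when data exceeds the lag budget -- can be sketched with SQLite standing in for the historian or warehouse. Table and column names are illustrative:

```python
import sqlite3
from datetime import datetime, timedelta

# Stand-in for the historian layer: the dashboard needs the newest reading
# per well, flagged as stale when it exceeds the 30-minute lag budget.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (well TEXT, ts TEXT, rate_bopd REAL)")

now = datetime(2025, 1, 15, 7, 30)
rows = [
    ("14-3H", now - timedelta(minutes=5),  0.0),   # fresh reading, well is down
    ("14-3H", now - timedelta(minutes=20), 172.0),
    ("22-1H", now - timedelta(minutes=95), 96.0),  # stale -> possible comm loss
]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                 [(w, t.isoformat(), r) for w, t, r in rows])

# Latest reading per well (SQLite returns bare columns from the MAX(ts) row).
latest = conn.execute("""
    SELECT well, MAX(ts) AS ts, rate_bopd
    FROM readings GROUP BY well
""").fetchall()

LAG_BUDGET = timedelta(minutes=30)
for well, ts, rate in latest:
    stale = now - datetime.fromisoformat(ts) > LAG_BUDGET
    print(well, rate, "STALE" if stale else "fresh")
```

A stale reading is itself an exception: a well whose data is 95 minutes old belongs on the communication-loss list, not silently displayed with an old value.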
If the dashboard only connects to monthly production volumes from accounting systems, it is a reporting tool, not an operational tool.
Mobile-Friendly for Field Use
Production engineers and foremen spend significant time in the field. A dashboard that only works on a desktop monitor in the office misses half the use cases. Mobile design constraints to consider:
- Screen size: Complex multi-panel layouts must collapse to single-column views on phones.
- Connectivity: Field locations often have poor cellular coverage. The dashboard should load quickly and cache data for offline reference.
- Touch interaction: Hover-based tooltips do not work on mobile. Tap targets must be large enough for gloved fingers.
- Sunlight readability: High-contrast color schemes perform better outdoors than subtle pastel palettes.
Most BI platforms offer mobile apps (Power BI Mobile, Tableau Mobile), but the experience degrades significantly if the desktop dashboard was not designed with mobile as a primary use case from the start.
Customizable Views by Field, Lease, and Pad
A production engineer responsible for the Delaware Basin does not need to see Midland Basin data on every login. Default views should be configurable by user, filtering to their area of responsibility. Common filter hierarchies:
- Company > Area/Basin > Field > Lease > Pad > Well
- Company > Foreman Assignment > Well
- Company > Artificial Lift Type > Well
The dashboard should remember each user's preferred filters and restore them on login.
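A minimal sketch of per-user saved views, assuming preferences persist as a JSON blob per user (the keys and storage mechanism are illustrative):

```python
import json

# Per-user saved views: restore each engineer's filter hierarchy on login.
# Keys are illustrative; a real deployment would persist to a DB or file.

DEFAULT_VIEW = {"area": None, "field": None, "lease": None, "lift_type": None}

class ViewStore:
    """In-memory stand-in for wherever user preferences live."""
    def __init__(self):
        self._store = {}

    def save(self, user, **filters):
        # Merge onto any earlier saved filters so sessions accumulate.
        self._store[user] = {**self._store.get(user, {}), **filters}

    def load(self, user):
        return {**DEFAULT_VIEW, **self._store.get(user, {})}

store = ViewStore()
store.save("jdoe", area="Delaware Basin", field="North Bronco")
store.save("jdoe", lift_type="ESP")  # a later session narrows further

print(json.dumps(store.load("jdoe"), indent=2))
```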
Technology Choices: An Honest Comparison
There is no single "best" tool. The right choice depends on your IT infrastructure, team skills, budget, and how deeply you want to integrate analytics. Here is an honest comparison of the main options.
Microsoft Power BI
Strengths: Low cost (Pro license at $10/user/month), strong integration with Microsoft 365 ecosystem, intuitive drag-and-drop interface for business users, excellent mobile app, broad connector library for data sources, and the largest community and template ecosystem of any BI tool. Daily refresh schedules and email distributions are straightforward to configure.
Weaknesses: Limited real-time streaming capabilities compared to Spotfire. The visual library, while extensive, leans toward presentation-ready charts rather than engineering-oriented visualizations (scatter plots with regression lines, box-whisker plots, cross-plots with lasso selection). Performance can degrade with very large datasets. Advanced analytics require DAX expertise, which is a niche skill.
Best fit: Operators already on Microsoft infrastructure, teams without dedicated BI developers, dashboards that emphasize reporting over interactive analysis, and organizations where budget is a primary constraint.
Tableau
Strengths: Superior data visualization capabilities, excellent at ad-hoc exploration and discovery, strong geographic/mapping features, handles large datasets well, and has a robust developer community. Tableau's visual query language is more intuitive for complex analyses than Power BI's DAX.
Weaknesses: Higher cost ($75/user/month for Creator license). Less native integration with real-time data streams. The Tableau Server infrastructure requires more IT overhead than Power BI Service. In the oil and gas industry specifically, Tableau has a smaller installed base than either Spotfire or Power BI, which means fewer pre-built templates and fewer colleagues who can help troubleshoot.
Best fit: Organizations that prioritize data exploration and visual analysis, teams with data-savvy analysts who will build their own views, and environments where the dashboard will be used for both operational monitoring and strategic analysis.
TIBCO Spotfire
Strengths: The deepest oil and gas heritage of any BI platform. Spotfire dominated E&P analytics in the 2010s and developed features specifically for petroleum engineering workflows: built-in decline curve analysis, cross-plot regression, type curve overlays, and geospatial analytics with well-level mapping. Native TERR engine for R scripting. Real-time data streaming and Complex Event Processing (CEP) capabilities. Strong integration with OSIsoft PI historian.
Weaknesses: Higher cost and complexity than Power BI. Steeper learning curve -- Spotfire is built for power users, not casual consumers. Reports can take significantly longer to load than Power BI (minutes versus seconds in some cases, as data must be paged into memory). The Spotfire developer community has shrunk as Power BI and Tableau have grown. TIBCO's corporate transitions (taken private by Vista Equity Partners, then combined with Citrix under Cloud Software Group) have created uncertainty about long-term product direction.
Best fit: Organizations with existing Spotfire licenses and expertise, teams with strong analytical users (petroleum engineers, reservoir engineers) who need statistical and predictive capabilities embedded in their dashboards, and environments where Spotfire is already integrated with the data historian.
Custom Web Dashboards
Strengths: Complete design control. No per-user licensing costs. Can be optimized for specific workflows (like the morning report). Modern web frameworks (React, Vue, Plotly Dash, Streamlit) enable rapid development. Can integrate directly with APIs, databases, and SCADA systems. Progressive web app (PWA) architecture can provide genuine offline capability for field use.
Weaknesses: Requires software development talent, which most E&P companies do not have in-house. Ongoing maintenance burden. No built-in user management, security, or governance unless you build it. The temptation to over-customize leads to fragile, single-developer-dependent systems.
Best fit: Operators with in-house development capability or a technology partner, very specific workflow requirements that off-the-shelf tools cannot meet, and situations where per-user licensing costs would be prohibitive at scale (e.g., deploying to 200+ field personnel).
The Pragmatic Recommendation
For most mid-size operators (1,000-10,000 wells), Power BI is the right starting point. It is cheap enough to experiment with, the learning curve is manageable, and the Microsoft ecosystem integration simplifies deployment. If you outgrow it -- if you need real-time streaming, embedded statistical analysis, or highly interactive engineering visualizations -- you can migrate specific use cases to Spotfire or a custom solution while keeping Power BI for the broader organization.
Do not start with Spotfire unless you already have it and the expertise to use it. Do not build custom unless you have a developer who will maintain it for years, not months.
Data Architecture: From Wellhead to Dashboard
The dashboard is only as good as the data pipeline feeding it. Here is the architecture that works in practice.
Layer 1: Field Data Acquisition (SCADA)
RTUs at wellheads and facilities collect data from sensors (pressure transmitters, flow meters, temperature probes, motor controllers) and transmit it to a central SCADA host via radio, cellular, or satellite. Polling intervals range from 1 minute (critical equipment) to 15 minutes (routine monitoring).
Common SCADA platforms in upstream: Emerson OpenEnterprise, ABB Ability, WellAware, SitePro, eLynx. The choice of SCADA vendor matters less for dashboard design than the quality and consistency of the data it collects.
Layer 2: Data Historian / Time-Series Database
Raw SCADA data lands in a historian or time-series database. This layer stores high-frequency sensor data, handles compression and retention policies, and provides the query interface for downstream applications.
Common historians: OSIsoft PI (now AVEVA PI), Canary Labs, InfluxDB (open source), TimescaleDB (open source on PostgreSQL). PI remains the dominant historian in oil and gas, but cloud-native time-series databases like InfluxDB are gaining traction, particularly among operators building new data infrastructure rather than extending legacy systems.
Layer 3: Data Warehouse / Integration Layer
This is where SCADA data meets production accounting data, well master data, workover records, and other contextual information. The data warehouse provides a unified, query-optimized view of all production data.
Options range from cloud data warehouses (Snowflake, BigQuery, Azure Synapse) to on-premise SQL Server databases. The key design principle: the data warehouse should present well-centric data models where every metric is linked to a well identifier and a timestamp.
Integration at this layer is where most dashboard projects actually fail. Not because the BI tool is inadequate, but because the data from SCADA, production accounting (Enertia, WolfePak, P2), and well records (OpenWells, WellView) has never been joined into a coherent data model. Budget at least as much time and money for data integration as for dashboard development.
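A miniature of the join this layer must support -- classifying a zero-rate well as down versus intentionally shut in, the exact question from the workover-tracking discussion above -- with SQLite standing in and illustrative schemas:

```python
import sqlite3

# The integration layer's job in miniature: join latest SCADA-derived status
# with workover records so a zero-rate well can be classified as "down"
# (unplanned) or "shut in" (planned). Schemas are illustrative.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE well_status (well TEXT PRIMARY KEY, rate_bopd REAL);
    CREATE TABLE workovers  (well TEXT, status TEXT);  -- 'in_progress', etc.

    INSERT INTO well_status VALUES ('14-3H', 0.0), ('31-4H', 0.0), ('22-1H', 96.0);
    INSERT INTO workovers  VALUES ('31-4H', 'in_progress');
""")

rows = conn.execute("""
    SELECT s.well,
           CASE
             WHEN s.rate_bopd > 0 THEN 'producing'
             WHEN w.status = 'in_progress' THEN 'shut in (workover)'
             ELSE 'down (unplanned)'
           END AS classification
    FROM well_status s
    LEFT JOIN workovers w ON w.well = s.well
    ORDER BY s.well
""").fetchall()

for well, cls in rows:
    print(well, "->", cls)
```

Until the SCADA and workover systems share a well identifier that makes this join possible, no BI tool can answer the question.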
Layer 4: Dashboard / Visualization
The BI tool or custom application queries the data warehouse (for historical and contextual data) and optionally the historian directly (for near-real-time SCADA data). The dashboard renders visualizations, applies alert logic, and presents the exception-based interface described above.
The Common Mistake
Many operators try to connect Power BI directly to the SCADA host or to raw database tables without an intermediate data warehouse. This works for a prototype but fails in production. Direct connections are slow (SCADA databases are optimized for write throughput, not analytical queries), fragile (schema changes break reports), and insecure (dashboard users should not have direct database access).
Invest in the data warehouse layer. It is not glamorous work, but it determines whether the dashboard will be fast, reliable, and maintainable.
What AI Adds to Dashboards
A well-designed dashboard without AI is already a major improvement over spreadsheets and manual SCADA checks. But AI and machine learning add capabilities that are impossible with static rules and threshold-based alerts.
Predictive Decline Analysis
Traditional decline curve analysis (DCA) fits historical production data to Arps equations and projects future rates. Machine learning decline models can incorporate additional variables -- completion parameters, spacing, reservoir pressure, offset well interference -- to produce more accurate forecasts at the individual well level.
In a dashboard context, this means showing not just "what is this well producing today" but "where will this well be in 30, 90, 180 days" with a confidence interval. When the actual production deviates significantly from the ML forecast, that deviation becomes an exception flag.
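A minimal sketch of the deviation flag, using exponential decline (the simplest Arps case) fit by log-linear regression. A production system would fit hyperbolic Arps or an ML model per well; the synthetic data and the three-sigma band here are illustrative:

```python
import numpy as np

# Exponential Arps decline: q(t) = qi * exp(-D * t), fit as a log-linear
# regression. A new observation well below the forecast band becomes an
# exception flag. Synthetic data; a real system fits per-well models.

rng = np.random.default_rng(0)
t = np.arange(180)                                           # days of history
q = 300 * np.exp(-0.004 * t) * rng.normal(1, 0.02, t.size)   # noisy BOPD

# Fit ln(q) = ln(qi) - D * t
slope, ln_qi = np.polyfit(t, np.log(q), 1)
qi, D = np.exp(ln_qi), -slope

def forecast(day):
    return qi * np.exp(-D * day)

resid_sd = np.std(np.log(q) - (ln_qi + slope * t))

def deviation_flag(day, observed, k=3.0):
    """True when the observed rate sits more than k log-sigmas below forecast."""
    return np.log(observed) < np.log(forecast(day)) - k * resid_sd

print(round(forecast(181), 1))                   # tomorrow's expected rate
print(bool(deviation_flag(181, 0.5 * forecast(181))))  # half of forecast -> flag
```

The same structure extends to ML forecasts: replace `forecast()` with the model's prediction and `resid_sd` with its uncertainty estimate, and the exception logic is unchanged.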
Anomaly Detection
Rule-based alarms catch binary events: a well is down or it is not. ML-based anomaly detection catches subtle, multivariate patterns: a well whose production is normal but whose combination of tubing pressure, casing pressure, and gas-oil ratio suggests an emerging problem.
Practical anomaly detection models for production dashboards include isolation forests for outlier detection in multivariate sensor data, LSTM or transformer models for time-series anomaly detection on individual well histories, and clustering-based approaches that group wells by behavior and flag wells that change clusters.
The key design principle for AI-generated anomalies: show the anomaly, show why the model flagged it (which parameters are anomalous), and show a confidence score. Engineers will not trust a red flag with no explanation.
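A sketch of the isolation-forest flag with the "show why" principle attached. The feature names and synthetic fleet data are illustrative; a real pipeline would use actual SCADA channels and a tuned contamination rate:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Isolation-forest anomaly flag over multivariate well parameters, reporting
# which parameter deviates most from the fleet. Synthetic, illustrative data.

rng = np.random.default_rng(42)
features = ["tubing_psi", "casing_psi", "gor_scf_bbl"]

# 40 wells clustered around typical operating values...
X = rng.normal([900, 400, 1200], [60, 30, 150], size=(40, 3))
# ...plus one well whose pressure/GOR combination is off even though no
# single hard alarm would fire.
X = np.vstack([X, [620, 540, 2100]])

model = IsolationForest(n_estimators=200, contamination=0.05, random_state=0)
labels = model.fit_predict(X)  # -1 = anomalous, 1 = normal

mu, sd = X.mean(axis=0), X.std(axis=0)
for i in np.where(labels == -1)[0]:
    z = (X[i] - mu) / sd
    worst = int(np.argmax(np.abs(z)))
    print(f"well {i}: flagged, most anomalous parameter = {features[worst]} "
          f"(z = {z[worst]:+.1f})")
```

The per-parameter z-score shown alongside the flag is a crude explanation, but even that is enough to move an engineer from "the model says so" to "the GOR on this well is six standard deviations high -- worth a look."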
Automated Commentary
One of the most underappreciated AI applications in production dashboards is automated narrative generation. Instead of (or in addition to) charts, the dashboard can generate natural language summaries:
"North Bronco field production is down 340 BOPD from last week, driven primarily by three wells: 14-3H (ESP trip, down since Tuesday), 22-1H (increasing water cut, now 78%), and 31-4H (scheduled workover, expected back online Friday). Remaining wells in the field are performing within 5% of forecast."
Large language models make this feasible. The morning report -- which production engineers currently spend 15 minutes assembling manually -- can be auto-generated from the dashboard's data and exception logic.
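Before (or alongside) an LLM, the same structured exception data can populate a narrative template -- a sketch echoing the example paragraph above, with illustrative well names. In production, the same structured payload would instead be handed to an LLM prompt:

```python
# Automated commentary from structured exception data. The template approach
# shows the data plumbing; an LLM call would consume the same payload.

def narrate(field, delta_bopd, drivers, within_pct=5):
    driver_txt = ", ".join(f"{well} ({reason})" for well, reason in drivers)
    direction = "down" if delta_bopd < 0 else "up"
    return (
        f"{field} field production is {direction} {abs(delta_bopd)} BOPD from "
        f"last week, driven primarily by {len(drivers)} wells: {driver_txt}. "
        f"Remaining wells in the field are performing within "
        f"{within_pct}% of forecast."
    )

summary = narrate(
    "North Bronco", -340,
    [("14-3H", "ESP trip, down since Tuesday"),
     ("22-1H", "increasing water cut, now 78%"),
     ("31-4H", "scheduled workover, expected back online Friday")],
)
print(summary)
```

Either way, the critical design point is the same: the narrative is generated from the dashboard's exception logic, so it can never disagree with the numbers on screen.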
Virtual Metering
In fields with commingled production and infrequent well tests, AI-based virtual metering models can estimate individual well rates from SCADA data (pressures, choke positions, artificial lift parameters) without requiring physical well tests. This provides pseudo-real-time production estimates at the well level -- data that would otherwise only be available monthly from production allocation.
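A sketch of the idea, with ordinary least squares standing in for whatever physics-informed or ML model a real deployment would use. All coefficients and inputs are synthetic:

```python
import numpy as np

# Virtual metering sketch: regress tested well rates against SCADA inputs
# (tubing pressure, choke position, ESP frequency), then estimate rates for
# untested periods. Synthetic data; a real model would be physics-informed.

rng = np.random.default_rng(7)
n = 120  # historical well tests across the field

tubing_psi = rng.uniform(600, 1100, n)
choke_pct = rng.uniform(20, 100, n)
esp_hz = rng.uniform(40, 60, n)

# Synthetic "true" relationship with test noise (coefficients are made up).
rate = 0.15 * tubing_psi + 1.2 * choke_pct + 3.0 * esp_hz + rng.normal(0, 5, n)

A = np.column_stack([tubing_psi, choke_pct, esp_hz, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, rate, rcond=None)

def virtual_rate(psi, choke, hz):
    """Estimate a well's rate between physical tests from live SCADA values."""
    return float(coef @ [psi, choke, hz, 1.0])

print(round(virtual_rate(900, 60, 50), 1))  # estimated BOPD between tests
```

The output is an estimate, not a measurement, and the dashboard should label it as such -- but a labeled daily estimate beats a precise number that arrives a month late.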
Template: The 5 Views Every Production Dashboard Should Have
Based on the principles above, here is a practical template for the five essential views in a production operations dashboard.
View 1: Exception Summary (Landing Page)
Purpose: Immediate situational awareness. What needs attention right now.
Contents:
- Prioritized list of exceptions, sorted by estimated production impact (BOPD lost)
- Exception categories: wells down, production anomalies, equipment alarms, communication losses
- Each row shows: well name, field/lease, exception type, duration, estimated production impact, last known status
- Severity color coding (red/yellow/green is fine here -- it is the only place color-coding adds value)
- Timestamp of last data refresh
Interaction: Click any exception to go to View 2 (well detail).
View 2: Well Detail
Purpose: Full context for a single well. Everything the engineer needs to make a decision.
Contents:
- Production history chart (oil, gas, water) with selectable time range (7 days, 30 days, 90 days, 1 year, life of well)
- Current operating parameters: tubing pressure, casing pressure, line pressure, temperature
- Artificial lift parameters: pump speed/stroke length (rod pump), motor amps/intake pressure/frequency (ESP), injection rate/GLR (gas lift)
- Forecast overlay: expected production from decline model versus actual
- Alarm history: recent alarms with timestamps and types
- Well events: workovers, interventions, choke changes, with dates and descriptions
- Offset well comparison: production from same pad or nearby wells for context
Interaction: Toggle between parameters. Overlay offset wells. Jump to View 3 for field context.
View 3: Field / Area Performance
Purpose: Roll-up view for a geographic area, field, or lease. Used during the "review overall performance" step of the morning workflow.
Contents:
- Total field production (oil, gas, water) with day-over-day and week-over-week change
- Well count summary: producing, shut-in (planned), down (unplanned), drilling/completing
- Production versus forecast at the field level
- Top decliners: wells with the largest production decrease over the last 7 days
- Top gainers: wells that recently came back online or increased production
- Map view with well locations, colored by status (producing, down, shut-in)
Interaction: Click any well on the map or in the decliners/gainers list to go to View 2.
View 4: Artificial Lift Monitoring
Purpose: Dedicated view for artificial lift performance, the primary driver of well uptime and operating cost.
Contents:
- Fleet summary by lift type: rod pump, ESP, gas lift, plunger lift, natural flow
- Key performance indicators by lift type: runtime percentage, failure rate, mean time between failures
- Equipment health indicators: motor temperature trends (ESP), polished rod load patterns (rod pump), injection rate efficiency (gas lift)
- Wells approaching failure thresholds: ESPs with rising motor temperature, rod pumps with anomalous dynamometer patterns
- Workover queue: wells approved for artificial lift changes, with scheduled dates
Interaction: Click any well to go to View 2. Filter by lift type, field, or foreman assignment.
View 5: Morning Report (Auto-Generated)
Purpose: Replace the manually assembled daily production report with an auto-generated version.
Contents:
- Executive summary: company/area production totals, change from yesterday, change from last week
- Exception summary: count and list of wells down, with durations and estimated production impact
- Field activity: workovers in progress, completions coming online, planned shut-ins
- Notable events: new wells brought online, wells returning from workover, significant production changes
- AI-generated narrative paragraph summarizing the day's status in plain language
- Export options: PDF for email distribution, data table for Excel download
Interaction: Select date range. Filter by area/foreman. Export with one click.
Implementation Roadmap
Building this dashboard is a 3-6 month project for most operators, not a 3-week sprint. Here is a realistic sequence.
Month 1: Data audit and pipeline. Inventory your data sources (SCADA, historian, production accounting, well records). Map the data flow. Identify gaps and quality issues. Build or configure the data warehouse integration layer. This is the least visible work and the most important.
Month 2: Exception logic and View 1. Define your exception rules (what constitutes "down," what thresholds trigger anomaly flags). Build the exception summary landing page. Get it in front of 2-3 production engineers for daily use. Iterate based on their feedback.
Month 3: Well detail and field views (Views 2-3). Build the well detail drilldown and field performance summary. Integrate production forecasts from your DCA tool. Again, put it in front of engineers and iterate.
Month 4: Artificial lift and morning report (Views 4-5). Add the lift monitoring view and auto-generated morning report. This is where adoption typically accelerates -- the morning report automation saves engineers 15-20 minutes daily, and that time savings creates habit-forming usage.
Month 5-6: Mobile optimization, AI features, refinement. Optimize for mobile. Add ML-based anomaly detection. Add automated commentary. Refine based on accumulated user feedback.
The critical success factor is not the technology. It is putting a working (even partial) dashboard in front of real production engineers as early as possible and iterating based on their actual behavior, not their stated requirements. What engineers say they want in a requirements meeting and what they actually use in practice are often very different things.
Closing Thoughts
The production dashboard problem is not a technology problem. Power BI, Tableau, Spotfire, and custom web applications are all capable of building effective operational dashboards. The problem is a design problem -- specifically, a failure to design for the production engineer's actual workflow rather than for management's reporting needs.
The principles are straightforward: start with exceptions, provide well-level detail in two clicks, integrate SCADA data for near-real-time currency, design for mobile field use, and automate the morning report. The technology choices and data architecture are secondary to getting these design principles right.
If your organization has a production dashboard that nobody uses, the diagnosis is almost certainly one of the five failure modes described above. The fix is not a new tool -- it is a redesign that starts with a day spent watching production engineers work.
Dr. Mehrdad Shirangi is the founder of Groundwork Analytics and holds a PhD from Stanford University in Energy Systems Optimization. He has been building AI solutions for the energy industry since 2018. Connect on X/Twitter and LinkedIn, or reach out at info@petropt.com.
Related Articles
- Production Operations Software: Surveillance, Optimization, and AI -- The production software stack that feeds dashboards and surveillance workflows.
- SCADA Data Quality for AI: The Audit Checklist -- Data quality issues that directly affect dashboard accuracy and reliability.
- How to Deploy an AI Agent for Daily Production Reporting -- Automating the morning report workflow that dashboards support.
- The Mid-Size Operator's Guide to AI -- How dashboards and AI surveillance fit into a broader digital strategy for mid-size operators.
Have questions about this topic? Get in touch.