ranking AI investments by yield
eight columns that force discipline before committing capital
01-Aug-25
Most AI portfolios fail because teams rank by excitement instead of yield. Digital twins always win the executive vote. They rarely deliver. Companies spend $15M building perfect systems before anyone sees value. Business cases claim benefits like better decisions or improved agility that cannot be defended when budget cuts come.
Over the summer worked with a $20B industrial company to identify where AI could add value enterprise wide. Mapped 8 opportunities across operations, supply chain, productivity, and cybersecurity. Built the business case for each, ranked them by computable ROI, and delivered a funded portfolio with $50M+ annual value at scale from $8M pilot spend over 18 months.
The 8 column framework
Built a prioritization tool that forces discipline on 8 dimensions. Initiative name requires specificity: predictive maintenance for critical pumps and conveyors is an initiative; AI for operations is not. Phased plan maps pilot to integrate to scale to pattern: 3 months on 2 asset classes, integrate into plant systems, scale to a second site, pattern across the enterprise. If the plan cannot be written, the initiative is not understood well enough to fund.
Decision gate sets the governance calendar. The question is not when will this be done but when do we need evidence to decide kill, pivot, or scale. Example: 6 months to validate false positive rates. If acceptable, scale; if not, kill. Every initiative gets a decision date. No zombie projects.
ROI equation is make or break. Write the benefit in plain math where every variable is measurable. Savings equals avoided downtime hours times profit per hour, plus labor cost saved, minus program cost. Every term comes from historian data and maintenance logs. A digital twin equation claiming improved decision quality is not computable and not fundable.

Model and inputs surfaces data friction. A supply chain optimization needing 18 months of demand history but having only 6 just doubled its timeline.

ROM pilot cost provides the reality check: is this $500K or $5M? ROM annual benefit at scale shows steady state value. A pilot might save $200K in year 1 but $8M per year across 10 sites.

What stays and what drops enforces scope. Every project starts too broad. Name what stays in phase 1 and what gets deferred.
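The savings equation can be sketched as a few lines of arithmetic, with every input mapped to a named, measurable source. The dollar figures below are hypothetical placeholders, not numbers from the engagement.

```python
# Predictive maintenance ROI, written so each variable traces to a
# measurable source. All values are hypothetical placeholders.

avoided_downtime_hours = 120     # from historian outage records
profit_per_hour = 25_000         # from finance, per-site margin
labor_cost_saved = 400_000       # from maintenance logs
program_cost = 1_500_000         # ROM pilot cost

# savings = avoided downtime hours x profit per hour
#           + labor cost saved - program cost
savings = (avoided_downtime_hours * profit_per_hour
           + labor_cost_saved
           - program_cost)

print(f"Annual savings: ${savings:,.0f}")
```

If any term cannot be sourced from a system of record, the equation fails the computability test before the spreadsheet is even built.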
How the framework surfaces reality
Applied the 8 columns to the 8 opportunities. The framework killed bad assumptions fast. The supply chain project needs 18 months of clean demand history; the company only has 6. Timeline just doubled. The digital twin project had a vague ROI equation and an 18+ month proof timeline at $5M cost. Not fundable as written. Narrowed the scope to 1 high energy unit, linked to process controls, with value proven in 12 months. That moved it from 8th priority to 4th.
Autonomous blocking for OT cybersecurity sounded impressive until reality check. Autonomous actions can shut down a $50M per day plant. Killed that feature entirely. Kept monitoring and human approval workflow instead.
The ranking criteria
Prioritized by 3 tests, applied in order. Can the ROI be computed in a spreadsheet with named assumptions and measurable variables. How fast can we get to a decision gate with kill, pivot, or scale evidence. What is the yield: annual benefit at scale divided by pilot cost.
Predictive maintenance ranked 1st. Computable ROI from avoided downtime and maintenance savings, 6 month pilot to decision gate, high yield at $8M annual benefit per site from $1.5M pilot cost. Supply chain optimization ranked 2nd. Computable ROI from inventory reduction and logistics savings, 12 month pilot for data cleaning and model validation, $18M+ benefit at scale from $3M pilot. Digital twin ranked 8th initially, moved to 4th after scope discipline. Narrowed from full plant model to single high energy unit. 12 month proof, $5M cost, $8M per unit benefit if it scales.
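The three tests can be sketched as a sort: a computable ROI acts as a hard gate, time to decision gate orders first, and yield (annual benefit at scale divided by pilot cost) breaks ties. The figures are the ROM numbers above; treating the tests as a lexicographic sort key is an illustrative assumption, not the exact mechanics of the engagement.

```python
# Rank initiatives by the three tests in order:
# 1. computable ROI is a hard gate (non-computable drops out),
# 2. fewer months to the decision gate ranks higher,
# 3. higher yield (benefit at scale / pilot cost) breaks ties.

initiatives = [
    {"name": "Predictive maintenance", "computable": True,
     "months_to_gate": 6, "benefit_at_scale": 8_000_000, "pilot_cost": 1_500_000},
    {"name": "Supply chain optimization", "computable": True,
     "months_to_gate": 12, "benefit_at_scale": 18_000_000, "pilot_cost": 3_000_000},
    {"name": "Digital twin, single unit", "computable": True,
     "months_to_gate": 12, "benefit_at_scale": 8_000_000, "pilot_cost": 5_000_000},
]

fundable = [i for i in initiatives if i["computable"]]
ranked = sorted(
    fundable,
    key=lambda i: (i["months_to_gate"],
                   -i["benefit_at_scale"] / i["pilot_cost"]),
)

for rank, i in enumerate(ranked, start=1):
    yield_ratio = i["benefit_at_scale"] / i["pilot_cost"]
    print(f"{rank}. {i['name']} (yield {yield_ratio:.1f}x)")
```

Run against these numbers, the sort reproduces the order in the text: predictive maintenance first on speed to gate, supply chain ahead of the digital twin on yield (6.0x versus 1.6x).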
Framework applied to 3 of the 8 initiatives
| Rank | Initiative | Phased Plan | Decision Gate | Pilot Cost | Annual Benefit at Scale | What Stays | What Drops |
|---|---|---|---|---|---|---|---|
| 1 | Predictive maintenance | Q3 2025 pilot 2 asset types, Q1 2026 integrate, Q3 2026 scale to 1 site, Q2 2027 pattern | Q2 2026 | $1.5M | $8M per site | Critical pumps, conveyors, mills | Low priority assets until year 3 |
| 2 | Supply chain optimization | Q3 2025 clean data, Q1 2026 build forecast model, Q3 2026 expand coverage | Q2 2026 | $3M | $18M network wide | Demand forecasting, inventory placement, routing | Enterprise optimizer in phase 1 |
| 4 | Digital twin, single unit | Q1 2026 model 1 high energy unit, Q4 2026 connect to controls, Q2 2027 replicate | Q4 2026 | $5M | $8M per unit | Energy intensive units only | Full plant model |
What got delivered
Ranked list of 8 initiatives with ROM costs and benefits. Phased rollout plan for the top 5 showing pilot to integrate to scale to pattern. Decision gate calendar for the next 18 months with clear kill, pivot, or scale criteria. ROI equations for each initiative with stated assumptions and sensitivity ranges. A stop doing list of features and projects to defer or eliminate.
Total portfolio value at scale $50M+ per year from $8M pilot investment over 18 months. Executive team had a fundable plan, a governance process, and a defendable ROI story for the board.
Bottom line
Rigorous beats strategic. Computable beats transformational. Phase 1 proof beats enterprise vision. That is how AI portfolios get funded and how projects deliver value instead of drifting into zombie status or getting killed at first budget review.
The framework has been applied across upstream oil and gas, agriculture, and industrial manufacturing. The industries change. The discipline does not.
Built this into a template with ROI equation examples, phased plan formats, and decision gate calendar. Reach out if you want it. hammad at shahnet dot dev.