The most common AI strategy is also the most destructive: identify 10 problems that AI could solve, assign each to a small team or pilot, and hope that at least a few succeed. It sounds reasonable. It almost never works.
The organizations producing the most AI value take the opposite approach. They pick one problem — a single, well-defined business problem — and throw disproportionate resources at it until it's solved, deployed, and generating measurable returns. Then, and only then, do they move to problem two.
Why Singular Focus Wins
The argument for focus isn't new. Eliyahu Goldratt's Theory of Constraints, introduced in his 1984 book The Goal, demonstrated that optimizing every part of a system simultaneously is inferior to identifying and focusing on the single binding constraint, because throughput is governed by that constraint alone. The principle applies directly to AI strategy.
Consider two approaches to a $2 million AI budget:
Approach A: Spread across 10 projects at $200K each. Each project gets a partial team, shares infrastructure attention, and competes for leadership mindshare. Historical data suggests 1-2 projects reach production, generating $500K-$1M in annual value.
Approach B: Concentrate on 2 projects at $1M each. Each project gets a dedicated team, purpose-built infrastructure, and executive sponsorship. Historical data suggests both reach production, generating $2-5M in annual value.
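A quick expected-value calculation makes the comparison concrete. The sketch below is illustrative only: the production probabilities and per-project values are assumptions drawn from the ranges above, not measured data.

```python
# Illustrative expected-value comparison of the two budget allocations.
# All probabilities and dollar values are assumptions, not measured data.

def expected_annual_value(n_projects, p_production, value_per_project):
    """Expected annual value if each project independently reaches production
    with probability p_production and then generates value_per_project."""
    return n_projects * p_production * value_per_project

# Approach A: 10 projects x $200K, ~15% chance each reaches production,
# ~$500K annual value per deployed project (midpoints of the ranges above).
approach_a = expected_annual_value(10, 0.15, 500_000)

# Approach B: 2 projects x $1M, ~80% chance each reaches production,
# ~$1.75M annual value per deployed project.
approach_b = expected_annual_value(2, 0.80, 1_750_000)

print(f"Approach A expected annual value: ${approach_a:,.0f}")  # $750,000
print(f"Approach B expected annual value: ${approach_b:,.0f}")  # $2,800,000
```

Even under generous assumptions for the portfolio, concentration wins on expected value before accounting for coordination overhead.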
The most common cause of strategic failure is not choosing the wrong strategy; it's failing to commit resources decisively to the chosen strategy. AI is no exception. This echoes research on why strategy execution unravels (Harvard Business Review, 2015).
The One-Problem Methodology
The one-problem strategy isn't about limiting ambition — it's about sequencing ambition correctly. You still solve many problems with AI, just one at a time, in an order that builds cumulative advantage. Think of it as a serial investment strategy: each investment builds on the returns of the previous one.
```mermaid
flowchart TD
    A[Select One<br/>Problem] --> B[Focus All<br/>Resources]
    B --> C[Solve and<br/>Deploy]
    C --> D["Compound:<br/>Reuse Assets"]
    D --> E[Select Next<br/>Problem]
    E --> B
    style A fill:#1a1a2e,stroke:#0f3460,color:#fff
    style B fill:#1a1a2e,stroke:#0f3460,color:#fff
    style C fill:#1a1a2e,stroke:#16c79a,color:#fff
    style D fill:#1a1a2e,stroke:#ffd700,color:#fff
    style E fill:#1a1a2e,stroke:#0f3460,color:#fff
```

Step 1: Select the Right Problem
The first problem you solve sets the foundation for everything that follows. Choose it based on three criteria.
High business impact, high confidence. Not the most transformative idea — the most impactful idea that you're confident you can execute. First-mover advantage in AI comes from deploying production systems, not from having the most ambitious roadmap.
Data readiness. The problem should involve data you already have, in a format you can access, with enough volume and quality to train useful models. A comprehensive survey on data readiness for AI, reviewing over 120 papers, found that data readiness is the single strongest predictor of AI project success — stronger than talent, budget, or technology choices.
Infrastructure reusability. The infrastructure you build to solve problem one (data pipelines, feature stores, model serving, monitoring) should be applicable to problems two through five. The ideal first problem uses data from your core business systems, requires real-time or near-real-time serving, and has a clear feedback loop — these constraints produce infrastructure useful for almost any subsequent initiative.
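One way to make the selection concrete is a confidence-weighted score across these three criteria. The sketch below is a hypothetical illustration; the weights, field names, and candidate problems are assumptions, not a prescribed formula. The point is that confidence and data readiness multiply impact rather than being averaged away.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    impact: float          # estimated annual value if deployed, in dollars
    confidence: float      # 0-1: how confident are we it can be executed?
    data_readiness: float  # 0-1: volume, quality, accessibility of existing data
    reusability: float     # 0-1: how much of the infrastructure carries forward?

def score(c: Candidate) -> float:
    # Confidence and data readiness multiply the impact: a transformative idea
    # with poor data or low confidence scores below a moderate, well-grounded one.
    return c.impact * c.confidence * c.data_readiness * (1 + c.reusability)

candidates = [
    Candidate("CEO's favorite idea", impact=5_000_000,
              confidence=0.2, data_readiness=0.3, reusability=0.4),
    Candidate("Churn prediction on CRM data", impact=1_500_000,
              confidence=0.8, data_readiness=0.9, reusability=0.8),
]

for c in sorted(candidates, key=score, reverse=True):
    print(f"{c.name}: {score(c):,.0f}")
```

With these hypothetical numbers, the well-grounded churn problem outranks the more glamorous idea by a wide margin, which is exactly the ordering the three criteria are meant to produce.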
Avoid these common traps when selecting the first problem:
- Choosing the "easiest" problem first — easy problems produce easy wins that don't build organizational capability
- Choosing based on executive enthusiasm rather than data readiness — the CEO's favorite idea is worthless if the data doesn't exist
- Choosing a problem that doesn't generate feedback data — problems where feedback arrives months later are poor first choices because the improvement cycle is too slow
Step 2: Concentrate Resources
Once the problem is selected, commit resources at a level that feels disproportionate. The natural instinct is to hedge — "let's give it 60% of resources and keep some options open." Resist this. The entire point of the one-problem strategy is concentration.
Resource concentration means three specific commitments.
- Dedicated team: 3-5 people whose only priority is this problem — 100% of their working time
- Executive sponsor: A C-level or VP who reviews progress weekly (not monthly) and removes organizational obstacles in real time
- Time-boxed commitment: 90-day sprints with clear deliverables — if the problem isn't showing production-ready progress after 90 days, either the problem selection was wrong or the resource commitment is insufficient
McKinsey's analysis of AI winners found that organizations achieving the highest AI ROI invest 3-5x more per initiative than the average, while running 60-70% fewer initiatives.
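A back-of-the-envelope check makes the concentration math tangible: given a fixed budget, how many initiatives can you actually staff with a genuinely dedicated team and purpose-built infrastructure? The figures below are hypothetical.

```python
# Hypothetical check: how many initiatives can a $2M budget fully resource?
budget = 2_000_000
dedicated_team_cost = 4 * 180_000   # 4 dedicated people, fully loaded annual cost
infrastructure_cost = 150_000       # purpose-built pipelines, serving, monitoring
per_initiative = dedicated_team_cost + infrastructure_cost

max_initiatives = budget // per_initiative
print(f"fully resourced initiatives: {max_initiatives}")  # 2, not 10
```

Under these assumptions the budget supports two properly resourced initiatives; spreading it across ten guarantees that none of them gets a dedicated team.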
Step 3: Solve and Deploy
"Solved" means in production and generating measurable business value — not "the model works" or "we demonstrated it to stakeholders." In production, serving real users or processes, with monitoring confirming it delivers the expected outcome.
The solve-and-deploy phase has three milestones.
- Technical proof: Model achieves target performance on representative data. Architecture validated for production requirements.
- Integration proof: System connected to real data pipelines and downstream consumers. End-to-end flow tested under realistic conditions.
- Value proof: Limited production deployment with instrumented business metrics. Statistical evidence of impact, as sketched below.
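For the value proof, "statistical evidence of impact" can be as simple as comparing a conversion-style metric between a control group and the AI-assisted flow. The sketch below assumes that kind of rate metric and uses hypothetical counts; it is one simple form the evidence can take, not the only one.

```python
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions
    (control group a vs. treatment group b)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, z, p_value

# Hypothetical 30-day limited rollout: baseline process vs. AI-assisted flow.
lift, z, p = two_proportion_z_test(success_a=412, n_a=5000, success_b=487, n_b=5000)
print(f"observed lift: {lift:.2%}, z = {z:.2f}, p = {p:.4f}")
```

If the p-value clears your significance threshold and the lift translates into meaningful dollars, the value proof holds; if not, the deployment stays limited until it does.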
Step 4: Compound Through Reuse
This is where the one-problem strategy produces its highest returns. The infrastructure, processes, and organizational muscle you built for problem one dramatically accelerate problem two.
Every completed problem produces reusable assets for the next cycle: data pipelines and feature engineering patterns, model serving infrastructure and monitoring dashboards, CI/CD pipelines for ML deployment, organizational processes (evaluation criteria, deployment checklists, on-call procedures), and institutional knowledge about which approaches work with your data, systems, and organization. A 2024 systematic literature review in Information and Software Technology, analyzing 30 industrial case studies, found that systematic reuse reduces development costs by 40-60% for subsequent projects. In AI, infrastructure reuse is even more impactful because infrastructure is a larger proportion of total effort than in traditional software.
Problem two takes 50-60% of the time and resources of problem one. Problem three takes 40-50%. The compounding effect means that the organization executing sequentially often overtakes the one executing in parallel within 12-18 months — with higher quality deployments and lower total cost.
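A small worked calculation shows the compounding effect. The six-month first cycle and the reuse factors below are assumptions chosen to mirror the percentages above, not measured values.

```python
# Illustrative timeline for sequential cycles with infrastructure reuse.
first_cycle_months = 6.0
reuse_factors = [1.0, 0.55, 0.45, 0.40, 0.35]  # each cycle relative to cycle one

elapsed = 0.0
for i, factor in enumerate(reuse_factors, start=1):
    elapsed += first_cycle_months * factor
    print(f"problem {i} deployed at month {elapsed:.1f}")
# Roughly five deployed systems in 16-17 months under these assumptions,
# while a parallel portfolio is often still converting pilots at that point.
```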
The compounding also applies to organizational capability. The team that ships problem one has battle-tested knowledge about production AI: what data quality issues actually matter, how monitoring should be configured, what fallback strategies work, and how to handle model degradation. This knowledge doesn't exist in a team that has run 10 pilots but shipped nothing.
Applying Constraint Theory
Goldratt's Theory of Constraints provides a mental model for ongoing prioritization. A systematic review of TOC applied to software development confirms that the framework's focusing process translates directly to engineering organizations.
The five focusing steps apply to AI strategy as follows; a short sketch after the list shows why only the constraint determines system throughput.
- Identify the constraint. What single factor most limits your organization's ability to generate value from AI? Common constraints: data quality, engineering capacity, organizational alignment, domain expertise.
- Exploit the constraint. Maximize the output of the constraining resource. If ML engineers are the constraint, ensure they spend 100% of their time on the highest-impact work.
- Subordinate everything else. All other activities should support the constraint. If the constraint is data quality, every other team should prioritize data improvements over their own AI experiments.
- Elevate the constraint. Invest to increase the capacity of the constraining resource. Hire more ML engineers, improve data infrastructure, align the organization.
- Repeat. Once the constraint shifts (and it will), identify the new constraint and refocus.
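As a minimal sketch of the focusing logic, model AI delivery as a chain of stages with monthly capacities (the numbers are hypothetical). Throughput is set by the binding constraint, so improving any other stage changes nothing until the constraint itself is elevated.

```python
# Hypothetical monthly capacities for each stage of AI delivery.
capacities = {
    "data_quality": 2,     # problems/month the data team can make model-ready
    "ml_engineering": 5,
    "integration": 4,
    "org_alignment": 6,
}

def throughput(caps):
    return min(caps.values())

def constraint(caps):
    return min(caps, key=caps.get)

print(constraint(capacities), throughput(capacities))  # data_quality, 2

# Elevating a non-constraint stage changes nothing:
capacities["ml_engineering"] = 10
print(throughput(capacities))                           # still 2

# Elevating the constraint raises throughput until a new constraint binds:
capacities["data_quality"] = 5
print(constraint(capacities), throughput(capacities))   # integration, 4
```

The "repeat" step is visible in the last line: relieving data quality immediately exposes integration as the new constraint to focus on.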
When to Move to Problem Two
The temptation is to move on too early: declaring victory after the model reaches production but before it's truly stable. You're ready for problem two when three conditions are met:
- The system has been in production for at least 30 days without manual intervention
- Business metrics show statistically significant improvement against the baseline
- The development team has documented reusable infrastructure and patterns, and an operations team (not the development team) owns and maintains the system going forward
Moving to problem two before these criteria are met risks leaving problem one in a fragile state that degrades without the original team's attention.
Common Objections and Responses
"We can't afford to only work on one thing." You can't afford not to. Working on 10 things that each have a 10% chance of reaching production gives you an expected 1 success. Working on 2 things with an 80% chance gives you 1.6 successes — at the same total cost with far less coordination overhead.
"What if we pick the wrong problem?" The 90-day time-box limits your downside. If the problem selection is wrong, you discover it in 90 days and pivot — having spent a fraction of what a 12-month portfolio approach would have consumed before reaching the same conclusion.
"Our stakeholders won't accept being told no." Frame it correctly: you're sequencing, not refusing. Their problem is on the roadmap, and the sequential approach means it will be addressed faster (due to infrastructure reuse) than if it were one of 15 under-resourced initiatives.
Expected Results
The one-problem strategy produces measurable improvements across every dimension organizations typically track. These results compound over time — each cycle strengthens both the infrastructure and the team.
- Faster first production deployment, with 80%+ of AI investments reaching production versus the 20-30% typical of portfolio approaches
- Compounding velocity: each subsequent project deploys faster than the one before as infrastructure and team expertise compound
- Better stakeholder relationships: delivering one real system builds more credibility than promising ten
Boundary Conditions
This strategy fails when leadership repeatedly injects side priorities into the active execution window. The pattern is specific: a team is focused on their selected problem, making progress, and then an executive requests "just a quick proof-of-concept" for an unrelated idea. Each interruption seems minor in isolation, but the cumulative effect fragments the team's focus. Within weeks, the "one problem" strategy has silently reverted to a multi-project portfolio, and the team loses the concentration advantage that makes the approach work.
The second failure mode is premature declaration of success. Organizations under pressure to show breadth redefine "solved" downward — a working demo becomes "good enough" to move on, even though the system isn't in production, isn't monitored, and hasn't demonstrated business impact. This defeats the compounding mechanism: if problem one's infrastructure and processes aren't production-grade, problem two doesn't benefit from reuse. Organizations seeing either pattern need structural protection: a written commitment that the selected problem is the team's sole priority for the 90-day window, with executive sign-off required to formally approve any scope change.
First Steps
- Force-rank your AI initiatives. List every active initiative — including informal experiments and vendor-provided tools — then rank by confidence-weighted impact, not potential impact. High potential with low confidence ranks below moderate potential with high confidence.
- Pick one and commit publicly. Select the top-ranked initiative, reallocate resources from other initiatives to the selected one, and define what "in production and generating value" looks like for this specific problem within 90 days.
- Define the reuse plan before building. Identify which components — data pipelines, serving infrastructure, monitoring — will be designed for reuse in subsequent projects. This ensures the first cycle creates compounding leverage, not a one-off solution.
Practical Solution Pattern
Adopt single-problem concentration for each cycle so resources, leadership attention, and integration effort converge on one meaningful outcome before expansion. Force-rank every active and proposed AI initiative by confidence-weighted impact, select the top-ranked one, and reallocate resources from lower-ranked initiatives to the selected problem — including the best engineers, an executive sponsor reviewing progress weekly, and a 90-day time-boxed commitment with a defined "in production and generating value" success criterion.
This approach works through compounding: each completed cycle produces infrastructure, processes, and institutional knowledge that make the next cycle 30-50% faster. The dedicated team that ships problem one has battle-tested knowledge about production AI — what data quality issues actually matter, how monitoring should be configured, what fallback strategies work — that teams running 10 pilots but shipping nothing never accumulate. The expected value calculation also favors concentration: two initiatives with 80% production probability each yield 1.6 expected successes at lower total cost than ten initiatives with 10% probability each yielding 1.0 expected successes.
References
- Goldratt, E. M. The Goal: A Process of Ongoing Improvement. North River Press, 1984.
- McKinsey & Company. The State of AI. McKinsey Global Survey, 2024.
- Sull, D., Homkes, R., and Sull, C. Why Strategy Execution Unravels — and What to Do About It. Harvard Business Review, 2015.
- Lwakatare, L. E., et al. A Comprehensive Survey on Data Readiness for AI. arXiv, 2024.
- Lenarduzzi, V., et al. A Systematic Literature Review on Technical Debt in ML Systems. Information and Software Technology, 2024.
- Gustavsson, T., et al. A Systematic Review of TOC Applied to Software Development. IEEE Access, 2019.
- American Psychological Association. Multitasking: Switching Costs. APA Research, 2023.
- Deloitte. State of AI in the Enterprise. Deloitte Insights, 2024.