What If It Stopped?

A stress test for AI infrastructure dependencies

What This Simulator Models

This simulator models what would happen if the AI systems businesses depend on suddenly stopped working.

Specifically: Large language models (LLMs) - the AI systems like ChatGPT, Claude, and Gemini that companies now use for writing, coding, analysis, and customer service. Not other AI like fraud detection or warehouse robots - those have different failure modes.

Why this matters: 92% of Fortune 500 companies (the 500 largest U.S. companies) now depend on these systems for daily operations. This is a stress test to understand what that dependency means - not a prediction that failure will happen.

Think of this like a fire drill: we're not predicting a fire, we're planning for one.

Scenario Configuration

Collapse speed (default: 7 days) - how fast can it happen?
• 2 days: Silicon Valley Bank collapsed this quickly in 2023
• 7 days: When Lehman Brothers failed in 2008, the first week was chaos
• 14-30 days: If companies got some warning
• 30-90 days: If there was advance notice and time to prepare

Government intervention (default: 50%) - historical context:
• 0-25%: Limited intervention (Lehman Brothers allowed to fail, 2008)
• 50%: Moderate support (SVB depositor guarantee within 72 hours, 2023)
• 75%: Major intervention (2008 TARP - $700B bank bailout authorized)
• 100%: Full nationalization (unprecedented for tech infrastructure)

Global scale (default: 1.0×, U.S. only)
• 1.0×: U.S. only
• 1.5×: Advanced economies
• 2.0×: Global
• 2.5×: Global + supply chain

✓ Responsible Use of This Tool

• Assess your company's AI dependency risks
• Plan for business continuity scenarios
• Understand systemic risks in AI infrastructure
• Inform academic research and policy discussions
• Stress-test investment portfolios
• Promote diversification strategies

✗ Do NOT Use This Tool To:

• Make investment decisions without professional advice
• Spread fear or panic about AI companies
• Claim certainty about unknowable future events
• Attack or harm specific companies
• Short stocks based solely on this analysis
• Justify opposition to AI development generally

What Can Be Done? Actionable Steps for Resilience

This isn't doom-and-gloom. Here are concrete actions to reduce systemic risk:

For Businesses

  • Adopt multi-provider strategies (OpenAI + Claude + Gemini)
  • Abstract API calls through compatibility layers
  • Maintain model-agnostic prompting practices
  • Regular disaster recovery testing
  • Document critical workflows for migration
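The multi-provider and abstraction-layer recommendations above can be sketched as a minimal failover client. This is an illustrative pattern, not a real SDK: the `Provider` interface and the stand-in functions are hypothetical placeholders for actual vendor calls.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Provider:
    """A named LLM backend; `complete` stands in for a real vendor SDK call."""
    name: str
    complete: Callable[[str], str]

class FailoverClient:
    """Send each prompt to the first provider that responds successfully."""
    def __init__(self, providers: List[Provider]):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        errors = []
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as exc:
                errors.append(f"{provider.name}: {exc}")
        raise RuntimeError("all providers failed: " + "; ".join(errors))

# Illustrative stand-ins - real code would wrap vendor SDK calls here.
def primary_call(prompt: str) -> str:
    raise ConnectionError("simulated provider outage")

def secondary_call(prompt: str) -> str:
    return f"[secondary] {prompt}"

client = FailoverClient([
    Provider("primary", primary_call),
    Provider("secondary", secondary_call),
])
```

With this structure, `client.complete("Summarise Q3 results")` quietly falls back to the secondary provider during the simulated primary outage - which is exactly the behaviour disaster recovery testing should exercise.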

For Investors

  • Diversify AI exposure across providers
  • Assess portfolio dependencies on single models
  • Stress-test for OpenAI failure scenarios
  • Consider open-source hedges (Llama, Mistral)
  • Monitor concentration metrics regularly

For Policymakers

  • Monitor concentration risk in AI infrastructure
  • Establish API portability standards (so companies can switch providers without rebuilding everything)
  • Create contingency frameworks (not bailouts)
  • Support competition and open-source alternatives
  • Require "living wills" for major AI providers

For AI Industry

  • Develop interoperability standards (so different vendors' systems can work together)
  • Support multi-cloud architectures
  • Publish dependency transparency reports
  • Collaborate on disaster recovery protocols
  • Build redundancy into infrastructure

Transparency: Methodology & Data Sources

You don't need to read this to use the simulator - but if you want to see how calculations work and where data comes from, it's all here.

Core Philosophy: This simulator uses verified 2025 data to model potential scenarios. Economic impacts are projections based on historical precedents and interdependency analysis, not predictions. Every assumption is documented. All economic figures are shown in nominal dollars.

Important Note on Uncertainty: Small changes in assumptions can produce large variations in outcomes; this is inherent to interconnected systems. The ranges shown reflect this uncertainty.

✓ Verified Data (Cross-Checkable Facts)

Data as of December 2025. All figures cross-referenced against primary sources and major financial press.

  • 1,000,000+ business customers - Source: OpenAI "State of Enterprise AI 2025" report (December 2025). Accessible via OpenAI official announcements.
  • 92% of Fortune 500 using OpenAI technology - Source: Financial Times reporting on OpenAI business penetration (November 2025), cited in multiple industry analyses (Exploding Topics, IndexBox). Includes ChatGPT Enterprise, Microsoft Copilot (powered by OpenAI models), and API integrations. Note: This reflects OpenAI technology presence (direct and via Microsoft) rather than formal enterprise-wide deployment census.
  • 7 million+ ChatGPT for Work seats - Source: OpenAI official announcement (November 5, 2025). Reported 40% growth in two months.
  • ~$13 billion ARR (annual recurring revenue) - Source: Financial Times reporting (November 2025), confirmed by multiple outlets (TechCrunch, Reuters). Note: This is annualized recurring revenue, not audited GAAP revenue.
  • ~800M regular ChatGPT users, dominant consumer LLM service - Source: Financial Times, TechCrunch reporting (Q4 2025). OpenAI maintains dominant position in consumer generative AI chatbot usage.
  • Enterprise LLM market concentration (2025) - The enterprise LLM market is effectively a duopoly between OpenAI (~$10.4B ARR) and Anthropic (~$5.6B ARR), with Google Gemini as the third player. These three providers control the vast majority of enterprise LLM usage. Sources: Company revenue data from OpenAI and Anthropic public statements (2025); market structure described by Menlo Ventures "2025 LLM Market Update" and multiple industry analyses. Methodology note: Revenue figures reflect the dominant position of these providers; market characterized as high concentration across the sector.
  • $250B Azure services commitment - Source: Microsoft-OpenAI partnership announcement (October 2025). OpenAI contracted to purchase $250 billion of Azure cloud infrastructure services. Reuters, CNBC, Yahoo Finance covered extensively.
  • 40-60 min daily productivity gains - Source: OpenAI "State of Enterprise AI 2025" report survey of 9,000 workers across ~100 enterprises (December 2025). Self-reported time savings from using ChatGPT Enterprise and related tools. Methodology note: Vendor-published survey data; represents perceived time saved, not independently measured productivity.
  • 320x increase in reasoning token usage YoY - Source: OpenAI "State of Enterprise AI 2025" report (December 2025). Metric: "API reasoning token consumption per organisation increased 320x year-over-year." Represents growth in advanced reasoning capabilities (o1/o3 model family) usage among existing enterprise customers.
  • GitHub Copilot code generation - Copilot writes approximately 46% of code on average for developers actively using it, with some languages (e.g., Java) reaching up to 61%. Note: This figure is widely reported in secondary analyses of GitHub telemetry data. Primary sources (GitHub/Microsoft official statistics) should be consulted for authoritative figures.

⚠️ Modeling Assumptions (How Impacts Are Calculated)

Jobs at Risk Formula: (Scenario Base Companies × Time Multiplier × Global Scale) × 15 employees × (1 - Government Response / 150)

Where "Scenario Base Companies" is the scenario's baseline affected companies before time/global adjustments are applied.

Assumption: Average 15 employees per affected business. Source: US Small Business Administration reports average employer firm ~24.9 employees (2020), small firms ~11.7 employees. Our 15 is a conservative blend weighted toward smaller firms in the affected pool.

Economic Impact: Base Economic Impact × Cascade Multiplier × Time Multiplier × (1 - Government Response / 150) × Global Scale. Base values derived from the 2023 Silicon Valley Bank collapse ($16B cost, 50% of tech startups affected) scaled to OpenAI's broader reach (92% of Fortune 500 using OpenAI technology). Cascade multipliers (1.4×-2.5×) account for Microsoft exposure, the venture capital (VC) funding freeze, and productivity losses. Time Multiplier represents collapse speed: faster failures create more economic shock, slower failures allow more adaptation.
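For readers who want to check the arithmetic, both formulas can be written out directly. This is a minimal sketch: the per-setting time-multiplier values are not enumerated in the text, so 1.0 is used as a placeholder, and the example inputs (2.5M companies, $120B base impact, 2.5× cascade) are taken from the calibration constants documented in this section.

```python
EMPLOYEES_PER_COMPANY = 15    # SBA-derived blend (assumption documented above)
GOV_RESPONSE_DIVISOR = 150    # caps impact mitigation at ~66%
MARKET_CAP_MULTIPLE = 8       # SVB 6x / Lehman 16x conservative middle ground

def gov_multiplier(response_pct: float) -> float:
    """1.0 with no response, floored at 1/3 when response is 100%."""
    return 1 - response_pct / GOV_RESPONSE_DIVISOR

def jobs_at_risk(base_companies: float, time_mult: float,
                 global_scale: float, response_pct: float) -> float:
    affected = base_companies * time_mult * global_scale
    return affected * EMPLOYEES_PER_COMPANY * gov_multiplier(response_pct)

def economic_impact(base_impact: float, cascade: float, time_mult: float,
                    global_scale: float, response_pct: float) -> float:
    return (base_impact * cascade * time_mult
            * gov_multiplier(response_pct) * global_scale)

# Middle scenario at the default 50% intervention, U.S.-only scale:
jobs = jobs_at_risk(2_500_000, 1.0, 1.0, 50)        # ~25M jobs
impact = economic_impact(120e9, 2.5, 1.0, 1.0, 50)  # ~$200B
cap_loss = impact * MARKET_CAP_MULTIPLE             # ~$1.6T
```

Note how the 50% intervention setting multiplies everything by 1 - 50/150 = 2/3, and even a 100% response leaves a floor of one third of the unmitigated impact.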

📐 Model Calibration Constants (Transparent Assumptions)

These are calibration parameters based on historical precedents and reasonable estimates, not measured data:

  • Employees per affected company: 15 - Blend of SBA data (~24.9 for employer firms, ~11.7 for small firms)
  • Market cap multiple: 8× - Historical basis: SVB 6×, Lehman 16×, using conservative middle ground
  • Base recovery timeline: 180 days - Based on SVB taking 6 months to market stabilization
  • Government impact mitigation cap: 66% - Reflects policy limits in recreating technical capacity
  • Government recovery mitigation cap: 50% - Maximum emergency response acceleration
  • Scenario base companies: 1M/2.5M/5M - Calibrated estimates, not census data
  • Base economic impacts: $32B/$120B/$160B - SVB precedent ($16B) scaled to broader reach
  • Cascade multipliers: 1.4×/2.5×/2.0× - Account for Microsoft exposure and VC freeze
  • Global scale: Linear amplifier (1.0-2.5×) - Real cascades are non-linear; this is a simplified lens

These constants allow exploration of impact scales. They are explicitly model choices, not measured market data.

Government Response Limits: The 0-100% slider represents government intervention intensity, but mitigation is capped because policy cannot instantly recreate missing LLM capacity or organisational integrations:

  • Impact mitigation cap: ~66% - At 100% response, government can reduce first-order disruption by up to 66% (formula: 1 - 100/150 = 0.333 minimum multiplier, thus 66% max reduction). The remaining 34% reflects that liquidity support and confidence measures cannot instantly restore lost tooling, integrations, and workforce capabilities.
  • Recovery mitigation cap: 50% - Government response can cut recovery time by up to half through emergency support, regulatory relief, and coordination. Beyond this, technical rebuilding and organisational adaptation take unavoidable time.

Collapse Speed Floor (30% minimum): Even slow-motion failures retain a 30% impact floor because business dependencies are "sticky" - workflows, integrations, and organisational muscle memory cannot instantly revert. This reflects that LLM dependencies are operational, not just transactional.

Market Cap Loss: Economic Impact × 8. Historical basis: SVB was 6x, Lehman 2008 was 16x. Our 8x is conservative middle ground accounting for Microsoft's $108B stake, Nvidia sensitivity and broader AI sector contagion. Government response effects are already reflected in the economic impact calculation.

Recovery Timeline: 180 days (6 month baseline) × Scenario Cascade Multiplier × Collapse Speed Multiplier × (1 - Government Response × 0.5). Faster collapses require longer recoveries (inverse relationship). Based on SVB taking 6 months to stabilize after 48-hour collapse. Government response can reduce recovery time by up to 50%.
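The recovery formula can be sketched the same way. Here the government response is treated as a 0-1 fraction (as the "× 0.5" term implies), and the collapse-speed multiplier is a 1.0 placeholder since its per-setting values are not listed in the text.

```python
BASE_RECOVERY_DAYS = 180  # SVB 6-month stabilisation baseline

def recovery_days(cascade: float, speed_mult: float,
                  response_frac: float) -> float:
    # Government response (0.0-1.0) can cut recovery time by at most half
    return BASE_RECOVERY_DAYS * cascade * speed_mult * (1 - 0.5 * response_frac)

# Middle scenario (2.5x cascade) at 50% response: 337.5 days
days = recovery_days(2.5, 1.0, 0.5)
```

At a full 100% response the final term bottoms out at 0.5, which is exactly the 50% recovery mitigation cap described above.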

Fortune 500 Reference Frame: Fortune 500 affected figures represent U.S. headquartered companies and do not scale with the global multiplier. These companies have global operations, but the count itself is anchored to the U.S. corporate base. Global Scale impacts flow through economic and market cap calculations.

🌍 Global Impact Multiplier

This optional multiplier accounts for AI adoption and economic exposure outside the United States. The U.S. represents roughly 24% of global GDP, but frontier AI models are integrated across Europe, Asia, Africa, Latin America and the Middle East. Because global industries rely on shared cloud infrastructure and interconnected supply chains, disruptions can propagate internationally. The multiplier (1.0-2.5×) provides a conservative scaling estimate of global spillover effects.

📖 Historical Precedent: Silicon Valley Bank (2023)

Primary reference for concentration risk modeling:

  • Bank size: 16th largest U.S. bank, $167B assets
  • Customer base: ~50% of venture-backed tech companies
  • Collapse speed: 48 hours from announcement to seizure
  • Bank run magnitude: $34B withdrawn in 24 hours (fastest in history)
  • Direct cost: $16B FDIC fund depletion
  • Contagion: Signature Bank (2 days), Credit Suisse (9 days)
  • Government response: Full depositor guarantee within 72 hours
  • Recovery time: 6-9 months to market stabilization

Key parallel: SVB had dangerous concentration (50% of tech startups). OpenAI's technology penetration is nearly double that (92% of Fortune 500 using OpenAI-powered tools), and it affects much larger companies.

🔗 Direct Source Links (Primary Sources)

For maximum transparency, here are the primary sources for key claims:

  • $250B Azure commitment: Microsoft official blog - "The next chapter of the Microsoft–OpenAI partnership" (October 2025)
  • OpenAI $32B funding: OpenAI official funding announcement (March 31, 2025)
  • OpenAI $10.4B ARR & ~800M users: Financial Times reporting (November 2025), corroborated by TechCrunch, Reuters
  • Enterprise LLM market structure: Menlo Ventures "2025 LLM Market Update" and "The State of Generative AI in the Enterprise" reports
  • SVB $16B loss estimate: FDIC official estimates and Congressional reviews of Silicon Valley Bank collapse (2023)
  • Employee counts (15 per firm): US Small Business Administration "Employer Firm Demographics" reports showing ~24.9 avg for employer firms, ~11.7 for small firms (2020)
  • Note on secondary sources: Some figures (particularly GitHub Copilot code generation percentages) are reported in third-party analyses of telemetry data rather than official public releases. We acknowledge these limitations in our methodology.

Key Limitations & Uncertainties

This simulator has significant limitations:

  • No historical precedent: No AI system failure of this scale has ever occurred, making all estimates highly uncertain
  • Point estimates mask variance: Actual impacts could vary by 50-200% depending on specific circumstances, response effectiveness and adaptation speed
  • Assumes limited adaptation: Companies and governments may respond more effectively than our base models assume
  • Conservative on innovation: Doesn't account for rapid alternative solutions or workarounds that might emerge
  • Simplified dependencies: Real enterprise dependencies are more nuanced than our categorical approach captures
  • Likelihood not addressed: This models impact IF failure occurs, not probability that it will occur (which is likely low given current capitalisation and revenue)

Use responsibly: These scenarios are valuable for planning and preparedness, but should not be treated as forecasts or predictions.

Legal Disclaimer

This simulator is provided for educational and research purposes only. It is not financial, investment, legal, or business advice. All scenarios are hypothetical models based on historical precedents and publicly available data. Actual outcomes may differ significantly from any modeled scenario.

Users should conduct their own due diligence and consult qualified professionals (financial advisors, legal counsel, business consultants) before making any decisions. The creators and publishers of this tool assume no liability for any use, misuse, or decisions made based on this simulator. By using this tool, you acknowledge that you understand these limitations and agree to use it responsibly.

Copyright Notice: This tool and methodology are provided under educational fair use. May be freely shared with attribution to original source.

🏢 How Companies Actually Use These Systems

This isn't about employees asking ChatGPT questions. It's about AI built into how businesses function globally. In just 2-3 years, these tools became load-bearing infrastructure:

Software Development: GitHub Copilot writes 40-60% of code. Developers now rely on AI for syntax, scaffolding and lookups rather than working from memory. When it stops, teams can't meet deadlines without rewriting roadmaps.

Customer Service: AI chatbots handle 50-65% of questions. Call centres that once ran 200 agents now run 40, on the assumption that AI covers the basics. You can't rehire and retrain hundreds of people in weeks.

Business Communication: Microsoft 365 Copilot drafts emails, reports, summaries for millions of users. Marketing teams produce ten times the content they could manually. Productivity collapses when people must write everything by hand again.

Analysis & Reports: AI queries databases, generates reports, creates visualisations, writes executive summaries. Decision-making slows dramatically when insights take days instead of hours.

Legal & Compliance: Contract review, regulatory documents, due diligence all AI-accelerated. Legal teams sized for AI efficiency can't handle the same volume manually. Merger deals and compliance deadlines assume AI speed.

Product Development: Requirements, specifications, go-to-market strategies all AI-drafted. Product development velocity collapses without it.

The critical point: This reshaped team sizes (companies downsized assuming AI coverage), skills (employees' manual capabilities atrophied), timelines (deadlines assume AI speed), and workflows (business processes redesigned around AI).

It took 2-3 years to build these dependencies. They cannot be unwound in weeks. This pattern exists across London, Singapore, Frankfurt, Tokyo, Sydney - every major market.

💡 Why Large Language Models Specifically?

Because LLM capability cannot be recreated on demand. Training GPT-4 took months and hundreds of millions of dollars. When OpenAI released it, the training run was already complete. If OpenAI fails, Microsoft cannot simply spin up a replacement overnight.

This makes LLM dependencies uniquely fragile compared to other technology. If a conventional SaaS product fails, competitors can usually absorb its customers within days or weeks. But frontier LLMs? Training GPT-4 or Claude Opus requires months of compute on scarce specialised hardware, hundreds of millions of dollars, and rare research talent - none of which can be compressed on demand.

The time gap is the vulnerability. Companies rewired around these capabilities. If they vanish, rebuilding takes months minimum - even with unlimited funding. And months of full productivity loss creates economic damage on a scale we haven't seen from technology failure before.

This simulator models what that scale looks like.

© 2025 Paul Iliffe / Staying Human Project

This simulator is proprietary software. Unauthorised use, reproduction or distribution is prohibited without written permission.