What If It Stopped?

A stress test for AI infrastructure dependencies

What This Simulator Models

This simulator models what would happen if the AI systems businesses depend on suddenly stopped working.

Specifically: Large language models (LLMs) - the AI systems like ChatGPT, Claude, and Gemini that companies now use for writing, coding, analysis, and customer service. Not other AI like fraud detection or warehouse robots - those have different failure modes.

Why this matters: 92% of the world's 500 largest companies now depend on these systems for daily operations. This is a stress test to understand what that dependency means - not a prediction that failure will happen.

Think of this like a fire drill: we're not predicting a fire, we're planning for one.

Scenario Configuration

Collapse speed: 7 days (default)
How fast can it happen?
• 2 days: Silicon Valley Bank collapsed this quickly in 2023
• 7 days: When Lehman Brothers failed in 2008, the first week was chaos
• 14-30 days: If companies got some warning
• 30-90 days: If there was advance notice and time to prepare
Government intervention: 50% (default)
Historical context:
• 0-25%: Limited intervention (Lehman Brothers allowed to fail, 2008)
• 50%: Moderate support (SVB depositor guarantee within 72 hours, 2023)
• 75%: Major intervention (2008 TARP - $560B bank bailout)
• 100%: Full nationalization (unprecedented for tech infrastructure)
Global scale: 1.0× (default, U.S. only)
• 1.0×: U.S. only
• 1.5×: Advanced economies
• 2.0×: Global
• 2.5×: Global + supply chain

✓ Use This Tool To:

Assess your company's AI dependency risks
Plan for business continuity scenarios
Understand systemic risks in AI infrastructure
Inform academic research and policy discussions
Stress-test investment portfolios
Promote diversification strategies

✗ Do NOT Use This Tool To:

Make investment decisions without professional advice
Spread fear or panic about AI companies
Claim certainty about unknowable future events
Attack or harm specific companies
Short stocks based solely on this analysis
Justify opposition to AI development generally

What Can Be Done? Actionable Steps for Resilience

This isn't doom-and-gloom. Here are concrete actions to reduce systemic risk:

For Businesses

  • Adopt multi-provider strategies (OpenAI + Claude + Gemini)
  • Abstract API calls through compatibility layers
  • Maintain model-agnostic prompting practices
  • Regular disaster recovery testing
  • Document critical workflows for migration
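The "compatibility layer" bullet above can be sketched as a provider-agnostic router that falls back to the next vendor when one fails. Everything below (the `Provider` wrapper, `LLMRouter`, and the stub backends) is an illustrative assumption, not any vendor's actual SDK:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # wraps a vendor SDK call in real use

class LLMRouter:
    """Try providers in preference order; fail over on any error."""
    def __init__(self, providers: list[Provider]):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        last_error: Exception | None = None
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as exc:  # outage, rate limit, timeout...
                last_error = exc
        raise RuntimeError("all providers failed") from last_error

# Stub backends standing in for real vendor clients:
def primary_backend(prompt: str) -> str:
    raise TimeoutError("primary provider is down")

def backup_backend(prompt: str) -> str:
    return f"[backup] {prompt}"

router = LLMRouter([Provider("primary", primary_backend),
                    Provider("backup", backup_backend)])
print(router.complete("Summarise Q3 results"))  # [backup] Summarise Q3 results
```

In production, each `complete` callable would wrap a different vendor's SDK behind the same signature, so switching or adding providers never touches business logic.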

For Investors

  • Diversify AI exposure across providers
  • Assess portfolio dependencies on single models
  • Stress-test for OpenAI failure scenarios
  • Consider open-source hedges (Llama, Mistral)
  • Monitor concentration metrics regularly

For Policymakers

  • Monitor concentration risk in AI infrastructure
  • Establish API portability (being able to switch providers without rebuilding everything) standards
  • Create contingency frameworks (not bailouts)
  • Support competition and open-source alternatives
  • Require "living wills" for major AI providers

For AI Industry

  • Develop interoperability (systems talking to each other) standards
  • Support multi-cloud architectures
  • Publish dependency transparency reports
  • Collaborate on disaster recovery protocols
  • Build redundancy into infrastructure

Transparency: Methodology & Data Sources

You don't need to read this to use the simulator - but if you want to see how calculations work and where data comes from, it's all here.

Core Philosophy: This simulator uses verified 2025 data to model potential scenarios. Economic impacts are projections based on historical precedents and interdependency analysis, not predictions. Every assumption is documented. All economic figures are shown in nominal dollars.

Important Note on Uncertainty: Small changes in assumptions can produce large variations in outcomes; this is inherent to interconnected systems. The ranges shown reflect this uncertainty.

✓ Verified Data (Cross-Checkable Facts)

Data as of December 2025. All figures cross-referenced against primary sources and major financial press.

⚠️ Modeling Assumptions (How Impacts Are Calculated)

Jobs at Risk Formula: (Scenario Base Companies × Time Multiplier × Global Scale) × 15 employees × (1 - Government Response / 150)

Where "Scenario Base Companies" is the scenario's baseline affected companies before time/global adjustments are applied.

Assumption: Average 15 employees per affected business. Source: US Small Business Administration reports average employer firm ~24.9 employees (2020), small firms ~11.7 employees. Our 15 is a conservative blend weighted toward smaller firms in the affected pool.
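As a minimal sketch, the Jobs at Risk formula can be expressed directly in code; the 10,000-company base and the 1.0× time multiplier in the example are illustrative inputs, not the simulator's actual constants:

```python
def jobs_at_risk(base_companies: float, time_multiplier: float,
                 global_scale: float, government_response_pct: float,
                 employees_per_firm: int = 15) -> float:
    """Jobs at Risk = (base x time x global) x 15 employees
    x (1 - response / 150). Dividing by 150 rather than 100 means
    even a 100% government response leaves a third of the impact."""
    mitigation = 1 - government_response_pct / 150
    return (base_companies * time_multiplier * global_scale
            * employees_per_firm * mitigation)

# 10,000 affected companies, default 7-day speed (1.0x assumed),
# U.S.-only scale, 50% intervention:
print(round(jobs_at_risk(10_000, 1.0, 1.0, 50)))  # 100000
```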

Economic Impact: Base Economic Impact × Cascade Multiplier × Time Multiplier × (1 - Government Response / 150) × Global Scale. Base values derived from the 2023 Silicon Valley Bank collapse ($16B cost, 50% of tech startups affected), scaled to OpenAI's broader reach (92% of Fortune 500 using OpenAI technology). Knock-on effects (1.2x-2.5x) account for Microsoft exposure, a venture capital (VC) funding freeze and productivity losses. Time Multiplier represents collapse speed: faster failures create more economic shock, while slower failures allow more adaptation.
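The Economic Impact formula follows the same pattern; the $50B base figure, 2.0× cascade and other inputs below are hypothetical values chosen for illustration, not the simulator's calibrated constants:

```python
def economic_impact(base_impact_usd: float, cascade: float,
                    time_multiplier: float, government_response_pct: float,
                    global_scale: float) -> float:
    """Economic Impact = base x cascade x time
    x (1 - response / 150) x global scale.
    The cascade (knock-on) multiplier is 1.2x-2.5x in the model."""
    mitigation = 1 - government_response_pct / 150
    return base_impact_usd * cascade * time_multiplier * mitigation * global_scale

# Illustrative: $50B hypothetical base, 2.0x cascade, 7-day speed
# (1.0x assumed), 50% response, 2.0x global scale:
impact = economic_impact(50e9, 2.0, 1.0, 50, 2.0)
print(f"${impact / 1e9:.0f}B")  # $133B
```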

📐 Model Calibration Constants (Transparent Assumptions)

These constants are calibration parameters based on historical precedents and reasonable estimates. They exist to let users explore impact scales and are explicitly model choices, not measured market data.

Government Response Limits: The 0-100% slider represents government intervention intensity, but mitigation is capped because policy cannot instantly recreate missing LLM capacity or organisational integrations. In the formulas the response is divided by 150 rather than 100, so even a 100% response removes at most two-thirds of the modeled impact.
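A minimal sketch of that cap, using the (1 - Response / 150) mitigation term shared by the impact formulas:

```python
def residual_impact(government_response_pct: float) -> float:
    """Fraction of the modeled impact remaining after intervention.
    The divisor is 150, not 100, so mitigation is capped: even a
    100% government response leaves one third of the impact."""
    return 1 - government_response_pct / 150

print(round(residual_impact(0), 3))    # 1.0
print(round(residual_impact(50), 3))   # 0.667
print(round(residual_impact(100), 3))  # 0.333
```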

Collapse Speed Floor (30% minimum): Even slow-motion failures retain a 30% impact floor because business dependencies are "sticky" - workflows, integrations, and organisational muscle memory cannot instantly revert. This reflects that LLM dependencies are operational, not just transactional.
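One way to express the floor is a speed-to-multiplier mapping. The linear taper between 2 and 90 days below is our assumption for illustration, since the simulator's exact curve isn't published here; only the 30% floor itself comes from the text above:

```python
def collapse_speed_multiplier(days: int, fastest: int = 2,
                              slowest: int = 90,
                              floor: float = 0.30) -> float:
    """Map collapse speed to an impact multiplier: 1.0 at the fastest
    (2-day) collapse, tapering to the 30% floor at 90 days. The floor
    reflects 'sticky' operational dependencies that cannot unwind
    even with advance warning."""
    if days <= fastest:
        return 1.0
    fraction = max(0.0, (slowest - days) / (slowest - fastest))
    return floor + (1.0 - floor) * fraction

print(collapse_speed_multiplier(2))             # 1.0
print(round(collapse_speed_multiplier(90), 2))  # 0.3
```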

Market Cap Loss: Economic Impact × 8. Historical basis: SVB's ratio was roughly 6x and Lehman's (2008) roughly 16x; our 8x is a conservative middle ground accounting for Microsoft's $108B stake, Nvidia sensitivity and broader AI sector contagion. Government response effects are already reflected in the economic impact calculation.

Recovery Timeline: 180 days (6 month baseline) × Scenario Cascade Multiplier × Collapse Speed Multiplier × (1 - Government Response × 0.5). Faster collapses require longer recoveries (inverse relationship). Based on SVB taking 6 months to stabilize after 48-hour collapse. Government response can reduce recovery time by up to 50%.
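The recovery formula can be sketched as follows; the 1.5× cascade and 1.2× collapse-speed inputs in the example are illustrative assumptions:

```python
def recovery_days(cascade: float, collapse_speed_multiplier: float,
                  government_response_fraction: float,
                  baseline_days: int = 180) -> float:
    """Recovery = 180-day baseline x cascade x collapse-speed
    x (1 - 0.5 x response). Response is a 0-1 fraction here, so a
    full (100%) intervention halves recovery time, matching the
    stated 50% cap."""
    return (baseline_days * cascade * collapse_speed_multiplier
            * (1 - 0.5 * government_response_fraction))

# Illustrative: 1.5x cascade, fast collapse (1.2x speed multiplier
# assumed), 50% intervention:
print(round(recovery_days(1.5, 1.2, 0.5), 1))  # 243.0
```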

Fortune 500 Reference Frame: Fortune 500 affected figures represent U.S. headquartered companies and do not scale with the global multiplier. These companies have global operations, but the count itself is anchored to the U.S. corporate base. Global Scale impacts flow through economic and market cap calculations.

🌍 Global Impact Multiplier

This optional multiplier accounts for AI adoption and economic exposure outside the United States. The U.S. represents roughly 24% of global GDP, but frontier AI models are integrated across Europe, Asia, Africa, Latin America and the Middle East. Because global industries rely on shared cloud infrastructure and interconnected supply chains, disruptions can propagate internationally. The multiplier (1.0-2.5×) provides a conservative scaling estimate of global spillover effects.

📖 Historical Precedent: Silicon Valley Bank (2023)

Primary reference for concentration risk modeling:

Key parallel: SVB had dangerous concentration (50% of tech startups). OpenAI's technology penetration is nearly double that (92% of Fortune 500 using OpenAI-powered tools), and it affects much larger companies.

🔗 Direct Source Links (Primary Sources)

For maximum transparency, here are the primary sources for key claims:

Key Limitations & Uncertainties

This simulator has significant limitations:

  • No historical precedent: No AI system failure of this scale has ever occurred, making all estimates highly uncertain
  • Point estimates mask variance: Actual impacts could vary by 50-200% depending on specific circumstances, response effectiveness and adaptation speed
  • Assumes limited adaptation: Companies and governments may respond more effectively than our base models assume
  • Conservative on innovation: Doesn't account for rapid alternative solutions or workarounds that might emerge
  • Simplified dependencies: Real enterprise dependencies are more nuanced than our categorical approach captures
  • Likelihood not addressed: This models impact IF failure occurs, not probability that it will occur (which is likely low given current capitalisation and revenue)

Use responsibly: These scenarios are valuable for planning and preparedness, but should not be treated as forecasts or predictions.

Legal Disclaimer

This simulator is provided for educational and research purposes only. It is not financial, investment, legal, or business advice. All scenarios are hypothetical models based on historical precedents and publicly available data. Actual outcomes may differ significantly from any modeled scenario.

Users should conduct their own due diligence and consult qualified professionals (financial advisors, legal counsel, business consultants) before making any decisions. The creators and publishers of this tool assume no liability for any use, misuse, or decisions made based on this simulator. By using this tool, you acknowledge that you understand these limitations and agree to use it responsibly.

Copyright Notice: This tool and methodology are provided under educational fair use. May be freely shared with attribution to original source.

🏢 How Companies Actually Use These Systems

This isn't about employees asking ChatGPT questions. It's about AI built into how businesses function globally. In just 2-3 years, these tools became load-bearing infrastructure:

Software Development: GitHub Copilot writes 40-60% of code. Many developers have stopped memorising syntax and rely on AI instead of documentation. If it stops, teams cannot meet deadlines without rewriting roadmaps.

Customer Service: AI chatbots handle 50-65% of questions. Call centres went from 200 agents to 40, assuming AI covers basics. You can't rehire and retrain hundreds of people in weeks.

Business Communication: Microsoft 365 Copilot drafts emails, reports, summaries for millions of users. Marketing teams produce ten times the content they could manually. Productivity collapses when people must write everything by hand again.

Analysis & Reports: AI queries databases, generates reports, creates visualisations, writes executive summaries. Decision-making slows dramatically when insights take days instead of hours.

Legal & Compliance: Contract review, regulatory documents, due diligence all AI-accelerated. Legal teams sized for AI efficiency can't handle the same volume manually. Merger deals and compliance deadlines assume AI speed.

Product Development: Requirements, specifications, go-to-market strategies all AI-drafted. Product development velocity collapses without it.

The critical point: This reshaped team sizes (companies downsized assuming AI coverage), skills (employees' manual capabilities atrophied), timelines (deadlines assume AI speed), and workflows (business processes redesigned around AI).

It took 2-3 years to build these dependencies. They cannot be unwound in weeks. This pattern exists across London, Singapore, Frankfurt, Tokyo, Sydney - every major market.

💡 Why Large Language Models Specifically?

Because LLM capability cannot be recreated on demand. Training GPT-4 took months and hundreds of millions of pounds. When OpenAI released it, the training run was already complete. If OpenAI fails, Microsoft cannot just spin up a replacement overnight.

This makes LLM dependencies uniquely fragile compared to other technology:

The irreversibility: Moving from ChatGPT to Claude isn't like switching email providers - it's like rebuilding your entire workflow from scratch. Every integration, every fine-tuned prompt, every business process.

That's why this simulator exists: to model what happens when something this deeply embedded, this expensive to recreate, and this concentrated (three companies control most of it) suddenly stops working.

© 2025 Paul Iliffe / Staying Human Project

This simulator is proprietary software. Unauthorised use, reproduction or distribution is prohibited without written permission.