A stress test for AI infrastructure dependencies
This models what would happen if the AI systems businesses depend on suddenly stopped working.
Specifically: Large language models (LLMs) - the AI systems like ChatGPT, Claude, and Gemini that companies now use for writing, coding, analysis, and customer service. Not other AI like fraud detection or warehouse robots - those have different failure modes.
Why this matters: 92% of the world's 500 largest companies now depend on these systems for daily operations. This is a stress test to understand what that dependency means - not a prediction that failure will happen.
Think of this like a fire drill: we're not predicting a fire, we're planning for one.
This isn't doom-and-gloom: the goal is to surface concrete actions that reduce systemic risk.
You don't need to read this to use the simulator - but if you want to see how calculations work and where data comes from, it's all here.
Core Philosophy: This simulator uses verified 2025 data to model potential scenarios. Economic impacts are projections based on historical precedents and interdependency analysis, not predictions. Every assumption is documented. All economic figures are shown in nominal dollars.
Important Note on Uncertainty: Small changes in assumptions can produce large variations in outcomes; this is inherent to interconnected systems. The ranges shown reflect this uncertainty.
Data as of December 2025. All figures cross-referenced against primary sources and major financial press.
Jobs at Risk Formula: (Scenario Base Companies × Time Multiplier × Global Scale) × 15 employees × (1 - Government Response / 150)
Where "Scenario Base Companies" is the scenario's baseline affected companies before time/global adjustments are applied.
Assumption: Average 15 employees per affected business. Source: US Small Business Administration reports average employer firm ~24.9 employees (2020), small firms ~11.7 employees. Our 15 is a conservative blend weighted toward smaller firms in the affected pool.
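The Jobs at Risk formula can be expressed as a small function. This is a minimal Python sketch of the published formula, not the simulator's actual code; the function and variable names are illustrative, while the constants (15 employees per firm, the /150 response divisor) come directly from the text above.

```python
EMPLOYEES_PER_FIRM = 15  # conservative blend of SBA figures (see above)

def jobs_at_risk(base_companies: float,
                 time_multiplier: float,
                 global_scale: float,
                 gov_response_pct: float) -> float:
    """(Base Companies x Time x Global) x 15 x (1 - Gov Response / 150)."""
    affected_firms = base_companies * time_multiplier * global_scale
    # Dividing the 0-100% slider by 150 caps mitigation at ~67%,
    # reflecting that policy cannot instantly recreate LLM capacity.
    mitigation = 1 - gov_response_pct / 150
    return affected_firms * EMPLOYEES_PER_FIRM * mitigation

# Example: 10,000 base firms, 1.5x time multiplier, 1.2x global scale,
# 50% government response -> 18,000 firms x 15 x (2/3) = 180,000 jobs
```

Note how the /150 divisor implements the intervention cap described below: even a 100% government response leaves roughly a third of the impact in place.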
Economic Impact: Base Economic Impact × Cascade Multiplier × Time Multiplier × (1 - Government Response / 150) × Global Scale. Base values derived from the 2023 Silicon Valley Bank collapse ($16B cost, 50% of tech startups affected), scaled to OpenAI's broader reach (92% of Fortune 500 using OpenAI technology). Knock-on effects (1.2x-2.5x) account for Microsoft exposure, a venture capital (VC) funding freeze and productivity losses. The Time Multiplier represents collapse speed: faster failures create more economic shock, slower failures allow more adaptation.
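The Economic Impact formula, sketched in Python under the same caveat: names are illustrative, and the $16B SVB-derived base and 1.2x-2.5x cascade range come from the text above, not from measured data.

```python
def economic_impact(base_impact_usd: float,
                    cascade_multiplier: float,   # 1.2x-2.5x knock-on effects
                    time_multiplier: float,      # collapse-speed effect
                    gov_response_pct: float,     # 0-100 slider
                    global_scale: float) -> float:
    """Base x Cascade x Time x (1 - Gov Response / 150) x Global Scale."""
    # Same capped-mitigation term as the jobs formula.
    mitigation = 1 - gov_response_pct / 150
    return (base_impact_usd * cascade_multiplier * time_multiplier
            * mitigation * global_scale)

# Example: the $16B SVB-derived base with a 2.0x cascade, no time or
# global adjustment, and no government response -> $32B
```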
These are calibration parameters based on historical precedents and reasonable estimates, not measured data:
These constants allow exploration of impact scales. They are explicitly model choices, not measured market data.
Government Response Limits: The 0-100% slider represents government intervention intensity, but mitigation is capped because policy cannot instantly recreate missing LLM capacity or organisational integrations:
Collapse Speed Floor (30% minimum): Even slow-motion failures retain a 30% impact floor because business dependencies are "sticky" - workflows, integrations, and organisational muscle memory cannot instantly revert. This reflects that LLM dependencies are operational, not just transactional.
Market Cap Loss: Economic Impact × 8. Historical basis: SVB was 6x, Lehman 2008 was 16x. Our 8x is conservative middle ground accounting for Microsoft's $108B stake, Nvidia sensitivity and broader AI sector contagion. Government response effects are already reflected in the economic impact calculation.
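The market cap multiple with its historical anchors as named constants; a sketch under the assumptions stated above (the constant names are mine, the values are from the paragraph).

```python
SVB_MULTIPLE = 6       # 2023 SVB: market cap loss ~6x direct cost
LEHMAN_MULTIPLE = 16   # 2008 Lehman Brothers: ~16x
MODEL_MULTIPLE = 8     # conservative middle ground used by this model

def market_cap_loss(economic_impact_usd: float) -> float:
    # No separate government-response term here: mitigation is already
    # baked into the economic impact figure passed in.
    return economic_impact_usd * MODEL_MULTIPLE
```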
Recovery Timeline: 180 days (6 month baseline) × Scenario Cascade Multiplier × Collapse Speed Multiplier × (1 - Government Response × 0.5). Faster collapses require longer recoveries (inverse relationship). Based on SVB taking 6 months to stabilize after 48-hour collapse. Government response can reduce recovery time by up to 50%.
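A sketch of the recovery formula. One assumption is worth flagging: the text does not state the units of the government response term here, so this sketch treats it as a 0-1 fraction, which makes a maximal response cut recovery time by exactly the stated 50% ceiling.

```python
BASELINE_RECOVERY_DAYS = 180  # 6-month baseline, from SVB stabilisation

def recovery_days(cascade_multiplier: float,
                  collapse_speed_multiplier: float,
                  gov_response_fraction: float) -> float:
    """180 x Cascade x Collapse Speed x (1 - Gov Response x 0.5).

    gov_response_fraction is assumed to be 0.0-1.0, so a full
    response reduces recovery time by at most 50%.
    """
    return (BASELINE_RECOVERY_DAYS * cascade_multiplier
            * collapse_speed_multiplier
            * (1 - gov_response_fraction * 0.5))
```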
Fortune 500 Reference Frame: Fortune 500 affected figures represent U.S. headquartered companies and do not scale with the global multiplier. These companies have global operations, but the count itself is anchored to the U.S. corporate base. Global Scale impacts flow through economic and market cap calculations.
This optional multiplier accounts for AI adoption and economic exposure outside the United States. The U.S. represents roughly 24% of global GDP, but frontier AI models are integrated across Europe, Asia, Africa, Latin America and the Middle East. Because global industries rely on shared cloud infrastructure and interconnected supply chains, disruptions can propagate internationally. The multiplier (1.0-2.5×) provides a conservative scaling estimate of global spillover effects.
Primary reference for concentration risk modeling:
Key parallel: SVB had dangerous concentration (50% of tech startups). OpenAI's technology penetration is nearly double that (92% of Fortune 500 using OpenAI-powered tools), affecting much larger companies.
For maximum transparency, here are the primary sources for key claims:
This simulator has significant limitations:
Use responsibly: These scenarios are valuable for planning and preparedness, but should not be treated as forecasts or predictions.
This simulator is provided for educational and research purposes only. It is not financial, investment, legal, or business advice. All scenarios are hypothetical models based on historical precedents and publicly available data. Actual outcomes may differ significantly from any modeled scenario.
Users should conduct their own due diligence and consult qualified professionals (financial advisors, legal counsel, business consultants) before making any decisions. The creators and publishers of this tool assume no liability for any use, misuse, or decisions made based on this simulator. By using this tool, you acknowledge that you understand these limitations and agree to use it responsibly.
Copyright Notice: This tool and methodology are provided under educational fair use. May be freely shared with attribution to original source.
This isn't about employees asking ChatGPT questions. It's about AI built into how businesses function globally. In just 2-3 years, these tools became load-bearing infrastructure:
Software Development: GitHub Copilot writes 40-60% of code. Many developers no longer remember how to solve problems without AI assistance. When it stops, teams can't meet existing deadlines without rewriting roadmaps.
Customer Service: AI chatbots handle 50-65% of questions. Call centres went from 200 agents to 40 on the assumption that AI covers the basics. You can't rehire and retrain hundreds of people in weeks.
Business Communication: Microsoft 365 Copilot drafts emails, reports, summaries for millions of users. Marketing teams produce ten times the content they could manually. Productivity collapses when people must write everything by hand again.
Analysis & Reports: AI queries databases, generates reports, creates visualisations, writes executive summaries. Decision-making slows dramatically when insights take days instead of hours.
Legal & Compliance: Contract review, regulatory documents, due diligence all AI-accelerated. Legal teams sized for AI efficiency can't handle the same volume manually. Merger deals and compliance deadlines assume AI speed.
Product Development: Requirements, specifications, go-to-market strategies all AI-drafted. Product development velocity collapses without it.
The critical point: This reshaped team sizes (companies downsized assuming AI coverage), skills (employees' manual capabilities atrophied), timelines (deadlines assume AI speed), and workflows (business processes redesigned around AI).
It took 2-3 years to build these dependencies. They cannot be unwound in weeks. This pattern exists across London, Singapore, Frankfurt, Tokyo, Sydney - every major market.
Because LLM capability cannot be recreated on demand. Training GPT-4 took months and cost hundreds of millions of dollars. When OpenAI released it, the training run was already complete. If OpenAI fails, Microsoft cannot just spin up a replacement overnight.
This makes LLM dependencies uniquely fragile compared to other technology:
The irreversibility: Moving from ChatGPT to Claude isn't like switching email providers - it's like rebuilding your entire workflow from scratch. Every integration, every fine-tuned prompt, every business process.
That's why this simulator exists: to model what happens when something this deeply embedded, this expensive to recreate, and this concentrated (three companies control most of it) suddenly stops working.
© 2025 Paul Iliffe / Staying Human Project
This simulator is proprietary software. Unauthorised use, reproduction or distribution is prohibited without written permission.