
The Day After AGI - 2026 Davos WEF Discussion

When will AGI arrive, what happens to jobs, and can the risks be controlled? Two titans of the AI industry share their outlook at Davos.

2026-01-22 • 4 min read

#AI #AGI


  • Moderator: Zanny Minton Beddoes (Editor-in-Chief, The Economist)

  • Dario Amodei (CEO, Anthropic)

  • Demis Hassabis (CEO, Google DeepMind)

1. When Will We Reach AGI?

| | Dario | Demis |
|---|---|---|
| Prediction | Within 1-2 years | 5-10 years (late 2020s) |
| Reasoning | Models proficient at coding/AI research → generate the next model → a development acceleration loop. Engineers at Anthropic already "don't write code directly, just edit what the model writes" | Coding/math move fast because verification is easy; the natural sciences require experimental verification. "Coming up with the questions themselves" is still lacking |
| Uncertainty | Chip manufacturing and model training time -- the parts AI can't accelerate | Can the self-improvement loop truly close without human intervention? |

2. Industry Landscape

Demis:

  • "I was confident we'd return to the top of the leaderboards"
  • Why: Deepest and broadest research talent + startup mindset re-injected
  • Google DeepMind = "Google's engine room"

Dario (on independent AI company viability):

  • Revenue growth: 2023 $0→$100M / 2024 $100M→$1B / 2025 $1B→$10B (10x each year)
  • Key: Building the best model in your focus area

3. Winner-Take-All vs. General Technology?

Demis: "It's absolutely not" a winner-take-all market -- AI is a general technology

| Area | Outlook |
|---|---|
| Coding/Math | Self-improvement loop already working |
| Natural sciences | May require AGI itself (hard to verify, a "messy" domain) |
| Robotics | If hardware is part of the loop, self-improvement speed could be limited |

4. AI Risk (Preview of Dario's Follow-Up Essay)

Core stance: AI can help cure cancer, fight tropical diseases, and understand the universe. But risks exist too.

Essay framing: Quoting Carl Sagan's "Contact"

"How did you manage to get through your technological adolescence without destroying yourselves?"

| Risk area | Details |
|---|---|
| Control problem | How do you control highly autonomous systems smarter than humans? |
| Individual misuse | Bioterrorism, etc. |
| State misuse | Exploitation by authoritarian governments |
| Economic impact | Labor displacement |
| The unexpected | Could be the hardest to deal with |

Response: Individual efforts by industry leaders + collaboration + role of governments and social institutions

5. Impact on Jobs

| | Dario | Demis |
|---|---|---|
| Now | "Very small beginnings" in software/coding | Starting to affect junior/intern-level roles this year |
| Short term | Half of entry-level office jobs at risk within 1-5 years | Some displacement plus new job creation (similar to past tech revolutions) |
| Key concern | If AI becomes better than humans at everything within 1-2 years, the labor market may not adapt fast enough | Post-AGI is "completely uncharted territory" |

Demis's advice: "I would tell college students to become incredibly proficient with these tools"

6. Government Preparedness & the Question of Meaning/Purpose

Dario: "Government preparedness is nowhere near sufficient. I'm surprised that even meeting economists at Davos, hardly anyone is thinking about this"

Demis:

  • The question of meaning and purpose could be harder than the economic problem
  • Optimistic take: We already do things unrelated to economic gain -- extreme sports, art, etc.
  • "We'll explore the universe too"

7. Risk of Public Backlash

Demis:

  • Risk exists (fears about jobs and livelihoods are rational)
  • Industry responsibility: Need to show more things that are "unambiguously good for the world" like AlphaFold
  • What's needed: Minimum safety standards for deployment (affects all of humanity across borders)

8. China Policy

Dario's core argument: Chip export ban is the most important thing

"It's like selling nuclear weapons to North Korea and saying it's good for Boeing's bottom line"

  • With a chip ban in place, US-China competition becomes "a competition between me and Demis" -- and that's manageable.
  • This single measure is far more effective than other, more aggressive China policies.

9. AI Safety & Doomerism

| | Dario | Demis |
|---|---|---|
| Stance | Skeptical of doomerism; doesn't agree with "we're doomed, there's nothing we can do" | Has worked on this for 20+ years; aware of the risks, but they're solvable |
| Condition | Manageable if we work together; problems arise if we race without guardrails | Needs time, focus, and top talent working together; gets harder if fragmented |
| Key point | We can control and steer it through science | "A very tractable problem if we have time" |

Anthropic's research: Mechanistic Interpretability -- looking inside models to understand why they behave the way they do

10. The Fermi Paradox and AI Risk

Question: Isn't the strongest argument for doomerism the Fermi Paradox?

Dario:

  • "That can't be the reason -- we should be seeing AIs coming from somewhere, but we don't"
  • No Dyson spheres, no structures of any kind
  • Personal view: "We've already passed the Great Filter. It was probably the evolution of multicellular life"
  • "What happens next is ours to write"

11. What to Watch Next Year

| Dario | Demis |
|---|---|
| The AI-building-AI loop -- how it plays out determines whether it takes a few more years or we face "wonder and a big emergency" | World models, continual learning, and a potential "breakthrough moment" in robotics |

Moderator's closing remark: "Based on what you've both said, I think we should all be hoping it takes a little longer"

Demis: "I'd prefer that too. It would be better for the world"


Key Takeaways

| Topic | Dario Amodei | Demis Hassabis |
|---|---|---|
| AGI timeline | Could happen within 1-2 years | 5-10 years (late 2020s) |
| Self-improvement loop | Already started in coding; will close quickly | Some domains may require AGI itself |
| Job impact | Half of entry-level office jobs at risk within 1-5 years | New jobs created short term; post-AGI is unknown |
| Doomerism | Skeptical -- risks are manageable | Solvable if we have time |
| China chip exports | Strongly supports the export ban (nuclear weapons analogy) | Calls for international cooperation and minimum safety standards |
| What to watch next year | The AI-building-AI loop | World models, continual learning, robotics |
