
When will AGI arrive, what happens to jobs, and can the risks be controlled? Two titans of the AI industry share their outlook at Davos.
Moderator: Zanny Minton Beddoes (Editor-in-Chief, The Economist)
Dario Amodei (CEO, Anthropic)
Demis Hassabis (CEO, Google DeepMind)
| | Dario | Demis |
|---|---|---|
| Prediction | Within 1-2 years | 5-10 years (late 2020s) |
| Reasoning | Models proficient at coding/AI research → generate next model → development acceleration loop. Engineers at Anthropic already "don't write code directly, just edit what the model writes" | Coding/math are fast because verification is easy. Natural sciences require experimental verification. "Coming up with the questions themselves" is still lacking |
| Uncertainty | Chip manufacturing, model training time -- parts that AI can't accelerate | Can the self-improvement loop truly close without human intervention? |
Demis:
Dario (on independent AI company viability):
Demis: "It's absolutely not a general technology"
| Area | Outlook |
|---|---|
| Coding/Math | Self-improvement loop already working |
| Natural Sciences | May require AGI itself (hard to verify, a "messy" domain) |
| Robotics | If hardware is part of the loop, self-improvement speed could be limited |
Core stance: AI can help cure cancer, fight tropical diseases, and understand the universe. But risks exist too.
Essay framing: Quoting Carl Sagan's "Contact"
"How did you manage to get through your technological adolescence without destroying yourselves?"
| Risk Area | Details |
|---|---|
| Control problem | How do you control highly autonomous systems smarter than humans? |
| Individual misuse | Bioterrorism, etc. |
| State misuse | Exploitation by authoritarian governments |
| Economic impact | Labor displacement |
| The unexpected | Could be the hardest to deal with |
Response: Individual efforts by industry leaders + collaboration + role of governments and social institutions
| | Dario | Demis |
|---|---|---|
| Now | "Very small beginnings" in software/coding | Starting to affect junior/intern-level roles this year |
| Short-term | Half of entry-level office jobs at risk within 1-5 years | Some displacement + new job creation (similar to past tech revolutions) |
| Key concern | If AI becomes better than humans at everything within 1-2 years, the labor market may not be able to adapt fast enough | Post-AGI is "completely uncharted territory" |
Demis's advice: "I would tell college students to become incredibly proficient with these tools"
Dario: "Government preparedness is nowhere near sufficient. I'm surprised that even meeting economists at Davos, hardly anyone is thinking about this"
Demis:
Dario's core argument: the ban on advanced-chip exports to China is the single most important measure
"It's like selling nuclear weapons to North Korea and saying it's good for Boeing's bottom line"
| | Dario | Demis |
|---|---|---|
| Stance | Skeptical of doomerism; doesn't agree that "we're doomed, there's nothing we can do" | Has worked on this for 20+ years; aware of the risks, but sees them as solvable |
| Condition | Manageable if we work together. Problems arise if we race without guardrails | Needs time, focus, and top talent working together. Gets harder if fragmented |
| Key point | We can control and steer it through science | "A very tractable problem if we have time" |
Anthropic's research: Mechanistic Interpretability -- looking inside models to understand why they behave the way they do
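To make "looking inside" concrete, here is a minimal, hypothetical sketch of one starting point for this kind of work: using a PyTorch forward hook to record the intermediate activations of a small open model (GPT-2 via the Hugging Face transformers library) so they can be inspected. This is a generic illustration, not Anthropic's actual tooling; the model, layer index, and prompt are arbitrary choices.

```python
# Sketch: record intermediate activations of GPT-2 with a forward hook.
# Hypothetical illustration only -- not Anthropic's tooling.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # For a GPT-2 block, output[0] is the hidden-state tensor
        activations[name] = output[0].detach()
    return hook

# Hook one transformer block (block 5, chosen arbitrarily)
model.h[5].register_forward_hook(save_activation("block_5"))

inputs = tokenizer("Interpretability means looking inside the model.",
                   return_tensors="pt")
with torch.no_grad():
    model(**inputs)

print(activations["block_5"].shape)  # (batch, sequence_length, hidden_size)
```

From activations like these, interpretability work then tries to identify which internal features or circuits drive a given behavior; the hook above is only the data-collection step.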
Question: Isn't the strongest argument for doomerism the Fermi Paradox?
Dario:
| Dario | Demis |
|---|---|
| The AI-building-AI loop -- how this plays out determines whether it takes a few more years or we face "wonder and a big emergency" | World Models, Continual Learning, potential "breakthrough moment" in Robotics |
Moderator's closing remark: "Based on what you've both said, I think we should all be hoping it takes a little longer"
Demis: "I'd prefer that too. It would be better for the world"
| Topic | Dario Amodei | Demis Hassabis |
|---|---|---|
| AGI timeline | Could happen within 1-2 years | 5-10 years (late 2020s) |
| Self-improvement loop | Already started in coding, will close quickly | Some domains require AGI itself |
| Job impact | Half of entry-level office jobs at risk within 1-5 years | New jobs created short-term, post-AGI is unknown |
| Doomerism | Skeptical -- risks are manageable | Solvable if we have time |
| China chip exports | Strongly against (nuclear weapons analogy) | Need international cooperation/minimum safety standards |
| What to watch next year | The AI-building-AI loop | World models, continual learning, robotics |