Consuming Davos in 30 Minutes: An Experiment in AI-Assisted Information Synthesis
How I used Anthropic's Opus 4.5 to watch 243 videos I didn't have time for, and what it taught me about the future of knowledge consumption
I watched exactly one session from Davos 2026 in full: Dario Amodei and Demis Hassabis discussing “The World After AGI.” It was fascinating: two of the most important people in AI, disagreeing politely about timelines while agreeing that everything is about to change.
But here’s the thing: the World Economic Forum uploaded 243 videos from this year’s meeting. At an average of 15-20 minutes each, that’s roughly 60-80 hours of content. I don’t have 60 hours. I have a company to run, a product to ship, users to support.
This is the fundamental tension of our information age. The content that matters keeps growing. Our time doesn’t. For years, we’ve dealt with this through curation, letting algorithms or editors decide what’s worth our attention. But curation means missing things. It means trusting someone else’s judgment about what matters to you.
So I ran an experiment. What if AI could help me consume all of Davos, not by summarizing it into meaninglessness, but by creating a structured synthesis I could actually engage with?
Here’s what happened.
The Setup
The goal was simple: extract, synthesize, and analyze every significant discussion from Davos 2026. Not summaries. Not bullet points. A comprehensive document that captured who said what, where they agreed, where they disagreed, and what predictions they made.
The tools:
Claude Opus 4.5 (via Cursor)
yt-dlp for transcript extraction
Python for automation
About 15 minutes of my time, spread across an evening
The process unfolded in three phases.
Phase 1: Acquisition
First, I needed the transcripts. YouTube auto-generates captions for most videos, and the official WEF playlist had everything in one place.
I asked Opus 4.5 to write a Python script that would:
Pull all video IDs from the WEF 2026 playlist
Extract transcripts using YouTube’s auto-generated captions
Clean up the VTT format (timestamps, duplicates, HTML artifacts)
Save each transcript as a markdown file with metadata
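For a sense of what the download half of that script involves, here is a minimal sketch using yt-dlp's Python API. The options, output template, and playlist URL are placeholders of mine, not the script Opus 4.5 actually produced:
import yt_dlp
# Grab auto-generated English captions (no video) for every item in a playlist
ydl_opts = {
    'skip_download': True,        # captions only, no video files
    'writeautomaticsub': True,    # YouTube's auto-generated captions
    'subtitleslangs': ['en'],
    'subtitlesformat': 'vtt',
    'outtmpl': 'captions/%(id)s.%(ext)s',
    'ignoreerrors': True,         # some videos have no English track at all
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/playlist?list=...'])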
Opus 4.5’s first version didn’t work: YouTube’s transcript API has quirks. The script failed silently on videos without English captions, and the VTT parser left duplicate lines everywhere.
Here’s what the raw output looked like initially:
And our next guest is on the front lines
And our next guest is on the front lines
And our next guest is on the front lines
of the AI revolution. Joining us right
Classic auto-caption stuttering. Every phrase repeated 2-3 times as the speech recognition refined its guess.
Opus 4.5 iterated. Added deduplication logic. Implemented yt-dlp as a fallback. Cleaned HTML entities. After three rounds of debugging, the script worked:
import re

# Parse VTT format - deduplicate lines ('content' holds the raw VTT text for one video)
lines = []
seen_lines = set()
for line in content.split('\n'):
    line = line.strip()
    # Skip headers, timestamps, and empty lines
    if not line or line.startswith('WEBVTT') or '-->' in line:
        continue
    # Clean HTML tags and entities
    line = re.sub(r'<[^>]+>', '', line)
    line = line.replace('&gt;', '>').replace('&lt;', '<')
    if line and line not in seen_lines:
        seen_lines.add(line)
        lines.append(line)
Result: 243 transcripts, totaling roughly 500,000 words of content.
Phase 2: Synthesis
Now the interesting part. I had half a million words of raw transcripts. I needed structure.
My prompt to Opus 4.5 was deliberately open-ended:
“Review all of them in extreme detail and present a synthesis document that’s detailed and lists out key topics discussed, different positions proposed and forecasts, who said what on each topic.”
Opus 4.5 began reading transcripts in batches. Not skimming; actually processing the content and building a structured document. The synthesis emerged organically:
Major themes identified:
AI timelines and capabilities (AGI predictions from Amodei, Hassabis, Musk)
The collapse of the international order (Carney, Merz, Zelenskyy)
The Greenland crisis (unified European rejection of Trump’s demands)
Energy constraints on AI (Musk’s solar thesis)
Job displacement and transformation (IMF data, Dimon’s predictions)
Middle East dynamics (Qatar PM on Gaza, Iran)
European defense and unity (von der Leyen, Macron, Merz)
What struck me were the disagreements Opus 4.5 surfaced. This is what summaries usually lose. The tension. The debate. The places where smart people look at the same facts and reach different conclusions.
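One practical note if you want to reproduce this phase: half a million words won’t fit in a single context window, so the transcripts have to reach the model in batches. Here is a rough sketch of that chunking, assuming the Phase 1 markdown files sit in a transcripts/ directory; the word budget is an arbitrary figure of mine, not the one used in the original run:
from pathlib import Path
# Group transcripts into batches small enough to fit a context window
WORD_BUDGET = 60_000   # assumption; tune to the model you're using
batches, current, current_words = [], [], 0
for path in sorted(Path('transcripts').glob('*.md')):
    words = len(path.read_text(encoding='utf-8').split())
    if current and current_words + words > WORD_BUDGET:
        batches.append(current)
        current, current_words = [], 0
    current.append(path)
    current_words += words
if current:
    batches.append(current)
print(f"{sum(len(b) for b in batches)} transcripts -> {len(batches)} batches")
# Each batch then goes to the model with the synthesis prompt, and the running
# synthesis document gets updated after every batch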
Phase 3: Probability Analysis
Here’s where it got interesting. I asked Opus 4.5 to do something unusual:
“Analyze each position and assign probabilities yourself based on your knowledge of human history and tech development.”
Opus 4.5 searched the web for current data: IEA reports on solar deployment, IMF forecasts, expert surveys on AGI timelines, SpaceX’s actual progress on rocket reusability. Then it produced probability assessments:
High Confidence (>75% likely):
40-60% of jobs affected by AI (85-95%)
US-China AI duopoly persisting (85-95%)
Full SpaceX rocket reusability by 2026 (70-85%)
Medium Confidence (40-75%):
AGI by end of decade (55-65%)
Humanoid robots for sale by 2027 (60-75%)
Low Confidence (<40%):
AGI by 2026-2027 (25-35%)
Ukraine war ending in 2026 (20-30%)
Gaza Phase 2 success (25-40%)
Very Low Confidence (<15%):
More robots than people by 2031 (5-15%)
US acquiring Greenland (2-5%)
The reasoning mattered more than the numbers. On Musk’s prediction of “more robots than people by 2031”:
“Global population: ~8 billion. Current global humanoid deployment: ~2,500. Even with 52% CAGR projected, reaching 8 billion by 2031 would require manufacturing capacity that doesn’t exist.”
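That back-of-the-envelope check is easy to reproduce. Using the figures quoted above, roughly 2,500 humanoids today, a projected 52% CAGR, and about five years to 2031, the arithmetic looks like this:
# Sanity check on 'more robots than people by 2031', using the quoted figures
current_units = 2_500    # rough current humanoid deployment
cagr = 0.52              # projected annual growth rate
years = 5                # roughly 2026 -> 2031
projected = current_units * (1 + cagr) ** years
print(f"Units by 2031 at 52% CAGR: {projected:,.0f}")            # ~20,000
required = (8_000_000_000 / current_units) ** (1 / years) - 1
print(f"Growth rate needed to reach 8 billion: {required:.0%}")  # ~1900% per year
Twenty thousand units at the optimistic growth rate, against a target of eight billion.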
On AGI timelines:
“Musk has a track record of aggressive timelines (Full Self-Driving was promised for 2017, 2018, 2019, 2020...). However, expert surveys show median predictions have collapsed from 50 years to ~5 years.”
This isn’t Opus 4.5 having opinions. It’s Opus 4.5 synthesizing available evidence and being explicit about uncertainty. That’s more useful than false confidence.
The Output
The final synthesis document runs about 1,000 lines. It includes:
Executive Summary - Key takeaways in one page
16 Thematic Sections - Deep dives on AI, geopolitics, economics, energy, healthcare
Speaker Directory - Who said what, with their positions and affiliations
Prediction Tables - Every forecast, organized by timeline and topic
Agreement/Disagreement Matrix - Where leaders aligned and diverged
Probability Analysis - Independent assessment of each major prediction
Methodology - How the synthesis was created
What This Taught Me
1. AI doesn’t replace consumption—it changes what consumption means
I didn’t “watch” 243 videos. But I engaged with their content more deeply than I would have by skimming headlines or reading summaries written by journalists with their own editorial angles.
The synthesis preserved nuance. It captured disagreements. It let me drill into specific topics (I spent 20 minutes reading the full Middle East section after the synthesis flagged it as significant).
This is a new mode of information consumption. Not passive watching. Not algorithmic curation. Active synthesis with AI as a research partner.
2. The bottleneck shifted from access to attention
I could have watched all 243 videos. They’re free. They’re on YouTube. The constraint was never access; it was attention.
AI doesn’t give me more hours. It gives me leverage on the hours I have. The 30 minutes I spent on this experiment yielded more insight than 30 minutes of watching random sessions would have.
3. Transparency matters more than ever
The synthesis document includes methodology. It shows which transcripts were analyzed. It explains how probabilities were calculated. It cites sources.
This matters because AI-generated content can be confidently wrong. The only defense is transparency about process. If you can’t see how a conclusion was reached, you can’t evaluate whether to trust it.
4. Human judgment is still the bottleneck
Opus 4.5 did the heavy lifting. But I chose what to ask. I decided which threads to pull. I evaluated whether the probability assessments made sense.
The experiment worked because I knew what I was looking for. “Synthesize Davos” is a meaningful prompt because I have context about why Davos matters and what kinds of insights would be useful.
AI amplifies human judgment. It doesn’t replace it.
The Uncomfortable Question
Here’s what I keep thinking about: what happens when everyone can do this?
Right now, this feels like a superpower. I have a comprehensive synthesis of Davos 2026 that probably doesn’t exist anywhere else. I can reference specific quotes, track disagreements between leaders, and evaluate predictions against historical base rates.
But the tools I used are available to anyone. Opus 4.5 is available with a subscription. yt-dlp is open source. The methodology is reproducible.
Within a few years, this kind of synthesis will be table stakes. The question becomes: what do you do with the synthesis? What actions does it inform? What decisions does it improve?
The value isn’t in having the information. It’s in knowing what to do with it.
Try It Yourself
If you want to replicate this experiment:
Pick a corpus - A conference, a podcast series, a set of earnings calls
Extract transcripts - yt-dlp for YouTube, Whisper for audio files (a Whisper sketch follows this list)
Prompt for synthesis, not summary - Ask for structure, disagreements, predictions
Request probability assessments - Force the model to evaluate claims against evidence
Iterate on gaps - Ask follow-up questions about areas that seem thin
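For step 2 with sources that aren’t on YouTube, OpenAI’s open-source Whisper model handles transcription locally. A minimal sketch, with the model size and file name as placeholders:
from pathlib import Path
import whisper
# Transcribe a local audio file (pip install openai-whisper; ffmpeg required)
model = whisper.load_model('base')            # larger models trade speed for accuracy
result = model.transcribe('episode_01.mp3')   # placeholder file name
Path('transcripts').mkdir(exist_ok=True)
Path('transcripts/episode_01.md').write_text(result['text'], encoding='utf-8')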
The whole process took about 60 minutes of active time, plus overnight processing. The output was a 1,000-line document that would have taken weeks to produce manually.
That’s the trade-off we’re all navigating now. Not whether to use AI for knowledge work, but how to use it well.
Appendix: Key Findings from Davos 2026
For those who want the conclusions without the methodology:
AGI Timeline Consensus
The most striking finding was the convergence among AI leaders on near-term AGI:
Dario Amodei (Anthropic): 2026-2027
Elon Musk (xAI): End of 2026 or early 2027
Demis Hassabis (DeepMind): End of the decade (more conservative)
My probability assessment: 25-35% for 2026-2027, 55-65% by end of decade
The Collapse of International Order
Multiple leaders declared the post-Cold War rules-based order “over”:
Mark Carney: “This is a rupture, not a transition”
Friedrich Merz: “The international order as we knew it is unraveling”
Zelenskyy: Called Europe’s response “Groundhog Day”
My probability assessment: 75-85% that US-led order is collapsing (this is already observable)
Energy as AI’s Binding Constraint
Elon Musk’s most interesting prediction wasn’t about robots; it was about power:
“Chip production will exceed electrical power capacity by late 2026... The constraint on AI is not chips, it’s electricity.”
He noted China is deploying 1,000+ GW of solar annually while the West falls behind.
My probability assessment: 75-85% that energy becomes the primary AI bottleneck
Job Transformation Scale
The IMF presented sobering data:
40% of global jobs affected by AI
60% in advanced economies
Low-wage workers 14x more likely to need occupational changes
Jamie Dimon predicted fewer JPMorgan employees in 5 years.
My probability assessment: 85-95% that 40-60% of jobs are affected (note: “affected” ≠ “eliminated”)
European Defense Surge
In response to Trump’s pressure and the Ukraine war:
Germany committing to 5% GDP defense spending (up from 2%)
€90B EU support package for Ukraine
New trade deals (Mercosur, India negotiations)
My probability assessment: 30-40% that Europe actually reaches 5% (historically, commitments exceed delivery)

