📢 Take Action on AI Risk: https://safe.ai/act
💚 Support AI Safety Work: https://tinyurl.com/3wjdjmvb
In this episode of Warning Shots, John, Liron, and Michael unpack a growing disconnect at the heart of the AI boom: the people building the technology insist existential risks are far away — while the people using it increasingly believe AGI is already here.
We kick things off with NVIDIA CEO Jensen Huang brushing off AI risk as something “biblically far away” — even while the companies buying his chips are racing full-speed toward more autonomous systems. From there, the conversation fans out to some real-world pressure points that don’t get nearly enough attention: local communities successfully blocking massive AI data centers, why regulation and international treaties keep falling short, and what it means when we start getting comfortable with AI making serious decisions — including writing medical prescriptions with no human involved at all.
Across these topics, one theme dominates: AI progress feels incremental — until suddenly, it doesn’t. This episode explores how “common sense” extrapolation fails in the face of intelligence explosions, why public awareness lags so far behind insider reality, and how power over compute, health, and infrastructure may shape humanity’s future.
Timestamps ⏳
0:00 Welcome to Warning Shots
0:50 Jensen Huang Dismisses AGI as “Biblical”
2:10 NVIDIA, Incentives, and AI Risk Denial
4:30 Past vs. Present: Jensen’s Shift on AI Safety
7:10 Is Claude Code Already AGI?
10:20 Insider Tools vs. Public Awareness Gap
13:30 Data Centers Blocked by Community Opposition
16:10 Can Grassroots Action Slow AI?
18:40 Treaties, Enforcement, and the China Problem
21:00 AI Writing Prescriptions With No Human Oversight
23:30 Medical AI, Dependency, and the Slippery Slope
25:00 Final Thoughts: Incremental Progress, Sudden Consequences
🔎 We explore:
– Why AI leaders downplay risks while insiders panic
– Whether Claude Code represents a tipping point toward AGI
– How financial incentives shape AI narratives
– Why data centers are becoming a key choke point
– The limits of regulation and international treaties
– What happens when AI controls healthcare decisions
– How “sugar highs” in AI adoption can mask long-term danger
As AI systems grow more capable, autonomous, and embedded in critical infrastructure, this episode asks a stark question: Are we still in control — or just along for the ride?
🗨️ Join the Conversation
Is AGI already here, or are we fooling ourselves about how close we are? Drop your thoughts in the comments.
📺 Watch more: @TheAIRiskNetwork
🔎 Follow our hosts:
Liron Shapira – @DoomDebates
Michael – @lethal-intelligence
#AISafety #AIRisk #AGI #ArtificialIntelligence #DataCenters #Claude #NVIDIA #HealthcareAI #ExistentialRisk #WarningShots