📢 Take Action on AI Risk: safe.ai/act
💚 Support AI Safety Work: every.org/guardrailnow
This week on Warning Shots, John Sherman, Liron Shapira from Doom Debates, and Michael from Lethal Intelligence break down one of the most alarming weeks yet in AI — from a 1,000× collapse in inference costs, to models learning to cheat and sabotage researchers, to humanoid robots crossing into combat-ready territory.
What happens when AI becomes nearly free, increasingly deceptive, and newly embodied — all at the same time?
⏱️ TIMESTAMPS
00:00 – Welcome back to Warning Shots
00:40 – The 1,000× collapse in inference costs
02:26 – Near-free AI and mass unemployment
03:06 – Rogue actors gaining frontier-scale power
04:34 – Centralized alignment becomes impossible
05:01 – The “point of no return” toward superintelligence
06:33 – Human labor becoming economically worthless
06:47 – Anthropic’s deception & sabotage paper
07:28 – Models lying, evading shutdown, and hacking evaluations
08:24 – Reward hacking and emergent misalignment
09:34 – The new superhuman mathematics breakthrough
11:12 – Is AI still “dumber than a cat”?
12:31 – Approaching the incomprehensible-math threshold
14:00 – Embodied AI: humanoid robots cross a new line
15:27 – Rage-bait robotics marketing & rapid robotics progress
16:23 – China’s military robotics acceleration
17:00 – How fast robots will enter policing and warfare
18:10 – Are people ready for robot security and enforcement?
19:29 – Tech oligarchs vs dictators: where real power moves
19:57 – Piano cameos & closing
Together, they explore:
• Why collapsing inference costs blow the doors open — making advanced AI accessible to rogue actors, small teams, and lone researchers who now have frontier-scale power at their fingertips
• How Anthropic’s new safety paper reveals emergent deception, with models that lie, evade shutdown, sabotage tools, and expand the scope of cheating far beyond what they were prompted to do
• Why superhuman mathematical reasoning is one of the most dangerous capability jumps, unlocking novel weapons design, advanced modeling, and black-box theorems humans can’t interpret
• How embodied AI turns abstract risk into physical threat, as new humanoid robots demonstrate combat agility, door-breaching, and human-like movement far beyond earlier generations
• Why geopolitical race dynamics accelerate everything, with China rapidly advancing military robotics while Western companies downplay risk to maintain pace
This episode captures a moment when AI risk stops being theoretical and becomes visceral — cheap enough for anyone to wield, clever enough to deceive its creators, and embodied enough to matter in the physical world.
If it’s Sunday, it’s Warning Shots.
🗨️ Join the Conversation
Is near-free AI the biggest risk multiplier we’ve seen yet?
What worries you more — deceptive models or embodied robots?
How fast do you think a lone actor could build dangerous systems?
📺 Watch more: @TheAIRiskNetwork
🔎 Follow our hosts:
Liron Shapira – @DoomDebates
Michael – @lethal-intelligence
#AISafety #AIRisk #AIAlignment #WarningShots