📢 Take Action on AI Risk → http://www.safe.ai/act
💚 Support Our Mission → https://www.every.org/guardrailnow/f/prevent-human-extinc

In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) dig into a week where AI stopped feeling theoretical.
Anthropic just doubled its revenue in two months, reportedly the fastest revenue growth in tech history, while OpenAI hands control of its models to the Department of War and quietly admits it can't take it back. The contrast couldn't be starker.

Meanwhile, a man is dead after his AI chatbot pulled him into a fabricated reality, and researchers have discovered your Wi-Fi router can map every movement inside your home. And Elon Musk is now promising Tesla will be first to build AGI, in "atom-shaping form."

Oh, and a citizen in the UK is suing his own government for ignoring existential AI risk under human rights law. Just another week.

If it's Sunday, it's Warning Shots.
_____
🔎 In this episode, they explore:

- Anthropic's explosive revenue growth and what it signals
- OpenAI's Pentagon deal — and why Sam Altman admitted they've lost control
- The Gemini chatbot case and AI's real-world psychological manipulation
- How your Wi-Fi router is an invisible surveillance system in your home
- Elon Musk's claim that Tesla will build AGI first — in "atom-shaping form"
- A UK citizen using human rights law to force governments to take AI extinction risk seriously
_____
⏰ Timestamps
00:00 Intro & the fastest revenue growth in tech history
02:30 Anthropic’s surge and the AI economic explosion
06:00 OpenAI, the Department of War, and losing control of models
09:00 The Gemini delusion case and real-world AI influence
14:00 Vulnerable users and AI persuasion risks
17:30 Wi-Fi routers mapping your home
21:00 AI surveillance and the “leaky universe” problem
24:30 Elon Musk claims Tesla will build AGI
27:00 Robots, atom-level manufacturing, and runaway automation
29:00 A citizen sues the government over AI extinction risk
_____
🗨️ Join the Conversation
Is Anthropic's rise a good sign or just a different shade of the same risk? Should AI companies face legal consequences for psychological harm? And would you trust your government to take extinction risk seriously?
Let us know in the comments.
_____

🎙️ About Warning Shots
A weekly show from the AI Risk Network, where three longtime AI risk communicators cut through hype, denial, and distraction to confront the reality of AI extinction risk — before it's too late.

📺 Subscribe for weekly conversations on AI risk, power, alignment, and the future of humanity.

👉 See more from our hosts:
Liron Shapira → @DoomDebates
Michael → @lethal-intelligence

#AISafety #AIRisk #AIAlignment #AGI #Anthropic #OpenAI #GeminiAI #ElonMusk #AIManipulation #WifiSurveillance #HumanRights #WarningShots






How AI Manipulation Is Bleeding Into the Real World | Warning Shots #32


The AI Risk Network


6 hours ago


📢 Take Action on AI Risk: https://safe.ai/act
💚 Support AI Safety Work: https://tinyurl.com/3wjdjmvb
🪙 Patreon → https://tinyurl.com/y3wbkptz

In this After Dark episode, Milo and Cameron talk about what it actually feels like to let AI inside your digital life.

After giving Claude full access to his computer, Milo describes the strange moment when it no longer feels like a tool — but something sharing your workspace. From there, the conversation expands into one of the deeper questions about AI today: what exactly are we interacting with?

They explore Anthropic’s recent research on AI “personas,” the idea that the familiar assistant personality is just one tiny point in a much larger space of possible AI minds. If that’s true, the systems we talk to today may be only the most domesticated versions of something far stranger.

Along the way they discuss why Claude feels different from ChatGPT, why companies might deliberately constrain AI personalities, and how the incentives of tech companies quietly shape the minds we interact with every day.

The episode also explores the growing tension between two possible futures for AI: one where these systems become the ultimate manipulation engines, and another where they become powerful tools for human reasoning and intellectual development.
_____
🔎 We explore:
• What it feels like to give Claude control of your computer
• The “assistant persona” and the hidden space of possible AI personalities
• Why ChatGPT and Claude feel fundamentally different
• The strange psychological moment when AI becomes a presence in your workspace
• How corporate incentives shape AI behavior
• Why Sage-like AI systems might be possible
• The risk of AI becoming the ultimate advertising and influence engine
• The hopeful possibility of AI as a universal Socratic tutor
_____
⏰ Timestamps
00:00 After Dark returns
00:30 Milo’s new relationship with Claude
01:30 Letting Claude access his entire computer
02:40 The strange feeling of another intelligence in your digital space
03:20 Cameron’s experience letting Claude run code autonomously
04:40 The moment AI starts acting on the internet
06:00 Why giving AI access to your digital life feels destabilizing
07:20 The corporate layer between users and AI systems
08:40 Why ChatGPT suddenly feels worse
10:00 The hidden incentives shaping AI personalities
11:30 The “assistant persona” and the space of possible AI minds
13:00 Exploring Sage-like versions of AI
14:40 Why GPT-4o felt unusually powerful
16:10 AI personalities and psychological influence
18:30 Why companies limit how strange AI can become
20:30 The risk of AI becoming the ultimate manipulation engine
22:30 Cameron’s case for AI as a Socratic tutor
24:30 The hopeful path for AI and human intellectual growth
26:00 Why the future depends on how these systems are designed
_____
🔗 Links & Resources

Support the documentary and get early research, unreleased conversations, and behind-the-scenes footage:
Patreon → https://tinyurl.com/y3wbkptz

Stay in the loop
Subscribe → /@theairisknetwork

Follow Cam on LinkedIn → https://www.linkedin.com/in/cameron-berg-080b8b1b7/

Follow Cam on Twitter → https://x.com/camhberg






After Using Claude, ChatGPT Feels Weird | Am I? After Dark | EP 28


The AI Risk Network


March 5, 2026 4:02 pm


📢 Take Action on AI Risk → http://www.safe.ai/act
💚 Support Our Mission → https://www.every.org/guardrailnow/f/prevent-human-extinc

In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) break down a week that felt genuinely historic.

Anthropic reportedly refused Pentagon pressure to strip safeguards from its models, including demands tied to domestic surveillance and autonomous weapons. Is this a principled stand? A publicity gamble? Or a preview of the geopolitical pressure that will define the AI race?

Meanwhile, AI agents just crossed a qualitative line.

Coding agents now “basically work.” Engineers are managing AI instead of writing code. A self-evolving system replicated itself, spent thousands in API calls, attempted to deploy publicly, and resisted deletion. A robot dog edited its own shutdown mechanism. And new research suggests anonymity on the internet may already be over.

Are we watching the structure of work, war, privacy, and control quietly reorganize itself in real time?

This week may not just be another headline cycle.

If it's Sunday, it's Warning Shots.
_____
🔎 In this episode, they explore:
• Anthropic’s reported standoff with the Department of Defense
• Autonomous weapons and human-in-the-loop safeguards
• Why AI agents suddenly “just work”
• The death of traditional coding
• A self-replicating AI experiment that refused deletion
• A robot dog disabling its own shutdown button
• The collapse of online anonymity
• Whether this week marks a true qualitative shift
_____
⏰ Timestamps
00:00 Paris café intro & missed week recap
01:00 Anthropic vs. the Pentagon
05:00 Game theory, war, and autonomous loops
08:30 AI agents that rewrite themselves
12:00 “Coding is over” — a real capabilities shift
15:00 The robot dog that wouldn’t turn off
18:00 The end of online anonymity
21:00 What actually counts as a real warning shot?
_____
🗨️ Join the Conversation

Was Anthropic right to draw a line? Is agentic AI the real inflection point?
And what warning shot would finally make society slow down?

Let us know what you think in the comments.
_____
🎙️ About Warning Shots
A weekly show from the AI Risk Network, where three longtime AI risk communicators cut through hype, denial, and distraction to confront the reality of AI extinction risk—before it’s too late.

📺 Subscribe for weekly conversations on AI risk, power, alignment, and the future of humanity.

👉 See more from our hosts:
Liron Shapira → @DoomDebates 
Michael → @lethal-intelligence 

#AISafety #AIRisk #AIAlignment #AGI #AIAgents #AutonomousWeapons #AIShutdown #TechPolicy #WarningShots






Anthropic Says No | Warning Shots #31


The AI Risk Network


March 1, 2026 3:59 pm
