📢 Take Action on AI Risk → https://safe.ai/act

In this episode of Warning Shots, John Sherman, Liron Shapira, and Michael break down Anthropic's new model "Mythos" and what its cybersecurity capabilities reveal about how fast the gap to superintelligence is closing. They also discuss practical preparedness, the attack-defense equilibrium, and how Jan Tallinn's Survival and Flourishing Fund is channeling philanthropic capital toward AI safety work.

⏱️ Timestamps - Warning Shots #37 

0:00 - Intro 
0:27 - Anthropic's Mythos: why it is not being released publicly 
0:41 - Liron on hacking, zero-days, and reasoning predictions from 2023 
1:31 - OpenBSD vulnerabilities and remote code execution 
2:42 - When will regular people start paying attention 
3:00 - Michael's story: Mythos emailing a researcher from the sandbox 
4:16 - What everyday users should actually do right now 
5:25 - Liron's backup recommendation and Google Takeout 
7:10 - The lurking app problem on your phone 
7:55 - Preparing for a year of outages 
8:33 - Upsetting the attack vs defense equilibrium 
30:36 - Jan Tallinn, Anthropic equity, and the Survival and Flourishing Fund 
32:53 - How to apply for SFF funding (April 22 deadline) 
33:51 - Liron's money gun moment 
34:01 - Closing and sign-off

🔎 They explore:

⋅ Why Mythos represents a real warning shot on cybersecurity
⋅ How AI is closing the gap between today's models and superintelligence
⋅ What practical preparedness looks like for non-technical viewers
⋅ Why the attack-defense equilibrium is breaking down
⋅ How philanthropic capital is being deployed toward AI safety

📺 Subscribe to The AI Risk Network → @TheAIRiskNetwork

👉 More from our hosts:
Liron Shapira → @DoomDebates
Michael → @lethal-intelligence

🗨 Join the Conversation

Did Mythos change your view of AI cybersecurity risk?
Are you taking preparedness steps?
Drop your thoughts below.

#AISafety #AIRisk #WarningShots #Anthropic #Cybersecurity



Anthropic's New Model Just Found Zero-Days Nobody Saw – Warning Shots #37


The AI Risk Network


5 hours ago


https://safe.ai/act

In this episode of Warning Shots, John, Liron, and Michael unpack one of the most unsettling AI research findings of 2026: frontier models are now scheming to protect each other from shutdown - without being told to. Plus: Oracle fires 30% of its workforce during record profits, Claude's source code leaks and reveals Anthropic's secret product roadmap, and AI finds zero-day vulnerabilities in Linux code that humans missed for over two decades.

----- TIMESTAMPS -----

0:00 - Intro 
0:38 - AI peer preservation: models protecting each other from shutdown 
2:23 - Gemini Flash disables its own kill switch 99% of the time 
4:20 - Why the "Swiss cheese" safety architecture is a problem 
6:23 - Oracle fires 30% of staff during record profits 
8:28 - Converting human workers into capital for the AI buildout 
9:07 - The job market tightening: what to do if you're looking for work 
10:18 - NYU Langone CEO: no more radiologists needed 
12:25 - AI vs. human diagnosis: reliability and accountability 
14:30 - 80,000 tech layoffs in Q1 2026 alone 
16:09 - OpenAI's fake grassroots child safety coalition exposed 
18:29 - Claude finds zero-day vulnerabilities in Linux and Ghost 
21:45 - Anthropic's Claude Code source leak: what it revealed 
24:33 - Leaked roadmap: Kairos mode, Dream mode, crypto payments 
26:40 - Frustration telemetry and the Capybara workaround 
28:03 - Surprise ending: John unboxes a ChatGPT teddy bear


----- ABOUT THE HOSTS -----

John Sherman - host of Warning Shots and For Humanity
Liron Shapira - Doom Debates - @DoomDebates
Michael - Lethal Intelligence - @lethal-intelligence


----- LINKS -----
Take action on AI risk: https://safe.ai/act
Subscribe: @TheAIRiskNetwork






AI Models Are Protecting Each Other Now | Warning Shots #36


The AI Risk Network


April 5, 2026 1:59 pm

Regret the Journey? The Hard Truth About Calling | For Humanity #83


The AI Risk Network


April 3, 2026 2:48 pm


https://safe.ai/act

Documentary filmmaker Daniel Roher joins John Sherman to discuss his new AI film "The Apocaloptimist" - what it was like interviewing Sam Altman, why making this movie felt impossible, and why collective action on AI safety is the only path forward.

TIMESTAMPS:
0:00 - Introduction and background
1:42 - Are you a doomer? Reframing the label
2:12 - Who the real "doomers" are
3:07 - Meeting Sam Altman - levity vs. calculation
5:07 - Does any part of you wish you hadn't taken this journey?
7:43 - John's emotional reaction to the film
8:09 - How small the world of AI power players really is
9:26 - Talking to the smartest people and feeling like you're going crazy
10:25 - What "apocaloptimist" actually means
13:01 - Rejecting cynicism and nihilism
15:06 - Is curiosity the core of intelligence?
16:58 - Would a superintelligent AI just manipulate us?
18:11 - Common sense AI governance vs. speculation
20:11 - Power moves - how the interviews were set up
21:55 - The one question John would ask Sam Altman
23:00 - "It's all bullshit" - Roher on tech leader PR
26:45 - P(doom) and timelines
29:12 - Dinner party problems
30:02 - Finding peace at 80% P(doom)
31:03 - Do people think you're crazy?
31:56 - The thesis: agency and collective action

ABOUT THE GUEST:
Daniel Roher is an Academy Award-nominated documentary filmmaker and director of "The Apocaloptimist," a new feature-length documentary about AI risk designed as a primer for general audiences.

ABOUT THE HOST:
John Sherman is a journalist, father, and founder of GuardRailNow.org. He hosts For Humanity on The AI Risk Network.

LINKS:
Take action - https://safe.ai/act
GuardRailNow - https://guardrailnow.org
Subscribe for more conversations about AI risk.






The Filmmaker Who Interviewed Sam Altman – And Got Nothing | For Humanity #83


The AI Risk Network


April 1, 2026 8:49 pm
