https://safe.ai/act


In this episode of Warning Shots, John, Michael, and Liron break down five developments that point in the same direction: AI is becoming harder to predict, harder to control, and harder to stop.


From the first documented case of AI self-replication via hacking to Anthropic's goal of recursive self-improvement by 2028 - this week's headlines are not hypothetical.


⏱️ Timestamps - Warning Shots #41


0:00 - Intro 
0:30 - AI self-replication: Palisade Research study explained 
3:28 - Anthropic's recursive self-improvement target: 2028 
5:10 - Trump admin explores FDA-style AI model reviews 
8:04 - Mythos: why a hacking AI changed government minds 
11:03 - US-China summit: will AI safety make the agenda? 
13:07 - Chinese court rules AI cannot replace jobs 
17:32 - AI unemployment and the housing market risk 
22:34 - Robotics: dexterous hands closing the physical gap 
29:17 - ChatGPT goes goblin: what reward hacking looks like 
33:56 - Amateur solves 60-year math problem with ChatGPT 
36:26 - Warning shots of the week 
38:00 - Closing


🔎 They explore:

- The first AI agent to hack, copy itself, and spread - in a controlled test
- Why Anthropic's 2028 self-improvement target is a bright red line
- Whether the Trump administration's FDA-style AI reviews are real progress
- What the US-China summit could mean for global AI governance
- Why China's "no AI job replacement" ruling is harder to enforce than it sounds
- How AI unemployment could unravel the housing market from the top down
- Robotic hands with near-human dexterity: what changes when AI has a body
- ChatGPT's goblin obsession as a preview of reward hacking at scale
- An amateur solving a 60-year math problem with a single ChatGPT prompt

📺 Subscribe to The AI Risk Network for weekly analysis of AI developments: https://www.youtube.com/@theairisknetwork


👉 See more from our hosts:
Liron Shapira - @DoomDebates
Michael - @lethal-intelligence


🗨 Join the conversation:

- Does AI self-replication change how you think about control?
- Is an FDA-style review the right model for AI?
- What does the goblin story tell us about reward hacking at scale?

Drop your thoughts below.


#AISafety #AIRisk #WarningShots #RecursiveSelfImprovement #AIAlignment #ArtificialIntelligence #AIRegulation #FutureOfAI





AI Just Replicated Itself – Here's What That Means | Warning Shots #41


The AI Risk Network




https://safe.ai/act

The Pentagon is integrating AI models directly into classified military networks - and the hosts of Warning Shots think this deserves a serious conversation.

In this episode, John Sherman, Liron Shapira, and Michael break down five major developments showing how AI is quietly embedding itself into the systems that run civilization - from air traffic control to hospitals to military command networks - faster than any oversight framework can follow.

They also cover Bernie Sanders and Max Tegmark's high-profile AI extinction risk event in Washington, D.C., where Chinese scientists sat alongside U.S. researchers to argue for international cooperation - and why that triggered immediate political backlash.

---

TIMESTAMPS - Warning Shots #40

0:00 - Intro
0:41 - Bernie Sanders, Max Tegmark and David Krueger in Washington, D.C.
1:27 - The Chinese scientists at the panel - cooperation or controversy?
3:31 - Sanders' viral AI tweet explained
4:44 - The Pentagon gives AI access to classified military systems
8:49 - Should AI be inside government infrastructure?
11:15 - AI targeting and real-world casualties - the untold story
12:00 - AI in air traffic control: a 30% hallucination rate
14:38 - The gradual disempowerment problem
16:42 - AI beats humans in ER triage and early cancer detection
19:00 - First humanoid robot store opens in San Francisco
20:46 - John's robot is doing his dishes - and he's nervous
22:39 - What happens when robots stop needing human customers?
23:33 - First college football team hires an AI coach
24:52 - Go players cheating with AI - and losing themselves in the process
26:04 - SoftBank announces fully automated, self-replicating data centers
27:23 - Will the world eventually just be covered in compute?
28:47 - Closing and warning shots

---

They explore:
- Why giving AI access to classified military data may be one of the most consequential decisions happening right now
- The case for and against AI in critical infrastructure like air traffic control
- What early cancer detection and ER triage victories mean for human agency in the long run
- Why humanoid robots in your home feel different from other AI applications
- The Go cheating scandal and what it reveals about AI dependency
- SoftBank's plan to fully automate data center construction - no humans required
- Whether international cooperation on AI safety is politically possible

---

Take action on AI risk: https://guardrailnow.substack.com/

Subscribe for weekly analysis: https://www.youtube.com/@TheAIRiskNetwork

Follow our hosts:
Liron Shapira - @DoomDebates
Michael - @LethalIntelligence

---

Join the conversation:
- Should AI have access to classified military systems?
- Where do you draw the line between helpful AI and dangerous AI dependency?
- Does a humanoid robot in your home feel different to you?

Drop your thoughts below.

#AISafety #AIRisk #WarningShots #ArtificialIntelligence #AGI #AIAlignment #TechPolicy #Pentagon #AIRegulation #FutureOfAI





AI Gets Military Secrets – What Could Go Wrong? | Warning Shots #40


The AI Risk Network


May 4, 2026 1:00 pm
