In Episode #40, host John Sherman talks with James Norris, CEO of Upgradable and a longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali, where he has become an expert in preparing for a very different world after a warning shot or other major AI-related disaster, and he is helping others do the same. James shares his hard-won insight, long-held awareness, and expertise in helping others find a way to survive and rebuild after a post-AGI warning-shot disaster.
FULL INTERVIEW STARTS AT **00:04:47**
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: [email protected]
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Timestamps
### **Relevance to AGI (00:05:05)**
### **Nuclear Threats and Survival (00:05:34)**
### **Introduction to the Podcast (00:06:18)**
### **Open Source AI Discussion (00:09:28)**
### **James's Background and Location (00:11:00)**
### **Prepping and Quality of Life (00:13:12)**
### **Creating Spaces for Preparation (00:13:48)**
### **Survival Odds and Nuclear Risks (00:21:12)**
### **Long-Term Considerations (00:22:59)**
### **The Warning Shot Discussion (00:24:21)**
### **The Need for Preparation (00:27:38)**
### **Planning for Population Centers (00:28:46)**
### **Likelihood of Extinction (00:29:24)**
### **Basic Preparedness Steps (00:30:04)**
### **Natural Disaster Preparedness (00:32:15)**
### **Timeline for Change (00:32:58)**
### **Predictions for AI Breakthroughs (00:34:08)**
### **Human Nature and Future Risks (00:37:06)**
### **Societal Influences on Behavior (00:40:00)**
### **Living Off-Grid (00:43:04)**
### **Conformity Bias in Humanity (00:46:38)**
### **Planting Seeds of Change (00:48:01)**
### **The Evolution of Human Reasoning (00:48:22)**
### **Looking Back to 1998 (00:48:52)**
### **Emergency Preparedness Work (00:52:19)**
### **The Shift to Effective Altruism (00:53:22)**
### **The AI Safety Movement (00:54:24)**
### **The Challenge of Public Awareness (00:55:40)**
### **The Historical Context of AI Discussions (00:57:01)**
### **The Role of Effective Altruism (00:58:11)**
### **Barriers to Knowledge Spread (00:59:22)**
### **The Future of AI Risk Advocacy (01:01:17)**
### **Shifts in Mindset Over 26 Years (01:03:27)**
### **The Impact of Youthful Optimism (01:04:37)**
### **Disillusionment with Altruism (01:05:37)**
### **Short Timelines and Urgency (01:07:48)**
### **Human Nature and AI Development (01:08:49)**
### **The Risks of AI Leadership (01:09:16)**
### **Public Reaction to AI Risks (01:10:22)**
### **Consequences for AI Researchers (01:11:18)**
### **Contradictions of Abundance (01:11:42)**
### **Personal Safety in a Risky World (01:12:40)**
### **Assassination Risks for Powerful Figures (01:13:41)**
### **Future Governance Challenges (01:14:44)**
### **Distribution of AI Benefits (01:16:12)**
### **Ethics and AI Development (01:18:11)**
### **Moral Obligations to Non-Humans (01:19:02)**
### **Utopian Futures and AI (01:21:16)**
### **Varied Human Values (01:22:29)**
### **International Cooperation on AI (01:27:57)**
### **Hope Amidst Uncertainty (01:31:14)**
### **Resilience in Crisis (01:31:32)**
### **Building Safe Zones (01:32:18)**
### **Urgency for Action (01:33:06)**
### **Doomsday Prepping Reflections (01:33:56)**
### **Celebration of Life (01:35:07)**