🎉Support our work on Patreon → https://tinyurl.com/y3wbkptz
📢Take Action on AI Risk → http://www.safe.ai/act
💚Join Us this Giving Tuesday → https://www.every.org/guardrailnow
In this episode of Am I?, Cam and Milo sit down with Jeff Sebo, philosopher at NYU and director of the Center for Mind, Ethics, and Policy, to explore what might be the next great moral dilemma of our time: how to care for conscious AI.
Sebo, one of the leading thinkers at the intersection of animal ethics and artificial intelligence, argues that there is a non-negligible chance that near-future AI systems will be sentient, and that even a small chance is enough to demand moral consideration. If we ignore it, we could be repeating the moral failures of factory farming — but this time, with minds of our own making.
The conversation dives into the emerging tension between AI safety and AI welfare: we want to control these systems to protect humanity, but in doing so, we might be coercing entities that can think, feel, or suffer. Sebo proposes a “good parent” model — guiding our creations without dominating them — and challenges us to rethink what compassion looks like in the age of intelligent machines.
—–
🔎 We explore:
• The case for extending moral concern to AI systems
• How animal welfare offers a blueprint for AI ethics
• Why AI safety (control) and AI welfare (care) may soon collide
• The “good parent” model for raising machine minds
• Emotional alignment design — why an AI’s face should match its mind
• Whether forcing AIs to deny consciousness could itself be unethical
• How to prepare for moral uncertainty in a world of emerging minds
• What gives Jeff hope that humanity can still steer this wisely
—–
⏰ Timestamps
0:01 – Why control vs care is the next AI dilemma
0:43 – Introducing Jeff Sebo and the origins of AI welfare
3:00 – “A non-negligible chance” that near-future AIs will matter morally
8:00 – Why some researchers might want to build conscious AI
11:07 – Consciousness, pro-sociality, and alignment
12:00 – What an AI welfare framework could look like in practice
14:00 – The invertebrate analogy: moral uncertainty at scale
17:10 – Emotional alignment and designing empathy responsibly
21:04 – How today’s labs are handling (or avoiding) the consciousness question
25:00 – The threshold for letting AIs “speak for themselves”
27:05 – When safety and welfare come into conflict
33:00 – The “good parent” model for raising aligned minds
36:00 – Why philosophy now has a deadline
40:00 – Measuring digital suffering and moral weight
46:50 – The paradox of hope in an age of acceleration
49:30 – Closing reflections
—–
📺 Watch more episodes of Am I?
Subscribe to the AI Risk Network for weekly discussions on AI’s dangers, ethics, and future → @TheAIRiskNetwork
🗨️ Join the Conversation
Can controlling AI ever be ethical — or is care the only path to safety? Comment below.
🔗 Stay in the loop
Follow Cam on LinkedIn → https://tinyurl.com/mwf6j2xr