23-year-old Leonard Tang, CEO of Haize Labs, is revolutionizing AI safety at a critical moment in technological history. His company works with OpenAI and Anthropic to develop rigorous testing systems that expose hidden vulnerabilities in AI models like Claude and ChatGPT.
Your Technology podcast hosts Mark Fielding and Jeremy Gilbertson speak with Leonard to learn how he and his team are creating the security standards that will shape the future of AI and keep us all safe. Sweet AI dreams, not nightmares.
—
TIMESTAMPS
(00:00) – Disruptors and Curious Minds
(01:07) – Our Sponsor: Conviction
(01:50) – Introducing Leonard Tang: AI CEO and Founder
(03:37) – The Importance of AI Safety: What’s at Stake in AI Development?
(06:21) – Using Mathematics and Modeling to Understand Human Behaviour in AI
(08:12) – Why Are Technologists So Often Musicians?
(11:06) – Language, Culture, and AI
(17:05) – Common Misconceptions About AI: What People Get Wrong
(19:20) – The Dartmouth Conference: Birth of AI and Its Lasting Impact
(19:55) – Claude and ChatGPT Pre-training: What Do The Models Go Through?
(25:20) – An Alan Watts AI Model for Enhanced Understanding
(28:33) – Claude vs ChatGPT: Comparing AI Models and Performance
(31:44) – AI Jailbreak Detection
(33:25) – How Dreamlike Images Enhance AI Safety and Trustworthiness
(38:20) – Top-Down vs Bottom-Up AI Development: Approaches to Building Safer AI
(42:55) – Protecting Artists, Intellectual Property, and Art in the Age of AI
(48:20) – Developing an AI Code of Conduct for Ethical AI Usage
(49:45) – A Message for Veteran AI Stars
(52:35) – Restructuring Education for Critical Thinking in the Age of AI
(54:16) – Book Club Live
—
Quotes from the show:
“We need to rigorously test AI models to discover all their vulnerabilities, failure modes, and gotchas before they get deployed in production.”
“AI is a technology of language, and inevitably, it will empower us to merge cultures.”
“We’re trying to get AI to be a little more mature, a little more sophisticated, and just more reliable.”
“What we’re interested in is enforcing an AI code of conduct for specific applications, making AI systems tightly aligned with the needs of their use cases.”
“People in legacy industries are underestimating AI’s potential, while Silicon Valley is often overhyping it.”
—
🔗 More:
Visit Haize Labs: https://haizelabs.com/
Visit Thinking On Paper: https://www.thinkingonpaper.xyz/