This episode is sponsored by NetSuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.
Download NetSuite's popular KPI Checklist, designed to give you consistently excellent performance, absolutely free at https://netsuite.com/EYEONAI
On episode 158 of Eye on AI, host Craig Smith dives deep into the world of AI safety, governance, and open-source dilemmas with Connor Leahy, CEO of Conjecture, an AI company specializing in AI safety.
Connor, known for his pioneering work in open-source large language models, shares his views on the monopolization of AI technology and the risks of keeping such powerful technology in the hands of a few.
The episode starts with a discussion on the dangers of centralizing AI power, reflecting on OpenAI's situation and the broader implications for AI governance. Connor draws parallels with historical examples, emphasizing the need for widespread governance and responsible AI development. He highlights the importance of creating AI architectures that are understandable and controllable, discussing the challenges in ensuring AI safety in a rapidly evolving field.
We also explore the complexities of AI ethics, touching upon the necessity of policy and regulation in shaping AI's future. We discuss the potential of AI systems, the importance of public understanding and involvement in AI governance, and the role of governments in regulating AI development.
The episode concludes with a thought-provoking reflection on the future of AI and its impact on society, economy, and politics. Connor urges the need for careful consideration and action in the face of AI's unprecedented capabilities, advocating for a more cautious approach to AI development.
Remember to leave a 5-star rating on Spotify and a review on Apple Podcasts if you enjoyed this podcast.
Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
(00:00) Preview
(00:25) NetSuite by Oracle
(02:42) Introducing Connor Leahy
(06:35) The Mayak Facility: A Historical Parallel
(13:39) Open Source AI: Safety and Risks
(19:31) Flaws of Self-Regulation in AI
(24:30) Connor’s Policy Proposals for AI
(31:02) Implementing a Kill Switch in AI Systems
(33:39) The Role of Public Opinion and Policy in AI
(41:00) AI Agents and the Risk of Disinformation
(49:26) Survivorship Bias and AI Risks
(52:43) A Hopeful Outlook on AI and Society
(57:08) Closing Remarks and a Word From Our Sponsors