Dwarkesh Podcast

Deeply researched interviews www.dwarkeshpatel.com

Wed Jun 26 2024

Tony Blair - Life of a PM, The Deep State, Lee Kuan Yew, & AI's 1914 Moment

Political Leadership, Technology Revolution, Crisis Management, Effective Governance, Global Politics

The episode explores the challenges of political leadership, the impact of the technology and AI revolution, crisis management during COVID-19, the factors behind effective governance, and the shifting landscape of global politics.

Tue Jun 11 2024

Francois Chollet, Mike Knoop - LLMs won’t lead to AGI - $1,000,000 Prize to find true solution

AI, ARC Benchmark, Machine Intelligence, Artificial General Intelligence, Program Synthesis

This episode explores the ARC benchmark, an IQ test for machine intelligence that focuses on adaptability to novel tasks and efficient skill acquisition. It discusses the limitations of large language models (LLMs) in solving the ARC benchmark and highlights the importance of core knowledge and reasoning in human performance. The episode also delves into the challenges of program synthesis, sample efficiency, and the potential combination of deep learning and discrete program search. The ARC Prize contest, its objectives, and the future outlook for AI research are explored, along with discussions on ethics and compute power.
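
To make the discrete-program-search idea concrete, here is a minimal sketch: enumerate compositions of grid primitives, shortest first, until one explains all the demonstration pairs. The primitives, task format, and brute-force strategy are illustrative assumptions, not ARC's actual specification or Chollet's proposed solution; real entries use far richer DSLs and learned models to guide the enumeration.

```python
from itertools import product

# Toy grid primitives; real ARC solvers use far richer DSLs than this.
def identity(g): return g
def rotate(g):   return [list(row) for row in zip(*g[::-1])]  # 90 degrees clockwise
def flip_h(g):   return [row[::-1] for row in g]              # mirror left-right
def flip_v(g):   return g[::-1]                               # mirror top-bottom

PRIMITIVES = [identity, rotate, flip_h, flip_v]

def search(train_pairs, max_depth=3):
    """Return the first composition of primitives consistent with
    every (input, output) demonstration pair, shortest first."""
    for depth in range(1, max_depth + 1):
        for ops in product(PRIMITIVES, repeat=depth):
            def run(grid, ops=ops):
                for op in ops:
                    grid = op(grid)
                return grid
            if all(run(i) == o for i, o in train_pairs):
                return run
    return None  # no program found within the depth budget

# Hidden rule of this toy task: rotate the grid 180 degrees.
train = [
    ([[1, 0], [0, 0]], [[0, 0], [0, 1]]),
    ([[2, 3], [0, 0]], [[0, 0], [3, 2]]),
]
program = search(train)
print(program([[5, 0], [0, 0]]))  # -> [[0, 0], [0, 5]]
```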

Thu Aug 17 2023

George Hotz vs Eliezer Yudkowsky AI Safety Debate

AI Safety, Superintelligence, AI Development, Cooperation, Resource Consumption

This episode features a debate on AI safety and related topics, including rationality's impact on people's lives, fictional and real-world stories, Moore's law, and the concept of staring into the singularity. The importance of timing in predicting AI advancements is discussed, along with examples like AlphaFold solving the protein folding problem. The potential threats and benefits of superintelligence are explored, as well as the blurring line between humanity and machines. The impact of AI on society, resource consumption, and cooperation is examined. The differences between humans and AI in terms of intelligence and decision-making are highlighted. The challenges of AI alignment and potential doom scenarios are considered. The episode concludes with discussions on complexity theory, biology's constraints, and the possibility of building a Dyson Sphere. The importance of cooperation between AIs and humanity is emphasized.

Tue Aug 08 2023

Dario Amodei (Anthropic CEO) - $10 Billion Models, OpenAI, Scaling, & AGI in 2 years

AI Scaling, AI Development, AI Models, Security in AI, Governance in AGI

This episode explores the scaling of AI models, the challenges and implications of scaling, the development of AI systems, the capabilities and limitations of models, security concerns in AI development, governance in AGI development, cybersecurity, consciousness in AI models, and Dario Amodei's personal perspective.
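
For a sense of what "scaling" means quantitatively, the sketch below evaluates a Chinchilla-style parametric loss, L(N, D) = E + A/N^alpha + B/D^beta, using the constants published by Hoffmann et al. (2022). Anthropic's internal scaling laws are not public, so this is a generic illustration of the shape of such curves, not the ones discussed in the episode.

```python
# Chinchilla-style parametric loss (Hoffmann et al., 2022):
#   L(N, D) = E + A / N^alpha + B / D^beta
# where N = parameter count and D = training tokens. The constants are
# the published fits; they illustrate the shape of scaling laws only.
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Hold compute roughly fixed (C ~ 6*N*D) and compare how a 1e24-FLOP
# budget trades parameters against tokens.
C = 1e24
for n in (1e10, 7e10, 5e11):        # 10B, 70B, 500B parameters
    d = C / (6 * n)                 # tokens affordable at this model size
    print(f"N={n:.0e} params  D={d:.1e} tokens  loss={loss(n, d):.3f}")
```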

Wed Jul 12 2023

Andy Matuschak - Self-Teaching, Spaced Repetition, & Why Books Don’t Work

Memory, Education, Learning, Psychology, Cognitive Processes

This episode explores the role of memory in learning, different approaches in educational psychology, effective learning strategies, and the challenges in the education system. It also delves into the connection between memory and knowledge compilation, cognitive processes, and various learning techniques. The episode discusses the impact of hypertext on writing practices, crowdfunding research projects, and decision-making processes at Apple. Additionally, it explores the design process and the use of spaced repetition in learning.
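
As an illustration of how spaced repetition works mechanically, here is a minimal SM-2-style scheduler, the algorithm family behind tools like Anki. Matuschak's mnemonic medium differs in its details, so treat this as a sketch of the general idea rather than his system.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: float = 1.0   # days until the next review
    ease: float = 2.5       # growth factor, adjusted by performance

def review(card: Card, grade: int) -> Card:
    """SM-2-style update: grade runs from 0 (forgot) to 5 (perfect).
    Failures reset the interval; successes grow it multiplicatively."""
    if grade < 3:
        card.interval = 1.0               # relearn from scratch
    else:
        card.interval *= card.ease        # spacing grows geometrically
        # Ease drifts up for easy recalls, down for hard ones (floor 1.3).
        card.ease = max(1.3, card.ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    return card

# Five successful reviews: intervals stretch from days to months.
card = Card()
for n in range(5):
    card = review(card, grade=4)
    print(f"review {n + 1}: next in {card.interval:.1f} days")
```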

Mon Jun 26 2023

Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future

  • AI producing bio-weapons could lead to mutually assured destruction
  • If AI is able to subvert alignment and oversight, it could design a bio-weapon or hack cryptocurrency or bank accounts...

Wed Jun 14 2023

Carl Shulman - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment

  • By the time AI reaches human level, we will already be deep into an intelligence explosion.
  • The race is between getting strong interpretability and shaping motivations, and the AIs taking over in ways that are not perceived.
  • It seemed implausible that we couldn't do better than completely brute-force evolution.
  • The podcast is split into two parts: one about Carl's model of an intelligence explosion and its implications for alignment, and the other about the economics of AI.

Tue May 23 2023

Richard Rhodes - Making of Atomic Bomb, AI, WW2, Oppenheimer, & Abolishing Nukes

  • The development of the atomic bomb may have been inevitable even without World War II due to the discovery of nuclear fission in Nazi Germany.
  • Physicists realized that a small input of energy could trigger a massive release of energy through nuclear fission, which led to the development of the bomb (see the back-of-envelope numbers after this list).
  • Scientists did not have to invent anything new to discover nuclear fission, and Niels Bohr had the right model for understanding the uranium atom.
  • The nucleus of a...
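
The back-of-envelope arithmetic referenced above, using the standard figure of roughly 200 MeV released per U-235 fission (a textbook number, not one quoted in the episode):

```python
# Why fissioning a few kilograms of uranium can level a city.
MEV_PER_FISSION = 200          # energy released per U-235 fission (standard figure)
J_PER_MEV = 1.602e-13
AVOGADRO = 6.022e23
U235_MOLAR_MASS_G = 235.0

atoms_per_kg = AVOGADRO * 1000 / U235_MOLAR_MASS_G            # ~2.56e24 atoms
energy_per_kg_j = atoms_per_kg * MEV_PER_FISSION * J_PER_MEV  # ~8.2e13 J

TNT_J_PER_KILOTON = 4.184e12
print(f"Complete fission of 1 kg of U-235 ~ "
      f"{energy_per_kg_j / TNT_J_PER_KILOTON:.0f} kilotons of TNT")
# The Hiroshima bomb fissioned under a kilogram of its ~64 kg uranium
# core and yielded roughly 15 kilotons.
```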

Thu Apr 06 2023

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

  • The speaker, Eliezer Yudkowsky, wrote an article calling for a moratorium on further AI training runs.
  • He was surprised to find that normal people were more willing to entertain the idea.
  • Concerns exist about the speed at which technology is advancing and the potential negative outcomes that may result.
  • The development of GPT-5 is uncertain, and it is unclear what impact it will have on society.
  • Training algorithms continue to improve, e...

Mon Mar 27 2023

Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment

  • The difficulty of aligning models smarter than humans is not to be underestimated.
  • AGI can help people become more enlightened and see the world more correctly.
  • Next token prediction may not surpass human performance.
  • Ilya Sutskever, co-founder and chief scientist of OpenAI, has made multiple breakthroughs in his field through hard work and dedication.
  • It is possible that foreign governments are using GPT for illicit purposes, but it may be difficult to track at sc...