
Dwarkesh Podcast

Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment

Mon Mar 27 2023

AI Models and AGI

  • The difficulty of aligning models smarter than humans is not to be underestimated.
  • AGI can help people become more enlightened and see the world more correctly.
  • Next token prediction may not surpass human performance.
  • Ilya Sutskever, co-founder and chief scientist of OpenAI, has made multiple breakthroughs in his field through hard work and dedication.
  • It is possible that foreign governments are using GPT for illicit purposes, but it may be difficult to track at scale.
  • The window for AI being economically valuable before AGI is reached will likely be a good multi-year chunk of time.
  • It's hard to predict how long it will take to reach AGI, or whether any business will produce something an AGI couldn't.

Current State of AI Models

  • The speaker uses the analogy of a self-driving car to explain the current state of AI models, which can do many things but still need more work to be reliable and robust.
  • When asked about what percentage of GDP AI will represent by 2030, the speaker says it's hard to answer but reliability is key for economic value.
  • The speaker believes that the current paradigm of generative models will go far but may not be the exact form factor for AGI. The next paradigm may involve integration of past ideas.
  • There is a possibility that a neural net could extrapolate how a person with great insight and capability would behave, even if such a person doesn't exist in reality. However, it would still need data from regular people to predict the next token well enough.

Improving AI Models

  • Predicting the next token well means understanding the underlying reality that led to its creation (see the loss sketch after this list).
  • To compress statistics, one needs to understand what creates them in the world.
  • Next token prediction can deduce characteristics and behaviors of hypothetical people.
  • Reinforcement learning data is mostly generated by AI, with human feedback used to train the reward function (see the reward-model sketch after this list).
  • Human-machine collaboration is ideal for AI development, with humans doing 1% and AI doing 99% of the work.
  • Multi-step reasoning can improve with better models and special training.
  • Running out of tokens for model training may happen, but other ways of improving capabilities should be developed.
  • The data situation is still good and there are many sources of valuable data, including Reddit, Twitter, and books.
  • Going multi-modal seems to be the direction for getting more tokens.
  • It's hard to determine how much improvement can come from algorithmic improvements alone.
  • Storing data outside of the model through retrieval transformers seems promising.
  • OpenAI made the right decision to leave robotics behind due to lack of data in the past, but now it is possible to make progress with enough motivation and commitment.
  • Current hardware limitations may prevent trying certain ideas.
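
A minimal sketch of the next-token objective these points describe, in PyTorch; the model, shapes, and names are illustrative assumptions, not OpenAI's actual training stack. The per-token cross-entropy is also the code length in nats, which is why predicting the next token well and compressing the data well amount to the same thing.

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Average cross-entropy of predicting each token from its prefix.

    logits: (batch, seq_len, vocab) outputs of any autoregressive model
    tokens: (batch, seq_len) integer token ids
    """
    # Position t predicts token t+1, so shift predictions and targets by one.
    pred = logits[:, :-1, :].reshape(-1, logits.size(-1))
    target = tokens[:, 1:].reshape(-1)
    # Cross-entropy in nats; lower loss means a shorter compressed description.
    return F.cross_entropy(pred, target)
```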
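
And a sketch of the reward-function step: human feedback arrives as pairwise preferences over AI-generated responses, and a scalar reward model is fit to them. The Bradley-Terry-style loss below is a standard choice assumed here for illustration; `reward_model` is a hypothetical module mapping an encoded response to a scalar score.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, chosen, rejected):
    """Pairwise loss: the human-preferred response should score higher.

    chosen, rejected: batches of encoded responses (whatever the
    hypothetical reward_model accepts); outputs are (batch,) scalars.
    """
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    # Maximize the log-probability that the chosen response wins.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```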

Alignment and Safety

  • A single mathematical definition of alignment is unlikely; instead, several partial definitions taken together can provide assurance of alignment.
  • The level of confidence in releasing a model in the wild depends on its capability and the degree of assurance needed.
  • A combination of approaches is necessary for alignment, including spending a lot of compute power and looking inside the neural net using another neural net (see the probe sketch after this list).
  • Our understanding of models is still rudimentary, and progress is possible with a small neural net that is well understood.
  • AI research being done by AI could help provide fruitful ideas and insights for humans to solve problems faster.
  • Criteria for a billion-dollar prize for alignment research or product are not yet concrete, but it could be based on the main result achieved after several years.
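
One concrete reading of "looking inside the neural net," sketched below under stated assumptions: freeze the large model, pull activations from one of its layers, and train a small, well-understood probe on them to test whether a concept is linearly readable there. The probe setup is an illustration, not OpenAI's interpretability method; obtaining the activations is assumed to happen elsewhere (e.g. via a forward hook).

```python
import torch
import torch.nn as nn

def train_probe(activations: torch.Tensor, labels: torch.Tensor, steps: int = 200) -> nn.Linear:
    """Fit a tiny linear classifier on frozen hidden activations.

    activations: (n_examples, d_model) detached hidden states
    labels:      (n_examples,) integer concept labels, e.g. 0/1
    """
    probe = nn.Linear(activations.size(-1), int(labels.max()) + 1)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(probe(activations), labels)
        loss.backward()
        opt.step()
    # High held-out probe accuracy suggests the concept is represented here.
    return probe
```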

Future Implications

  • The prize committee waits for five years before awarding a prize.
  • There is no concrete thing to identify yet.
  • End-to-end training and connecting things together are both promising architectures for bigger models.
  • OpenAI projects revenues of $1 billion in 2024 based on extrapolation from previous product growth.
  • Data is needed to estimate windfall size, and error bars will be large without it.
  • What the post-AGI future looks like is a hard question, but AI could help people find meaning and become more enlightened.
  • Some may choose to become part AI to solve society's hardest problems.
  • Change is the only constant: the world will continue to evolve and go through transformations after AGI, but it's impossible to predict what it will look like in the year 3000.
  • The speaker hopes that future generations will have happy and fulfilled lives where they are free to make their own choices and solve their problems.
  • The speaker doesn't want a world where the government controls everything based on AGI recommendations.

Hardware and Cost

  • Deep learning has surpassed the speaker's expectations since 2015, though he made no specific predictions back then about where it would be by now.
  • TPUs and GPUs are very similar in terms of architecture, with the only significant difference being cost.
  • The cost of hardware is the most important factor in overall systems cost.
  • There may not be much difference in cost between different types of hardware, such as TPUs.

Research and Development

  • A significant amount of time is spent understanding the results and figuring out what next experiment to run when working with neural nets.
  • Understanding the underlying effects and phenomena is where the real action takes place in developing new ideas.
  • The experience of training on Azure has been fantastic, with Microsoft being a good partner for ML.
  • If something were to happen in Taiwan that disrupted chip production, it would be a significant setback, equivalent to no one being able to get more compute for a few years.
  • Inference of better models will become more expensive but whether it becomes prohibitive depends on how useful it is.

Market and Security

  • The cost of using neural nets depends on their usefulness.
  • Different customers use different neural nets of different sizes depending on their use case.
  • To prevent models from becoming commodities, progress must be made in improving and making models more reliable and trustworthy.
  • Companies may specialize in certain areas to respond to commoditization.
  • In the near term, there will be convergence on similar work, but divergence on longer-term work. Eventually, there will be convergence again.
  • There is a risk of foreign governments or spies abusing these models and learning about them. Security measures are necessary to prevent weight leakage.
  • The speaker is not worried about weight leaking due to good security measures.
  • Emergent properties are expected from models at this scale, with reliability and controllability being important.
  • Predictions can be made about specific capabilities, but linking next-word prediction accuracy to reasoning ability is complicated (see the note after this list).
  • Relying on people to teach models is sensible for ensuring they don't produce false things.
  • It's interesting that data, GPUs, and transformers all became available around the same time due to advancements in technology.
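
For context on how such predictions are usually made, here is the standard power-law ansatz from the scaling-law literature (an assumption brought in for illustration, not a formula from the episode): test loss falls smoothly and predictably with training compute, even when downstream abilities like reasoning do not follow as cleanly.

```latex
% Illustrative scaling-law form: L_inf is the irreducible loss,
% C is training compute, and C_0, alpha are fitted constants.
L(C) \approx L_\infty + \left(\frac{C_0}{C}\right)^{\alpha}
```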

Inspiration and Progress

  • Gaming GPUs paved the way for general purpose GPU computers, which turned out to be useful for neural nets.
  • Progress in various dimensions is intertwined and impacts each other.
  • It's hard to tell how much the deep learning revolution would have been delayed if pioneers like Geoffrey Hinton had never been born.
  • Alignment of models that are smarter than humans will be difficult and requires a lot of research, making it an area where academic researchers can contribute meaningfully.
  • Academic research could come up with insights about actual capabilities, but it doesn't seem to happen often.
  • The distinction between the world of bits and the world of atoms is not clean.
  • Neural nets can suggest rearranging an apartment to improve life.
  • There may not be a clear breakthrough in achieving superhuman AI, but rather a series of smaller breakthroughs that will seem obvious in hindsight.
  • The forward-forward algorithm is an attempt to approximate what backpropagation does without implementing backpropagation, which is useful for neuroscience research (see the sketch after this list).
  • Humans and the brain are good sources of inspiration for pursuing intelligence in models.
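
A minimal sketch of one layer of the forward-forward idea, following the usual description of Hinton's proposal; the goodness definition, threshold, and optimizer are illustrative choices. Each layer is trained locally with two forward passes, one on real ("positive") data and one on fabricated ("negative") data, and no gradient crosses layer boundaries.

```python
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    """One locally trained layer: high 'goodness' on real data, low on negative."""

    def __init__(self, d_in: int, d_out: int, threshold: float = 2.0):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=0.03)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize input length so a layer cannot pass its own goodness along.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos: torch.Tensor, x_neg: torch.Tensor):
        # Goodness = mean squared activation; push it above the threshold for
        # positive data and below it for negative data, using only local info.
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)
        loss = torch.nn.functional.softplus(
            torch.cat([self.threshold - g_pos, g_neg - self.threshold])
        ).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach outputs: the next layer trains on them, but no gradient flows back.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```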

Miscellaneous

  • Being inspired by humans and the brain requires careful consideration of essential qualities.
  • Many researchers in cognitive science get too specific in their models.
  • The idea of the neural network is inspired by the brain and has been fruitful.
  • Focusing on getting the basics right is important.
  • Perseverance is necessary but not sufficient for success in deep learning research.
  • Paid subscriptions are available to support the podcast, but no important content will be paywalled.