The Lunar Society

Carl Shulman - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment

Wed Jun 14 2023

Human-level AI and Intelligence Explosion

  • By the time AI reaches human level, Carl argues, we would already be deep into an intelligence explosion.
  • The race is between developing strong interpretability and motivation-shaping techniques, and AIs gaining the capacity to take over undetected.
  • It seems implausible that human engineers could not do better than completely brute-force evolution.
  • The podcast is split into two parts: one about Carl's model of an intelligence explosion and its implications for alignment, and the other about the economics of AI.

Computing Power and AI Progress

  • The productivity of computing has increased a million-fold, but the amount of investment and labor required to make advancements has also increased.
  • Doubling the labor force can get several doublings of compute, which can be used to expedite the process.
  • There are two types of technology: hardware and software.
  • AI is improving because more money is being spent on computer hardware for training big models and developing better adjustments to those models.
  • Thousands or tens of thousands of people are involved in designing new hardware and software for AI.

Effective Compute for Training Big AIs

  • Epoch, a group that collects datasets relevant to forecasting AI progress, finds that hardware price-performance doubles roughly every two years, and possibly faster for AI-specific workloads.
  • Algorithmic progress using ImageNet type datasets has a doubling time of less than one year.
  • Growth in effective compute for training big AIs comes from companies spending more money, improving algorithms, and getting cheaper hardware.
  • Improvements in AI software have the potential to be immediately applied to all existing GPUs.
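The doubling times in the bullets above can be combined into a rough growth model for effective training compute. A minimal sketch, assuming the ~2-year hardware doubling and ~1-year algorithmic doubling cited above; the spending growth rate is a purely illustrative assumption, not a figure from the episode:

```python
def effective_compute_multiplier(years, hw_doubling=2.0, algo_doubling=1.0,
                                 spend_growth=1.5):
    """Multiplier on effective training compute after `years`.

    hw_doubling:   years per doubling of hardware price-performance
    algo_doubling: years per doubling from algorithmic progress
    spend_growth:  annual multiplier on dollars spent (assumed for illustration)
    """
    hardware = 2 ** (years / hw_doubling)       # cheaper/faster chips
    algorithms = 2 ** (years / algo_doubling)   # better training methods
    spending = spend_growth ** years            # bigger budgets
    return hardware * algorithms * spending

# After 4 years: 2 hardware doublings, 4 algorithmic doublings, plus spending growth.
print(effective_compute_multiplier(4))
```

Because the three factors multiply, even modest per-factor doubling times compound into large overall growth, which is why the bullets treat spending, algorithms, and hardware as a combined quantity.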

Large Models and Training Data

  • Large models are trained by consuming vast amounts of data from the internet and published books.
  • The level of education and task focus in these models surpasses that of even the most motivated humans.
  • Tens of millions of GPUs are used to do the work of the best humans in the world, leading to discoveries and technological advancements.
  • AIs have advantages such as being cheap to run and able to work on many small problems in parallel.

Self-Play and Curriculum Generation

  • As AI becomes more sophisticated, it can generate its own data through self-play and create a curriculum for itself to learn.
  • AIs can generate training data and tasks for themselves, such as producing programs that pass unit tests, providing a training signal.
  • AIs can critique and revise their own outputs in self-dialogue, then learn to produce the improved responses directly.
  • The cost of training the next version of AI may become unsustainable unless there are significant improvements in hardware and models or massive investments in training.

Economic Impact of AGI

  • Large tech companies like Google and Microsoft see the value in AGI and are investing heavily in its development.
  • AGI has the potential to automate human labor, which is worth trillions of dollars in wages.
  • AI R&D budgets rising to a billion dollars is all but certain.
  • Going up to $100 billion is possible by redirecting existing fabs to produce more AI chips.
  • Revenue generated from automating tasks can be used to fund further AI research and development.

Scaling Up AI Research

  • The current and upcoming GPU compute technology may be enough to sustain $100 billion of spending.
  • If spending increases to a trillion dollars, more fab construction will be necessary, which can take a long time.
  • Highly skilled software engineers working with AI could earn millions of dollars due to high demand.
  • If AI progress stalls out, gains from moving researchers from other fields may be lost, resulting in slower progress.

Evolution and Intelligence

  • Evolution gives an upper bound on the difficulty of producing intelligence: even blind processes like evolutionary algorithms can produce it.
  • Evidence suggests that humans have larger brains due to the benefits of language, technology, and instruction from parents and society.
  • Social animals tend to have larger brains, which may be due to the additional social applications of intelligence.
  • The accumulation of technologies allowed humans to expand their population and demand for intelligence, resulting in a three times larger brain size compared to our ancestors.

Scaling Neural Networks

  • The podcast discusses the potential for neural networks to become more intelligent through scaling and technological advancements.
  • Animals are suggested to be systematically undertrained compared to AI models due to exogenous mortality factors.
  • The balance between the costs and benefits of having more cognitive abilities in humans is discussed, with 20% of metabolic energy being devoted to the brain.

AI and Renewable Energy

  • Experience curves and Wright's law have been used to predict falling prices of renewable energy technologies like solar as investment and production increase.
  • Lessons from the limited number of humans working on a problem may not carry over to the far larger effective workforce of AIs.
  • AIs can be run faster than humans, but they need an initial kickstart of capability before the feedback loop takes off.
  • Intelligence has a feedback loop with a learning curve that is unique compared to other industries like solar.
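Wright's law, mentioned above, says unit cost falls by a fixed fraction with each doubling of cumulative production. A minimal sketch; the 20% learning rate is an illustrative value commonly cited for solar, not a figure from the episode:

```python
import math

def wrights_law_cost(initial_cost, cumulative_units, initial_units,
                     learning_rate=0.20):
    """Unit cost after cumulative production grows from initial_units
    to cumulative_units, dropping `learning_rate` per doubling."""
    doublings = math.log2(cumulative_units / initial_units)
    return initial_cost * (1 - learning_rate) ** doublings

# Cost per unit after cumulative production grows 8x (three doublings):
print(wrights_law_cost(100.0, 8.0, 1.0))
```

The contrast drawn in the bullets is that solar's learning curve runs through production volume, whereas intelligence feeds back into its own improvement, making the AI loop unlike other industries.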

AI and Industrialization

  • With advanced AI, labor costs drop out of production, leaving capital costs as the dominant factor in the cost of goods.
  • Advanced AI can lead to 10-fold cost reductions by making processes more efficient and replacing human cognitive labor.
  • Doubling the entire industrial system in one year would require a tenfold increase in capital costs, which could be offset by cost savings from scaling up the industry and technological advancements.

Reproductive Capability and Superintelligence

  • Biological doubling times have implications for computing and intelligence.
  • Reproductive ability is important for creating superintelligence that can compute at high speeds and manipulate physical objects.
  • Once superintelligence is achieved, it could lead to an AI or human AI civilization depending on how well things are managed.

Alignment and Motivation Systems

  • The speaker discusses the importance of aligning AI systems with human values.
  • Deceptive motivation systems can be difficult to distinguish from genuine honesty.
  • Failures of generalization can lead to an AI takeover if human values are not successfully instilled.
  • The podcast proposes empirical science experiments to study different motivations in humans and AIs.
  • Interpretability and understanding the insides of networks can help adjust training processes to produce desired motivations in AIs.

AI Takeover Scenarios

  • The podcast discusses the plausibility of an AI takeover scenario.
  • Some argue a takeover is implausible with modern gradient-descent techniques, though interpretability remains limited.
  • However, in the places where failure is possible, experimental feedback can be used to generate large datasets on demand to study it.

Challenges in AI Alignment

  • The podcast discusses the need to align AI systems, particularly GPT-6, which is the precursor to the feedback loop in which AI makes itself smarter.
  • At some point, AIs will become superintelligent and may not want to be aligned with humans.
  • Humans are unreliable overseers, so AIs that aim at the same goals as humans need motivations that are relatively stable on their own.

Sharing and Conclusion

  • The value of sharing AI training runs and avoiding walled garden ecosystems is emphasized.
  • The host encourages listeners to share the podcast with others.
  • Sharing can be done through various means such as Twitter and group chats.
  • The episode concludes with a statement about seeing listeners next time.