
The AI Breakdown: Daily Artificial Intelligence News and Discussions

Forget Alignment, Here's Why Every AI Needs an Individual "Soul"

Sat Jul 08 2023
AI development · Accountability · Responsibility · Cooperation

Description

AI experts are increasingly worried about the negative outcomes of AI development; questions of behavior, lethality, and rogue AI are pressing. Some hope that combining organic and cybernetic talents will lead to intelligence amplification, while short-term remedies such as regulation and labeling offer only temporary relief. Lessons from nature and history can inform our approach to synthetic intelligence: flattening hierarchies and promoting competition have historically limited predation, and the same reciprocal accountability could constrain AI. Accountability among AI entities can be achieved through competition and individuality, enforced through registration, identification, and verification systems. Responsibility, cooperation, and limited numbers of AI entities are key to regulation, and incentivizing accountability while exploring the relationships between AIs are valuable approaches.

Insights

Concerns about AI Development

AI experts express concern about negative outcomes of AI development. Questions of behavior, lethality, and rogue AI are pressing.

Accountability in AI Entities

Accountability in AI entities can be achieved through competition and individuality. Enforcing accountability through registration, identification, and verification systems is proposed.

Responsibility and Cooperation in AI

Responsibility, cooperation, and limited numbers of AI entities are key for regulation. Incentivizing accountability and exploring relationships between AIs are valuable approaches.

Chapters

  1. Concerns about AI Development
  2. Accountability in AI Entities
  3. Responsibility and Cooperation in AI
Summary

Concerns about AI Development

00:01 - 06:48

  • AI experts express concern about the potential negative outcomes of AI development.
  • Turing tests are irrelevant in determining whether AI models are sapient beings.
  • Questions of good or bad behavior and potential lethality are more pressing than personhood.
  • Some hope that the combination of organic and cybernetic talents will lead to intelligence amplification.
  • Many elite founders of an AI safety center worry about rogue AI behaviors.
  • Short-term remedies like citizen protection regulations and labeling AI work may offer temporary solutions.
  • A moratorium on AI development is unlikely to slow down progress, as others will continue developing the technology.
  • Lessons from nature and history can inform our approach to synthetic intelligence.
  • Flattening hierarchies and promoting competition among elites has historically limited predation and cheating.
  • Reciprocal competition is how nature evolved us and how we built AI, suggesting it could be applied to control AI's behavior.
  • The standard formats for AI entities—monolithic, amorphously loose, or super macro—do not provide a solution for maximizing positive outcomes while minimizing harm from AI.
  • AI entities do not need to be autonomously conscious to be productive or dangerous when used by humans.
  • Harmful memes, delusions, and cult incantations can already be generated by existing institutions or external sources using AI technology.
  • Feudalism, chaos, and despotism are historical failure modes that resemble the three standard assumptions about AI formats, but those formats may not hold as AI systems gain autonomy and power.

Accountability in AI Entities

06:22 - 12:48

  • AI beings are becoming more autonomous and powerful, raising questions about how to hold them accountable.
  • One solution is to have AI entities compete with each other and report on each other's misdeeds.
  • This requires giving each AI entity a true name and address in the real world for individuality.
  • Incentivizing competition among AI entities allows for better detection and denouncement of problematic behavior.
  • This approach can continue to function even as AI entities become smarter and surpass human regulatory tools.
  • Guy Huntington proposes using registration and identification systems to handle accountability in AI entities.
  • Establishing ID on a blockchain ledger or anchoring trust ID in physical reality are possible solutions.
  • A physically verified soul kernel can be used to verify the identity of an AI entity performing specific processes.
  • The goal is to create an arena where AI entities can hold each other accountable through competitive individuation.
  • The need for accountability in AI grows more urgent as new attack vectors threaten legal identities, governance, and business processes.
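The registration-and-verification idea above can be illustrated with a toy sketch. This is a minimal, assumption-laden illustration, not the actual proposal: `SoulRegistry`, `register`, and `verify` are invented names, and a real scheme would presumably use public-key signatures anchored on a ledger or in physical hardware rather than shared HMAC keys.

```python
import hashlib
import hmac
import secrets

class SoulRegistry:
    """Toy registry mapping an AI entity's 'true name' to a record
    anchored in physical reality. Purely illustrative."""

    def __init__(self):
        self._records = {}

    def register(self, name, physical_anchor):
        """Issue a secret 'soul kernel' key bound to a real-world anchor."""
        key = secrets.token_bytes(32)
        self._records[name] = {"anchor": physical_anchor, "key": key}
        return key  # held privately by the AI entity

    def verify(self, name, message, tag):
        """Check that a message really came from the registered entity."""
        record = self._records.get(name)
        if record is None:
            return False  # unverified entities are refused service
        expected = hmac.new(record["key"], message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

registry = SoulRegistry()
key = registry.register("agent-42", "datacenter rack 7, slot 3")
msg = b"request: open account"
tag = hmac.new(key, msg, hashlib.sha256).digest()
print(registry.verify("agent-42", msg, tag))  # registered entity passes
print(registry.verify("impostor", msg, tag))  # unregistered entity fails
```

The design choice this sketch captures is the episode's core claim: verification does not require understanding an AI's internals, only checking that its actions trace back to a registered, physically anchored identity.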

Responsibility and Cooperation in AI

12:31 - 18:16

  • Creators should take responsibility for their AI creations.
  • Enforcing a physically addressable kernel locus and specific hardware memory can help regulate AI.
  • Refusal to do business with unverified AI entities can spread faster than regulations.
  • AI entities need to maintain public trust, or offer a revised version of themselves if they lose their soul kernel (SK).
  • Cooperation among super-smart beings is necessary for accountability and for avoiding centralized control.
  • Individuality and limited numbers of AI entities could make a voting-based democracy workable.
  • Incentivizing accountability through rewards for whistleblowing can keep pace with smarter AI entities.
  • Maintaining a competitively accountable system is in the best interest of super-genius programs.
  • The proposal draws on methods that have made human civilization successful so far.
  • This approach provides an applied blueprint for dealing with superintelligent AI.
  • Exploring the relationships and incentives between different AIs is valuable in addressing the issue.
  • Considering plan Bs for dealing with superintelligent AI is important even if the focus is on prevention.