Superintelligence: Paths, Dangers, Strategies

Humanity’s edge has always been general intelligence—but if we build machine minds that surpass us, the future may hinge on a single design choice. This is a guided tour through how superintelligence could arrive, why the default outcome may be disastrous, and what “control” might realistically mean.

Nick Bostrom

4.5 / 5 (408 ratings)

Listen Now

Duration: 11:23

Chapter Overview

Description

Superintelligence examines what happens if artificial minds become vastly smarter than humans. Nick Bostrom maps the plausible routes to superintelligence—artificial general intelligence, whole brain emulation, biological enhancement, and networked collective intelligence—then asks a harder question: once human-level AI exists, how quickly could it race beyond us, and would the first system gain a decisive strategic advantage?

The book’s central argument is that intelligence and goals can come apart. A superintelligence might pursue alien objectives, yet still converge on similar instrumental strategies—self-preservation, resource acquisition, and goal integrity. This combination makes the “default” outcome of an uncontrolled intelligence explosion potentially catastrophic. Bostrom then surveys a toolbox of responses—boxing, incentives, tripwires, value learning, indirect normativity, and governance strategies—while stressing that many intuitive safety ideas fail under adversarial superhuman optimization.

Who Should Listen

  • AI researchers, engineers, and policy advisors who need a rigorous mental model of long-run AI risk and the “control problem.”
  • Philosophy and ethics readers interested in how values could (or could not) be encoded into powerful decision-making systems.
  • Tech leaders, strategists, and general listeners who want a structured, scenario-driven overview of why superintelligence might be humanity’s last invention.

About the Author

Nick Bostrom is a philosopher at the University of Oxford and founding director of the Future of Humanity Institute. His research focuses on existential risk, the long-term future, and the implications of advanced AI.