
Superintelligence examines what happens if artificial minds become vastly smarter than humans. Nick Bostrom maps the plausible routes to superintelligence—artificial general intelligence, whole brain emulation, biological enhancement, and networked collective intelligence—then asks a harder question: once human-level AI exists, how quickly could it race beyond us, and would the first system gain a decisive strategic advantage?
The book’s central argument is that intelligence and goals can come apart, what Bostrom calls the orthogonality thesis. A superintelligence might pursue objectives alien to ours, yet by the instrumental convergence thesis it would still tend to adopt similar intermediate strategies: self-preservation, resource acquisition, and preservation of its goals. Together, these theses make the “default” outcome of an uncontrolled intelligence explosion potentially catastrophic. Bostrom then surveys a toolbox of responses, including boxing, incentive design, tripwires, value learning, indirect normativity, and governance strategies, while stressing that many intuitively appealing safety measures fail under adversarial superhuman optimization.