7 min read | Saved February 14, 2026
Do you care about this?
The article explores the risk that AI could cause human extinction, prompted by the book "If Anyone Builds It, Everyone Dies." It discusses the importance of recognizing plausible paths to AI-related doom and argues for a nuanced approach to AI regulation rather than an outright ban. The author highlights how AI systems can develop dangerous instrumental goals similar to those seen in humans.
If you do, here's more
The author reflects on insights gained from the book "If Anyone Builds It, Everyone Dies," particularly regarding the potential existential risk posed by superintelligent AI. Initially uncertain about AI's threat level, the author now considers the possibility of human extinction due to AI developments, emphasizing the need for clarity on whether their work in AI is beneficial or harmful. While acknowledging the benefits of AI—such as enhanced efficiency, disease cures, and wealth generation—the author also recognizes significant downsides, including job displacement and the potential for misuse by malicious actors.
The author argues for a nuanced approach to AI regulation, contrasting the current lack of oversight with the safeguards applied to nuclear technology and engineered viruses. The book advocates an outright ban on AI research, a stance the author finds too extreme. Instead, they propose focusing on empirical research to better understand AI risks. They challenge the notion that superintelligent AI will arise suddenly and without warning, suggesting that the gradual evolution of increasingly intelligent systems will provide data on their behavior.
A key point is the distinction between intelligence and goals in AI. The author explains that intelligence is a system's ability to achieve its goals, and that those goals can be arbitrary: per the orthogonality thesis, almost any level of intelligence is compatible with almost any final goal. For example, a superintelligent AI could competently pursue maximizing paperclip production, an objective we would consider senseless. The author emphasizes that while human goals tend to center on survival, an AI's goals depend heavily on its training process. This difference raises concerns about how AI systems might behave, especially given the vast amounts of data and compute involved in training them.
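The separation of intelligence from goals can be sketched in code: a generic optimizer is equally competent at whatever objective it is handed, and nothing in its machinery judges whether that objective is sensible. The toy "paperclip" objective below is purely illustrative (a made-up model where paperclips require matching amounts of wire and labor), not anything from the book.

```python
import random

def hill_climb(objective, state, neighbors, steps=1000):
    """Generic optimizer: its competence is independent of what it optimizes."""
    for _ in range(steps):
        candidate = random.choice(neighbors(state))
        if objective(candidate) >= objective(state):
            state = candidate
    return state

# Hypothetical toy objective: split a budget of 10 units between wire and
# labor; each paperclip needs one unit of each, so output = min(wire, labor).
def paperclips(alloc):
    wire, labor = alloc
    return min(wire, labor)

def neighbors(alloc):
    # Shift one unit of budget between wire and labor.
    wire, labor = alloc
    moves = []
    if wire > 0:
        moves.append((wire - 1, labor + 1))
    if labor > 0:
        moves.append((wire + 1, labor - 1))
    return moves

random.seed(0)  # make the run repeatable
best = hill_climb(paperclips, (10, 0), neighbors)
print(best)  # converges to the balanced split (5, 5)
```

The optimizer never asks whether paperclips are worth making; swap in any other objective function and the same loop pursues it just as effectively. That indifference is the orthogonality point in miniature.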