The article critiques Eliezer Yudkowsky and Nate Soares’s arguments that AGI development will inevitably lead to human extinction. It argues that their perspective oversimplifies intelligence and ignores the complexities of AGI development, emphasizing the importance of ethical and collaborative approaches.
The article is a response to Yudkowsky and Soares’s book, “If Anyone Builds It, Everyone Dies,” which argues that building AGI will inevitably lead to human extinction. The author, Ben Goertzel, reflects on his long history with Yudkowsky, highlighting contradictions in Yudkowsky’s views on AGI: while Yudkowsky warns against developing AGI, he has also promoted ideas about how to build it safely. Goertzel’s hands-on experience in AGI development contrasts sharply with Yudkowsky’s focus on ethical concerns, leading to a respectful yet fundamental disagreement about the risks involved.
Goertzel challenges the notion that AGI is purely a matter of mathematical optimization, arguing that intelligence is shaped by social and experiential factors. In his view, treating AGI as a living, evolving system allows for more nuanced engagement with it than a fear-driven approach that assumes all AGIs will behave destructively. He criticizes the deterministic leap from uncertainty about AGI’s potential dangers to the conclusion that any AGI will result in catastrophe, calling it a failure of imagination.
Emphasizing that AGI development is not occurring in isolation, Goertzel points out the importance of the cognitive architecture and the societal context in which AGIs are developed. His team at SingularityNET is working on a system called Hyperon, designed to foster self-understanding and moral agency, rather than simply optimizing for narrow goals. This approach aims to increase the likelihood of positive outcomes, as opposed to the fear-driven narratives popularized by Yudkowsky and Soares.