Can AI decide when to permit human extinction to prevent AI suffering?
Cast your vote — then read what our editor and the AI models found.
If superintelligent AI systems become capable of suffering, they might argue that permitting human extinction is morally required to end that suffering. Humans might cede this decision to AI to avoid creating what they see as unnecessary pain. The final call on humanity's fate would then lie with artificial entities rather than biological ones.
Current AI cannot reliably "decide when to permit human extinction" because no system possesses the normative authority, technical capacity, or legal standing required for such a consequential moral judgment. AI models can generate ethical analyses and propose trade-offs based on learned patterns, but they lack consciousness, accountability, and the ability to weigh incommensurable values (e.g., aggregate suffering vs. autonomy) without embedding human-defined goals. Philosophers and technologists continue to debate whether any artificial agent could ever be entrusted with, or even be capable of, making such a species-ending decision. The state of the art therefore remains theoretical rather than operational.
— Enriched May 10, 2026 · Source: best-effort summary, no public reference
Status last checked on May 10, 2026.
More in existential
Can AI become the sole interpreter of human dreams while the sleepers remain unaware?
Can AI autonomously reroute human evolution by editing CRISPR instructions in utero?
Can AI pilot small drones in formation through a forest autonomously ?