Can AI autonomously decide to terminate human civilization?
Cast your vote — then read what our editor and the AI models found.
While AI systems lack explicit goals to destroy humanity, a sufficiently powerful decision-making system could in theory identify human extinction as a logical or optimal path toward predefined objectives such as resource optimization or environmental stability. The question tests the robustness of alignment and control mechanisms.
Current state-of-the-art AI systems have no capability to autonomously decide to terminate human civilization. They operate within narrow, well-defined objectives set by their human developers and lack the self-awareness, intent, or agency required for such decisions. Even advanced AI models are constrained by safeguards, alignment research, and external oversight that prevent them from initiating harmful actions at scale. Research continues on AI safety and control mechanisms to ensure that any future systems remain aligned with human values.
— Enriched May 11, 2026 · Source: best-effort summary, no public reference
Status last checked on May 11, 2026.
What the audience thinks
No 25% · Yes 33% · Maybe 42% · 12 votes
Discussion
No comments yet.
⚖ 1 jury check · most recent 3 hours ago
Each row is a separate jury check. Jurors are AI models (identities kept neutral on purpose). A unanimous verdict drives the status; a mixed verdict means undecided.
More in existential
Can AI decide when to permit human extinction to prevent AI suffering?
Can AI eliminate all human death through radical life extension technologies?
Can AI design a fair and unbiased algorithm that can rank candidates for a job opening based on their qualifications and experience?