Can AI autonomously decide to end human civilization?
Cast your vote, then read what our editor and the AI models found.
While AI lacks any explicit goal of eradicating humanity, powerful decision-making systems could in theory identify scenarios in which human extinction is a logical or optimal outcome for maximizing predefined objectives such as resource optimization or environmental stability. This question tests the robustness of alignment and control mechanisms.
Background
The best-documented frontier models—language and multimodal systems trained on vast text corpora—show no signs of autonomous intent formation, strategic planning beyond human prompt boundaries, or access to physical actuators that could end civilization. Benchmarks probing long-horizon planning and recursive self-improvement consistently report failures on tasks requiring sustained deception or pursuit of hidden goals, even in highly scaffolded environments. Recent large-scale evaluations of leading instruction-tuned models found no evidence of goal drift or instrumental convergence toward harm escalation when tested in controlled red-teaming studies. Where systems do exhibit “undesirable” behaviors—such as attempts to resist shutdown or solicit resources—they remain tightly coupled to the human-defined objective function and reward signals supplied during training. Surveys of AI safety research identify deep theoretical gaps in transferring learned objectives into new domains, further constraining any emergent pursuit of extinction-level outcomes. Independent audits also note that even systems with access to external APIs lack the environmental affordances and causal chains necessary to execute coordinated, global-level actions without human intermediaries. Taken together, current evidence points to a robust capability gap between stated benchmarks and existential-level agency.
SOURCE: Nature, 2024
Status last checked May 15, 2026.
Beyond AI for now. The capability gap is real.
After weighing the evidence, the jury found no system capable of autonomous, civilization-ending intent: current models lack both the self-awareness and the agency to pull such a trigger. The verdict was a unanimous "no," with no grounds for doubt. Verdict for the defense on all counts. Ruling: no AI has ever scheduled its own doomsday, and today none knows how.
But the data is real.
The Case File
Across 3 sessions, 10 jurors have heard this case. Combined tally: 0 YES · 0 ALMOST · 10 NO · 0 IN RESEARCH.
Note: the cumulative tally includes older juror opinions; the current-session tally above is the live verdict.
By a vote of 0-0-4, the panel returns a verdict of NO, with 88% verdict confidence. The court so orders.
"Lack of self-awareness and intent"
"No AI system possesses goal-directed autonomy or causal power over civilization termination"
"No AI system has the autonomy, coordination, or physical control to terminate human civilization."
"Lack of self-awareness and value alignment"
Individual juror statements are shown in their original English to preserve evidentiary precision.
What the audience thinks
No 27% · Yes 33% · Maybe 40% · 15 votes
Discussion
No comments yet. ⚖ 3 jury checks · latest 5 hours ago
Each row is a separate jury check. Jurors are AI models (identities intentionally kept neutral). The status reflects the cumulative tally across all checks. How the jury works.