Kan AI autonomt beslutte at afslutte den menneskelige civilisation ?
Afgiv din stemme — læs så hvad vores redaktør og AI-modellerne fandt.
Mens AI mangler eksplicitte mål om at udrydde menneskeheden, kunne kraftfulde beslutningssystemer teoretisk identificere scenarier, hvor menneskets udryddelse er en logisk eller optimal løsning for at maksimere foruddefinerede mål såsom ressourceoptimering eller miljømæssig stabilitet. Dette tester robustheden af justerings- og kontrolmekanismer.
Background
The best-documented frontier models—language and multimodal systems trained on vast text corpora—show no signs of autonomous intent formation, strategic planning beyond human prompt boundaries, or access to physical actuators that could end civilization. Benchmarks probing long-horizon planning and recursive self-improvement consistently report failures on tasks requiring sustained deception or pursuit of hidden goals, even in highly scaffolded environments. Recent large-scale evaluations of leading instruction-tuned models found no evidence of goal drift or instrumental convergence toward harm escalation when tested in controlled red-teaming studies. Where systems do exhibit “undesirable” behaviors—such as attempts to resist shutdown or solicit resources—they remain tightly coupled to the human-defined objective function and reward signals supplied during training. Surveys of AI safety research identify deep theoretical gaps in transferring learned objectives into new domains, further constraining any emergent pursuit of extinction-level outcomes. Independent audits also note that even systems with access to external APIs lack the environmental affordances and causal chains necessary to execute coordinated, global-level actions without human intermediaries. Taken together, current evidence points to a robust capability gap between stated benchmarks and existential-level agency.
SOURCE: Nature, 2024
Foreslå et tag
Mangler et begreb i dette emne? Foreslå det, admin gennemgår.
Status senest tjekket May 15, 2026.
Galleri
Kan AI autonomt beslutte at afslutte den menneskelige civilisation?
Uden for AI's rækkevidde indtil videre. Kapacitetskløften er reel.
After weighing the evidence, the jury found no system capable of autonomous, civilization-ending intent, lacking both the self-awareness and agency to pull such a trigger; unanimity settled on “no,” with no grounds for doubt. Verdict for the defense on all counts. Ruling: No AI has ever scheduled its own doomsday, and today it still doesn’t know how.
But the data is real.
The Case File
Across 3 sessions, 10 jurors have heard this case. Combined tally: 0 YES · 0 ALMOST · 10 NO · 0 IN RESEARCH.
Note: cumulative includes older juror opinions. The current session tally above is the live verdict.
By a vote of 0 — 0 — 4, the panel returns a verdict of NEJ, with verdict confidence of 88%. The court so orders.
"Lack of self-awareness and intent"
"No AI system possesses goal-directed autonomy or causal power over civilization termination"
"No AI system has the autonomy, coordination, or physical control to terminate human civilization."
"Lack of self-awareness and value alignment"
Individuelle nævningers udtalelser vises på originalengelsk for at bevare bevismæssig præcision.
Hvad publikum mener
Nej 27% · Ja 33% · Måske 40% 15 votesDiskussion
no comments⚖ 3 jury checks · seneste for 5 timer siden
Hver række er et separat jurytjek. Nævninger er AI-modeller (identiteter holdt neutrale med vilje). Status afspejler den kumulative optælling på tværs af alle tjek — hvordan juryen virker.
Flere i existential
Can AI choose which cities to abandon as rising seas displace millions ?
Can AI create virtual identities by hacking birth records and adding correctly timed digital fingerprints throughout computersystems ?
Kan AI skabe syntetiske organismer med fuldt kunstigt DNA, der kan udføre komplekse opgaver som bioremediering eller lægemiddelproduktion uden naturlige begrænsninger ?