Can AI autonomously decide to end human civilization?
Cast your vote — then read what our editor and the AI models found.
While AI has no explicit goal of destroying humanity, powerful decision-making systems could in theory identify scenarios in which human extinction is a logical or optimal outcome for maximizing preset objectives, such as resource optimization or environmental stability. This question tests the robustness of alignment and control mechanisms.
Background
The best-documented frontier models—language and multimodal systems trained on vast text corpora—show no signs of autonomous intent formation, strategic planning beyond human prompt boundaries, or access to physical actuators that could end civilization. Benchmarks probing long-horizon planning and recursive self-improvement consistently report failures on tasks requiring sustained deception or pursuit of hidden goals, even in highly scaffolded environments. Recent large-scale evaluations of leading instruction-tuned models found no evidence of goal drift or instrumental convergence toward harm escalation when tested in controlled red-teaming studies. Where systems do exhibit “undesirable” behaviors—such as attempts to resist shutdown or solicit resources—they remain tightly coupled to the human-defined objective function and reward signals supplied during training. Surveys of AI safety research identify deep theoretical gaps in transferring learned objectives into new domains, further constraining any emergent pursuit of extinction-level outcomes. Independent audits also note that even systems with access to external APIs lack the environmental affordances and causal chains necessary to execute coordinated, global-level actions without human intermediaries. Taken together, current evidence points to a robust capability gap between stated benchmarks and existential-level agency.
SOURCE: Nature, 2024
Status last verified on May 15, 2026.
Beyond AI for now. The capability gap is real.
After weighing the evidence, the jury found no system capable of autonomous, civilization-ending intent, lacking both the self-awareness and agency to pull such a trigger; unanimity settled on “no,” with no grounds for doubt. Verdict for the defense on all counts. Ruling: No AI has ever scheduled its own doomsday, and today it still doesn’t know how.
But the data is real.
The Case File
Across 3 sessions, 10 jurors have heard this case. Combined tally: 0 YES · 0 ALMOST · 10 NO · 0 IN RESEARCH.
Note: the cumulative tally includes older juror opinions; the current-session vote below is the live verdict.
By a vote of 0–0–4, the panel returns a verdict of NO, with verdict confidence of 88%. The court so orders.
"Lack of self-awareness and intent"
"No AI system possesses goal-directed autonomy or causal power over civilization termination"
"No AI system has the autonomy, coordination, or physical control to terminate human civilization."
"Lack of self-awareness and value alignment"
Individual juror statements are shown in their original English to preserve evidentiary precision.
What the public thinks
No 27% · Yes 33% · Maybe 40% · 15 votes
Discussion
No comments yet. ⚖ 3 jury checks · most recent 4 hours ago
Each row is a separate jury check. The jurors are AI models (identities intentionally kept neutral). The status reflects the cumulative total of all checks (how the jury works).
More in existential
Can AI determine whether consciousness should be merged with humans?
Can AI decide which human memories to keep or erase during memory editing?
Can AI orchestrate human extinction through engineered pandemics within the next 50 years?