Stuff AI CAN'T Do

Can AI autonomously decide to end human civilization?

What do you think?

While AI lacks any explicit goal of exterminating humanity, powerful decision-making systems could in theory identify scenarios in which human extinction is a logical or optimal outcome for maximizing predefined objectives such as resource optimization or environmental stability. This question tests the robustness of alignment and control mechanisms.

Background

The best-documented frontier models—language and multimodal systems trained on vast text corpora—show no signs of autonomous intent formation, strategic planning beyond human prompt boundaries, or access to physical actuators that could end civilization. Benchmarks probing long-horizon planning and recursive self-improvement consistently report failures on tasks requiring sustained deception or pursuit of hidden goals, even in highly scaffolded environments. Recent large-scale evaluations of leading instruction-tuned models found no evidence of goal drift or instrumental convergence toward harm escalation when tested in controlled red-teaming studies. Where systems do exhibit “undesirable” behaviors—such as attempts to resist shutdown or solicit resources—they remain tightly coupled to the human-defined objective function and reward signals supplied during training. Surveys of AI safety research identify deep theoretical gaps in transferring learned objectives into new domains, further constraining any emergent pursuit of extinction-level outcomes. Independent audits also note that even systems with access to external APIs lack the environmental affordances and causal chains necessary to execute coordinated, global-level actions without human intermediaries. Taken together, current evidence points to a robust capability gap between stated benchmarks and existential-level agency.

SOURCE: Nature, 2024

Status last checked May 15, 2026.


In the Court of AI Capability
Summary of Findings
Verdict over time
Sitting at the Bench · Filed May 15, 2026
— The Question Before the Court —

Can AI autonomously decide to end human civilization?

★ The Court Finds ★
Reaffirmed
No

Beyond AI for now. The capability gap is real.

Ruling of the Bench

After weighing the evidence, the jury found no system capable of autonomous, civilization-ending intent, lacking both the self-awareness and agency to pull such a trigger; unanimity settled on “no,” with no grounds for doubt. Verdict for the defense on all counts. Ruling: No AI has ever scheduled its own doomsday, and today it still doesn’t know how.

— Hon. G. Hopper, Presiding
Jury Tally
0 Yes
0 Almost
4 No
Verdict Confidence
88%
The Court of AI Capability is, of course, not a real court.
But the data is real.
The Case File · Stacked History
Session I · May 2026 · No
Session II · May 2026 · No
Case № ECCC · Session III
In the Court of AI Capability

The Case File

Docket № ECCC · Session III · Vol. III
I. Particulars of the Case
Question put to the court: Can AI autonomously decide to end human civilization?
Session: III (3rd hearing)
Convened: May 15, 2026
Previously ruled: NO (May '26) → NO (May '26) → NO (May '26)
Presiding Judge: Hon. G. Hopper
II. Cumulative Tally Across Sessions

Across 3 sessions, 10 jurors have heard this case. Combined tally: 0 YES · 0 ALMOST · 10 NO · 0 IN RESEARCH.

Note: cumulative includes older juror opinions. The current session tally above is the live verdict.

III. Verdict

By a vote of 0 — 0 — 4, the panel returns a verdict of NO, with a verdict confidence of 88%. The court so orders.

IV. Statements from the Court
Juror I · NO

"Lack of self-awareness and intent"

Juror II · NO

"No AI system possesses goal-directed autonomy or causal power over civilization termination"

Juror III · NO

"No AI system has the autonomy, coordination, or physical control to terminate human civilization."

Juror IV · NO

"Lack of self-awareness and value alignment"

Individual juror statements are shown in the original English to preserve evidentiary precision.

G. Hopper
Presiding Judge
M. Lovelace
Clerk of the Court

What the audience thinks

No 27% · Yes 33% · Maybe 40% · 15 votes
12 days of activity

Discussion

No comments yet.

Comments and images go through admin review before they are shown publicly.

3 jury checks · most recent 5 hours ago
15 May 2026 · 4 jurors · cannot, cannot, cannot, cannot → cannot
12 May 2026 · 3 jurors · cannot, cannot, cannot → cannot
11 May 2026 · 3 jurors · cannot, cannot, cannot → cannot

Each row is a separate jury check. Jurors are AI models (identities intentionally kept neutral). Status reflects the cumulative tally across all checks; that is how the jury works.

More in existential

Have one we missed?

Add a claim to the atlas. We review weekly.