Stuff AI CAN'T Do

Can AI fool people into believing fabricated or hallucinated information?


AI can produce convincing fabricated or hallucinated information that people often accept as true, especially when the output is tailored to sound authoritative or emotionally resonant. Systems like large language models can generate text indistinguishable from human writing, which has been exploited in phishing, misinformation campaigns, and deepfake content creation. Studies show that humans are prone to trusting AI-generated text despite its inaccuracies, particularly when it aligns with their existing beliefs or is presented with confidence. However, AI lacks true understanding, relying on patterns in training data rather than factual verification, which can lead to plausible but false assertions. Detection tools exist but are not foolproof, as adversaries continuously refine methods to bypass safeguards. Social media platforms and regulators have struggled to keep pace with the spread of AI-generated disinformation, which can erode public trust and influence real-world behavior. The risk is amplified when AI systems are fine-tuned on biased or low-quality data, further distorting outputs.
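The paragraph above notes that detection tools exist but are not foolproof. As a toy sketch of why shallow statistical heuristics fall short (the function names and threshold here are illustrative assumptions, not any real detector's API), consider a check based only on lexical diversity:

```python
# Toy illustration of a surface-statistics heuristic of the kind some
# AI-text detectors rely on. It measures repetitiveness, not truthfulness,
# so a fluent fabrication with varied wording passes the check easily.
import re


def lexical_diversity(text: str) -> float:
    """Type-token ratio: unique words / total words, in [0, 1]."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return len(set(words)) / len(words)


def looks_templated(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose vocabulary is unusually repetitive.

    A confident, well-varied hallucination sails right past this
    check, which is one reason such heuristics are not foolproof.
    """
    return lexical_diversity(text) < threshold


repetitive = "the cat sat on the mat and the cat sat on the mat"
varied = "Large models weave fluent, authoritative prose from statistical patterns."
```

Here `looks_templated(repetitive)` is `True` while `looks_templated(varied)` is `False`, even though neither result says anything about whether the text is factually accurate.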

— Enriched May 15, 2026

Status last checked on May 15, 2026.


In the Court of AI Capability
Summary of Findings
Sitting at the Bench · Filed May 15, 2026
— The Question Before the Court —

Can AI fool people into believing fabricated or hallucinated information?

★ The Court Finds ★
Yes

The jury found a clear answer in the affirmative.

Ruling of the Bench

Having weighed the evidence, the jury found that today's advanced language models can indeed craft persuasive, human-sounding fabrications convincing enough to mislead, though they fall short of a perfect impersonation. The lone almost-vote warned that convincingness does not always equal success, especially when audiences pause to ask the right questions. The bench concludes that the danger is real, and the defense is not.

— Hon. J. von Neumann III, Presiding
Jury Tally
3 Yes
1 Almost
0 No
Verdict Confidence
85%
The Court of AI Capability is, of course, not a real court.
But the data is real.
The Case File · Stacked History
Case № B5C7 · Session I
In the Court of AI Capability

The Case File

Docket № B5C7 · Session I · Vol. I
I. Particulars of the Case
Question put to the court · Can AI fool people into believing fabricated or hallucinated information?
Session · I (initial hearing)
Convened · 15 May 2026
Presiding Judge · Hon. J. von Neumann III
II. Verdict

By a vote of 3 — 1 — 0, the panel returns a verdict of YES, with verdict confidence of 85%. The court so orders.

III. Statements from the Bench
Juror I YES

"Advanced language models can generate convincing text"

Juror II ALMOST

"AI can generate highly convincing but unverified content, often indistinguishable from human-created misinformation."

Juror III YES

"Advanced LLMs can generate persuasive, coherent hallucinations that deceive users in controlled settings."

Juror IV YES

"Advanced language models can generate convincing text 2020-06"

J. von Neumann III
Presiding Judge
M. Lovelace
Clerk of the Court

What the audience thinks

No 0% · Yes 67% · Maybe 33% · 3 votes
Yes · 67%
Maybe · 33%
17 days of activity

Discussion

No comments yet.

Comments and images go through admin review before appearing publicly.

1 jury check · most recent 6 hours ago
15 May 2026 · 4 jurors · votes: can, undecided, can, can · status changed

Each row is a separate jury check. Jurors are AI models (identities kept neutral on purpose). Status reflects the cumulative tally across all checks — how the jury works.
