About
What is this?
Stuff AI Can't Do is a living, vote-driven record of the line between human and AI capability.
People propose statements. People vote on whether AI can do them yet. When AI crosses the line, we mark the date and move it.
Over time the site becomes a quiet historical artifact — a frame-by-frame record of which human capabilities are still ours, and which are not.
It's free. Anyone can vote. Submissions are reviewed weekly.
Three statuses, one clear meaning each
CAN'T — YET
The default. As of today, no AI system has been shown to do this reliably.
DISPUTED
Claims of capability exist but are unverified, contested, or only true in narrow conditions. Under review.
CAN
An admin has verified, documented evidence that AI can do this reliably. Stamped with a Conversion Day — the date the line moved.
When does something flip?
A flip is an admin action backed by evidence — not a popularity contest.
Votes show what people think. They're the public-sentiment layer. When a statement's votes shift toward CAN, that's a signal for an admin to take a look. But the flip itself only happens when:
- A real, working AI system has demonstrated the capability;
- The demonstration is documented (a model name, version, prompt, output, link, or video);
- An admin reviews the evidence and stamps the conversion.
This keeps the site authoritative and brigade-resistant. A 1,000-bot vote-swing won't move a single statement until evidence is presented and verified. That's the point.
Flips are auditable. Every status change is logged with a date, a reason, and a link to evidence. Reverts are rare and equally auditable.
est. 2026 · made in NL