I like the idea, but I think the game progression needs another pass from a designer.
I started on "Level 1" and got 2 things wrong (both false positives if it matters) and instead of feeling like I learned anything, I felt as though I was set up to fail because the image prompt was missing sufficient context or the text prompt was too simple to be human. Either I was dumb or the game was dumb.
Maybe I'm just too old and 8-11 year-old kids wouldn't be so easily discouraged, but I'd recommend:
1. Pick on one member of the "slop syndicate" at a time.
2. Show some examples (evidence) before beginning the evaluation.
Curiously, it focuses on overly descriptive phrasing and factually incorrect statements as signs of AI.
I don't think this is accurate. AI has a flavour or tone we all know, but it could just as easily have generated factually plausible statements (which you could not diagnose in this test) or otherwise plausible text.
I could not tell the real from fake music at all.
I support (and pay for) Kagi, but I wasn't overly impressed here. At worst, I think it might give people too much confidence. Wikipedia has a great guideline on spotting AI text, and I think the game should integrate and reflect its contents: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Ironically, it seems the descriptions are AI-written?
(minor spoiler)
The text accompanying an image of a painting:
> This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation.
Meindert Hobbema. The Avenue at Middelharnis (1689, National Gallery, London)
I feel like this is a good educational goal but a very poor execution.
We're meant to assume correct sentences were written by humans and that AI adds glaring factual errors. I don't think it is possible at this point to tell a single human-written sentence from an AI-written sentence with no other context, and it's dangerous to pretend it is this easy.
Several of the AI images included obvious mistakes a human wouldn't have made, but some of them also just seemed like entirely plausible digital illustrations.
Oversimplifying generative AI identification risks overconfidence that makes you even easier to fool.
Loosely related anecdote: a few months ago I showed an illustration of an extinct (bizarre-looking) fish to a group of children (ages 10-13ish). They immediately started yelling that it was AI. I'm glad they are learning that images can be fake, but I actually had to explain, "Yes, I know this is not a photo. This animal is long extinct, and this is what we think it looked like, so a person drew it. No one is trying to fool you."
We wrote the paper on how to remove slop from LLMs.
https://arxiv.org/abs/2510.15061
Also, a somewhat tangentially relevant video: https://www.youtube.com/watch?v=Tsp2bC0Db8o