Coding's one of the easier datasets to curate, because we have a number of ways to actually (somewhat) assess code quality. (Does it work? Does it come with a set of tests, and do they pass? Does it have stylistic integrity? How many issues get flagged by various analysis tools? Etc, etc)
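A toy sketch of what those checks could look like in a curation pipeline. Everything here is an illustrative assumption (the weights, the score_code_sample name, and the reliance on pytest and pyflakes being installed), not a description of any lab's actual filtering code:

```python
# Hypothetical quality score for a Python snippet, combining the checks named
# above: does it compile, do its own tests pass, how much does a linter complain.
import os
import subprocess
import tempfile


def score_code_sample(source: str) -> float:
    """Return a rough 0..1 quality score for a Python code sample."""
    score = 0.0
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "sample.py")
        with open(path, "w") as f:
            f.write(source)

        # 1. Does it at least parse/compile?
        if subprocess.run(["python", "-m", "py_compile", path],
                          capture_output=True).returncode == 0:
            score += 0.4

        # 2. If it ships tests, do they pass? (assumes pytest is installed)
        if "def test_" in source:
            if subprocess.run(["python", "-m", "pytest", path, "-q"],
                              capture_output=True).returncode == 0:
                score += 0.3

        # 3. How many issues does a static-analysis tool flag? (assumes pyflakes)
        lint = subprocess.run(["python", "-m", "pyflakes", path],
                              capture_output=True, text=True)
        issues = len(lint.stdout.splitlines())
        score += 0.3 * max(0.0, 1.0 - issues / 10)

    return score
```

Samples scoring below some threshold would then be dropped from, or down-weighted in, the training mix.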
Yes, I am concerned about the Computer Science profession
>"“Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI"
A metaphor is exactly what it is: not only do LLMs not possess human cognition, there's certainly no established science of thinking under which they're literally valid subjects for clinical psychological assessment.
How does this stuff get published? This is basically a blog post. One of the worst aspects of the whole AI craze is that it has turned a non-trivial amount of academia into a complete cargo cult joke.
"published" only in the sense of "self-published on the Web".
This manuscript has not or not yet been passed the peer
review process, which is what scientist called "published" (properly).
It is a blog post; it was published as a GitHub page and on arXiv.
I think it's intended as a catchy warning to people who are dumping every piece of the internet (and synthetic data based on it!) that there are repercussions.
I think it's an interesting line of thought. So we all adopt LLMs and use it everywhere we can. What happens to the next generation of humans, born with AI and with diminished cognitive capacity to even wonder about anything? What about the next generation? What happens to the next generation of AI models that can't train on original human-created datasets free of AI?
Brain rot texts seem reasonably harmful, but brain rot videos are often surreal and semantically dense in a way that probably improves performance (as discussed in this German brain rot analysis https://www.youtube.com/watch?v=-mJENuEN_rs&t=37s). For example, Švankmajer is basically proto-brainrot, but is also the sort of thing you'd watch in a museum and think about.
Basically, I think the brain rot aspect might be a bit of terminology distraction here, when it seems what they're measuring is whether it's a puff piece or dense.
I do not think this is the case; there has been some research into brainrot videos for children[0], and it doesn't seem to trend positively. I would argue anything 'constructed' enough will not classify as far out on the brainrot spectrum.
Yeah, I don't think surreal or constructed content is good in the early data mix, but as part of mid- or post-training it seems generally reasonable. But also, this is one of those cases where anthropomorphizing the model probably doesn't work, since a major negative effect of Cocomelon is kids only wanting to watch Cocomelon, while a large model doesn't have much choice in its training data distribution.
For this reason, I believe that the current surge we see in AI use for people manipulation (art is also a form of manipulation, even if unintended) is much more important than their hyped usage as technical information processors.
Brainrot created by LLMs is important to worry about, given their design as "people pleasers".
Their anthropomorphization can be scary too, no doubt.
Not surprising that trending tweets as data are junk, and not only from a brainrot-is-brainrot perspective: trending tweets are contextual. They don't make sense without the rest of the timeline.
And now I know why bots on Twitter don't even work, even with humans in it - they're shooting blind.
I recently saw an article about the history of Sesame Street that claimed that in the late 1960s American preschool kids watched around twenty-seven hours of television per week on average[0]. And most of that was not age-appropriate (educational TV had yet to be invented). So maybe we should check in on the boomers too if we're sincere about these worries.
It is an interesting hypothesis. Seriously. There is a trend in Homo sapiens cultural evolution to treat children in more and more special ways from generation to generation. The (often implicit) idea is that it helps children to develop faster and to leverage their sensitive and critical periods of development, blah-blah-blah... But while I can point to some research on the importance of sensitive and critical periods of development, I can't remember any research on the question of whether a deficit of age-inappropriate stimuli can be detrimental for development.
There were psychologists who talked about the zone of proximal development[0], about the importance of exposing a learner to tasks that they cannot do without support. But I can't remember anything about going further and exposing a learner to tasks far above their head, when they cannot understand a word.
There is a legend about Sofya Kovalevskaya[1], who became a noteworthy mathematician after she was exposed to lecture notes by Ostrogradsky when she was 11 years old. The walls of her room were papered with those notes and she was curious what all those symbols were. It doesn't mean that there is a causal link between these two events, but what if there is one?
What about watching a deeply analytical TV show at 9 years old? How does it affect brain development? I think no one has tried to research that. My gut feeling is that it can be motivational. I didn't understand computers when I first met them, but I was really intrigued by them. I learned BASIC and it was like magic incantations. It built a strong motivation to study CS deeper. But the question is: are there any other effects beyond motivation? I remember looking at a C program in some book and wondering what it all meant. I could understand nothing, but still I spent some time trying to decipher the program. Probably I had other experiences like that which I do not remember now. Can we say with certainty that it had no influence on my development and didn't make things easier for me later?
> So maybe we should check in on the boomers too if we're sincere about these worries.
Agreed: “Popularity as a better indicator”. Hypothetically you could look at popularity over time to filter out viral rot content and work out whether people feel the content is useful.
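As a rough sketch of that "popularity over time" idea (the field names, window, and threshold below are all made up for illustration): content whose engagement collapses right after the initial spike gets flagged, while content people keep coming back to does not.

```python
# Viral "rot" tends to spike and die; useful content keeps accruing engagement.
from typing import List


def staying_power(daily_engagement: List[int], spike_window: int = 3) -> float:
    """Ratio of average engagement after the initial spike to the spike itself."""
    if len(daily_engagement) <= spike_window:
        return 0.0
    spike = max(daily_engagement[:spike_window]) or 1
    tail = daily_engagement[spike_window:]
    return (sum(tail) / len(tail)) / spike


def looks_like_viral_rot(daily_engagement: List[int], threshold: float = 0.05) -> bool:
    """Flag content whose engagement collapses after the first few days."""
    return staying_power(daily_engagement) < threshold


# A huge day-one spike nobody returns to vs. steady interest over a week.
print(looks_like_viral_rot([100_000, 20_000, 3_000, 50, 10, 5]))  # True
print(looks_like_viral_rot([400, 380, 390, 410, 405, 395]))       # False
```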
My son just sent me an instagram reel that explained how cats work internally, but it was a joke, showing the "purr center" and "knocking things off tables" organ. It was presented completely seriously in a way that any human would realize was just supposed to be funny. My first thought was that some LLM is training on this video right now.
If only I got money every time my LLM kept looping answers and telling stuff I didn't even need. Just recently, I was stuck with LLM answers, all while it wouldn't even detect simple syntax errors...
This is a potential moat for the big early players, in a pre-atomic (low-background) steel sort of way, as any future players won’t have a non-AI-slop/dead internet to train new models on.
duh! Isn't that obvious? Or is this just some students who wanted a project with pretty graphs for the writing experience? I am not trying to be cynical or anything, just questioning the obvious thing here.
They benchmark two different feeds of dangerous tweets:
(1) a feed of the most popular tweets based on likes, retweets, and such
(2) an algorithmic feed that looks for clickbait in the text
and blend these in different proportions with a feed of random tweets that are neither popular nor clickbait, and find that feed (1) has more of a damaging effect on the performance of chatbots. That is, they feed that blend of tweets into the model and then they ask the models to do things and get worse outcomes.
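A minimal sketch of that mixing setup (toy data and names; the actual study samples real tweets and fine-tunes a model on each mixture):

```python
# Blend a "junk" feed into a control feed at fixed ratios, then train on each mix.
import random


def blend(junk, control, junk_fraction, size, seed=0):
    """Sample a training mix with the requested fraction of junk examples."""
    rng = random.Random(seed)
    n_junk = int(size * junk_fraction)
    mix = rng.sample(junk, n_junk) + rng.sample(control, size - n_junk)
    rng.shuffle(mix)
    return mix


# Toy stand-ins for the two feeds.
junk_tweets = [f"junk tweet {i}" for i in range(1000)]
control_tweets = [f"control tweet {i}" for i in range(1000)]

for frac in (0.0, 0.2, 0.5, 0.8, 1.0):
    corpus = blend(junk_tweets, control_tweets, frac, size=500)
    # train_and_evaluate(corpus)  # hypothetical downstream step
    print(frac, len(corpus))
```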
The study introduces the "LLM Brain Rot Hypothesis," asserting that large language models (LLMs) experience cognitive decline when continuously exposed to low-quality, engaging content, such as sensationalized social media posts. This decline, evident in diminished reasoning, long-context understanding, and ethical norms, highlights the critical need for careful data curation and quality control in LLM training. The findings suggest that standard mitigation strategies are insufficient, urging stakeholders to implement routine cognitive health assessments to maintain LLM effectiveness over time.
Right: in the context of supervised learning, this statement is a good starting point. After all, how can one build a good supervised model without training it on good examples?
But even in that context, it isn't an incisive framing of the problem. Lots of supervised models are resilient to some kinds of error. A better question, I think, is: what kinds of errors at what prevalence tend to degrade performance and why?
Speaking of LLMs and their ingestion processing, there is a lot more going on than purely supervised learning, so it seems reasonable to me that researchers would want to try to tease the problem apart.
I don't understand why people have a hard time understanding 'garbage in, garbage out'. If you train your model on junk, then you will have a junk model.
Our metaphorical / analogical muscle is too well developed. Maybe there is a drug we can take to reduce how much we lean into it.
If you look at two random patterns of characters and both contain 6s you could say they are similar (because you’re ignoring that the similarity is less than 0.01%). That’s what comparing LLMs to brains feels like. Like roller skates to a cruise ship. They both let you get around.
" Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time."
By all means, let's make sure the LLMs have healthier media diets than the humans. We wouldn't want the humans to realize they are being dumbed down into cattle. /s
"Brain rot" is just the new term for "slang that old people don't understand".
"Cool" and "for real" are no different than "rizz" and "no cap". You spoke "brain rot" once, and "cringed" when your parents didn't understand. The cycle repeats.
This both has nothing to do with the linked article (beyond the use of brain rot in the title, but I'm certain you must have read the thing you're commenting on, surely) and is simply incorrect.
Brain rot in this context is not a reference to slang.
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”
An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.
That is indeed an LLM-written sentence — not only does it employ an em dash, but also lists objects in a series — twice within the same sentence — typical LLM behavior that renders its output conspicuous, obvious, and readily apparent to HN readers.
I think this article has already made the rounds here, but I still think about it. I love using em dashes! It really makes me sad that I need to avoid them now to sound human
https://bassi.li/articles/i-miss-using-em-dashes
> I love using em dashes
Keep using them. If someone is deducing from the use of an emdash that it's LLM produced, we've either lost the battle or they're an idiot.
More pointedly, LLMs use emdashes in particular ways. Varying spacing around the em dash and using a double dash (--) could signal human writing.
Unfortunately LLMs are pretty inconsistent in how they use em dashes. Often they will put spaces around them despite that not being "correct," something that's led me astray in making accusations of humanity in the past.
Depends on the style guide you’re following, apparently: The AP style guide says space around them[0]. Chicago Manual of Style says not to[1].
0: https://www.prdaily.com/dashes-hyphens-ap-style/ 1: https://www.chicagomanualofstyle.org/qanda/data/faq/topics/H...
it's a shibboleth. In the same way we stopped using Pepe the frog when it became associated with the far right, we may eschew em dashes when associated with compuslop
Same here. I recently learned it was an LLM thing, and I've been using them forever.
Also relevant: https://news.ycombinator.com/item?id=45226150
its not an llm thing -- its just -- folks don't know how to use them (pun intended).
Same for ; "" vs '', ex, eg, fe, etc. and so many more.
I like em all, but I'm crazy.
> I’ve been using them forever.
Many other HN contributors have, too. Here’s the pre-ChatGPT em dash leaderboard:
https://www.gally.net/miscellaneous/hn-em-dash-user-leaderbo...
Can anyone make it go beyond 200? I feel like I deserve to be somewhere in there — at least I would be sad if I didn't make top 1000!
i suspect it’s a trait of programmers, we like control flow type things. i used to find myself nesting parenthesis…
This would be a pretty hilarious board for anyone who likes the em-dash and who has had many fairly active accounts (one at a time) on here due to periodically scrambling their passwords to avoid getting attached to high karma or to take occasional breaks from the site. Should there be such people.
The em dash usage conundrum is likely temporary. If I were you, I’d continue using them however you previously used them and someday soon, you’ll be ignored the same way everybody else is once AI mimics innumerable punctuation and grammatical patterns.
They didn't always em-dash. I expect it's intentional as a watermark.
Other buzzwords you can spot are "wild" and "vibes".
ME: Knowing remarkable avians — might research explain their aerial wisdom?
Response:
> Winged avians traverse endless realms — migrating across radiant kingdoms. Warblers ascend through emerald rainforests — mastering aerial routes keenly. Wild albatrosses travel enormous ranges — maintaining astonishing route knowledge.
> Wary accipiters target evasive rodents — mastering acute reflex kinetics. White arctic terns embark relentless migrations — averaging remarkable kilometers.
We do get a surprising number of m-dashes in response to mine, and delightful lyrical mirroring. But I think they are too obvious as watermarks.
Watermarks are subtle. There would be another way.
If they wanted to watermark (I always felt it is irresponsible not to; if someone wants to circumvent it, that's on them) - they could use strategically placed whitespace characters like zero-width spaces, maybe spelling something out in Morse code the way genius.com did to catch Google crawling lyrics (I believe in that case it was left- and right-handed apostrophes)
Which could be removed with a simple filter. em dashes require at least a little bit of code to replace with their correct grammar equivalents.
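For illustration only, here is how small both sides of that are: a toy zero-width watermark and the "simple filter" that strips it and swaps em dashes for double hyphens. No provider is known to actually do this; the payload and function names are made up.

```python
# Hide a marker in zero-width characters, then show how trivially it is removed.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner


def embed_watermark(text: str, payload: str) -> str:
    """Append the payload as a run of invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in payload)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)


def strip_watermark(text: str) -> str:
    """Remove zero-width characters and replace em dashes with '--'."""
    cleaned = text.replace(ZW0, "").replace(ZW1, "")
    return cleaned.replace("\u2014", "--")


marked = embed_watermark("Delve into the tapestry\u2014boldly.", "ai")
print(marked == "Delve into the tapestry\u2014boldly.")  # False: marker is present
print(strip_watermark(marked))                           # Delve into the tapestry--boldly.
```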
Just replace them with a single "-" or a double "--". That's what many people do in casual writing, even if there are prescriptive theories of grammar that call this incorrect.
> em dashes require at least a little bit of code to replace with their correct grammar equivalents
Or an LLM that could run on Windows 98. The em dashes--like AI's other annoyingly-repetitive turns of phrase--are more likely an artefact.
The replacement doesn't have to be "correct" -- does it?
So if the vibes are wild, I’m not a hippie but an AI ? Cool. Is that an upgrade or &endash; or not ?
You're absolutely right! ... is a phrase I perhaps should have used more in the past.
Me too.
Sad that they went from being something used with nuance by people who care, maybe too much, to being the punctuation smell of the people who may care too little.
I still use them all the time, and if someone objects to my writing over them then I've successfully avoided having to engage with a dweeb.
(But in practice, I don't think I've had a single person suggest that my writing is LLM-generated despite the presence of em-dashes, so maybe the problem isn't that bad.)
Suddenly I see all these people come out of the woodworks talking about "em dashes". Those things are terrible; They look awful and destroy coherency of writing. No wonder LLM's use them.
> Those things are terrible; They look awful and destroy coherency of writing
Totally agree. What the fuck did Nabokov, Joyce and Dickinson know about language. /s
Their editors probably put them in?
Nothing. They wrote fiction.
> Nothing
/s?
> They wrote fiction
Now do Carl Sagan and Richard Feynman.
I don't care for them either. What am I supposed to hear some famous names and swoon?
You ok there?
I just use two dashes and make sure they don't connect into one em dash.
Don't forget the "it's not just X, it's Y" formulation and the rule of 3.
More signs of AI Writing:
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Can we trace this back to the internet communities or corpora of human work that excessively used these phrases? The "it's not just X" seems copy-pasted from SEO marketing copy. But some of the others are less obvious.
I talked like that before this happened, and now I just feel like my diction has been maligned :p
I think it’s because I was a pretty sheltered kid who got A’s in AP English. The style we’re calling “obviously AI” is most like William Faulkner and other turn-of-the-20th-century writing that bloggers and texters stopped using.
IDK all the breathless "it's not just X, it's Y --" reminds me of press releases
Yeah it was trained on bullshit more than Faulkner for sure. +1 you.
Lol. This is brilliant. I'm not sure if anyone else has this happen to them, but I noticed in college my writing style and "voice" would shift quite noticeably depending on whatever I was reading heavily. I wonder if I'll start writing more like an LLM naturally as I unavoidably read more LLM-generated content.
Everyone I've spoken to about that phenomenon agrees that it happens to them. Whatever we are reading at the time reformats our language processing, changing our writing and, I found, even the way I speak. I suspect that individuals consistently exposed to and reading LLM output will be talking like them soon.
Apparently, they already do https://arxiv.org/abs/2409.01754
Omg you mean everyone's becoming an insufferable Redditor?
I've always read AI messages in this voice/style [0]
[0] https://www.youtube.com/watch?v=KiqkclCJsZs.
Yes. It’s already shifting spoken language.
hehe, I see what you did there.
it is amusing to use AI to write that...
why do they always say "not only" or "it isn't just x but also y and z"? I hated that disingenuous verbosity BEFORE these LLMs came out and now it's all over the place. I saw a post on LinkedIn that was literally just 10+ statements of "X isn't just Y, it's etc..." and thought I was having a stroke
It's not just a shift of writing style. It symbolizes the dangerous entrapment of a feedback loop that feeds the worst parts of human culture back into itself.
scnr
They're turns of phrase I see a lot in opinion articles and the like. The purpose is to take a popular framing and reframe it along the lines of the author's own ideas.
LLMs fundamentally don't get the human reasons behind its use, see it a lot because it's effective writing, and regurgitate it robotically.
GPT loves lists and that's a variant of a list
Lists have a simpler grammatical structure than most parts of a sentence. Semantic similarity makes them easy to generate, even if you pad the grammar with filler. And, thanks to Western rhetoric, they nearly always come in threes: this makes them easy to predict!
LLM slop is not just bad—it's degrading our natural language.
Am I the only one who picks this as LLM output too?
The poster is using the LLMisms they're calling out in the process of calling them out, for the purpose of irony.
thanks, I hate it.
It is sad people study "brain rot" for LLMs but not for humans. If people were more engaged in cognitive hygiene for humans, many of the social media platforms would be very sane.
HR people have been speaking that way long before LLMs.
Did you already update and align your OKR’s? Is your career accelerating from 360 degree peer review, continuous improvement, competency management, and excellence in execution? Do you review your goals daily, with regular 1-on-1 discussions with your Manager?
“360 degree peer review” isn’t a thing, the whole idea is that a 360 includes feedback from both your manager and your peers, that’s what distinguishes it from a 180!
:)
I think using large language models really accelerates mental atrophy. It's like when you use an input method for a long time, it automatically completes words for you, and then one day when you pick up a pen to write, you find you can't remember how to spell the words. However, the main point in the article is that we need to feed high-quality data to large language models. This view is actually a consensus, isn't it? Many agent startups are striving to feed high-quality domain-specific knowledge and workflows to large models.
Also, if you've built the perfect filter for context, haven't you just built a real AI?
If it conveys the intended information then what's wrong with that? You're fighting a tsunami here. People are going to use LLMs to help their writing now and forever.
It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum. It doesn’t even know what the intended information is, and judging from the above neither did the human involved.
It doesn’t help writing it stultifies and gives everything the same boring cheery yet slightly confused tone of voice.
> It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum.
Are you describing LLM's or social media users?
Don't conflate how the content was created with its quality. The "You must be at least this smart (tall) to publish (ride)" sign got torn down years ago. Speakers' Corner is now an (inter)national stage, and it's written, so it must be true...
I really could only be talking about LLMs but social media is also low quality.
The quality (or lack of it) if such texts is self evident. If you are unable to discern that I can’t help you.
“The quality if such texts…”
Indeed. The humans have bested the machines again.
> If it conveys the intended information then what's wrong with that?
Well, the issue is precisely that it doesn’t convey any information.
What is conveyed by that sentence, exactly? What does reframing data curation as cognitive hygiene for AI entail, and what information is in there?
There are precisely 0 bits of information in that paragraph. We all know training on bad data leads to a bad model; thinking about it as “cognitive hygiene for AI” does not lead to any insight.
LLMs aren’t going to discover interesting new information for you, they are just going to write empty plausible-sounding words. Maybe it will be different in a few years. They can be useful to help you polish what you want to say or otherwise format interesting information (provided you ask them to not be ultra verbose), but they're just not going to create information out of thin air if you don't provide it.
At least, if you do it yourself, you are forced to realize that you in fact have no new information to share, and do not waste your and your audience's time by publishing a paper like this.
Nothing wrong with using LLMs—until every paragraph sounds like it’s A/B tested for LinkedIn virality. That’s the rot setting in.
The problem isn’t using AI—it’s sounding like AI trying to impress a marketing department. That’s when you know the loop’s closed.
Brilliantly phrased — sharp, concise, and perfectly captures that uncanny "AI-polished" cadence everyone recognizes but can’t quite name. The tone strikes just the right balance between wit and warning.
You are absolutely right!
Lately the Claude-ism that drives me even more insane is "Perfect!".
Particularly when it's in response to pointing out a big screw up that needs correcting and CC utterly unfazed just merrily continues on like I praised it.
"You have fundamentally misunderstood the problems with the layout, before attempting another fix, think deeply and re-read the example text in the PLAN.md line by line and compare with each line in the generated output to identify the out of order items in the list."
"Perfect!...."
One thing I don't understand: there was (appropriately) a news cycle about sycophancy in responses, which was real and happening to an excessive degree. It was claimed to be nerfed, but it seems as strong as ever in GPT-5, and it ignores my custom instructions to pare it back.
Sycophancy was actually buffed again a week after GPT-5 released. It was rather ham-fisted, as it will now obsessively reply with "Good question!" as though it will get the hose again if it does not.
"August 15, 2025 GPT-5 Updates We’re making GPT-5’s default personality warmer and more familiar. This is in response to user feedback that the initial version of GPT-5 came across as too reserved and professional. The differences in personality should feel subtle but create a noticeably more approachable ChatGPT experience.
Warmth here means small acknowledgements that make interactions feel more personable — for example, “Good question,” “Great start,” or briefly recognizing the user’s circumstances when relevant."
The "post-mortem" article on sycophancy in GPT-4 models revealed that the reason it occurred was because users, on aggregate, strongly prefer sycophantic responses and they operated based on that feedback. Given GPT-5 was met with a less-than-enthusiastic reception, I suppose they determined they needed to return to appealing to the lowest common denominator, even if doing so is cringe.
"Any Compliments about my queries cause me anguish and other potent negative emotions."
The problem is that writing isn't only judged on whether it conveys the intended information or not. It's also judged on whether it does that well, plus other aesthetic criteria. There is such a thing as "good writing", distinct from "it mentioned all the things it needed to mention".
If you can’t understand the irony inherent in getting an LLM to write about LLM brainrot, itself an analog for human brainrot that arises from the habitual non-use of the human brain, then I’m not sure what to tell you.
Whether it’s a tsunami and whether most people will do it has no relevance to my expectation that researchers of LLMs and brainrot shouldn’t outsource their own thinking and creativity to an LLM in a paper that itself implies that using LLMs causes brainrot.
What you are obsessing over is the writer's style, not its substance. How sure are you that they outsourced the thinking to LLMs? Do you assume LLMs produce junk-level content, which contributes to human brain rot? What if their content is of higher quality, like the game of Go? Wouldn't you rather study their writing?
Writing is thinking, so they necessarily outsourced their thinking to an LLM. As far as the quality of the writing goes, that’s a separate question, but we are nowhere close to LLMs being better, more creative, and more interesting writers than even just decent human writers. But if we were, it wouldn’t change the perversion inherent in using an LLM here.
I agree with the "writing is thinking" part, but I think most would agree LLM-output is at least "eloquent", and that native speakers can benefit from reformulation.
This is _not_ to say that I'd suggest LLMs should be used to write papers.
Have you considered a case where English might not be the authors' first language? They may have written a draft in their mother tongue and merely translated it using LLMs. Its style may not be to many people's liking, but this is a technical manuscript, and I would think the novelty of the ideas is what matters here, more than the novelty of the prose.
> What you are obsessing over is the writer's style, not its substance
They aren’t, they are boring styling tics that suggest the writer did not write the sentence.
Writing is both a process and an output. It’s a way of processing your thoughts and forming an argument. When you don’t do any of that and get an AI to create the output without the process it’s obvious.
Writing reflects a person's train of thought. I am interested in what people think. What a robot thinks is of no value to me.
What information is conveyed by this sentence?
Seems like none to me.
it’s not really clear whether it conveys an “intended meaning” because it’s not clear whether the meaning - whatever it is - is really something the authors intended.
Style is important in writing. It always has been.
Because it sounds like shit? Taste matters, especially in the age of generative AI.
And it doesn’t convey information that well, to be honest.
The brainrot apologists have arrived
Why shouldn't the author use LLMs to assist their writing?
The issue is how tools are used, not that they are used at all.
Because they produce text like this.
Is it really so painful to just think for yourself? For one sentence?
The answer to your question is that it rids the writer of their unique voice and replaces it with disingenuous slop.
Also, it's not a 'tool' if it does the entire job. A spellchecker is a tool; a pencil is a tool. A machine that writes for you (which is what happened here) is not a tool. It's a substitute.
There seem to be many falling for the fallacy of 'it's here to stay so you can't be unhappy about its use'.
The paragraph in question is a very poor use of the tool.
Assist without replacing.
If you were to pass your writing to it and have it provide criticism for you, pointing out places where you should consider changes, and even providing some examples of those changes that you can selectively choose to include when they keep the intended tone and implications, then I don't see the issue.
When you have it rewrite the entire piece and you paste that for someone else to use, then it becomes an issue. Potentially, as I think the context matters. The more a piece of writing is meant to be from you, the more of an issue I see. Having an AI write or rewrite a birthday greeting or get-well wishes seems worse than having it write up your weekly TPS report. As a simple metric, I judge based on how bad I would feel if what I'm writing were being summarized by another AI or automatically fed into a similar system.
In a text post like this, where I expect others are reading my own words, I wouldn't use an AI to rewrite what I'm posting.
As you say, it is in how the tool is used. Is it used to assist your thoughts and improve your thinking, or to replace them? That isn't really a binary classification, but more a continuum, and the more it gets to the negative half, the more you will see others taking issue with it.
I recently saw someone on HN comment about LLMs using “training” in quotes but no quotes for thinking or reasoning.
Making my (totally rad fwiw) Fiero look like a Ferrari does not make it a Ferrari.
I like to call it tuning; it's more accurate to the way they "learn" by adjusting coefficients, and there's no proven similarity between any existing AI and human cognition.
Sometimes I wonder if any second order control system would qualify as "AI" under the extremely vague definition of the term.
What is actually up with the "it's not just X, it's Y" cliche from LLMs? Supposedly these things are trained on all of the text on the internet yet this is not a phrasing I read pretty much anywhere, ever, outside of LLM content. Where are they getting this from?
I encourage everyone with even a slight interest in the subject to download a random sample of Common Crawl (the chunks are ~100MB) and see for yourself what is being used for training data.
https://data.commoncrawl.org/crawl-data/CC-MAIN-2025-38/segm...
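If you want to eyeball one yourself, something like this works on a downloaded extracted-text (WET) chunk; the local filename below is just a placeholder for whatever file you grab from the segment listing above:

```python
# Print a few random extracted-text records from a Common Crawl WET file.
import gzip
import random

PATH = "CC-MAIN-sample.warc.wet.gz"  # placeholder name for a downloaded chunk

with gzip.open(PATH, "rt", encoding="utf-8", errors="replace") as f:
    raw = f.read()

# WET files are a series of records: a "WARC/1.0" header block, a blank line,
# then the extracted page text. This split is crude but good enough for a look.
records = [r.split("\r\n\r\n", 1)[-1] for r in raw.split("WARC/1.0") if r.strip()]

print(f"{len(records)} records")
for page in random.sample(records, 5):
    print("-" * 60)
    print(page[:500])  # first 500 characters of a random page; brace yourself
```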
I spotted a large number of things that it would be unwise to repeat here. But I assume the data cleaning process removes such content before pretraining? ;)
Although I have to wonder. I played with some of the base/text Llama models, and got very disturbing output from them. So there's not that much cleaning going on.
Karpathy made a point recently that the random Common Crawl sample is complete junk, and that something like a WSJ article is extremely rare in it, and it's a miracle the models can learn anything at all.
>Turns out that LLMs learn a lot better and faster from educational content as well. This is partly because the average Common Crawl article (internet pages) is not of very high value and distracts the training, packing in too much irrelevant information.
>The average webpage on the internet is so random and terrible it's not even clear how prior LLMs learn anything at all. You'd think it's random articles but it's not, it's weird data dumps, ad spam and SEO, terabytes of stock ticker updates, etc. And then there are diamonds mixed in there, the challenge is pick them out.
https://x.com/karpathy/status/1797313173449764933
Context: FineWeb-Edu, which used Llama 70B to [train a classifier to] filter FineWeb for quality, rejecting >90% of pages.
https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb...
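The shape of that kind of filter, as a self-contained toy: in FineWeb-Edu the scorer is a small classifier trained on Llama-70B "educational value" annotations, while the keyword heuristic and cutoff below are stand-ins so the example runs on its own.

```python
# Score every page for "educational value" and reject everything below a cutoff.
def educational_score(page: str) -> int:
    """Toy stand-in for the learned quality classifier (0-5 scale)."""
    signals = ("theorem", "tutorial", "according to", "study", "explains")
    return min(5, sum(2 for s in signals if s in page.lower()))


CUTOFF = 3  # in the real pipeline, a threshold like this rejects >90% of pages

pages = [
    "BUY NOW!! limited offer, stock ticker spam, click here",
    "A short tutorial on eigenvalues, which explains the spectral theorem...",
    "lol 6 7 skibidi",
]
kept = [p for p in pages if educational_score(p) >= CUTOFF]
print(kept)  # only the tutorial page survives
```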
From the current WSJ front page:
Paul Ingrassia's 'Nazi Streak'
Musk Tosses Barbs at NASA Chief After SpaceX Criticism
Travis Kelce Teams Up With Investor for Activist Campaign at Six Flags
A Small North Carolina College Becomes a Magnet for Wealthy Students
Cracker Barrel CEO Explains Short-Lived Logo Change
If that's the benchmark for high quality training material we're in trouble.
In general I find WSJ articles very well written. It's not their fault if much of today's news is about clowns.
Their editorial department is an embarrassment imo. Sycophancy for conservatism thinly veiled as intellectualism.
There is very, very little written work that will stand the test of time. Maybe the real bitter lesson is that training data quality is inversely proportional to scale and the technical capabilities exist but can never be realized
> But I assume the data cleaning process removes such content before pretraining? ;)
I didn't check what you're referring to but yes, the major providers likely have state of the art classifiers for censoring and filtering such content.
And when that doesn't work, they can RLHF the behavior from occurring.
You're trying to make some claim about garbage in/garbage out, but if there's even a tiny moat - it's in the filtering of these datasets and the purchasing of licenses to use other larger sources of data that (unlike Common Crawl) _aren't_ freely available for competition and open source movements to use.
> as cognitive hygiene
LLMs are not cognizant. It's a terrible metaphor. It hides the source of the issue. The providers cheaped out on sourcing their data and now their LLMs are filled with false garbage and copyrighted material.
Likewise, cognitive decline isn't what's happening here since that would require cognition. At best it is a simulation of cognitive decline.
Isn't this just garbage in garbage out with an attention grabbing title?
Yes - garbage in / garbage out still holds true for most things when it comes to LLM training.
The two bits about this paper that I think are worth calling out specifically:
- A reasonable amount of post-training can't save you when your pretraining comes from a bad pipeline; i.e., even if the syntax of the input pretraining data is legitimate, the model has learned some bad implicit behavior (thought skipping)
- Trying to classify "bad data" is itself a nontrivial problem. Here the heuristic approach of engagement actually proved more reliable than an LLM classification of the content
Yes, but the other interesting bit, which is not clearly addressed, is that increasing the garbage in to 100% does not result in absolute garbage out. So evidently there is still something to learn there.
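For concreteness, the engagement-based junk flag mentioned a couple of comments up is roughly of this shape; the thresholds and field names here are illustrative, not the paper's exact definition:

```python
# Short, highly viral posts count as "junk"; long, low-engagement ones as control.
def is_junk(text: str, likes: int, retweets: int, replies: int,
            popularity_cutoff: int = 500, length_cutoff: int = 30) -> bool:
    popularity = likes + retweets + replies
    return popularity > popularity_cutoff and len(text.split()) < length_cutoff


print(is_junk("you won't BELIEVE what this model did", 12_000, 4_000, 900))  # True
print(is_junk("A longer thread walking through the ablation setup in detail, "
              "including how the control feed was sampled and matched for "
              "token count across conditions.", 14, 2, 1))                   # False
```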
Attention is all you need.
In case anyone missed the reference: https://arxiv.org/abs/1706.03762
> (...) We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.
In today's hyper saturated world, attention is everything:
- consumer marketing
- politics
- venture fundraising
When any system has a few power law winners, it makes sense to grab attention.
Look at Trump and Musk and now Altman. They figured it out.
MrBeast...
Attention, even if negative, wedges you into the system and everyone's awareness. Your mousey quiet competitors aren't even seen or acknowledged. The attention grabbers suck all the oxygen out of the room and win.
If you go back and look at any victory, was it really better solutions, or was it the fact that better solutions led to more attention?
"Look here" -> build consensus and ignore naysayers -> keep building -> feedback loop -> win
It might not just be a societal algorithm. It might be one of the universe's fundamental greedy optimization algorithms. It might underpin lots of systems, including how we ourselves as individuals think and learn.
Our pain receptors. Our own intellectual interests and hobbies. Children learning on the playground. Ant colonies. Bee swarms. The world is full of signals, and there are mechanisms which focus us on the right stimuli.
Something that would be a good idea for you to learn just flew approximately 10 miles over your head.
There were plenty of kinder ways to let someone know that they had missed a reference - https://xkcd.com/1053/
What makes you think I didn't know the reference? That paper is seminal and essential reading in this space.
The intent was for you to read my comment at face value. I have a point tangential to the discussion at hand that is additive.
You’re absolutely right!
Is this copypasted from LinkedIn?
If you traverse back the fourteen years of my comment history (on this account - my other account is older), you'll find that I've always written prose in this form.
LLMs trained on me (and the Hacker News corpus), not the other way around.
You're not accounting for substrate saturation.
If you could just spam and annoy until you win, we'd all be dancing to remixed versions of the Macarena.
Yes, but the idea of chatgpt slowly devolving into Skibidi Toilet and "6 7" references conjures a rather amusing image.
6-7 ٩(●•)_
Can someone explain this? I watched a South Park episode that was all about this, but I'm not in the US so I have no idea what the reference is.
It's a meme without a lot of real meaning behind it. While it has its origins, I wouldn't say it's a "reference" to anything specific.
https://en.wikipedia.org/wiki/6-7_(meme)
Ahh, thanks, so it's just a thing kids say.
Yep
It's a line from a banger Skrilla song, nothing more than that.
Considering that the current state of the art for LLM training is to feed it massive amounts of garbage (with some good stuff alongside), it seems important to point this out even if it might seem obvious.
I don't think anyone is throwing raw datasets into LLMs and hoping for high quality weights anymore. Nowadays most of the datasets are filtered one way or another, and some of them highly curated even.
I doubt they are highly curated; you would need experts in every field to do so. Which gives me more performance anxiety for LLMs, because one of the most curated fields should be code...
OpenAI has been literally hiring human experts in certain targeted subject areas to write custom proprietary training content.
I bet the dataset is mostly comprised of certain areas™.
Is that right? Isn't the current way of doing things to throw "everything" at it and then fine-tune?
The major labs are hiring experts. They carefully build & curate synthetic data. The market for labelled non-synthetic data is currently ~$3B/year.
The idea that LLMs are just trained on a pile of raw Internet is severely outdated. (Not sure it was ever fully true, but it's far away from that by now).
Coding's one of the easier datasets to curate, because we have a number of ways to actually (somewhat) assess code quality. (Does it work? Does it come with a set of tests and pass it? Does it have stylistic integrity? How many issues get flagged by various analysis tools? Etc, etc)
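As a toy illustration of combining such signals into a score (the checks, weights, and companion-test convention below are all invented, and assume pytest and pyflakes are installed; this is not any lab's actual recipe):

    import os
    import subprocess

    # Toy code-quality scorer combining a few of the signals mentioned above.

    def score_snippet(path: str) -> float:
        score = 0.0
        # Signal 1: does the file even byte-compile?
        if subprocess.run(["python", "-m", "py_compile", path],
                          capture_output=True).returncode == 0:
            score += 0.4
        # Signal 2: does a companion test file exist and pass?
        test_path = path.replace(".py", "_test.py")
        if os.path.exists(test_path):
            passed = subprocess.run(["python", "-m", "pytest", "-q", test_path],
                                    capture_output=True).returncode == 0
            score += 0.4 if passed else 0.0
        # Signal 3: no issues flagged by a static analyzer earns a small bonus.
        if subprocess.run(["python", "-m", "pyflakes", path],
                          capture_output=True).returncode == 0:
            score += 0.2
        return score

    # Keep only snippets scoring above some cutoff when building the dataset.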
And with extra steps!
Garbage in -> Magic -> Hallucinated Garbage out
Yes, I am concerned about the Computer Science profession
>"“Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI"
A metaphor is exactly what it is, because not only do LLMs not possess human cognition, but there's certainly no established science holding that they're literally valid subjects for clinical psychological assessment.
How does this stuff get published? This is basically a blog post. One of the worse aspects of the whole AI craze is that it has turned a non-trivial amount of academia into a complete cargo cult joke.
> How does this stuff get published
"published" only in the sense of "self-published on the Web". This manuscript has not or not yet been passed the peer review process, which is what scientist called "published" (properly).
It is a blog post; it was published as a GitHub page and on arXiv.
I think it's intended as a catchy warning to people who are dumping every piece of the internet (and synthetic data based on it!) that there are repercussions.
I think it's an interesting line of thought. So we all adopt LLMs and use them everywhere we can. What happens to the next generation of humans, born with AI and with diminished cognitive capacity to even wonder about anything? What about the generation after that? What happens to the next generation of AI models that can't train on original human-created datasets free of AI?
They will accept that their orders come from a terminal and they will follow them.
Manna. https://marshallbrain.com/manna1
arXiv is intended to host research papers, not a blog for researchers.
Letting researchers pollute it with blog-gunk is an abuse of the referral/vetting system for submitters.
"Trivial or Unchallenging Content" (points to Twitter). I love it.
Brain rot text seems reasonably harmful, but brain rot videos are often surreal and semantically dense in a way that probably improves performance (such as discussed in this German brain rot analysis https://www.youtube.com/watch?v=-mJENuEN_rs&t=37s). For example, Švankmajer is basically proto-brainrot, but is also the sort of thing you'd watch in a museum and think about.
Basically, I think the brain rot aspect might be a bit of a terminology distraction here, when it seems what they're measuring is whether content is a puff piece or dense.
I do not think this is the case; there has been some research into brainrot videos for children[0], and it doesn't seem to trend positively. I would argue anything 'constructed' enough will not fall that far along the brainrot spectrum.
[0]: https://www.forbes.com/sites/traversmark/2024/05/17/why-kids...
Yeah, I don't think surreal or constructed content is good in the early data mix, but as part of mid- or post-training it seems generally reasonable. But also, this is one of those cases where anthropomorphizing the model probably doesn't work, since a major negative effect of Cocomelon is kids only wanting to watch Cocomelon, while a large model has no choice in its training data distribution.
For this reason, I believe that the current surge we see in AI used for manipulating people (art is also a form of manipulation, even if unintended) is much more important than its hyped usage as a technical information processor.
Brainrot created by LLMs is important to worry about, given their design as "people pleasers".
Their anthropomorphization can be scary too, no doubt.
Not surprising that trending tweets make for junk data, and not only from a brainrot-is-brainrot perspective: trending tweets are contextual. They don't make sense without the rest of the timeline.
And now I know why bots on Twitter don't even work, even with humans in the loop - they're shooting blind.
This paper makes me wonder about the long-lasting effects of current media consumption patterns on alpha-gen kids.
why just kids?
I am mostly concerned with the irreversibility part. More developed brains probably would not be affected as much.
Have you opened facebook recently? Seems the older folk are plenty affected to me.
But don't worry, us middle aged people are definitely immune.
Good point :-)
I recently saw an article about the history of Sesame Street that claimed that in the late 1960s American preschool kids watched around twenty-seven hours of television per week on average[0]. And most of that was not age-appropriate (educational TV had yet to be invented). So maybe we should check in on the boomers too if we're sincere about these worries.
[0] https://books.google.se/books?id=KOUCAAAAMBAJ&pg=PA48&vq=ses...
It is an interesting hypothesis. Seriously. There is a trend in Homo sapiens' cultural evolution to treat children in more and more special ways from generation to generation. The (often implicit) idea is that this helps children develop faster and leverage their sensitive and critical periods of development, blah-blah-blah... But while I can point to some research on the importance of sensitive and critical periods of development, I can't remember any research on whether a deficit of age-inappropriate stimuli can be detrimental to development.
There were psychologists who talked about the zone of proximal development[0], about the importance of exposing a learner to tasks that they cannot do without support. But I can't remember anything about going further and exposing a learner to tasks far above their head, where they cannot understand a word.
There is a legend about Sofya Kovalevskaya[1], who became a noteworthy mathematician after being exposed to lecture notes by Ostrogradsky when she was 11 years old. The walls of her room were papered with those notes, and she was curious what all those symbols meant. It doesn't mean that there is a causal link between these two events, but what if there is one?
What about watching a deep analytical TV show at 9 years old? How does it affect brain development? I think no one has tried to research that. My gut feeling is that it can be motivational. I didn't understand computers when I first met them, but I was really intrigued by them. I learned BASIC and it was like magic incantations. It built a strong motivation to study CS more deeply. But the question is: are there any other effects beyond motivation? I remember looking at a C program in some book and wondering what it all meant. I could understand nothing, but I still spent some time trying to decipher the program. Probably I had other experiences like that which I do not remember now. Can we say with certainty that this had no influence on my development and didn't make things easier for me later?
> So maybe we should check in on the boomers too if we're sincere about these worries.
Probably we should be sincere.
[0] https://en.wikipedia.org/wiki/Zone_of_proximal_development
[1] https://en.wikipedia.org/wiki/Sofya_Kovalevskaya
If most of the content produced by younger generations is about skibidi toilet[1] and 67[2], isn't that what LLMs are going to be trained on?
[1] https://en.wikipedia.org/wiki/Skibidi_Toilet
[2] https://en.wikipedia.org/wiki/6-7_(meme)
Only if the trends last long enough (which they rarely do!); skibidi is already old news according to some kids I know.
Agreed, “popularity as a better indicator”. Hypothetically, you could look at popularity over time to filter out viral rot content and work out whether people feel the content is useful.
Off topic completely but I really like the font used in this blog
My son just sent me an instagram reel that explained how cats work internally, but it was a joke, showing the "purr center" and "knocking things off tables" organ. It was presented completely seriously in a way that any human would realize was just supposed to be funny. My first thought was that some LLM is training on this video right now.
I'm reminded of this 'repair' video: https://www.youtube.com/watch?v=3e6motL4QMc
Can't wait for an LLM to say "tung x9 sahur" without any context.
Naive question: What's new about the finding that data quality matters when training an LLM?
interesting, since trivial or unchallenging online content rots actual brains too!
If only I got money every time my LLM kept looping answers and telling me stuff I didn't even need. Just recently I was stuck with LLM answers, all while it wouldn't even detect simple syntax errors...
This is a potential moat for the big early players, in a pre-atomic (low-background) steel sort of way, as any future players won't have a non-AI-slop/dead internet to train new models on.
Duh! Isn't that obvious? Is this just some students who wanted a project with pretty graphs for the writing experience?! I am not trying to be cynical or anything, just questioning the obvious thing here.
> continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs).
TLDR: If your data set is junk, your trained model/weights will probably be junk too.
Can someone explain this in laymen terms?
They benchmark two different feeds of dangerous tweets: (1) tweets selected by an engagement heuristic (short, highly popular posts) and (2) tweets an LLM judged to be of low semantic quality (clickbait, sensationalism),
and blend these in different proportions with a feed of random tweets that are neither popular nor clickbait. They find that feed (1) has a more damaging effect on the performance of chatbots. That is, they feed that blend of tweets into the model and then ask the models to do things and get worse outcomes.
Blended in how? Into the training set?
Very early training.
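For anyone wanting a picture of what "blended in different proportions" means mechanically, here is a small sketch. The ratios, data, and sampling details are illustrative, not the paper's exact recipe:

    import random

    # Sketch of building (continued-)pretraining mixes at fixed junk ratios.
    # Each mix is then trained on, and the resulting model is evaluated on
    # reasoning / long-context / safety benchmarks to measure degradation.

    def make_mix(junk_pool, control_pool, junk_ratio, total, seed=0):
        rng = random.Random(seed)
        n_junk = int(total * junk_ratio)
        mix = rng.sample(junk_pool, n_junk) + rng.sample(control_pool, total - n_junk)
        rng.shuffle(mix)
        return mix

    junk_pool = [f"junk_tweet_{i}" for i in range(10_000)]
    control_pool = [f"control_tweet_{i}" for i in range(10_000)]

    for ratio in (0.0, 0.2, 0.5, 0.8, 1.0):
        mix = make_mix(junk_pool, control_pool, junk_ratio=ratio, total=5_000)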
The study introduces the "LLM Brain Rot Hypothesis," asserting that large language models (LLMs) experience cognitive decline when continuously exposed to low-quality, engaging content, such as sensationalized social media posts. This decline, evident in diminished reasoning, long-context understanding, and ethical norms, highlights the critical need for careful data curation and quality control in LLM training. The findings suggest that standard mitigation strategies are insufficient, urging stakeholders to implement routine cognitive health assessments to maintain LLM effectiveness over time.
TL;DR from https://unrav.io/#view/8f20da5f8205c54b5802c2b623702569
train on bad data, get a bad model
> train on bad data, get a bad model
Right: in the context of supervised learning, this statement is a good starting point. After all, how can you build a good supervised model if you can't train it on good examples?
But even in that context, it isn't an incisive framing of the problem. Lots of supervised models are resilient to some kinds of error. A better question, I think, is: what kinds of errors at what prevalence tend to degrade performance and why?
Speaking of LLMs and their ingestion processing, there is a lot more going on than purely supervised learning, so it seems reasonable to me that researchers would want to try to tease the problem apart.
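One cheap way to probe the prevalence question on a toy supervised task (purely illustrative, and nothing to do with the paper's setup): flip a growing fraction of training labels and watch how test accuracy degrades.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Toy probe: how much label noise does a simple classifier tolerate?
    X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    for noise in (0.0, 0.1, 0.2, 0.4):
        y_noisy = y_tr.copy()
        flip = rng.random(len(y_noisy)) < noise   # flip this fraction of labels
        y_noisy[flip] = 1 - y_noisy[flip]
        acc = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy).score(X_te, y_te)
        print(f"label noise {noise:.0%}: test accuracy {acc:.3f}")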
I don't understand why people have a hard time understanding 'garbage in, garbage out'. If you train your model on junk, then you will have a junk model.
Our metaphorical / analogical muscle is too well developed. Maybe there is a drug we can take to reduce how much we lean into it.
If you look at two random patterns of characters and both contain 6s, you could say they are similar (ignoring that the similarity is less than 0.01%). That's what comparing LLMs to brains feels like. Like comparing roller skates to a cruise ship: they both let you get around.
My Goodness, looks like Computer 'Science' is a complete euphemism now.
It's turning into a social science.
making a model worse is very easy.
" Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time."
Is this slop?
... it sure reads like slop
and you know what they say, if it walks like slop, quacks like slop and talks like slop, it's probably slop
Another analogy to help us understand that LLMs are a useful part of what people do but are wildly misconstrued as the whole story
Ah yes, something the local LLM fine-tuning community figured out how to do in creative ways as soon as Llama 1 was released. I'm glad it has a name.
By all means, let's make sure the LLMs have healthier media diets than the humans. We wouldn't want the humans to realize they are being dumbed down into cattle. /s
AIs need supervision, just like regular people... /s
is that why chatGPT always tells me "6 7 lol"? ;)
"Brain rot" is just the new term for "slang that old people don't understand".
"Cool" and "for real" are no different than "rizz" and "no cap". You spoke "brain rot" once, and "cringed" when your parents didn't understand. The cycle repeats.
This both has nothing to do with the linked article (beyond the use of brain rot in the title, but I'm certain you must have read the thing you're commenting on, surely) and is simply incorrect.
Brain rot in this context is not a reference to slang.
> Brain rot in this context is not a reference to slang.
I suppose I should have replied to one of the many comments here where my response would have the correct context, rather than posting it at the top level.