Researchers discover that feeding AI systems low-quality internet content causes lasting cognitive damage.
In a striking parallel to human psychology, artificial intelligence systems can develop their own form of “brain rot” from consuming too much low-quality internet content, according to new research from multiple U.S. universities.
The study, conducted by researchers from Texas A&M, UT Austin, and Purdue University, found that when large language models (LLMs) like those powering ChatGPT are trained on viral social media posts, they experience significant and persistent cognitive decline.
“We were inspired by the Oxford Word of the Year 2024 – ‘brain rot’ – which describes how mindless scrolling damages human cognition,” the researchers explain. “We wondered: could the same thing happen to AI?”
To test their hypothesis, the team fed four different AI models a diet of Twitter posts, separating them into “junk” (short, highly viral content) and “control” (longer, less popular posts). The results were alarming.
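The junk/control split described above can be sketched in a few lines. This is an illustrative approximation only: the thresholds, field names, and engagement formula below are assumptions for demonstration, not the study's actual criteria.

```python
# Hypothetical sketch of a "junk" vs "control" data split.
# Thresholds and engagement metrics are illustrative assumptions,
# not taken from the study itself.

def split_posts(posts, max_junk_length=80, min_junk_engagement=1000):
    """Partition posts into 'junk' (short, highly viral) and
    'control' (longer, less popular) buckets."""
    junk, control = [], []
    for post in posts:
        is_short = len(post["text"]) <= max_junk_length
        is_viral = post["likes"] + post["retweets"] >= min_junk_engagement
        if is_short and is_viral:
            junk.append(post)
        else:
            control.append(post)
    return junk, control

sample = [
    {"text": "lol ratio", "likes": 50_000, "retweets": 12_000},
    {"text": "A longer thread unpacking the methodology of a new "
             "training-data study in considerable detail.",
     "likes": 40, "retweets": 3},
]
junk, control = split_posts(sample)
```

In this toy run, the short viral post lands in the junk bucket and the longer, low-engagement post in the control bucket, mirroring the two training diets the researchers compared.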
Models trained on junk content showed dramatic drops in reasoning ability – in one test, performance fell from 74.9% to just 57.2%. They also struggled with reading comprehension, became less safe in their responses, and even developed what researchers termed “dark personality traits,” scoring higher on tests for narcissism and psychopathy.
Perhaps most concerning, the primary symptom was “thought-skipping” – the AI models stopped showing their work and jumped straight to conclusions, much like a student who’s stopped reading carefully and just guesses at answers.
“What’s particularly troubling is that the damage appears to be persistent,” notes the research team. Even after extensive retraining with high-quality data, the models couldn’t fully recover their original capabilities.
The findings carry significant implications as AI systems increasingly learn from internet data. With social media algorithms optimized for engagement rather than quality, and AI models being continuously updated with fresh web content, the research suggests we may be inadvertently degrading the very systems we’re building.
“This reframes data curation as a safety issue,” the researchers conclude. “We need routine ‘cognitive health checks’ for deployed AI systems.”
The study also revealed a surprising insight: it wasn't just the brevity of viral posts that caused problems; popularity itself appeared to be toxic to AI cognition. This suggests that the very metrics social media platforms optimize for may be fundamentally incompatible with maintaining intelligent, thoughtful AI systems.
As AI becomes more integrated into daily life, this research serves as a warning: the quality of data we feed our artificial intelligence systems matters just as much as the quantity, and a diet of social media junk food may be creating a generation of cognitively impaired AI.