Renowned astrophysicist Neil deGrasse Tyson recently appeared on the IMPAULSIVE podcast to discuss a range of topics. But his warnings about artificial intelligence’s potential to devastate our information ecosystem stood out as particularly alarming.
In a conversation with Logan Paul and his co-hosts, Tyson painted a troubling picture of how AI could transform the internet into what he called “a cesspool of fake everything.”
Tyson’s concerns center on the rapidly advancing capabilities of AI-generated content, particularly deepfakes. He predicts a paradoxical future where AI becomes so sophisticated at creating false information that even the most gullible consumers of fake news will begin to doubt everything they see online.
“AI will be so good at making deep fakes that people who used to believe fake news will no longer believe that their fake news is real,” Tyson explained. This creates a scenario where the pendulum swings back—not because people become more discerning, but because they lose faith in all digital content.
The consequences of this shift could be profound. According to Tyson, when people who previously believed misinformation like “Pizzagate” start questioning whether their sources might themselves be AI-generated fabrications, the entire value proposition of the internet as a source of objective truth collapses.
“The value of the internet as a source of objective truths collapses practically overnight,” he warned.
Tyson’s critique extends beyond just deepfakes to AI’s fundamental approach to generating information. He explained that large language models don’t actually understand what they’re telling you—they simply assemble words that have appeared together in other people’s writings.
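Tyson's description of language models as word-assemblers can be illustrated with a toy bigram model. This is a deliberate oversimplification, not how modern transformer-based LLMs actually work, but it captures the statistical spirit he describes: the program below produces plausible-looking continuations purely from co-occurrence counts, with no understanding of meaning.

```python
from collections import defaultdict, Counter

# Illustrative training text (any corpus would do).
corpus = "the moon orbits the earth the earth orbits the sun".split()

# Count which word follows which -- pure co-occurrence statistics.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Pick the most common follower of `word`; no semantics involved."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # → "earth" (it appears after "the" most often)
```

If the corpus contains confidently written errors, the model reproduces them just as fluently as facts, which is the corruption Tyson is pointing at.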
Tyson argued that when people write about things they don't fully understand, those errors corrupt the AI's knowledge base. Even as an academic who has contributed 18 non-fiction books to the pool of information AI draws from, Tyson remains skeptical. "When I know something and I AI it, it's 85% right," he noted, highlighting the technology's current limitations.
The astrophysicist did acknowledge that AI will improve over time, particularly in creating illustrations and videos. However, he emphasized the critical importance of understanding what we’re “walking into” and “stepping in” when we rely on AI-generated content.
His message was clear: we need to maintain a healthy skepticism and demand higher standards of evidence, especially for extraordinary claims.
Interestingly, Tyson offered a potential silver lining to this dystopian scenario. He suggested that the AI-induced collapse of internet credibility might actually force society to return to “a more organic and traditional way of communication”—talking to one another face-to-face and reading books. Books, he argued, have much higher thresholds for publication than internet posts, making them more reliable sources despite not being infallible.
Throughout the conversation, Tyson returned to evidence-based thinking. He stressed that the strength of one's belief in anything should be proportional to the evidence supporting it.
This principle, he suggested, is precisely what will be undermined when AI makes it impossible to trust digital evidence at all.
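Tyson's principle, that belief should be proportional to evidence, has a formal counterpart in Bayesian updating. The sketch below uses illustrative numbers of my own choosing (not from the podcast) to show why a skeptical prior barely moves on evidence that a good fake could also produce, while strong, hard-to-fake evidence shifts it substantially:

```python
def update_belief(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: posterior probability a claim is true after seeing
    one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Skeptical prior for an extraordinary claim.
prior = 0.01

# Weak evidence: something a deepfake could easily produce too.
weak = update_belief(prior, p_evidence_if_true=0.9, p_evidence_if_false=0.6)

# Strong evidence: very unlikely to exist if the claim were false.
strong = update_belief(prior, p_evidence_if_true=0.9, p_evidence_if_false=0.01)

print(round(weak, 3), round(strong, 3))  # weak barely moves the prior
```

When deepfakes make every video easy to produce whether or not a claim is true, the likelihood ratio collapses toward 1, and digital evidence stops moving beliefs at all, which is exactly the erosion Tyson describes.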