Before OpenAI, Sam Altman Argued the Most Powerful Companies Function Like Religions

Long before ChatGPT became a household name, Sam Altman was documenting his philosophy on power, influence, and the future of business. In a 2013 blog post, the future OpenAI CEO shared a quote that would prove prophetic: “Successful people build companies. More successful people build countries. The most successful people build religions.”

He then added his own reflection: “It appears to me that the best way to build a religion is actually to build a company.”

This vision wasn’t just a theoretical musing. According to journalist Karen Hao, author of “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI,” Altman deliberately applied this framework when founding OpenAI. Unable to compete on capital or speed to market against established players like Google, he needed something else to attract talent and public goodwill.

“He identified a mission,” Hao explained in a recent Democracy Now interview. “Let me make this a nonprofit and let me give it a really compelling mission.” That mission became: to ensure artificial general intelligence benefits all of humanity.

The strategy worked brilliantly. OpenAI launched as a nonprofit in 2015, positioning itself as a counterweight to profit-driven Silicon Valley AI development. Within eighteen months, however, its leadership recognized that competing at the cutting edge would require massive capital.

Altman’s fundraising talent became crucial. He created a hybrid structure, nesting a for-profit arm within the nonprofit to raise the tens, and eventually hundreds, of billions of dollars needed.

“That is how we ultimately get to present day OpenAI, which is one of the most capitalistic companies in the history of Silicon Valley,” Hao observed.

The religious metaphor extends beyond organizational structure. Hao describes what she calls “quasi-religious movements” within Silicon Valley built around artificial general intelligence. The concept of AGI, recreating human intelligence in computers, rests on shaky ground: there is no scientific consensus on what human intelligence even is.

Yet believers divide into two camps: “boomers,” who think AGI will bring utopia, and “doomers,” who believe it will destroy humanity. Both agree that it is coming soon and that they must be the ones to control it.

When Hao asked believers to explain specifically how AGI would help struggling communities, their answers faltered. One researcher enthusiastically claimed AGI would make “everything perfect” but couldn’t articulate how it would feed people who lack food. His solution involved trillion-dollar cash payouts, though he couldn’t explain which institutions would distribute them.

Altman himself navigates both camps skillfully. “When I asked boomers, is Altman a boomer? They said yes,” Hao noted. “When I asked doomers, is Altman a doomer? They said yes.”

This adaptability has served Altman well as OpenAI evolved from nonprofit research lab to commercial powerhouse. The company that once championed transparency became highly secretive. The organization that rejected commercial intent secured a billion dollars from Microsoft. The company whose mission is to benefit all humanity now pursues contracts with the defense industry and foreign governments.

Yet the religious framing persists. OpenAI markets its technology as transformative, world-changing, inevitable. Employees Hao interviewed described feeling part of something larger than themselves, despite growing concerns about the gap between public values and private operations.

The approach has proven remarkably effective at attracting both talent and capital. It has also, Hao argues, enabled OpenAI to extract vast resources, from intellectual property to public water supplies for data centers, while positioning itself as working toward a higher purpose.

Whether Altman truly believed that quote about religions and companies in 2013, or simply recognized a useful framework, his application of it helped create one of the most influential technology companies of the decade.