Stanford researcher Joon Sung Park has emerged from stealth mode with Simili, a $100 million venture that promises to revolutionize market research through AI-powered simulations of human societies.
The concept builds on Park’s earlier Smallville experiment, where 25 AI agents lived simulated lives in a digital village. When researchers gave one character the idea to organize a Valentine’s Day party, the information spread organically through the community, mimicking real-world social dynamics.
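The dynamic that made Smallville notable, information spreading agent-to-agent rather than by broadcast, can be reduced to a toy diffusion model. The sketch below is not Smallville's actual code (which uses LLM-driven agents); it is a minimal stand-in where one of 25 agents starts with a piece of news and passes it along through random encounters.

```python
import random

def simulate_spread(n_agents=25, steps=20, share_prob=0.5, seed=0):
    """Toy information-diffusion model: agent 0 starts with a piece
    of news (say, a party invitation); each step, every agent talks
    to one random neighbor, and informed agents may pass it on."""
    rng = random.Random(seed)
    informed = {0}               # only agent 0 knows at the start
    history = [len(informed)]    # informed count after each step
    for _ in range(steps):
        for a in range(n_agents):
            b = rng.randrange(n_agents)
            if a != b and a in informed and rng.random() < share_prob:
                informed.add(b)
        history.append(len(informed))
    return history

if __name__ == "__main__":
    print(simulate_spread())
```

Even this crude version reproduces the qualitative behavior Park's team observed: the count of informed agents follows an S-curve, slow at first, then rapid, then saturating as the village runs out of people who haven't heard.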
Now, Simili scales this dramatically, proposing to simulate entire demographics, cities, and societies to predict human behavior.
The project has attracted heavyweight investors including OpenAI co-founder Andrej Karpathy, Stanford’s Fei-Fei Li, and Quora CEO Adam D’Angelo. Major corporations CVS Health and Telstra are already using the platform.
Park reports 85% accuracy in predicting analyst questions during simulated earnings calls, suggesting the technology works frighteningly well.
The most immediate concern involves data collection. Creating accurate digital twins requires vast amounts of personal information: transaction logs, communication transcripts, behavioral patterns, and social interactions.
While Simili hasn’t detailed its data sourcing methods, the accuracy of these simulations depends on intimate knowledge of real people’s lives.
Simili’s marketing pitch emphasizes testing campaigns and products before launch. But this same technology could identify psychological vulnerabilities in specific demographics. Companies could engineer messages designed to bypass rational decision-making, tested and refined through thousands of virtual iterations. The platform essentially offers a rehearsal space for behavioral manipulation, optimized through trial and error in digital sandboxes.
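The "rehearsal space" concern is easiest to see as an optimization loop. The sketch below is hypothetical, not Simili's API: `simulated_response_rate` stands in for a simulation call, the virtual population's susceptibilities are invented, and the "message" is a single number. The point is the structure: test a variant against simulated people, keep whatever converts better, repeat.

```python
import random

def simulated_response_rate(message_strength, population, rng):
    """Hypothetical stand-in for a simulation-platform call: each
    virtual respondent 'converts' with probability depending on how
    closely the message matches their susceptibility."""
    hits = sum(1 for s in population
               if rng.random() < max(0.0, 1 - abs(s - message_strength)))
    return hits / len(population)

def refine_message(iterations=200, seed=1):
    """Hill-climb a one-parameter 'message' against a virtual
    population: propose a variant, keep it if the simulated response
    rate improves. This is the trial-and-error loop the platform
    enables, reduced to a toy."""
    rng = random.Random(seed)
    # Invented population: susceptibilities clustered around 0.6.
    population = [rng.gauss(0.6, 0.15) for _ in range(1000)]
    best, best_rate = 0.0, 0.0
    for _ in range(iterations):
        candidate = min(1.0, max(0.0, best + rng.uniform(-0.1, 0.1)))
        rate = simulated_response_rate(candidate, population, rng)
        if rate > best_rate:
            best, best_rate = candidate, rate
    return best, best_rate
```

Nothing in the loop cares whether the message being optimized is a product pitch or a piece of disinformation; the cost of each refinement round is a simulation run, not a real-world campaign.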
Park suggests running simulations to predict how people respond to “health scares.” This capability could easily serve those seeking to engineer such responses rather than merely predict them. Political campaigns, foreign actors, or corporations could test propaganda strategies until finding optimal manipulation techniques.
Park frames this as eliminating the “innovation tax”: startups can test ideas cheaply in simulation rather than through expensive real-world trials. But the same logic applies to harmful innovations.
Bad actors could test disinformation campaigns, social engineering attacks, or exploitative business models at minimal cost, finding the most effective approach before deploying it on real populations.
Perhaps most troubling is the philosophical question of consent. If Simili creates digital versions of real people based on their data, those simulations will make decisions, express opinions, and live virtual lives. The real individuals have no say in how their digital twins behave or what purposes they serve. Your simulated self might be testing responses to products you’d never use or policies you’d oppose.