Enter the Darksim
Al Warburton sees new world model architectures as AI’s teen goth phase.
Slop floats, and as much as we may wish otherwise, it continues to steer popular discourse. Take TikTok’s Fruit Love Island, for example, whose creator recently gained three million followers in nine days and inspired thousands of hot takes before imploding with an expletive-laden tirade in which they threatened their audience and described the backbreaking hours they’d invested in creating their nascent media empire. There’s a temptation to attend to this new kind of hype-driven visual culture critically, but my attention is elsewhere. Behind the scenes, a handful of big tech firms are racing to define the next generation of AI through the construction of “world models”: virtual playgrounds in which AI agents can be deployed to rehearse real-world chains of cause and effect. All of these projects—from companies like World Labs, Nvidia, Meta, Runway, OpenAI, and Google—attempt to remedy the inconsistent outputs of LLMs and diffusion-based generative AI by tweaking training processes, scraping richer data, scaling rapidly, and hyping products to attract users, investors, and industry partners. But like the bright fireworks of Fruit Love Island, these projects burn out quickly, leaving darker landscapes in their wake.
Most conspicuously, OpenAI’s video-focused social media app Sora folded just eight months after launching, nixing a billion-dollar Disney partnership in the process. OpenAI was vague about the closure, but it was clear that the project yielded neither consistent “worlds” nor consistent users. Another example is Google DeepMind’s Genie 3, which was released even more recently and already seems doomed to a similar fate. In contrast to Sora’s pure video approach, Genie is a text-to-video game simulator that allows users to describe an environment and an avatar, then pilot that avatar for up to sixty seconds of inconsistent “gameplay.” Google rubbed Genie’s lamp with their billion-dollar might and hoped that the model would spontaneously build the kind of crafted game architecture normally forged by thousands of problem-solving humans. But it didn’t play out. Genie still exhibits the spatiotemporal inconsistency and wonky physics that mark slop as slop.
This will come as no surprise to Yann LeCun, Meta’s former AI lead, who resigned from the company last year citing the hysteria of “LLM-pilled” developers getting overly attached to their AI creations and succumbing to a psychotic belief in imminent AGI. LeCun’s new startup, AMI Labs, raised a billion dollars in March to develop a rival slop-eradication method called JEPA, which—like Genie—is trained on video. The key difference is that LeCun’s method (also attributed to Jürgen Schmidhuber) doesn’t output video: JEPA’s mental model remains an abstract nonvisual blob, just like the mental models we all develop as children that help us rehearse action and predict consequences before acting in the world. JEPA is therefore more than just a black box; it’s a prototype for machinic imagination, a nonvisual sandbox that we could (for economy’s sake) call a darksim. It makes sense, too. Why waste time and resources training agentic AIs to become hyperrealist painters if they’ll evolve faster without the burden of representation?
There’s a lot to say about this brewing disconnect between visuality and tech, especially in relation to games and worldbuilding (see my recent Image Empire project for more on this). Both Genie and JEPA are spatial simulation projects, with JEPA working as the physics engine (which calculates Newtonian forces and collisions) to Genie’s render engine (which calculates optics and appearance). It’s clear that the two modalities will inevitably combine, just as they do in game engines. But the current ascendance of JEPA’s nonvisual approach—AMI Labs’ co-founder calls it a “complete cognitive architecture”—speaks to a coming-of-age moment in artificial intelligence. Exit the colorful crayons of gen AI infancy; enter the moody, abstract interiority of adolescence. This is AI’s complicated goth phase, where we—the reluctant parents—have far fewer insights into its inner world. But where a goth teen might get a tattoo, smoke weed, or get pregnant, AI mistakes take far more destructive and unpredictable forms: climate destruction, calamitous geopolitics, drone warfare, and mass layoffs.
Next to these big-picture risks, art seems like a marginal concern. Darksims are the necessary substrate for the industrial rollout of next-wave robotics, yet they’ll also inevitably shape creative practice and artistic subjectivity, particularly for creative technologists and digital worldbuilders. Since the 1960s and ’70s, visual artists have performed a critical triangulation function for tech innovation, but this function depends on visuality as a discursive common ground. Images, networks, and screens have been central to most military-industrial R&D from the ’70s onwards, and creatives have necessarily been in the loop—often to a greater degree than academics and policymakers. But the darksim—a nonvisual operating system for robots—requires no visual interlocutors, which might well frustrate dialectical responses and hinder artists from articulating the grand project of the present.

Some artistic practices are well-suited to this paradoxical situation in which tech cultures both recede from view and reinforce regimes of hypervisibility and hypervisuality. Drawing on queer strategies of critique, Zach Blas satirizes the loopy occultism of founder culture and casts it into a kinky web of sub-dom relations, fetish, and ritual shaming. Others are more directly Machiavellian: Tega Brain and Sam Lavigne, for example, shine a torch back into the roving eye of tracking systems in admirable (yet risky) ways. The creative realpolitik of Berlin’s Tactical Tech lab involves creating critical toolkits for surviving in the all-pervading light of algorithmic surveillance. Founded by artist-designers Marek Tuszynski and Stephanie Hankey, the nonprofit has worked in over ninety countries, is not funded by big tech grants, and concertedly avoids succumbing to techno-mysticism.
Across these examples, we can get a sense of what kinds of strategies are available to artists and media tacticians held at arm’s length from the systems they’re critiquing. Yet that distance might also provide an opportunity for creative respite, and here’s where the darksim finds its most obvious dialectical counterpart in the “dark forest.” Inspired by Cixin Liu’s sci-fi trilogy, which explores the perils of deep space communication and the politics of strategic opacity, the concept has been extrapolated on by Metalabel, Bogna Konior, and others to describe what happens after the supposedly “open” internet starts splintering into fiefdoms. Turning theory into practice, Metalabel’s Dark Forest Operating System promises a new cloistered commons that can cultivate the agency of creative communities away from the hustle of extractive platforms. The implicit hope for a project like DFOS is that it blossoms from a secret Brooklyn afterparty to a platform killer. If the word-of-mouth works and Bad Bunny or Beyoncé come knocking, that might just play out.
Refusal, recovery, satire, deception, retreat—these are all established asymmetric strategies developed and deployed in historical contexts where the concentration of power has prevented direct democratic opposition. Yet it’s clear that we still need artists to preserve open communications with the dark models of machine intelligence. What those artists look like is another matter. Perhaps they will resemble hackers and whistleblowers more than digital worldbuilders. Perhaps they’re not even human. Ultimately, however, it would be better for big tech firms to keep artists in the loop. Artificial general intelligence won’t spontaneously appear because Google or Yann LeCun figured out how to design the perfect virtual bootcamp for agentic AI. My crazy Polymarket bet is that if AGI emerges, it’ll have been coaxed out of its goth shell by weird, sensitive, artistic parents.
Al Warburton is an applied media theorist working directly with CGI, AI, XR, VR, and ML between art, industry, and academia.



