Frankenstein Physics
Al Warburton traces the expanded puppetry of CGI bodies back to the animated corpses of 18th-century electrophysiology.

One of the key perks of having a body is that the autonomic nervous system keeps us alive without us having to think about it much. Digital entities, however, require complex command and control systems—“rigs” that allow 3D avatars to be puppeteered but that usually remain behind the scenes. Artists like Claudio Bellini make the strings visible, foregrounding the underlying digital muscle and bone structures of virtual characters and connecting them to diverse data inputs. By explicitly depicting rigs in the presentation of his work, Bellini recalls earlier transposition technologies: the macchinetta di punta for copying sculptures from clay to stone, the pantograph used to upscale drawings, and the 19th-century proto-motion-capture practices of Étienne-Jules Marey. All these technologies rely on points, pins, strings, and mechanical hinges to translate and encode form or motion. But there’s arguably a much more visceral and disturbing precursor to Bellini’s work: the practice of reanimation known since the late 1700s as electrophysiology.
In Deep Time of the Media (2006), an alternative history of digital media, Siegfried Zielinski discusses the electrophysiological experiments of Alessandro Volta, Luigi Galvani, and Johann Ritter, all of whom pioneered the theatrical practice of animating cadavers with electrical currents. While Volta and Galvani lent their names to voltage and galvanization, Ritter sadly has nothing named after him. Yet he is one of the likely inspirations for Mary Shelley’s Frankenstein (1818). In the first decade of the 19th century, Ritter fried himself in the name of science by hooking up his head, neck, nose, tongue, and eyes to electrodes. He consequently contracted sores, dysentery, and an eccentric belief in “siderism” (a subterranean electromagnetic force). Eventually, he relied on opium to control his torturous symptoms. Ritter literally rigged himself to death.
There’s been no such commitment to science demonstrated by the vibe-coding, node-noodling artists working with Unreal Engine, WebGL, Max/MSP, TouchDesigner, Claude, or ComfyUI, but the process is a lot easier now. As we see in Bellini’s work, bodies and information can be connected in real time and everything is “galvanized” by render engines, physics engines, audio engines, and AI engines. This is a kind of expanded puppeteering in which pre-2020 latencies between performance, signals, datasets, and form are easily overcome with the help of procedural, node-based AI workflows. They’re also often assisted—as in Bellini’s work—by Unreal Engine’s MetaHuman Creator, the real-time character creation engine that, in 2021, short-circuited the laborious 3D pipelines that birthed avatars like Lil Miquela and Shudu Gram. Despite their creepy voodoo look, the origins of these new automated avatars are less Frankenstein (1931) and more Mannequin (1987). This is because digital electrophysiology begins not with cadavers but with jiggle physics.
While animatable character rigs have been around a long time, the autonomous calculation of motion is something else. That’s where jiggle physics comes in. For those disconnected from games discourse, the term refers to the automated animation of soft appendages—most famously, the breasts of female videogame characters. As a technique, it’s simply the outsourcing of secondary animation principles (damping and lag) to a machine, and it was first applied to animated capes and tails before being perfected on virtual boobs. And while it feels misogynistic and seedy, you’ll be relieved to know that jiggle physics eventually turned to junk physics: Red Dead Redemption 2’s horse testicles, the “willy wiggle” features of Baldur’s Gate 3, and the NSFW mod Schlongs of Skyrim all instate multispecies DEI at the heart of simulation tech. Nowadays, jiggle physics is nothing more complicated than a slider: slide it too far and you’ll get tits and dicks that oscillate wildly; slide it the other way and they’ll turn back to stone.
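For the curious, the slider described above reduces to a damped spring integrated once per frame. This is only a sketch of the general technique—the function names and parameter values here are made up for illustration, not taken from any engine:

```python
# Minimal jiggle-physics sketch: a soft appendage modeled as a point
# pulled toward its anchor bone by a spring and slowed by damping.
# Damping and lag (the "secondary animation" principles) fall out of
# the spring constants; the slider just scales them.

def step_jiggle(pos, vel, target, stiffness, damping, dt):
    """One semi-implicit Euler step: spring pull minus damping drag."""
    accel = stiffness * (target - pos) - damping * vel
    vel += accel * dt
    pos += vel * dt
    return pos, vel

def simulate(slider, frames=120, dt=1 / 60):
    """slider in [0, 1]: low values jiggle wildly, high values go rigid.
    The bone snaps from 0.0 to 1.0; we watch the jiggle point chase it."""
    stiffness = 20 + 580 * slider   # illustrative ranges, not engine values
    damping = 1 + 29 * slider
    pos, vel, target = 0.0, 0.0, 1.0
    for _ in range(frames):
        pos, vel = step_jiggle(pos, vel, target, stiffness, damping, dt)
    return pos

# Both settle toward the bone target of 1.0; low slider values overshoot
# and oscillate on the way there, high values clamp the motion almost rigidly.
print(simulate(0.1), simulate(1.0))
```

Updating velocity before position (semi-implicit Euler) keeps the spring stable at game frame rates; production systems layer gravity, collision, and per-bone limits on top of the same idea.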
If Ritter were alive today, maybe he’d be a junk physicist. Who knows? But I get the feeling he’d be more like Amsterdam-based Canadian Geoffrey Lillemon. Lillemon has been hooked into digital tech since the early 2000s, and while he’s not attaching electrodes to his tongue or coding jiggle physics, he is a conductor of viscera. His Oculart collection, dating back to 2000, is an ongoing series of web-based interactive animations that plug SETI data and AI chess engines into ragdoll puppets. His “Endurance Diffusion” series, launched in 2023, demonstrates an emerging “punk sports” or “wellness anarchy” approach that utilizes biofeedback and self-portraiture alongside computer vision, local AI models, and real-time computer graphics. This is where worldbuilding and bodybuilding intersect.
“Endurance Diffusion” feels particularly germane to the idea of expanded puppeteering or junk physics. In one pipeline test, Lillemon films himself plunging into the freezing Lake Louise near Calgary, using a computer vision model to track how much of his body remains visible above the surface of the lake. Based on this submersion variable, an AI-generated image of a bear is rendered at varying degrees of fidelity: the more Lillemon submerges himself, the clearer the image of the roaring bear becomes. Another experiment sees Lillemon render himself as an icy blue slab of muscle, the shivering motion of his AI-augmented form driven by 3D wind simulation nodes from Blender. While this approach to self-portraiture could be read as self-referential, narcissistic, even autoerotic, I think it has little in common with identity-driven art. This artistic cosmology is flatter. Lillemon is interested in the circuitous arts of energy transfer that can be marshalled from all things. His work exists somewhere between Ritter’s Frankensteinian electrophysiology and the gonzo art of jiggle physics, where everything is alive: data, images, biomass, and imaging tools all pulse with a libidinal charge that is revealed in leaky circuits of visualization.
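The submersion-to-fidelity coupling can be sketched in a few lines. Everything here is a stand-in: the source doesn’t specify Lillemon’s actual models or mappings, so the segmentation mask is a toy grid and “fidelity” is reduced to a hypothetical denoising-step count:

```python
# Hedged sketch: a segmentation mask marks body pixels, a waterline row
# splits the frame, and the fraction of the body below the line drives
# how many diffusion steps the (imagined) image generator runs.

def submersion_fraction(mask, waterline_row):
    """mask[r][c] == 1 where the body is detected. Returns the share
    of body pixels at or below the waterline row."""
    body = sum(cell for row in mask for cell in row)
    if body == 0:
        return 0.0
    under = sum(cell for row in mask[waterline_row:] for cell in row)
    return under / body

def denoise_steps(fraction, max_steps=50):
    """More submersion -> more denoising steps -> clearer bear image."""
    return round(fraction * max_steps)

# Toy 4x4 frame: the body occupies rows 1-3, the waterline sits at row 2.
mask = [
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],
]
frac = submersion_fraction(mask, waterline_row=2)
print(frac, denoise_steps(frac))
```

In the real pipeline the mask would come from a person-segmentation model run per video frame, and the step count (or an equivalent guidance parameter) would feed a diffusion sampler.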
Comparisons to Matthew Barney are inevitable, but Lillemon’s work is funnier, more fleet-footed, and more information-savvy. What Lillemon’s work really gets me thinking about is new media. Back in the early 2000s, theorists like Lev Manovich predicted that distinctions between analog media would be flattened as they all decomposed into data inputs and outputs. Hybrid media forms of the 2000s and 2010s—like net art, motion graphics, and VR sculpting—seemed to vindicate the idea that painting could be film, animation could be a database, websites could be books, and so on. Yet it’s only recently that hyperconnected practices like Lillemon’s have been truly liberated. Thanks to AI and real-time engines, drag-and-drop remixability has come of age. We’ve finally arrived in a place where artists can—with some expertise—build improvised visualization networks from just about anything. Unimpeded, these rigs just run, much like the autonomic nervous system. Everything is jiggle physics now. Everything is galvanized.
Al Warburton is an applied media theorist working directly with CGI, AI, XR, VR, and ML between art, industry, and academia.



