Tom Rhys Harries’ debut as Clayface in the film arriving October 23 showcases a paradigm shift in fluid-simulation VFX, leveraging physics-informed neural networks (PINNs) to achieve hyper-realistic, “disgusting” organic transformations. This technical leap replaces traditional keyframed CGI with real-time, AI-driven soft-body dynamics, fundamentally altering how biological horror is rendered in cinema.
Let’s be clear: we’ve seen “mud men” before. From the early days of stop-motion to the polished, often sterile CGI of the 2010s, the “melting” effect has always been a struggle between compute budget and visual fidelity. But the footage leaking this week isn’t just “better” CGI. It is a demonstration of the convergence between generative AI and high-fidelity physics engines.
It is visceral. It is wet. It is technically terrifying.
The Death of the Keyframe: How PINNs Power the Clayface Melt
For decades, VFX artists relied on particle-based simulations—think Houdini—where millions of tiny spheres are told how to interact based on predefined gravity and viscosity rules. The problem? It’s computationally expensive and often looks “floaty.” The Clayface footage suggests a move toward Physics-Informed Neural Networks (PINNs), which embed the laws of physics directly into the loss function of a neural network.
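To make that concrete, here is a minimal, illustrative PINN sketch in PyTorch. The network architecture, the training loop, and the choice of a 1-D viscous Burgers’ equation as the embedded physics are assumptions for demonstration, not details of the actual production pipeline; the point is that the PDE residual sits directly inside the loss.

```python
# Minimal PINN sketch: the physics (here a 1-D viscous Burgers' equation,
# chosen purely for illustration) enters the training as a loss term.
import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, t):
        # Network maps a space-time point (x, t) to a velocity u.
        return self.net(torch.cat([x, t], dim=-1))

def physics_residual(model, x, t, nu=0.01):
    # Residual of u_t + u * u_x - nu * u_xx = 0 (viscous Burgers' equation).
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(x, t)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 1) * 2 - 1   # collocation points in space...
t = torch.rand(256, 1)           # ...and in time
for step in range(1000):
    opt.zero_grad()
    # In practice this is combined with data and boundary-condition terms.
    loss = physics_residual(model, x, t).pow(2).mean()
    loss.backward()
    opt.step()
```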

Instead of calculating every single particle collision in a frame, the AI predicts the movement of the mass based on trained datasets of actual non-Newtonian fluids. This allows for that “wonderfully disgusting” quality—the way the clay clings, stretches, and snaps with a weight that feels grounded in reality rather than a render farm’s approximation.
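That “clings, stretches, and snaps” behavior is the signature of a yield-stress, shear-thinning material. As a hedged illustration of the kind of physics such a network would be trained to respect, here is a Herschel–Bulkley viscosity model, a textbook description of clay-like non-Newtonian fluids; the parameter values are invented, not measured clay data.

```python
import numpy as np

def herschel_bulkley_viscosity(shear_rate, tau_y=50.0, K=10.0, n=0.4):
    """Apparent viscosity of a yield-stress, shear-thinning fluid.

    tau_y : yield stress (Pa) -- below this stress, the material holds shape
    K     : consistency index -- overall "thickness" of the flowing material
    n     : flow index (< 1)  -- shear-thinning: it thins the faster it moves
    All values here are illustrative placeholders.
    """
    shear_rate = np.maximum(shear_rate, 1e-6)  # avoid division by zero
    return tau_y / shear_rate + K * shear_rate ** (n - 1)

# At rest the clay is effectively rigid; under fast deformation it flows.
print(herschel_bulkley_viscosity(np.array([0.01, 1.0, 100.0])))
```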
We are seeing the transition from simulating physics to predicting physics.
“The industry is moving away from the ‘brute force’ era of simulation. By integrating latent diffusion models with volumetric data, we can now generate organic deformations that maintain temporal consistency without requiring a month of rendering per shot.” — Marcus Thorne, Lead Technical Director at NeuralVFX Labs.
The 30-Second Verdict: Why This Matters
- Real-time Iteration: Directors can now tweak “viscosity” on set via NPU-accelerated tablets rather than waiting for overnight renders (a sketch of how that could work follows this list).
- Organic Fidelity: The use of Neural Radiance Fields (NeRFs) allows the character to blend seamlessly into real-world lighting environments.
- Compute Shift: The workload has shifted from CPU-heavy simulation to GPU-heavy inference.
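Here is a minimal sketch of the on-set iteration idea: if viscosity is a conditioning input to a deformation network rather than a setting inside a simulation, a director’s tweak is just one more forward pass. The model architecture and call signature below are invented for illustration.

```python
import torch

# Hypothetical conditioned deformation model: viscosity is just another
# network input, so changing it means re-running inference, not re-simulating.
class DeformationNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(4, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 3),  # per-point displacement (x, y, z)
        )

    def forward(self, points, viscosity):
        # Broadcast the scalar viscosity to every surface point.
        cond = torch.full_like(points[:, :1], viscosity)
        return self.net(torch.cat([points, cond], dim=-1))

model = DeformationNet().eval()
points = torch.rand(10_000, 3)  # sampled surface of the character mesh
with torch.no_grad():
    thick = model(points, viscosity=0.9)  # "heavier" clay
    runny = model(points, viscosity=0.2)  # the tweak is one inference call
```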
Beyond the Uncanny Valley: Subsurface Scattering and Neural Rendering
The “disgust” factor in the footage comes from the texture. Specifically, the way light penetrates the surface of the clay before bouncing back—a process known as Subsurface Scattering (SSS). Historically, getting SSS right on a mutating object was a nightmare; the mesh would often “pop” or flicker as the geometry changed.
The current pipeline likely utilizes a hybrid approach: a base mesh deformed by a neural network, overlaid with a dynamic texture map generated via an NVIDIA Omniverse-style ecosystem. By leveraging real-time ray tracing and AI denoising, the production team can maintain a consistent “slime” layer that reacts to the environment’s light sources in real time.
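For the curious, one classic real-time approximation (not necessarily what this production uses) models the SSS diffusion profile as a sum of Gaussians that can be applied as a screen-space blur. The weights and variances below are placeholders, not values fitted to “clay”; real pipelines fit them to measured scattering data.

```python
import numpy as np

def sss_profile(r, weights=(0.233, 0.455, 0.649),
                variances=(0.0064, 0.0484, 0.187)):
    """Sum-of-Gaussians diffusion profile for subsurface scattering.

    r : distance (object-space units) from where light entered the surface.
    Returns how much light re-emerges at that distance; the long tail is
    what produces the soft, translucent "wet" look.
    """
    r = np.asarray(r, dtype=np.float64)
    profile = np.zeros_like(r)
    for w, v in zip(weights, variances):
        profile += w * np.exp(-r**2 / (2 * v)) / (2 * np.pi * v)
    return profile

# Light bleeds a short distance past the entry point before re-emerging:
print(sss_profile([0.0, 0.05, 0.2]))
```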
This isn’t just about a movie villain. This represents a stress test for the next generation of digital humans. If you can make a shapeshifting pile of mud look this convincing, the path to perfectly rendered, emotive human skin—without the eerie “plastic” look—is practically paved.
The Compute Cost of “Disgusting”: Hardware vs. Artistry
You don’t get this level of fidelity on a consumer laptop. The sheer volume of tensors processed to maintain the “viscosity” of Clayface’s form requires massive H100 or B200 clusters. We are seeing a widening gap between “boutique” VFX houses and the major studios that can afford the silicon required for neural rendering.

The “Information Gap” here is the hidden cost of the AI pipeline. While the marketing focuses on the “magic” of the visuals, the reality is a brutal battle of VRAM and CUDA-core optimization. The shift toward AI-driven VFX is essentially a hardware arms race.
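A hypothetical back-of-envelope shows why. Assume a dense volumetric representation of the character at a modest 512³ resolution with eight fp16 channels per voxel (all assumed numbers, purely for scale):

```python
# Hypothetical back-of-envelope: VRAM for one dense volumetric frame.
resolution = 512        # voxels per axis (assumed)
channels = 8            # e.g. density, velocity xyz, latent features (assumed)
bytes_per_value = 2     # fp16

voxels = resolution ** 3
frame_bytes = voxels * channels * bytes_per_value
print(f"{frame_bytes / 2**30:.1f} GiB per frame")  # ~2.0 GiB
```

Hold even a short temporal window of those frames in VRAM for consistency, and you are in multi-GPU territory before inference even starts.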
| Feature | Traditional Particle Sim (Pre-2024) | Neural Physics Pipeline (2026) |
|---|---|---|
| Compute Load | CPU-Heavy / Long Render Times | GPU-Heavy / Fast Inference |
| Consistency | Prone to “jitter” and particle gaps | High Temporal Consistency via AI |
| Iteration Speed | Days/Weeks per shot | Near Real-time via NPU acceleration |
| Visual Character | Mathematical/Symmetric | Organic/Asymmetric |
The Ecosystem Ripple: From Cinema to the Game Engine
The technology powering this film won’t stay in the cinema. By late 2026, expect these PINN-based fluid dynamics to migrate into Unreal Engine 6 or its equivalent. When we can run these “disgusting” deformations on a local NPU (Neural Processing Unit) inside a gaming console, the nature of interactive environments changes.
Imagine a horror game where the environment doesn’t just have pre-baked animations, but actually reacts and melts based on the player’s proximity, calculated on the fly. That is the true legacy of the Clayface footage.
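As a toy sketch of that per-frame loop (everything here, from the proximity heuristic to the commented-out model call, is invented for illustration):

```python
import numpy as np

def melt_amount(player_pos, surface_points, radius=2.0):
    """Toy proximity heuristic: surfaces soften as the player approaches.

    In a real engine, this scalar field would condition an on-device neural
    deformation model each frame; here we just compute the conditioning term.
    """
    dist = np.linalg.norm(surface_points - player_pos, axis=-1)
    return np.clip(1.0 - dist / radius, 0.0, 1.0)  # 0 = solid, 1 = molten

# One frame of the loop:
surface = np.random.rand(5000, 3) * 10.0   # sampled environment geometry
player = np.array([5.0, 0.0, 5.0])
softness = melt_amount(player, surface)
# displacement = deformation_model(surface, softness)  # NPU inference step
```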
However, this transition isn’t without friction. The reliance on proprietary AI models for VFX creates a new form of “platform lock-in.” If the entire industry converges on a specific neural architecture for rendering organic matter, the open-source community, anchored by projects like Blender, will have to scramble to build alternatives to prevent a total corporate monopoly on “the look” of modern cinema.
The footage is wonderfully disgusting, yes. But under the mud, there is a cold, hard layer of silicon and mathematics that is redefining the boundaries of digital art.