When AI overrides the artist

By Oleg Sidorkin, CTO of Cinevva

NVIDIA launched DLSS 5 at GDC with a promise: generative AI that reconstructs lighting, materials, and shadows in real time, making games look photorealistic without the performance hit. Then people saw what it actually did to character faces.

The internet called it "yassifying." Grace Ashcroft from Resident Evil Requiem went from a haggard, battle-worn survivor to a smoothed-out, homogenized face that looked like a different character. Leon got the same treatment. The AI decided the original art direction wasn't good enough and "fixed" it.

NVIDIA's DLSS 5 reveal with Resident Evil Requiem. The visual changes sparked immediate backlash.

What NVIDIA shipped and what it actually does

DLSS versions 1 through 4 were upscaling technologies. They rendered at a lower resolution and used AI to fill in the missing pixels. Artists appreciated them because they respected the source image. Your art direction stayed intact. The AI just made it sharper.

DLSS 5 is fundamentally different. It's a video-to-video generative AI system that operates without access to the original game assets, geometry, or scene data. It takes the rendered 2D frame and motion vectors, then generates a new frame with "improved" lighting, materials, and detail.

The key word is "improved." Improved according to whose judgment? The AI's. Not the artist's.

NVIDIA initially described it as having "3D scene understanding." They later clarified it works from 2D frame data only. That distinction matters. The system isn't enhancing what the artists built. It's interpreting a flat image and generating what it thinks should be there. When it encounters a face with deliberate imperfections (scars, grime, stress lines, asymmetry), it tends to smooth them away because its training data associates those features with "lower quality."
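The architectural difference can be sketched with stand-in operations. Everything below is illustrative, not NVIDIA's actual API or model: nearest-neighbour replication stands in for learned upscaling, and a box blur stands in for the generative model's statistical smoothing. The point is where each stage gets its pixels from.

```python
import numpy as np

def upscale(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """DLSS 1-4 style: every output pixel is derived from the
    rendered source, so the authored extremes survive.
    (Nearest-neighbour stands in for the learned reconstruction.)"""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def generative_pass(frame: np.ndarray) -> np.ndarray:
    """DLSS 5 style (hypothetical stand-in): the output is a
    reinterpretation of the frame. A 3x3 box blur models the
    'smoothing': deliberate high-frequency detail is pulled
    toward a local average."""
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

# A one-pixel "scar": a sharp dark line across an otherwise bright face.
face = np.full((5, 5), 200.0)
face[2, :] = 40.0

up = upscale(face)
gen = generative_pass(face)

# Upscaling preserves the extremes the artist authored:
# up still contains exactly 40.0 and 200.0.
# The generative pass does not: the scar is brightened toward
# the skin, and the adjacent skin darkened toward the scar.
```

Running this, `up.min()` is still 40.0 and `up.max()` still 200.0, while in `gen` the scar pixel rises well above 40 and the neighbouring skin drops below 200: the deliberate contrast is averaged away, which is the toy version of what "yassifying" does to a face.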

This is the creative control problem

The 52% of developers who told GDC they think AI is harming the industry aren't all worried about losing their jobs. Many of them are worried about losing control over what their work looks like when it reaches the player.

DLSS 5 makes that fear concrete. There's now an AI sitting in the rendering pipeline between your finished art and the player's screen, and it's actively rewriting your creative decisions. The haggard face you spent weeks perfecting gets smoothed out. The moody lighting you carefully balanced gets "corrected" to be more photorealistic. The specific look you chose gets generalized into whatever the model thinks "good" looks like.

Digital Foundry published an initially positive preview, then released a follow-up titled "Why We Should Have Waited With Our Coverage." Even the tech press that tends to celebrate NVIDIA recognized that something was different about this release.

Larian Studios, makers of Baldur's Gate 3, reportedly dropped some of the generative rendering tools for their next project after fan backlash. When a studio that just won Game of the Year walks away from free rendering technology, the creative control argument isn't theoretical.

Digital Foundry's follow-up on the DLSS 5 debate: "We should have taken more time."

The pattern is the same

I wrote about AI pollution in open source earlier this month. The pattern is identical. Making something has become cheap. Making something good still costs what it always did, and so does evaluating whether it's good.

In open source, it's AI-generated pull requests that look plausible but introduce subtle bugs. In rendering, it's AI-generated frames that look "better" but aren't what the artist intended. Both cases involve AI overriding human judgment with statistically averaged output. Both cases cost someone else time and creative control.

The difference is that a maintainer can reject a bad PR. An artist can't reject DLSS 5 if NVIDIA and the publisher have agreed to enable it. The player's GPU is rewriting their work in real time, and they have no say.

What I think happens next

NVIDIA will ship DLSS 5 in fall 2026. Major publishers including Capcom, Bethesda, Ubisoft, and Warner Bros. have already signed on. The technology will improve and the "yassifying" will become less obvious. The backlash will quiet down because people get used to things.

But the underlying question won't go away: when you add generative AI to the rendering pipeline, who has final say over what the player sees? Right now, the answer is NVIDIA's training data. That should bother anyone who cares about games as an art form.

The distinction matters for how we think about AI tools across the industry. AI that serves the creator's intent is a tool. AI that overrides the creator's intent is something else entirely. DLSS 1 through 4 were tools. DLSS 5 is the first mainstream example of AI inserting its own aesthetic judgment into someone else's art, at the hardware level, without asking.
