AI-native game engines are shipping, and they look nothing like Unity

By Oleg Sidorkin, CTO of Cinevva

Three game engines showed up in March that are architecturally different from anything in the Unity/Unreal/Godot lineage. They don't have visual editors. They don't optimize for a human clicking through menus. They're built from the ground up for AI agents to read, write, and control game state.

This isn't "Unity with an AI tab." This is a different species of engine.

AI is already changing 3D game development workflows. The engine layer is next.

The three engines

nAIVE Engine is open-source, written in Rust with WebGPU rendering. Sub-second hot-reload: shaders in under 200ms, scenes under 100ms, scripts under 50ms. Scenes, pipelines, and materials are defined in YAML, which means LLMs can read and generate them without parsing binary formats. It exposes an MCP command interface that lets AI agents control engine functions through JSON-RPC. It ships with first-class Gaussian splatting and headless rendering for automated testing. The entire architecture assumes your primary user might be an AI agent, not a person with a mouse.
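To see why structured text matters here, consider what a YAML scene definition might look like. This is a hypothetical sketch (the field names are mine, not nAIVE's actual schema), but it illustrates the point: an LLM can read, diff, and emit this without parsing any binary format.

```yaml
# Hypothetical scene file in the spirit of a YAML-first engine.
# Field names are illustrative, not nAIVE's real schema.
scene:
  name: demo_level
  entities:
    - id: player
      transform:
        position: [0.0, 1.0, 0.0]
      components:
        - type: mesh
          source: assets/player.glb
    - id: sun
      components:
        - type: directional_light
          color: [1.0, 0.95, 0.9]
          intensity: 3.0
  environment:
    # Gaussian splat referenced as a first-class asset
    splat: assets/courtyard.splat
```

A human can hand-edit this, an agent can generate it, and version control can diff it; none of that is true of a binary scene file.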

Arcane Engine is a code-first 2D engine. Rust core, TypeScript scripting. No visual editor at all. Its philosophy: "code is the scene." Game state is a queryable database rather than a scene tree. It includes a built-in protocol for AI agent interaction. Apache 2.0 licensed.
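"Game state as a queryable database" is worth unpacking. Here is a minimal TypeScript sketch of the idea; the types and API are my own illustration, not Arcane's actual interface.

```typescript
// Minimal sketch of queryable game state in the spirit of
// "code is the scene". Types and method names are illustrative.
type Entity = { id: string; components: Record<string, unknown> };

class World {
  private entities = new Map<string, Entity>();

  spawn(id: string, components: Record<string, unknown>): Entity {
    const e = { id, components };
    this.entities.set(id, e);
    return e;
  }

  // Query entities by the component keys they carry -- the kind of
  // structured lookup an AI agent can run over a protocol, with no
  // scene-tree traversal or GUI selection involved.
  query(...required: string[]): Entity[] {
    return [...this.entities.values()].filter((e) =>
      required.every((c) => c in e.components)
    );
  }
}

const world = new World();
world.spawn("player", { position: { x: 0, y: 0 }, health: 100 });
world.spawn("crate", { position: { x: 5, y: 2 } });

// An agent asks: "which entities are damageable?"
const damageable = world.query("position", "health");
console.log(damageable.map((e) => e.id)); // → ["player"]
```

The contrast with a scene tree is the point: a query returns a flat, serializable answer an agent can reason about, instead of a nested hierarchy a human navigates visually.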

Mirror Engine is in alpha, and it's multiplayer-first with an entity component system. The interesting part: it includes AI text-to-3D generation that produces Gaussian splats from text prompts in about 60 seconds. TypeScript scripting, browser-based "Mirror Lite" client.

These three engines don't share a codebase or a team, but they share a design thesis: the primary interface to a game engine should be structured text, not a GUI.

Why this architecture matters

Traditional game engines evolved to serve a human sitting at a desk. You have a viewport. A hierarchy panel. An inspector. A timeline. Everything is designed around clicking, dragging, and visually placing objects. That workflow is powerful. It's also impossible for an AI agent to use.

When your primary "user" is an LLM, you need different primitives. YAML over binary scene formats. Queryable state over nested scene trees. Protocol-based commands over mouse clicks. Headless operation over window rendering.

This is the same shift that happened in infrastructure when DevOps moved from GUI control panels to infrastructure-as-code. The same thing is happening in game engines, just twenty years later.

nAIVE's MCP interface is the clearest example. MCP (Model Context Protocol) is becoming the standard way AI agents communicate with tools. When an engine speaks MCP natively, any AI agent that supports the protocol can manipulate scenes, adjust parameters, run tests, and iterate on gameplay without a human in the loop. That's not a feature bolted onto a traditional engine. That's a fundamentally different relationship between the engine and its user.
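Concretely, an MCP tool call is just a JSON-RPC 2.0 request. The `tools/call` method is part of the MCP specification; the tool name and arguments below are hypothetical, standing in for whatever engine functions an implementation like nAIVE's actually exposes.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "set_entity_transform",
    "arguments": {
      "entity": "player",
      "position": [0.0, 1.0, 0.0]
    }
  }
}
```

Any MCP-capable agent can emit a message like this and read back a structured result, which is what "without a human in the loop" means in practice.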

Gaussian splatting is another shared signal: both nAIVE and Mirror treat it as a first-class rendering primitive in game development, not an exotic import format.

The bigger picture

These three engines aren't the only signal. Meshy's Black Box demonstrated AI-generated game mechanics at runtime. OpenAI showed a tactical RPG built with Phaser at GDC. The tooling layer between "AI generates something" and "that something runs as a playable game" is getting thinner every month.

At Cinevva, we've been building this bridge from the other direction. Our engine handles rendering, physics, and real-time interaction while AI handles asset generation. The approach is different from nAIVE or Arcane, but the underlying bet is the same: the future game engine needs to speak AI as a first language, not just add it as a plugin.

The traditional engine makers know this too. Unity previewed AI game creation tools at GDC. Roblox launched AI-powered 4D model creation. But there's a meaningful difference between adding AI features to an engine designed for humans and designing an engine where AI is the primary interface.

What I think happens next

Most of these engines won't survive. That's normal for a new category. But the design patterns will: YAML-based scene definitions, MCP interfaces for agent control, queryable game state, headless operation. These ideas will get absorbed into mainstream engines within two years.

The engine that wins this era probably doesn't exist yet. But the architectural DNA is being written right now, in these three projects and a handful of others. The question isn't whether game engines will become AI-native. It's whether the transformation comes from inside the incumbents or from new entrants that designed for agents from day one.
