<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Cinevva Blog</title>
        <link>https://app.cinevva.com</link>
        <description>Product updates, creator notes, and experiments.</description>
        <lastBuildDate>Mon, 02 Mar 2026 16:38:53 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <copyright>Copyright 2024-present Cinevva</copyright>
        <item>
            <title><![CDATA[A Breaker Belt: Snake meets Arkanoid, vibe coded in three days]]></title>
            <link>https://app.cinevva.com/blog/2026-02-18-a-breaker-belt-medium</link>
            <guid>https://app.cinevva.com/blog/2026-02-18-a-breaker-belt-medium</guid>
            <pubDate>Mon, 02 Mar 2026 16:36:52 GMT</pubDate>
            <description><![CDATA[# A Breaker Belt: Snake meets Arkanoid, vibe coded in three days

We made a game where your snake is the paddle and the bricks fight back. It shipped on web, mobile, and PC. It took two of us about three days. On and off, not crunching. And it's fun to play.

## The mashup nobody asked for

Snake is about growth and spatial awareness. Arkanoid is about reflexes and angle prediction. They come from completely different design philosophies, and mashing them together sounds like the kind of pitch that gets politely declined.

But [A Breaker Belt](https://app.cinevva.com/engine) makes it work. You're a cosmic serpent, a living arc of neon current threading through an asteroid field of breakable blocks. Your head is the paddle. Your growing tail is both your greatest weapon and your most constant threat. The orbs ricochet off your body to shatter bricks, but one wrong turn into your own tail and you're done.

It's the kind of weird cross-genre experiment that usually dies before anyone gets to play it, because the development cost of finding out whether a weird idea works has traditionally been measured in months. Here it was measured in afternoons.

## What actually shipped

The scope is what makes this interesting. This isn't a game jam prototype with placeholder rectangles and no sound.

The game runs 50 waves deep. That's not 50 variations of the same brick wall. The formations evolve from gentle onboarding arcs into fortress rings that demand angled shots through side gaps, then into layered diagonal mazes with one-tile-wide openings that require precision steering. Explosive bricks blow their neighbors apart. Phantom bricks flicker in and out of existence. Regenerating bricks heal back after you break them. Portal bricks teleport your orbs across the arena. Gravity wells bend your shots into slow, curving hymns. Laser emitters sketch red lines across the void. Mimic bricks look harmless until they decide they're not. By wave 15, you're navigating something that feels less like a puzzle and more like a living system that's learning your habits.

The music isn't a loop. It's a reactive soundtrack that builds with your gameplay. Bass, lead synth, pad, drums all running in E minor, and as the action intensifies, additional layers fade in. When things calm down between waves, ambient pads take over. The music breathes with you. A dedicated composer would spend weeks tuning that kind of responsiveness. Here it was part of the creative flow.

The sound effects aren't samples pulled from a free pack. Every brick shatter, orb bounce, and collision is synthesized in real time. Different pitch for head contacts versus tail contacts. Warm reverb for the spacey feel. When you chain a combo, the audio tells you before the screen does.

And then there's the storytelling. Each wave opens with a narrative beat delivered by AI-generated voice. Wave one: "They call it the Breaker Belt: a ribbon of engineered debris that circles the old star like a warning." By wave 38: "The Belt stops feeling like a wall and starts feeling like a mind. It tests not your reflexes, but your habits." It's 50 chapters of cosmic mythology that makes you care about why you're a snake breaking bricks in space.

The backgrounds evolve too. Early waves are calm indigo starfields with soft meteor rain. By mid-game, aurora bands and nebula clouds appear. Late game drops you into Ion Storm territory with sharp cyan streaks against near-black space. The game communicates progression through atmosphere as much as difficulty.

All of this runs on keyboard, gamepad, or touchscreen. Published to web, mobile, and PC from one codebase.

## The team that wasn't needed

A game with this depth would typically need a game designer, a couple of programmers, an artist, a sound designer, a composer, a level designer, a narrative writer, and QA. Eight or nine people. Three to six months of coordinated work. Standups, Jira tickets, asset pipelines, platform-specific debugging.

Two of us made this in a long weekend on the [Cinevva Engine](https://app.cinevva.com/engine).

The talent required didn't change. The ratio between creative intent and implementation overhead did. The time was spent deciding what the game should feel like, not fighting tools to make it happen.

## Why this matters if you make things

The interesting question isn't whether AI tools can help make games faster. That's been answered. The interesting question is what happens to ideas that used to be too risky to try.

"Snake but you're the paddle in an Arkanoid arena" is not something a producer greenlights. A few indie devs have tried variations on itch.io ([BreakSnake](https://newdron.itch.io/breaksnake), [SnakeOut](https://neop87.itch.io/snakeout), [Snake Break](https://merrak.itch.io/snake-break)), but they're all small game jam experiments. The genre fusion has never been given a real production pass with reactive music, AI narration, dozens of brick types, and 50 waves of escalating design. In traditional development, that kind of polish on a risky concept gets killed in a meeting before anyone writes a line of code.

When trying something weird costs an afternoon instead of a quarter's budget, the strange ideas get built. Some of them turn out to be genuinely good. A Breaker Belt is one of those.

## Play it. Or make your own.

[A Breaker Belt](https://app.cinevva.com/engine) is playable right now on web, mobile, and PC. If it makes you want to build something, the [Cinevva Engine](https://app.cinevva.com/engine) is free to use. Describe what you want, iterate on what comes back, ship when it's ready. The [music](https://app.cinevva.com/tools/music), [sound effects](https://app.cinevva.com/tools/sfx), [art](https://app.cinevva.com/tools/flux), and [3D models](https://app.cinevva.com/tools/hunyuan3d) are all built in.

Your weird game idea might be three days away from existing.

]]></description>
            <content:encoded><![CDATA[<h1 id="a-breaker-belt-snake-meets-arkanoid-vibe-coded-in-three-days" tabindex="-1">A Breaker Belt: Snake meets Arkanoid, vibe coded in three days <a class="header-anchor" href="#a-breaker-belt-snake-meets-arkanoid-vibe-coded-in-three-days" aria-label="Permalink to &quot;A Breaker Belt: Snake meets Arkanoid, vibe coded in three days&quot;"></a></h1>
<p>We made a game where your snake is the paddle and the bricks fight back. It shipped on web, mobile, and PC. It took two of us about three days. On and off, not crunching. And it's fun to play.</p>
<h2 id="the-mashup-nobody-asked-for" tabindex="-1">The mashup nobody asked for <a class="header-anchor" href="#the-mashup-nobody-asked-for" aria-label="Permalink to &quot;The mashup nobody asked for&quot;"></a></h2>
<p>Snake is about growth and spatial awareness. Arkanoid is about reflexes and angle prediction. They come from completely different design philosophies, and mashing them together sounds like the kind of pitch that gets politely declined.</p>
<p>But <a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">A Breaker Belt</a> makes it work. You're a cosmic serpent, a living arc of neon current threading through an asteroid field of breakable blocks. Your head is the paddle. Your growing tail is both your greatest weapon and your most constant threat. The orbs ricochet off your body to shatter bricks, but one wrong turn into your own tail and you're done.</p>
<p>It's the kind of weird cross-genre experiment that usually dies before anyone gets to play it, because the development cost of finding out whether a weird idea works has traditionally been measured in months. Here it was measured in afternoons.</p>
<h2 id="what-actually-shipped" tabindex="-1">What actually shipped <a class="header-anchor" href="#what-actually-shipped" aria-label="Permalink to &quot;What actually shipped&quot;"></a></h2>
<p>The scope is what makes this interesting. This isn't a game jam prototype with placeholder rectangles and no sound.</p>
<p>The game runs 50 waves deep. That's not 50 variations of the same brick wall. The formations evolve from gentle onboarding arcs into fortress rings that demand angled shots through side gaps, then into layered diagonal mazes with one-tile-wide openings that require precision steering. Explosive bricks blow their neighbors apart. Phantom bricks flicker in and out of existence. Regenerating bricks heal back after you break them. Portal bricks teleport your orbs across the arena. Gravity wells bend your shots into slow, curving hymns. Laser emitters sketch red lines across the void. Mimic bricks look harmless until they decide they're not. By wave 15, you're navigating something that feels less like a puzzle and more like a living system that's learning your habits.</p>
<p>The music isn't a loop. It's a reactive soundtrack that builds with your gameplay. Bass, lead synth, pad, drums all running in E minor, and as the action intensifies, additional layers fade in. When things calm down between waves, ambient pads take over. The music breathes with you. A dedicated composer would spend weeks tuning that kind of responsiveness. Here it was part of the creative flow.</p>
<p>The sound effects aren't samples pulled from a free pack. Every brick shatter, orb bounce, and collision is synthesized in real time. Different pitch for head contacts versus tail contacts. Warm reverb for the spacey feel. When you chain a combo, the audio tells you before the screen does.</p>
<p>And then there's the storytelling. Each wave opens with a narrative beat delivered by AI-generated voice. Wave one: &quot;They call it the Breaker Belt: a ribbon of engineered debris that circles the old star like a warning.&quot; By wave 38: &quot;The Belt stops feeling like a wall and starts feeling like a mind. It tests not your reflexes, but your habits.&quot; It's 50 chapters of cosmic mythology that makes you care about why you're a snake breaking bricks in space.</p>
<p>The backgrounds evolve too. Early waves are calm indigo starfields with soft meteor rain. By mid-game, aurora bands and nebula clouds appear. Late game drops you into Ion Storm territory with sharp cyan streaks against near-black space. The game communicates progression through atmosphere as much as difficulty.</p>
<p>All of this runs on keyboard, gamepad, or touchscreen. Published to web, mobile, and PC from one codebase.</p>
<h2 id="the-team-that-wasn-t-needed" tabindex="-1">The team that wasn't needed <a class="header-anchor" href="#the-team-that-wasn-t-needed" aria-label="Permalink to &quot;The team that wasn't needed&quot;"></a></h2>
<p>A game with this depth would typically need a game designer, a couple of programmers, an artist, a sound designer, a composer, a level designer, a narrative writer, and QA. Eight or nine people. Three to six months of coordinated work. Standups, Jira tickets, asset pipelines, platform-specific debugging.</p>
<p>Two of us made this in a long weekend on the <a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">Cinevva Engine</a>.</p>
<p>The talent required didn't change. The ratio between creative intent and implementation overhead did. The time was spent deciding what the game should feel like, not fighting tools to make it happen.</p>
<h2 id="why-this-matters-if-you-make-things" tabindex="-1">Why this matters if you make things <a class="header-anchor" href="#why-this-matters-if-you-make-things" aria-label="Permalink to &quot;Why this matters if you make things&quot;"></a></h2>
<p>The interesting question isn't whether AI tools can help make games faster. That's been answered. The interesting question is what happens to ideas that used to be too risky to try.</p>
<p>&quot;Snake but you're the paddle in an Arkanoid arena&quot; is not something a producer greenlights. A few indie devs have tried variations on itch.io (<a href="https://newdron.itch.io/breaksnake" target="_blank" rel="noreferrer">BreakSnake</a>, <a href="https://neop87.itch.io/snakeout" target="_blank" rel="noreferrer">SnakeOut</a>, <a href="https://merrak.itch.io/snake-break" target="_blank" rel="noreferrer">Snake Break</a>), but they're all small game jam experiments. The genre fusion has never been given a real production pass with reactive music, AI narration, dozens of brick types, and 50 waves of escalating design. In traditional development, that kind of polish on a risky concept gets killed in a meeting before anyone writes a line of code.</p>
<p>When trying something weird costs an afternoon instead of a quarter's budget, the strange ideas get built. Some of them turn out to be genuinely good. A Breaker Belt is one of those.</p>
<h2 id="play-it-or-make-your-own" tabindex="-1">Play it. Or make your own. <a class="header-anchor" href="#play-it-or-make-your-own" aria-label="Permalink to &quot;Play it. Or make your own.&quot;"></a></h2>
<p><a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">A Breaker Belt</a> is playable right now on web, mobile, and PC. If it makes you want to build something, the <a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">Cinevva Engine</a> is free to use. Describe what you want, iterate on what comes back, ship when it's ready. The <a href="https://app.cinevva.com/tools/music" target="_blank" rel="noreferrer">music</a>, <a href="https://app.cinevva.com/tools/sfx" target="_blank" rel="noreferrer">sound effects</a>, <a href="https://app.cinevva.com/tools/flux" target="_blank" rel="noreferrer">art</a>, and <a href="https://app.cinevva.com/tools/hunyuan3d" target="_blank" rel="noreferrer">3D models</a> are all built in.</p>
<p>Your weird game idea might be three days away from existing.</p>
<hr>
<p><em><a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">Play A Breaker Belt</a> | <a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">Build your own game</a> | <a href="https://cinevva.com/charts" target="_blank" rel="noreferrer">Browse community games</a></em></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Spike source viewer]]></title>
            <link>https://app.cinevva.com/blog/spike-source</link>
            <guid>https://app.cinevva.com/blog/spike-source</guid>
            <pubDate>Mon, 02 Mar 2026 16:36:52 GMT</pubDate>
            <description><![CDATA[View the source code of any spike from the open world browser series.]]></description>
            <content:encoded><![CDATA[<h1 id="spike-source-viewer" tabindex="-1">Spike source viewer <a class="header-anchor" href="#spike-source-viewer" aria-label="Permalink to &quot;Spike source viewer&quot;"></a></h1>
<p v-if="spikeId">
  Viewing source for <strong>spike {{ spikeId }}</strong>.
  <a :href="`/spikes/${spikeId}/`" target="_blank">Run it ↗</a>
</p>
<p v-else>No spike specified. Use <code>?spike=01-terrain</code> in the URL.</p>
<p v-if="loading">Loading source...</p>
<p v-if="error" style="color:#e74c3c">{{ error }}</p>
<div id="spike-editor" style="width:100%;height:70vh;border-radius:8px;overflow:hidden;border:1px solid rgba(255,255,255,0.12)"></div>
<p style="margin-top:1rem">
  <a href="/blog/2026-02-25-open-world-browser-series-guide">← Back to series guide</a>
</p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 1: We started by trying to break it]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-01-risk-first</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-01-risk-first</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Before building caves or fancy shaders, we needed to answer one ugly question: can this thing even run in a browser?]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-1-we-started-by-trying-to-break-it" tabindex="-1">Building an open world in the browser, part 1: We started by trying to break it <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-1-we-started-by-trying-to-break-it" aria-label="Permalink to &quot;Building an open world in the browser, part 1: We started by trying to break it&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>We're building a multiplayer open world that runs entirely in the browser. No install, no app store, just a URL. The biggest risk up front was obvious: can a browser even render a persistent 3D world at playable frame rates while leaving headroom for gameplay, physics, and networking?</p>
<p>Most open world projects fail in a predictable order. First you get a nice concept. Then you get a pretty scene. Then you realize your frame budget is already gone before gameplay exists.</p>
<p>We wanted to answer the render budget question before investing in anything else. So Spike 1 skipped the pretty trailer and went straight to measuring.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/01-terrain/" title="Spike 1 Terrain and Instancing" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/01-terrain/" target="_blank">Open Spike 1 in a new tab ↗</a> · <a href="/blog/spike-source?spike=01-terrain">View source</a></p>
<p>The setup was simple on purpose. A 512-meter terrain mesh, procedural height from layered sine noise with island falloff, a water plane, atmospheric fog, and 500 instanced objects. We used plain WebGL with Three.js, ACES tone mapping, and no shadows.</p>
<p>We didn't care how it looked. We cared whether the scene stayed stable while moving the camera through it.</p>
<p>Two things came out of this spike that shaped the whole project.</p>
<p>First, we confirmed we had real headroom on desktop if we kept the first pass disciplined. That gave us confidence to attempt the harder terrain architecture later.</p>
<p>Second, we created a baseline contract. Every next spike had to explain its cost relative to this scene. If a new feature looked good but cost too much, it did not get promoted.</p>
<p>That baseline discipline became critical later when we hit seam artifacts, mixed LOD transitions, and compute driven meshing. Without a stable reference, every bug looks bigger than it is.</p>
<p>In part 2 we move from rendering to input feel. Worker physics sounds great in architecture docs. It only matters if the character still feels immediate when you press a key.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>Heightmap terrain.</strong> A 2D grid where each cell stores a single elevation value. The GPU displaces a flat mesh in the vertex shader to create the terrain surface. Heightmaps are compact (a 65x65 chunk is ~8 KB at 16-bit), GPU-friendly, and fast to render. The limitation is that they can't represent caves, overhangs, or any surface that folds back over itself. For background on heightmap constraints and what comes after them, see our <a href="/guides/landscape-generation-browser.html#why-heightmaps-arent-enough">landscape generation guide</a>.</p>
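<p>To make the representation concrete, here is an illustrative height function in plain JavaScript: layered sine noise with a radial island falloff, matching the Spike 1 setup described in this series. This is a sketch, not the actual spike code; all constants are invented.</p>

```javascript
// Illustrative heightmap fill: layered sine noise with island falloff.
// SIZE and MAX_HEIGHT are assumptions for the sketch, not spike values.
const SIZE = 512;       // terrain extent in meters
const MAX_HEIGHT = 40;  // peak elevation in meters

function heightAt(x, z) {
  // Layered sine noise: each octave doubles frequency, halves amplitude.
  let h = 0, freq = 1 / 96, amp = 1;
  for (let octave = 0; octave < 4; octave++) {
    h += Math.sin(x * freq) * Math.cos(z * freq) * amp;
    freq *= 2;
    amp *= 0.5;
  }
  // Island falloff: elevation fades to zero toward the map edge.
  const dx = x / (SIZE / 2), dz = z / (SIZE / 2);
  const falloff = Math.max(0, 1 - Math.sqrt(dx * dx + dz * dz));
  return h * falloff * MAX_HEIGHT;
}
```

<p>A vertex shader would sample values like these from a texture and displace a flat grid. Note that every (x, z) maps to exactly one height, which is precisely why caves and overhangs are out of reach for this representation.</p>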
<p><strong>Three.js.</strong> The rendering library we used throughout this project. Three.js abstracts WebGL 2 (and later WebGPU) into a scene graph with cameras, lights, materials, and geometry objects. It provides <code>InstancedMesh</code> for rendering many copies of the same geometry in a single draw call, frustum culling, PBR materials, and post-processing. See <a href="https://github.com/mrdoob/three.js" target="_blank" rel="noreferrer">Three.js on GitHub</a>. For how Three.js fits into a browser open world stack, see our <a href="/guides/browser-3d-open-world-tech.html#three-js">browser 3D tech guide</a>.</p>
<p><strong>InstancedMesh.</strong> A Three.js feature that renders N copies of the same geometry with a single draw call, each at a different position/rotation/scale. The per-instance transforms are stored in a matrix attribute buffer. This is how we rendered 500 objects in Spike 1 without 500 separate draw calls. For vegetation at scale, GPU-driven instanced culling takes this further. See our <a href="/guides/landscape-generation-browser.html#gpu-driven-vegetation-culling">landscape guide on GPU vegetation culling</a>.</p>
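<p>Under the hood, the per-instance transforms live in one flat matrix attribute buffer. A minimal plain-JS sketch of that packing (layout only; in Three.js you would use <code>setMatrixAt</code> rather than writing the buffer by hand):</p>

```javascript
// Pack per-instance translation + uniform scale into column-major 4x4
// matrices, the layout an instance matrix attribute buffer uses.
function packInstances(positions, scale = 1) {
  const buf = new Float32Array(positions.length * 16);
  positions.forEach((p, i) => {
    const o = i * 16;
    // Column-major: the diagonal carries scale, floats 12-14 translation.
    buf[o + 0] = scale;
    buf[o + 5] = scale;
    buf[o + 10] = scale;
    buf[o + 12] = p.x;
    buf[o + 13] = p.y;
    buf[o + 14] = p.z;
    buf[o + 15] = 1;
  });
  return buf;
}

// 500 scattered instances -> one buffer, one draw call.
const matrices = packInstances(
  Array.from({ length: 500 }, () => ({
    x: Math.random() * 512 - 256,
    y: 0,
    z: Math.random() * 512 - 256,
  }))
);
```

<p>In Three.js itself the equivalent is <code>mesh.setMatrixAt(i, matrix)</code> followed by <code>mesh.instanceMatrix.needsUpdate = true</code>; the GPU then draws all 500 copies in a single call.</p>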
<p><strong>Frame budget.</strong> At 60 fps, each frame has 16.67 ms for everything: JavaScript logic, physics, rendering, and compositing. A &quot;budget check&quot; spike measures how much of that time a baseline scene consumes, leaving a known amount for features added later. This approach comes from AAA open world development where <a href="/guides/browser-3d-open-world-tech.html#what-we-can-learn-from-skyrim-and-the-witcher">GTA V, Skyrim, and Elden Ring</a> all use aggressive LOD and streaming to stay within fixed frame budgets.</p>
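<p>A budget check can be as small as a rolling frame-time tracker. A hypothetical sketch (the numbers fed in are invented):</p>

```javascript
// Rolling frame-time tracker: how much of the 60 fps budget is left?
const BUDGET_MS = 1000 / 60; // 16.67 ms per frame

class FrameBudget {
  constructor(windowSize = 120) {
    this.windowSize = windowSize; // recent frames to average over
    this.samples = [];
  }
  record(frameMs) {
    this.samples.push(frameMs);
    if (this.samples.length > this.windowSize) this.samples.shift();
  }
  averageMs() {
    return this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
  }
  headroomMs() {
    return BUDGET_MS - this.averageMs();
  }
}

// In a render loop, feed it deltas from performance.now().
const budget = new FrameBudget();
[4.1, 4.3, 3.9, 4.2].forEach((ms) => budget.record(ms));
// A baseline scene averaging ~4 ms leaves ~12.5 ms for everything else.
```

<p>The point of the baseline contract is exactly this subtraction: every later feature is judged by how much it shrinks the headroom.</p>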
<hr>
<p>Part 1 of 12.<br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-02-worker-physics.html">Part 2 - Worker physics and the input lag fear</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 2: Worker physics and the input lag fear]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-02-worker-physics</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-02-worker-physics</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[We moved Rapier into a Web Worker and measured what everyone worries about first: does movement feel late?]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-2-worker-physics-and-the-input-lag-fear" tabindex="-1">Building an open world in the browser, part 2: Worker physics and the input lag fear <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-2-worker-physics-and-the-input-lag-fear" aria-label="Permalink to &quot;Building an open world in the browser, part 2: Worker physics and the input lag fear&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>If you build browser multiplayer long enough, you eventually get this argument.</p>
<p>&quot;Physics in a worker is clean architecture. Physics on the main thread feels safer.&quot;</p>
<p>Both can be true. What matters is control feel and latency under real input.</p>
<p>Spike 2 was built to answer that with measurements, not opinions.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/02-rapier-worker/" title="Spike 2 Rapier Worker" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/02-rapier-worker/" target="_blank">Open Spike 2 in a new tab ↗</a> · <a href="/blog/spike-source?spike=02-rapier-worker">View source</a></p>
<p>We reused the Spike 1 terrain and integrated Rapier in a dedicated module worker. Input state was sent to the worker every frame, simulation stepped there, and authoritative position came back to the renderer.</p>
<p>The key metrics were input-to-visible-movement latency and physics step timing. We also watched for jitter under normal movement, sprint bursts, and jump cadence.</p>
<p>The result was better than expected. With our message shape and cadence, the worker boundary did not dominate latency. Controls still felt immediate, which was the only thing players would care about.</p>
<p>One challenge from this phase was interpretation risk. After a successful result, teams often overgeneralize and assume the architecture question is closed forever. It is not. We only validated one concrete scenario and hardware profile. Later spikes still had to recheck assumptions when GPU and streaming pressure changed.</p>
<p>This spike also gave us a process upgrade. We started exposing timing telemetry in the HUD by default for interactive spikes. That changed team conversations from &quot;it feels off&quot; to &quot;this path added 1.2 ms.&quot;</p>
<p>In part 3 we cover the less glamorous experiments that prevented expensive surprises later. Broadcast load, mobile constraints, and behavior generation reliability.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>Rapier.</strong> A physics engine written in Rust that compiles to WebAssembly for browser use. It handles rigid bodies, colliders, joints, character controllers, and raycasting at 2-3x native performance. For open worlds, Rapier provides player character controllers (walking on terrain, climbing steps, sliding on slopes), object collision, raycasting for interactions, and trigger volumes. See <a href="https://rapier.rs/" target="_blank" rel="noreferrer">Rapier documentation</a> and our <a href="/guides/browser-3d-open-world-tech.html#rapier-rust-wasm">browser 3D tech guide on physics</a>.</p>
<p><strong>Web Workers.</strong> Browser threads that run JavaScript (or Wasm) off the main thread. Physics simulation in a worker means a heavy <code>world.step()</code> call doesn't block rendering. The main thread sends input state to the worker each frame via <code>postMessage</code> and receives authoritative positions back. The latency penalty is the two message hops (~0.1-0.5 ms each on desktop). The benefit is that the render thread never stalls on collision detection. Transferable objects (<code>ArrayBuffer</code> transfer) eliminate copy overhead for large position arrays.</p>
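<p>One way to keep those hops cheap is to pack input into a typed array and transfer the underlying buffer instead of copying it. A hedged sketch; the field layout below is invented for illustration, not the spike's actual protocol:</p>

```javascript
// Pack one frame of input into a transferable Float32Array.
// Invented layout for this sketch: [seq, moveX, moveZ, jump, sprint]
function packInput(seq, input) {
  const buf = new Float32Array(5);
  buf[0] = seq;
  buf[1] = input.moveX;
  buf[2] = input.moveZ;
  buf[3] = input.jump ? 1 : 0;
  buf[4] = input.sprint ? 1 : 0;
  return buf;
}

function unpackInput(buf) {
  return {
    seq: buf[0],
    moveX: buf[1],
    moveZ: buf[2],
    jump: buf[3] === 1,
    sprint: buf[4] === 1,
  };
}

// Main thread (browser Worker API):
//   const msg = packInput(seq, input);
//   worker.postMessage(msg.buffer, [msg.buffer]); // transfer, don't copy
// Worker side: unpackInput(new Float32Array(event.data))
```

<p>The sequence number matters: it lets the renderer match an authoritative position coming back from the worker to the input that produced it.</p>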
<p><strong>WebAssembly (Wasm).</strong> A binary instruction format that runs at near-native speed in browsers. Rapier, Havok, and Recast all compile to Wasm. The physics step in Rapier-Wasm is typically 0.5-2 ms for a few hundred bodies, compared to 5-15 ms for equivalent JavaScript. Wasm modules load as <code>.wasm</code> files fetched alongside the JavaScript glue code. See <a href="https://webassembly.org/" target="_blank" rel="noreferrer">WebAssembly specification</a>.</p>
<p><strong>Input-to-visual latency.</strong> The time between a keypress and the resulting visual change on screen. For movement to feel &quot;immediate,&quot; this needs to stay under ~80 ms. In a worker physics setup, the chain is: keydown event (main thread) -&gt; postMessage to worker -&gt; physics step -&gt; postMessage back -&gt; renderer applies position -&gt; next vsync. Each hop adds latency, which is why measurement matters more than architecture diagrams.</p>
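<p>Measuring that chain needs nothing more than timestamps: stamp each input when it happens, and subtract when the renderer first applies the physics result for it. An illustrative sketch:</p>

```javascript
// Stamp inputs as they occur; resolve the stamp when the worker's
// authoritative result for that sequence number reaches the renderer.
const pending = new Map(); // seq -> timestamp at input time

function onInput(seq, now) {
  pending.set(seq, now);
}

// Called when the renderer applies the position produced by input `seq`.
// Returns latency in ms, or null if the seq is unknown/already resolved.
function onApplied(seq, now) {
  const t0 = pending.get(seq);
  if (t0 === undefined) return null;
  pending.delete(seq);
  return now - t0;
}
```

<p>In the browser, <code>performance.now()</code> supplies both timestamps. The HUD telemetry mentioned above is built from exactly this kind of number.</p>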
<hr>
<p>Part 2 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-01-risk-first.html">Part 1 - We started by trying to break it</a><br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-03-the-unflashy-spikes.html">Part 3 - The unflashy spikes that saved us</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 3: The unflashy spikes that saved us]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-03-the-unflashy-spikes</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-03-the-unflashy-spikes</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Durable Object broadcast load, mobile quality constraints, and behavior generation reliability were not glamorous, but they prevented expensive surprises.]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-3-the-unflashy-spikes-that-saved-us" tabindex="-1">Building an open world in the browser, part 3: The unflashy spikes that saved us <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-3-the-unflashy-spikes-that-saved-us" aria-label="Permalink to &quot;Building an open world in the browser, part 3: The unflashy spikes that saved us&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>This part has fewer shiny screenshots and more architecture insurance.</p>
<p>After Spikes 1 and 2, we ran three risk checks that looked small but had product-level impact.</p>
<p>First was broadcast fan-out with Durable Objects. We tested multi-client position distribution at game-like tick rates and tracked latency distribution, CPU per tick, and delivery integrity. If this had failed, we would have moved to early sharding instead of single-island ownership.</p>
<p>Second was mobile constraint validation. Not a desktop preset renamed to mobile, but an explicit low-cost profile from the same baseline terrain.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/01-terrain/?quality=mobile" title="Mobile quality profile based on Spike 1" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/01-terrain/?quality=mobile" target="_blank">Open the mobile profile in a new tab ↗</a> · <a href="/blog/spike-source?spike=01-terrain">View source</a></p>
<p>We reduced segment density, object count, render resolution pressure, and fog range. The question was simple: can this world remain readable and responsive under mobile-class constraints without rewriting the entire renderer?</p>
<p>Third was behavior generation reliability for creator workflows. We evaluated valid-JSON rate, semantic correctness against expected primitives, and response latency. If this had failed, we would have moved to strict form-based behavior authoring.</p>
<p>The key insight from this chapter is that these unflashy spikes changed architecture faster than visual spikes did. They set hard boundaries on network topology, mobile promises, and tool UX.</p>
<p>In part 4 we return to visible terrain work and test movement-time streaming behavior, not just static loading screenshots.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>Cloudflare Durable Objects.</strong> Edge-deployed stateful serverless instances with built-in persistence and WebSocket support. Each Durable Object holds authoritative state for a world shard (or chunk). Players connect via WebSocket and receive position broadcasts from other players in the same shard. When the player moves to an adjacent chunk, they connect to that chunk's Durable Object. Durable Objects auto-persist state to disk and scale to thousands of concurrent instances. See <a href="https://developers.cloudflare.com/durable-objects/" target="_blank" rel="noreferrer">Cloudflare Durable Objects docs</a> and our <a href="/guides/browser-3d-open-world-tech.html#multiplayer-networking">browser 3D tech guide on multiplayer networking</a>.</p>
<p><strong>Spatial sharding.</strong> Dividing the world across server instances by geographic region. Each shard owns a rectangular area of the world grid. As player density shifts, shards can split or merge. Players near a shard boundary see both shards' content through cross-shard visibility queries. This is how <a href="/guides/browser-3d-open-world-tech.html#server-architecture">EVE Online handles thousands of players</a> in one universe.</p>
<p><strong>WebSocket broadcast fan-out.</strong> Distributing real-time position updates from one server to many connected clients. At game-like tick rates (20-30 Hz), each player generates ~800 bytes/second of position data. With 200 players in view, that's ~160 KB/s per client. The server must serialize, filter by relevance (spatial interest management), and push to each connection within the tick budget. See our <a href="/guides/browser-3d-open-world-tech.html#client-server-communication">networking guide</a> for delta compression and update frequency tradeoffs.</p>
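<p>The bandwidth arithmetic above can be written out directly. The 32-byte payload and 25 Hz tick are illustrative choices that land on the same numbers, not our wire format:</p>

```typescript
// Back-of-envelope fan-out budget mirroring the numbers above.
interface FanoutParams {
  tickHz: number;         // server broadcast rate
  bytesPerUpdate: number; // serialized position/orientation per player
  playersInView: number;  // after spatial interest filtering
}

function perClientBandwidth(p: FanoutParams): number {
  return p.tickHz * p.bytesPerUpdate * p.playersInView; // bytes per second
}

// 25 Hz x 32 bytes = 800 B/s per visible player; 200 players in view = 160 KB/s.
const budget = perClientBandwidth({ tickHz: 25, bytesPerUpdate: 32, playersInView: 200 });
```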
<p><strong>Mobile rendering constraints.</strong> Mobile GPUs have 1/5 to 1/10 the throughput of desktop GPUs, ~1 GB memory limit (vs 2-4 GB on desktop), and thermal throttling under sustained load. A mobile quality profile reduces segment density, object count, render resolution, draw distance, and shadow quality. The goal isn't parity with desktop but maintaining readability and responsiveness. See <a href="/guides/browser-3d-open-world-tech.html#browser-3d-performance-real-numbers">browser 3D performance numbers</a> for real GPU benchmarks.</p>
<hr>
<p>Part 3 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-02-worker-physics.html">Part 2 - Worker physics and the input lag fear</a><br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-04-streaming-before-fancy.html">Part 4 - Streaming before fancy terrain</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 4: Streaming before fancy terrain]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-04-streaming-before-fancy</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-04-streaming-before-fancy</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[We tested chunk load and swap first with simple content, then moved to progressive heightmap refinement. That order paid off.]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-4-streaming-before-fancy-terrain" tabindex="-1">Building an open world in the browser, part 4: Streaming before fancy terrain <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-4-streaming-before-fancy-terrain" aria-label="Permalink to &quot;Building an open world in the browser, part 4: Streaming before fancy terrain&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>Streaming is where &quot;looks good&quot; projects usually fall apart.</p>
<p>You can hide a lot in a still frame. You cannot hide a 40 ms hitch while crossing a chunk boundary.</p>
<p>We intentionally tested streaming before we built advanced terrain representation. That gave us clean signal on load and unload behavior.</p>
<p>Spike 6 validated neighborhood churn with simple chunk content.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/06-chunk-streaming/" title="Spike 6 Chunk Load Swap" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/06-chunk-streaming/" target="_blank">Open Spike 6 in a new tab ↗</a> · <a href="/blog/spike-source?spike=06-chunk-streaming">View source</a></p>
<p>Then we moved to the real terrain path in Spike 11: height-chunk streaming with worker-side decode and progressive refinement from 17x17 to 33x33 to 65x65 sample grids.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/11-chunk-streaming/" title="Spike 11 Heightmap Chunk Streaming" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/11-chunk-streaming/" target="_blank">Open Spike 11 in a new tab ↗</a> · <a href="/blog/spike-source?spike=11-chunk-streaming">View source</a></p>
<p>The sequencing mattered more than we expected. If we had started directly with compressed height chunks, every hitch would have been ambiguous: a decode issue, a texture upload issue, or a geometry update issue. Spike 6 removed one uncertainty layer before Spike 11 added complexity.</p>
<p>A practical lesson from this chapter carried into later spikes. Upload stalls must be measured directly, not inferred from average FPS. Average FPS hides frame spikes, and frame spikes are what users actually feel.</p>
<p>In part 5 we move into the visual cost chapter where vegetation, terrain shaders, and cascaded shadows compete for the same frame budget.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>Chunk-based streaming.</strong> The world is divided into a grid of independent chunks (typically 64x64 meters). As the player moves, chunks on the trailing edge unload while chunks on the leading edge stream in. This is how <a href="/guides/browser-3d-open-world-tech.html#skyrim-s-cell-system">Skyrim's cell system</a> works: a 5x5 grid of cells loaded around the player, swapping as they move. The browser version adds network latency to the equation, making predictive pre-fetching based on player velocity critical. See our <a href="/guides/landscape-generation-browser.html#streaming-architecture-for-terrain">streaming architecture guide</a>.</p>
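<p>The core of the chunk swap is a set difference between "what should be loaded here" and "what is loaded now". A minimal sketch, where the chunk size and key format are assumptions:</p>

```typescript
const CHUNK_SIZE = 64; // meters, matching the typical size mentioned above

// The (2r+1) x (2r+1) ring of chunk keys around a world-space position.
function desiredChunks(x: number, z: number, radius: number): Set<string> {
  const cx = Math.floor(x / CHUNK_SIZE);
  const cz = Math.floor(z / CHUNK_SIZE);
  const keys = new Set<string>();
  for (let i = -radius; i <= radius; i++)
    for (let j = -radius; j <= radius; j++)
      keys.add(`${cx + i},${cz + j}`);
  return keys;
}

// Per update: stream in what's newly in range, release what fell out.
function diffChunks(loaded: Set<string>, desired: Set<string>) {
  const toLoad = [...desired].filter((k) => !loaded.has(k));
  const toUnload = [...loaded].filter((k) => !desired.has(k));
  return { toLoad, toUnload };
}
```

Crossing one chunk boundary swaps exactly one row or column of the ring, which is what keeps per-frame streaming work bounded.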
<p><strong>Progressive heightmap refinement.</strong> Send terrain at low resolution first, then refine. A 17x17 grid (the minimum for a 64m chunk at 4m spacing) is ~200 bytes compressed and renders a visible surface instantly. Then stream the 33x33 refinement (adds detail), then the full 65x65 resolution. Each level adds samples without replacing previous data. This maps directly to geometry clipmap LOD rings where distant terrain uses low-resolution data and close-up terrain uses full resolution. See <a href="/guides/landscape-generation-browser.html#progressive-chunk-loading">progressive chunk loading</a>.</p>
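<p>One refinement step looks like this sketch: the coarse samples keep their values at even indices of the finer grid, and odd indices are where the newly streamed samples land. In the real pipeline those values arrive in the refinement packet; here they are stand-ins interpolated from the coarse grid:</p>

```typescript
// One refinement step: n x n -> (2n-1) x (2n-1), e.g. 17 -> 33 -> 65.
function refine(coarse: Float32Array, n: number): Float32Array {
  const m = 2 * n - 1;
  const fine = new Float32Array(m * m);
  for (let z = 0; z < m; z++) {
    for (let x = 0; x < m; x++) {
      if (x % 2 === 0 && z % 2 === 0) {
        fine[z * m + x] = coarse[(z / 2) * n + x / 2]; // reuse coarse sample
        continue;
      }
      // placeholder until the streamed refinement sample arrives
      const x0 = Math.floor(x / 2), x1 = x % 2 ? Math.min(x0 + 1, n - 1) : x0;
      const z0 = Math.floor(z / 2), z1 = z % 2 ? Math.min(z0 + 1, n - 1) : z0;
      fine[z * m + x] =
        (coarse[z0 * n + x0] + coarse[z0 * n + x1] +
         coarse[z1 * n + x0] + coarse[z1 * n + x1]) / 4;
    }
  }
  return fine;
}
```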
<p><strong>Delta encoding and compression.</strong> Heightmap data compresses well because adjacent cells have similar values. Delta encoding stores the difference between each cell and its predicted value (average of neighbors), clustering values near zero. Combined with zlib or brotli, a 65x65 chunk drops from 8.4 KB raw to 1-2 KB compressed. At reduced precision for distant chunks (8-bit instead of 16-bit): 0.5-1 KB. See <a href="/guides/landscape-generation-browser.html#terrain-data-compression-for-streaming">terrain data compression</a>.</p>
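<p>A minimal delta codec sketch, predicting each cell from already-decoded neighbors (left and above, a simpler predictor than the neighbor average described above) and storing only the residual. Residuals cluster near zero, which is what zlib or brotli then exploit:</p>

```typescript
function deltaEncode(h: Int16Array, n: number): Int16Array {
  const out = new Int16Array(n * n);
  for (let z = 0; z < n; z++)
    for (let x = 0; x < n; x++) {
      const left = x > 0 ? h[z * n + x - 1] : 0;
      const up = z > 0 ? h[(z - 1) * n + x] : 0;
      const pred = x > 0 && z > 0 ? (left + up) >> 1 : left + up;
      out[z * n + x] = h[z * n + x] - pred; // residual, near zero for smooth terrain
    }
  return out;
}

function deltaDecode(d: Int16Array, n: number): Int16Array {
  const h = new Int16Array(n * n);
  for (let z = 0; z < n; z++)
    for (let x = 0; x < n; x++) {
      // neighbors are already decoded in row-major order
      const left = x > 0 ? h[z * n + x - 1] : 0;
      const up = z > 0 ? h[(z - 1) * n + x] : 0;
      const pred = x > 0 && z > 0 ? (left + up) >> 1 : left + up;
      h[z * n + x] = d[z * n + x] + pred;
    }
  return h;
}
```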
<p><strong>Predictive pre-fetching.</strong> Loading chunks before the player arrives. At walking speed (5 km/h), pre-fetch 2 chunks ahead (128m). At running speed (15 km/h), pre-fetch 4 chunks. The load ring shifts with velocity direction. A priority queue sorts pending requests by urgency and cancels requests for chunks the player has moved away from. See <a href="/guides/landscape-generation-browser.html#predictive-pre-fetching">predictive pre-fetching</a>.</p>
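<p>The priority queue's sort key can be sketched as a toy urgency heuristic: closer chunks and chunks ahead of the motion vector fetch first. This is illustrative, not the spike's actual scheduler:</p>

```typescript
interface ChunkRequest { cx: number; cz: number; }

function prioritize(
  reqs: ChunkRequest[],
  px: number, pz: number, // player position in chunk units
  vx: number, vz: number  // velocity in chunks per second
): ChunkRequest[] {
  const speed = Math.hypot(vx, vz) || 1e-6;
  const urgency = (r: ChunkRequest): number => {
    const dx = r.cx - px, dz = r.cz - pz;
    const dist = Math.hypot(dx, dz) || 1e-6;
    // cosine between chunk direction and velocity: +1 dead ahead, -1 behind
    const ahead = (dx * vx + dz * vz) / (speed * dist);
    return dist / speed - ahead; // smaller = fetch sooner
  };
  return [...reqs].sort((a, b) => urgency(a) - urgency(b));
}
```

Cancellation is the inverse of the same test: requests whose urgency rises past a threshold get dropped from the queue.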
<hr>
<p>Part 4 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-03-the-unflashy-spikes.html">Part 3 - The unflashy spikes that saved us</a><br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-05-budgeting-the-pretty.html">Part 5 - Budgeting the pretty stuff</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 5: Budgeting the pretty stuff]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-05-budgeting-the-pretty</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-05-budgeting-the-pretty</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Vegetation, terrain materials, and cascaded shadows looked great. The real work was proving they could fit the frame budget.]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-5-budgeting-the-pretty-stuff" tabindex="-1">Building an open world in the browser, part 5: Budgeting the pretty stuff <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-5-budgeting-the-pretty-stuff" aria-label="Permalink to &quot;Building an open world in the browser, part 5: Budgeting the pretty stuff&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>This was the chapter where visual ambition met arithmetic.</p>
<p>We split rendering cost into separate spikes because bundled results are hard to diagnose. If you turn everything on at once, you only learn that the frame is slow. You don't learn which feature ate the budget.</p>
<p>Spike 7 targeted vegetation density and animation cost. The approach was runtime scattering from 32x32 density maps per terrain chunk, feeding large <code>InstancedMesh</code> sets. Each grass blade and shrub cluster got vertex-shader wind driven by a scrolling noise texture. The key number we watched wasn't triangle count but draw call overhead and vertex throughput on mid-range GPUs. We found that batching instances into fewer meshes mattered more than reducing per-blade polygon count.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/07-gpu-vegetation/" title="Spike 7 GPU Vegetation" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/07-gpu-vegetation/" target="_blank">Open Spike 7 in a new tab ↗</a> · <a href="/blog/spike-source?spike=07-gpu-vegetation">View source</a></p>
<p>Spike 8 pushed terrain material complexity. Multi-layer blending weighted by slope angle and altitude, optional triplanar projection for cliff faces, and per-layer normal maps. The shader was doing slope-based splatting with four texture layers, each needing a diffuse and normal sample. That's eight texture fetches per fragment before you add any lighting. We profiled on integrated Intel GPUs specifically to find the floor. The takeaway was that triplanar projection on vertical surfaces was worth the cost, but adding a fifth splat layer wasn't.</p>
<p><a href="/spikes/08-terrain-material/" target="_blank">Open Spike 8 in a new tab ↗</a> · <a href="/blog/spike-source?spike=08-terrain-material">View source</a></p>
<p>Spike 9 focused on cascaded shadow map cost under realistic terrain and object load. CSM with three cascades was baseline. We tested with low-angle sun positions specifically because that's where cascade pressure gets worst. The far cascade covers a huge frustum slice, and shadow-map texel density drops fast. We measured the GPU time difference between two and four cascades, then between 1024 and 2048 shadow map resolution. The result was that three cascades at 1024 gave us acceptable contact shadows near the camera without exceeding 2 ms of GPU time on our target hardware.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/09-csm-shadows/" title="Spike 9 CSM Shadows" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/09-csm-shadows/" target="_blank">Open Spike 9 in a new tab ↗</a> · <a href="/blog/spike-source?spike=09-csm-shadows">View source</a></p>
<p>The hard part in this phase was product discipline. Some effects looked excellent and still had to be constrained because they consumed too much of the frame budget relative to their visual impact.</p>
<p>Our rule became simple. A feature moves forward only if it can explain its cost with measured frame-time data.</p>
<p>That sounds obvious. It's not common in fast prototype cycles where everyone is excited about the next visual win. Adopting this rule early made architecture decisions around clipmaps and volumetric zones much cleaner later, because we already knew the per-feature cost of everything competing for the same 16 ms.</p>
<p>In part 6 we hit the first major terrain architecture pivot with geometry clipmaps.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>InstancedMesh and GPU vegetation.</strong> Three.js's <code>InstancedMesh</code> renders N copies of the same geometry with one draw call. For vegetation, a density map (32x32 per chunk) drives runtime scattering of grass blades and shrub clusters into instance buffers. Wind animation runs in the vertex shader using a scrolling noise texture. At scale, WebGPU's <code>ComputeInstanceCulling</code> eliminates off-screen and distant instances before rasterization, and <code>IndirectBatchedMesh</code> packs multiple vegetation types into a single buffer drawn with multi-draw indirect. See our <a href="/guides/landscape-generation-browser.html#gpu-driven-vegetation-culling">landscape guide on GPU vegetation culling</a>.</p>
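<p>The scattering step can be sketched as follows: each density-map cell decides how many instances land in it, and a deterministic hash jitters positions so the scatter is stable frame to frame. Function names and the hash are illustrative, not the spike's implementation:</p>

```typescript
// Classic shader-style hash: fractional part of a scaled sine, in [0, 1).
function hash01(n: number): number {
  const s = Math.sin(n) * 43758.5453;
  return s - Math.floor(s);
}

function scatterInstances(
  density: Float32Array, // cells x cells values in [0, 1]
  cells: number,
  cellSize: number,      // meters per density cell
  maxPerCell: number
): { x: number; z: number }[] {
  const positions: { x: number; z: number }[] = [];
  for (let cz = 0; cz < cells; cz++) {
    for (let cx = 0; cx < cells; cx++) {
      const count = Math.round(density[cz * cells + cx] * maxPerCell);
      for (let i = 0; i < count; i++) {
        positions.push({
          x: (cx + hash01(cx * 73 + cz * 179 + i * 7)) * cellSize,
          z: (cz + hash01(cx * 131 + cz * 31 + i * 11)) * cellSize,
        });
      }
    }
  }
  return positions; // these feed the InstancedMesh instance matrices
}
```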
<p><strong>Triplanar mapping.</strong> Standard UV-mapped textures stretch on steep slopes because UV coordinates compress. Triplanar mapping projects textures along all three axes (X, Y, Z) and blends based on the surface normal. Cliff faces get the X or Z projection (no stretching), flat ground gets the Y projection. The blending is smooth and automatic with no UV unwrapping required. For PBR terrain, the same blending weights apply to albedo, normal, roughness, and ambient occlusion channels. See <a href="/guides/landscape-generation-browser.html#triplanar-mapping">triplanar mapping details</a>.</p>
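<p>The blend described above reduces to a weight per axis: weight each projection by the matching normal component's magnitude, sharpen with a power so one projection dominates, and renormalize. A CPU-side sketch of what the fragment shader computes:</p>

```typescript
// Triplanar blend weights from a surface normal.
function triplanarWeights(nx: number, ny: number, nz: number, sharpness = 4) {
  const wx = Math.abs(nx) ** sharpness; // X projection (east/west cliff faces)
  const wy = Math.abs(ny) ** sharpness; // Y projection (flat ground)
  const wz = Math.abs(nz) ** sharpness; // Z projection (north/south cliff faces)
  const sum = wx + wy + wz;
  return { wx: wx / sum, wy: wy / sum, wz: wz / sum }; // sums to 1
}
```

A higher sharpness narrows the blend zone between projections at the cost of a harder seam.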
<p><strong>Slope and altitude-based material splatting.</strong> Instead of hand-painted splat maps, materials are assigned procedurally in the fragment shader based on terrain properties. Flat ground at low altitude gets grass, steep slopes get rock, high altitude gets snow (only on surfaces flat enough for accumulation), near sea level gets sand. The transitions use <code>smoothstep</code> for smooth blending. In our implementation, each terrain chunk evaluates four texture layers with diffuse and normal samples per layer, resulting in eight texture fetches per fragment before lighting. See <a href="/guides/landscape-generation-browser.html#slope-and-altitude-based-material-assignment">slope and altitude material assignment</a>.</p>
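<p>A CPU-side sketch of the shader's splat logic. The thresholds here are illustrative, not the values Spike 8 shipped with:</p>

```typescript
function smoothstep(e0: number, e1: number, x: number): number {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t); // GLSL smoothstep
}

// slope: 0 = flat, 1 = vertical; altitude in meters
function splatWeights(slope: number, altitude: number) {
  const rock = smoothstep(0.4, 0.7, slope);                 // steep -> rock
  const snow = smoothstep(300, 400, altitude) * (1 - rock); // high AND flat enough -> snow
  const grass = 1 - rock - snow;                            // remainder
  return { grass, rock, snow };                             // sums to 1 by construction
}
```

Weights sum to one by construction, so each fragment's final color is just the weighted sum of the layer samples.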
<p><strong>Cascaded Shadow Maps (CSM).</strong> CSM splits the camera's view frustum into 3-4 distance ranges (cascades). Each cascade renders a shadow map from the sun's perspective at a resolution matched to its distance. Close cascades get high-resolution shadows (detailed contact shadows under trees and buildings), far cascades get lower resolution (broad mountain shadows). The terrain shader samples all cascades and selects the appropriate one per fragment. Performance cost: 3-4 cascades at 1024x1024 add ~0.5-1 ms for shadow map rendering plus ~0.2-0.3 ms for sampling. See <a href="/guides/landscape-generation-browser.html#shadows-for-terrain">shadows for terrain</a>.</p>
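<p>The cascade distance ranges themselves usually come from the standard log/uniform blend. This split scheme is common practice, not necessarily the exact scheme Spike 9 used:</p>

```typescript
// Blend logarithmic and uniform frustum splits; lambda is a tuning knob.
function cascadeSplits(near: number, far: number, count: number, lambda = 0.7): number[] {
  const splits: number[] = [];
  for (let i = 1; i <= count; i++) {
    const p = i / count;
    const logSplit = near * Math.pow(far / near, p); // dense near the camera
    const uniSplit = near + (far - near) * p;        // even coverage
    splits.push(lambda * logSplit + (1 - lambda) * uniSplit);
  }
  return splits; // far edge of each cascade, in meters
}
```

Higher lambda concentrates resolution near the camera, which is where contact shadows matter.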
<hr>
<p>Part 5 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-04-streaming-before-fancy.html">Part 4 - Streaming before fancy terrain</a><br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-06-clipmaps.html">Part 6 - Clipmaps changed the plot</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 6: Clipmaps changed the plot]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-06-clipmaps</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-06-clipmaps</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Geometry clipmaps gave us a way to keep terrain cost predictable while moving through a much larger world.]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-6-clipmaps-changed-the-plot" tabindex="-1">Building an open world in the browser, part 6: Clipmaps changed the plot <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-6-clipmaps-changed-the-plot" aria-label="Permalink to &quot;Building an open world in the browser, part 6: Clipmaps changed the plot&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>Before Spike 10, our mental model was still &quot;bigger world means more geometry.&quot; After Spike 10, the model became &quot;constant geometry budget, camera-centered ring updates.&quot; That shift changed the project trajectory.</p>
<p>The idea behind geometry clipmaps is straightforward. You render terrain as a set of concentric rings centered on the camera. The innermost ring has the highest vertex density. Each ring outward doubles the spacing between vertices and covers twice the ground. The total triangle count stays roughly constant no matter how far you can see, because you're always rendering the same number of rings with the same mesh resolution.</p>
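<p>The arithmetic behind "constant geometry budget" is worth writing down: each ring doubles in extent, not in vertex count, so total vertices depend only on ring count while visible range grows exponentially:</p>

```typescript
// Total vertex budget: every ring is an n x n grid at its own spacing.
function clipmapVertexCount(n: number, levels: number): number {
  return n * n * levels;
}

// Ground covered by the outermost ring: spacing doubles per level.
function outermostExtent(innerSpacing: number, n: number, levels: number): number {
  return innerSpacing * Math.pow(2, levels - 1) * (n - 1);
}

// 256-vertex rings, 8 levels: ~524K vertices regardless of view distance.
```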
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/10-clipmap-geomorph/" title="Spike 10 Geometry Clipmaps Geomorphing" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/10-clipmap-geomorph/" target="_blank">Open Spike 10 in a new tab ↗</a> · <a href="/blog/spike-source?spike=10-clipmap-geomorph">View source</a></p>
<p>The practical trick was geomorphing at ring boundaries. When a vertex transitions from one LOD ring to the next, its height has to blend smoothly between the high-res and low-res sample. Without that, you get visible pops every time the camera moves and rings shift. We handled this with a blend factor based on the vertex's distance to the ring edge, interpolating height in the vertex shader.</p>
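<p>The vertex-shader blend can be sketched like this: inside the ring the fine height is used as-is, and across the outer band it ramps toward the coarse height so the hand-off to the next ring is seamless. The 20% band width is a typical choice, not necessarily our exact value:</p>

```typescript
// Morph factor: 0 well inside the ring, 1 at the ring edge.
function morphFactor(dist: number, ringOuter: number, band = 0.2): number {
  const start = ringOuter * (1 - band); // morphing begins in the outer band
  return Math.min(Math.max((dist - start) / (ringOuter - start), 0), 1);
}

// Height blend, equivalent to GLSL mix(fine, coarse, t) in the vertex shader.
function morphedHeight(fine: number, coarse: number, t: number): number {
  return fine * (1 - t) + coarse * t;
}
```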
<p>One subtle lesson came from camera movement testing. It's easy to judge clipmaps in static screenshots and miss transition artifacts. We spent time running constant-speed traversals through ring boundaries and watching for temporal noise. Screenshots lied. Motion told the truth.</p>
<p>This spike also gave us a clean architectural boundary. Near-field terrain could become dynamic and expensive over time, with volumetric editing, higher material complexity, and physics interaction. Far-field terrain could stay stable, predictable, and cheap. That separation became the backbone of every architectural decision from this point forward.</p>
<p>If you're evaluating clipmaps for your own project, test stress loops, not beauty shots. Long traversal paths, changing camera altitude, and repeated boundary crossings are what expose the real problems.</p>
<p>In part 7 we add volumetric meshing and move from &quot;terrain as a surface&quot; to &quot;terrain as editable volume.&quot; That was the point where this project stopped being a renderer and started becoming a world editor.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>Geometry clipmaps.</strong> Introduced by Losasso and Hoppe at SIGGRAPH 2004 (<a href="https://hhoppe.com/geomclipmap.pdf" target="_blank" rel="noreferrer">paper</a>), geometry clipmaps render terrain as concentric square rings centered on the camera. Each ring is twice the area of the previous one at half the vertex resolution. The total vertex count is constant: roughly N^2 * levels. With N=256 and 8 levels, that's ~500K vertices regardless of world size. The CPU updates heightmap data for each ring as the camera moves. The vertex shader reads height from a texture and displaces the flat grid. See our <a href="/guides/landscape-generation-browser.html#geometry-clipmaps">landscape guide on geometry clipmaps</a> and <a href="https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-2-terrain-rendering-using-gpu-based-geometry" target="_blank" rel="noreferrer">GPU Gems 2, Chapter 2</a>.</p>
<p><strong>Geomorphing.</strong> The biggest visual artifact in terrain LOD is popping: vertices suddenly jump when a patch switches LOD level. Geomorphing eliminates this by blending vertex positions between LOD levels over a transition zone. Each vertex stores both its current-LOD height and its coarser-LOD height. A morph factor based on camera distance smoothly interpolates between them: <code>morphedHeight = mix(fineLodHeight, coarseLodHeight, smoothstep(lodNear, lodFar, distance))</code>. The transition zone is typically the outer 20% of each ring. At normal camera speeds, the transition is invisible. See <a href="/guides/landscape-generation-browser.html#geomorphing-pop-free-lod-transitions">geomorphing details</a>.</p>
<p><strong>CDLOD (Quadtree-Adaptive Clipmaps).</strong> An improvement on fixed-ring clipmaps by Strugar (2010, <a href="https://www.vertexasylum.com/CDLOD/cdlod_latest.pdf" target="_blank" rel="noreferrer">paper</a>). Instead of concentric rings with uniform resolution, CDLOD uses a quadtree that adapts to terrain complexity. Flat areas use coarse nodes, while areas with high detail (cliffs, ridges) get finer subdivision. This matters for creator worlds where different chunks have vastly different complexity. See <a href="/guides/landscape-generation-browser.html#cdlod-quadtree-adaptive-clipmaps">CDLOD in our landscape guide</a>.</p>
<hr>
<p>Part 6 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-05-budgeting-the-pretty.html">Part 5 - Budgeting the pretty stuff</a><br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-07-marching-cubes.html">Part 7 - Marching cubes and the first real caves</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 7: Marching cubes and the first real caves]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-07-marching-cubes</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-07-marching-cubes</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Spike 12 brought real-time WebGPU marching cubes into the project and changed what terrain could be.]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-7-marching-cubes-and-the-first-real-caves" tabindex="-1">Building an open world in the browser, part 7: Marching cubes and the first real caves <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-7-marching-cubes-and-the-first-real-caves" aria-label="Permalink to &quot;Building an open world in the browser, part 7: Marching cubes and the first real caves&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>Heightmaps are great until you need overhangs.</p>
<p>The moment you want carved tunnels, floating rock lips, or cave ceilings, a pure heightfield pipeline starts blocking you. A heightmap stores one Y value per XZ coordinate. It's physically incapable of representing any surface that folds back over itself. We needed a volumetric representation.</p>
<p>Spike 12 implemented marching cubes on the GPU using WebGPU compute shaders. The algorithm evaluates a signed distance field (SDF) on a 3D grid and extracts a triangle mesh at the zero-crossing surface. Each grid cell is classified into one of 256 cases using a lookup table, and the corresponding triangles are emitted. We ran this on four active 64-cubed chunks simultaneously and tested animated SDF edits with per-frame remesh.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/12-webgpu-marching-cubes/" title="Spike 12 WebGPU Marching Cubes" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/12-webgpu-marching-cubes/" target="_blank">Open Spike 12 in a new tab ↗</a> · <a href="/blog/spike-source?spike=12-webgpu-marching-cubes">View source</a></p>
<p>The first win was confidence in the compute pipeline itself. A single dispatch could evaluate the SDF, classify cells, and emit vertices into a GPU buffer without any CPU readback. The second win was discovering how fast &quot;it works&quot; turns into artifact hunting. Missing triangles were rarely a marching cubes theory problem. They were table index mismatches, incorrect draw ranges reading past the active vertex count, or edge-case interactions near chunk boundaries where neighboring SDF samples weren't available.</p>
<p>This spike forced us to think in zones. Near the camera, you want volumetric freedom so players can carve, dig, and see caves. Far from the camera, you want clipmap efficiency where a flat heightmap is cheaper and perfectly adequate. That duality became the backbone of the architecture we kept refining from Spike 13 onward.</p>
<p>One of my favorite debugging moments was the wireframe toggle while edits were running. Watching topology form and dissolve in real time made quality tradeoffs immediately visible. You could see where vertex density was high enough, where it got too coarse, and exactly where LOD transitions would eventually need Transvoxel support to avoid cracks.</p>
<p>In part 8 we cover the integration challenge. Keeping raw compute-driven meshes and Three.js scene graph logic in one stable rendering pipeline was harder than the isolated demo suggested.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>Marching cubes.</strong> An algorithm for extracting a triangle mesh from a 3D scalar field (Lorensen and Cline, 1987). Each cell in a regular 3D grid is classified by sampling the field at its 8 corners. The sign pattern produces a case index (0-255), and a lookup table maps each case to a set of triangles. Vertices are placed on grid edges by interpolating between the two corners. The algorithm is embarrassingly parallel since each cell processes independently, making it ideal for GPU compute. See our <a href="/guides/landscape-generation-browser.html#signed-distance-fields-sdfs">landscape guide on SDFs and marching cubes</a>.</p>
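<p>The classification and vertex-placement steps described above are small enough to sketch. Corner ordering is a convention; real lookup tables fix a specific one:</p>

```typescript
// Pack the 8 corner signs into a byte that indexes the 256-entry table.
function cellCaseIndex(cornerSdf: number[]): number {
  let index = 0;
  for (let i = 0; i < 8; i++) {
    if (cornerSdf[i] < 0) index |= 1 << i; // bit set = corner inside the surface
  }
  return index; // 0 and 255 mean no crossing; the other 254 cases emit triangles
}

// Interpolation parameter for a vertex on a cell edge: where the field
// crosses zero between the two corner samples.
function edgeVertexT(sdfA: number, sdfB: number): number {
  return sdfA / (sdfA - sdfB);
}
```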
<p><strong>Signed Distance Fields (SDFs).</strong> A volumetric representation that stores, at every point in 3D space, the signed distance to the nearest surface. Positive values are outside, negative are inside, and the zero-crossing is the surface. SDFs can represent arbitrary 3D shapes: caves, arches, overhangs, and floating geometry that heightmaps can't express. Editing is natural: adding material is a <code>min()</code> on the distance field, removing (digging) is <code>max()</code> with a negated shape, smooth blending uses <code>smoothMin()</code>. See <a href="/guides/landscape-generation-browser.html#signed-distance-fields-sdfs">SDF terrain representation</a>.</p>
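<p>The edit operators named above, written out as plain functions over sampled distances. The smooth minimum is the common polynomial formulation popularized by Inigo Quilez:</p>

```typescript
// Union: adding material keeps the closer surface.
const addMaterial = (d: number, shape: number) => Math.min(d, shape);

// Subtraction: digging removes everything inside the (negated) shape.
const dig = (d: number, shape: number) => Math.max(d, -shape);

// Polynomial smooth min; k controls the blend radius between surfaces.
function smoothMin(a: number, b: number, k: number): number {
  const h = Math.min(Math.max(0.5 + (0.5 * (b - a)) / k, 0), 1);
  return b * (1 - h) + a * h - k * h * (1 - h);
}
```

Far from the blend radius, smoothMin degenerates to a plain min, so it only changes the surface where two shapes actually meet.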
<p><strong>WebGPU compute shaders.</strong> GPU programs that run general-purpose computation, not tied to the rasterization pipeline. A compute shader dispatches workgroups of threads that execute in parallel. For marching cubes, each thread processes one grid cell: sample the SDF, classify the cell, look up triangulation, interpolate edge vertices, and append to a mesh buffer using atomic counters. No CPU readback is needed because the output buffer is used directly as vertex data for rendering. Will Usher's <a href="https://www.willusher.io/webgpu-marching-cubes/" target="_blank" rel="noreferrer">webgpu-marching-cubes</a> demonstrates real-time 256^3 grid processing in the browser. See our <a href="/guides/landscape-generation-browser.html#gpu-driven-lod-with-webgpu">landscape guide on WebGPU-driven LOD</a>.</p>
<p><strong>Hybrid heightmap + SDF architecture.</strong> The practical approach for browser terrain: heightmaps cover the entire world (cheap, compact), while SDF volumes exist only in chunks that need caves, overhangs, or creator-carved features (5-10% of chunks). Near the camera, volumetric freedom allows carving and caves. Far away, heightmaps provide efficient flat terrain. See <a href="/guides/landscape-generation-browser.html#hybrid-heightmap-base--volumetric-overlays">hybrid terrain representation</a>.</p>
<hr>
<p>Part 7 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-06-clipmaps.html">Part 6 - Clipmaps changed the plot</a><br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-08-webgpu-integration.html">Part 8 - Integration without losing our baseline</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 8: Integration without losing our baseline]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-08-webgpu-integration</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-08-webgpu-integration</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Spikes 13 and 14 were less about features and more about process discipline. We froze a clean baseline, then hardened one layer at a time.]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-8-integration-without-losing-our-baseline" tabindex="-1">Building an open world in the browser, part 8: Integration without losing our baseline <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-8-integration-without-losing-our-baseline" aria-label="Permalink to &quot;Building an open world in the browser, part 8: Integration without losing our baseline&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>Integration is where projects get messy. You have working pieces in isolation. You connect them and suddenly every bug looks like it could be anywhere.</p>
<p>Spikes 13 and 14 were our answer to that trap. Spike 13 established a clean Three.js WebGPU baseline. Just a renderer, a scene, a camera, and a simple mesh. No terrain, no compute, no effects. We confirmed that Three.js's WebGPU backend initialized correctly, that the render loop was stable, and that TSL (Three.js Shading Language) node materials worked as expected. Only after that checkpoint passed did we start adding layers.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/13-threejs-webgpu/" title="Spike 13 Three.js WebGPU Baseline" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/13-threejs-webgpu/" target="_blank">Open Spike 13 in a new tab ↗</a> · <a href="/blog/spike-source?spike=13-threejs-webgpu">View source</a></p>
<p>Spike 14 was incremental hardening. We added one capability at a time: first camera controls, then lighting, then the compute-generated mesh from the marching cubes pipeline, then buffer plumbing to feed GPU output directly into Three.js geometry attributes. After each addition, we verified that the previous layer still behaved correctly.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/14-threejs-webgpu-incremental/" title="Spike 14 Incremental Hardening" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/14-threejs-webgpu-incremental/" target="_blank">Open Spike 14 in a new tab ↗</a> · <a href="/blog/spike-source?spike=14-threejs-webgpu-incremental">View source</a></p>
<p>That sounds slow. It was slow for exactly one day, and it saved us multiple days soon after when seam logic and policy switching got complicated.</p>
<p>The specific category of bug that justified this discipline was hairline artifacts. Thin slivers that looked like geometry corruption but were actually stale data. The compute shader would write N vertices into a buffer, but the draw call would still be configured to render N+M vertices from the previous frame. Those extra vertices contained garbage from the old dispatch. The visual result was flickering razor-thin triangles that appeared and vanished unpredictably.</p>
<p>You don't beat that class of bug with intuition. You beat it with controlled deltas where you know exactly what changed between the last working state and the current broken state.</p>
<p>The WebGPU integration also taught us about buffer lifecycle. WebGPU buffers are created with a fixed size and usage, and there's no <code>realloc</code>. If you need to resize a vertex buffer because the marching cubes output grew, you have to create a new buffer and update the binding. Getting that lifecycle right, destroying old buffers without racing against in-flight GPU work, required explicit fence management that doesn't exist in WebGL.</p>
<p>In part 9 we move into Transvoxel seam work. That chapter starts with a scaffold on purpose. By this point we had fully internalized the lesson that rushing integration produces mysteries, and controlled setup produces debuggable problems.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>WebGPU.</strong> The successor to WebGL, providing low-level GPU access in the browser with compute shaders and indirect rendering. WebGPU's two critical features for open worlds: compute shaders enable GPU-side terrain generation, foliage placement, and culling; indirect rendering lets the GPU decide what to draw based on compute output, eliminating CPU bottlenecks in dense scenes. Available in Chrome, Edge, and Firefox on desktop. See <a href="/guides/browser-3d-open-world-tech.html#webgpu-the-performance-unlock">WebGPU as a performance unlock</a>.</p>
<p><strong>Three.js Shading Language (TSL).</strong> Three.js's node-based shader system that replaces raw GLSL/WGSL with composable JavaScript expressions. TSL nodes like <code>texture()</code>, <code>positionWorld</code>, <code>smoothstep()</code>, and <code>fog()</code> build a shader graph at runtime that compiles to the appropriate backend (WebGL GLSL or WebGPU WGSL). TSL makes it possible to write material logic once and target both renderers. The graph compiles to shader code once; uniform node values are then updated every frame, so dynamic parameters work naturally and conditional logic compiles into the shader itself.</p>
<p><strong>GPU buffer lifecycle in WebGPU.</strong> WebGPU buffers are created with specific usage flags (<code>VERTEX</code>, <code>STORAGE</code>, <code>COPY_DST</code>, etc.) and can't be resized after creation. If a marching cubes dispatch produces more vertices than the buffer can hold, you must create a new buffer, update the binding, and destroy the old one. Destroying a buffer that's still referenced by an in-flight GPU command causes errors. Explicit fence management (via <code>device.queue.onSubmittedWorkDone()</code>) ensures the old buffer isn't destroyed until the GPU finishes using it. This lifecycle discipline doesn't exist in WebGL, where the driver manages memory implicitly.</p>
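<p>The deferred-destroy discipline can be sketched independently of the WebGPU API, which makes the logic easier to see. In this illustrative sketch, <code>whenGpuDone</code> stands in for the promise returned by <code>device.queue.onSubmittedWorkDone()</code> and <code>buffer.destroy()</code> for <code>GPUBuffer.destroy()</code>; the class name and structure are ours, not part of any API:</p>

```javascript
class BufferRetirement {
  constructor() { this.pending = []; }

  // Called when a buffer is replaced: don't destroy it yet, park it until the
  // GPU work submitted so far has drained.
  retire(buffer, whenGpuDone) {
    const entry = { buffer, done: false };
    this.pending.push(entry);
    whenGpuDone.then(() => { entry.done = true; });
  }

  // Called once per frame: destroy only buffers whose completion has signaled.
  collect() {
    const dead = [], alive = [];
    for (const e of this.pending) (e.done ? dead : alive).push(e);
    this.pending = alive;
    for (const e of dead) e.buffer.destroy();
    return dead.length;
  }
}
```

<p>The key property is that <code>destroy()</code> is only ever reached after the completion promise resolves, so a resize mid-frame can never pull a buffer out from under an in-flight dispatch.</p>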
<p><strong>Incremental hardening.</strong> A process discipline for integration: establish a known-good baseline, add one capability at a time, and verify the previous layer still works after each addition. This approach is slower for one day and saves days during later debugging because each regression can be traced to a specific, controlled change. The baseline-then-increment pattern is common in <a href="/guides/browser-3d-open-world-tech.html#what-we-d-build-first">AAA open world development</a> where systems are integrated in a specific order to manage risk.</p>
<hr>
<p>Part 8 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-07-marching-cubes.html">Part 7 - Marching cubes and the first real caves</a><br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-09-transvoxel-first-cut.html">Part 9 - Transvoxel started with a scaffold</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 9: Transvoxel started with a scaffold]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-09-transvoxel-first-cut</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-09-transvoxel-first-cut</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[We didn't jump into full seam coverage. We built a seam test rig first, then validated one face with reference data.]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-9-transvoxel-started-with-a-scaffold" tabindex="-1">Building an open world in the browser, part 9: Transvoxel started with a scaffold <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-9-transvoxel-started-with-a-scaffold" aria-label="Permalink to &quot;Building an open world in the browser, part 9: Transvoxel started with a scaffold&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>Seams are where confidence goes to die.</p>
<p>Everything can look stable until two resolutions meet. A chunk at LOD 0 sits next to a chunk at LOD 1. Their meshes are generated independently. Where they share a boundary, the vertex positions don't match because the lower-resolution chunk has half the grid density. The result is visible cracks, T-junctions, and flickering edges.</p>
<p>The Transvoxel algorithm solves this by generating special transition cells along the boundary face between two chunks at different resolutions. These cells sample from both the high-res and low-res grids simultaneously and produce triangles that stitch the two surfaces together. The algorithm uses its own lookup tables, separate from the regular marching cubes tables, with 512 transition cell cases.</p>
<p>We had enough integration scars by this point to know better than to rush the implementation.</p>
<p>Spike 15 had one job: build a seam test rig we could trust before touching the full algorithm. We set up a controlled environment where two chunks of known SDF data sat side by side at different resolutions, with visualization controls to toggle the main mesh, the seam mesh, wireframe, and normals independently.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/15-transvoxel-seam/" title="Spike 15 Transvoxel Seam Scaffold" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/15-transvoxel-seam/" target="_blank">Open Spike 15 in a new tab ↗</a> · <a href="/blog/spike-source?spike=15-transvoxel-seam">View source</a></p>
<p>Once the test rig was stable, Spike 16 validated table-driven transition cell generation on a single face. We picked one axis-aligned face (+X boundary), implemented the transition cell evaluation for that face only, and compared the output against reference data from the Transvoxel paper.</p>
<p><a href="/spikes/16-transvoxel-face/" target="_blank">Open Spike 16 in a new tab ↗</a> · <a href="/blog/spike-source?spike=16-transvoxel-face">View source</a></p>
<p>We tested one face at a time on purpose, because transition table wiring has many independent failure modes. Case index computation depends on sampling the correct vertices from both grids. Vertex indexing within a transition cell uses a different numbering scheme than regular marching cubes cells. Winding order has to be consistent with the main mesh or backface culling will eat your seam triangles. If you test all six faces at once, every symptom looks random. If you test one face thoroughly, you get meaningful, debuggable failures.</p>
<p>Another subtle win from this phase was tooling investment. We built visibility toggles, seam-only rendering, and color-coded LOD indicators early. Those controls felt like overhead at the time. Later, when corner cases got ugly, they paid for themselves over and over because we could isolate exactly which seam cells were misbehaving.</p>
<p>By the end of this chapter we weren't &quot;done with seams.&quot; We were in a position where seam bugs could be reasoned about instead of feared.</p>
<p>In part 10 the real roller coaster starts. Mixed LOD corners, winding flips, partial overdraw ghosts, and those moments where you're sure the algorithm is wrong and then discover the bug is a draw range reading past the active vertex count.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>The Transvoxel algorithm.</strong> Designed by Eric Lengyel (<a href="https://transvoxel.org/" target="_blank" rel="noreferrer">transvoxel.org</a>), Transvoxel solves the hardest problem in volumetric terrain LOD: seams between chunks at different resolutions. When a high-res chunk sits next to a low-res chunk, the marching cubes meshes don't align at the boundary, producing visible cracks. Transvoxel inserts special transition cells along boundary faces that bridge the resolution difference with additional triangles matching both sides. The algorithm uses its own lookup tables (separate from regular marching cubes) with 512 transition cell cases, reduced to 73 equivalence classes. It's patent-free and has been used in shipped games (Space Engineers, Astroneer). See our <a href="/guides/landscape-generation-browser.html#the-transvoxel-algorithm">landscape guide on Transvoxel</a>.</p>
<p><strong>Transition cells.</strong> Special cells generated at the face between two LOD levels. Unlike regular marching cubes cells that sample 8 corners from one grid, transition cells sample from both the high-res and low-res grids simultaneously. The high-res face has 9 sample points (3x3), while the low-res face has 4 (2x2). The cell classification and triangulation use dedicated tables that produce triangles connecting vertices at both resolutions. The vertex numbering scheme is different from regular MC cells, which is a common source of implementation bugs.</p>
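<p>The 512-case count above falls straight out of the sample layout: the high-res face carries 9 samples, so its sign pattern needs 9 bits. A hedged sketch (the real algorithm uses Lengyel's specific bit assignment; the row-major order here is purely illustrative):</p>

```javascript
// Classify one transition cell from the 3x3 grid of high-res face samples.
// Negative values are inside, matching the marching cubes convention.
function transitionCaseIndex(faceSamples /* length 9, 3x3 row-major */) {
  let caseIndex = 0;
  for (let i = 0; i < 9; i++) {
    if (faceSamples[i] < 0) caseIndex |= 1 << i;
  }
  return caseIndex; // 0..511, then mapped to one of 73 equivalence classes
}
```
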
<p><strong>LOD transition seams.</strong> The boundary between two mesh resolutions where topology mismatch causes visual artifacts. Without stitching, a LOD 0 chunk (1m grid) next to a LOD 1 chunk (2m grid) produces T-junctions: the fine mesh has vertices along the boundary that the coarse mesh doesn't share, causing cracks where light bleeds through. Transvoxel, geomorphing, and skirt geometry are three approaches to fixing these seams. For volumetric terrain, Transvoxel is the standard solution because geomorphing only works for heightmaps. See <a href="/guides/landscape-generation-browser.html#lod-for-volumetric-terrain">LOD for volumetric terrain</a>.</p>
<hr>
<p>Part 9 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-08-webgpu-integration.html">Part 8 - Integration without losing our baseline</a><br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-10-seam-chaos.html">Part 10 - Seam chaos and the corner boss fight</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 10: Seam chaos and the corner boss fight]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-10-seam-chaos</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-10-seam-chaos</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Spikes 17 to 22 were where theory met edge cases. We chased winding bugs, missing triangles, overdraw ghosts, and hybrid fallback logic.]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-10-seam-chaos-and-the-corner-boss-fight" tabindex="-1">Building an open world in the browser, part 10: Seam chaos and the corner boss fight <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-10-seam-chaos-and-the-corner-boss-fight" aria-label="Permalink to &quot;Building an open world in the browser, part 10: Seam chaos and the corner boss fight&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>If the earlier parts felt methodical, this chapter felt like combat.</p>
<p>Spikes 17 through 22 were our corner-case era. Dual-LOD marching cubes, heightmap-to-MC boundary seams, mixed-resolution corner chunks where three or four LOD levels meet, GPU seam generation, and fallback mode behavior. Each spike addressed a specific failure scenario we'd encountered or anticipated.</p>
<p>Spike 17 tested dual marching cubes with two LOD levels active simultaneously. The challenge was that a single chunk's neighbors could be at different resolutions on different faces. The transition cell logic from Spike 16 worked for one face at a time, but when a chunk needed transition cells on multiple faces, the vertex buffer management got complicated. Each face's transition cells had to be generated and appended without overwriting the others.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/19-transvoxel-corner-grid/" title="Spike 19 Corner Grid Transvoxel" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/19-transvoxel-corner-grid/" target="_blank">Open Spike 19 in a new tab ↗</a> · <a href="/blog/spike-source?spike=19-transvoxel-corner-grid">View source</a></p>
<p>The first repeated villain was winding order. Several times we thought we had topology problems, then found orientation problems. Backface culling was eating valid seam triangles because the winding was flipped relative to the main mesh. Same root cause, different visual symptom depending on camera angle. The fix was enforcing a consistent winding convention in the transition cell emission code and verifying it with a double-sided material toggle.</p>
<p>The second villain was false confidence from partial correctness. A seam could look perfect from one camera angle and break when LOD roles swapped between higher and lower resolution chunks. The transition cell is asymmetric. It samples from the high-res side and the low-res side differently. If you get the &quot;which side is high-res&quot; logic backwards for one configuration, you only see the bug when the camera moves to a specific position.</p>
<p>Then came one of our favorite recoveries. We were chasing a seam cutoff artifact on heightmap tiles and blaming the transition logic. Burned two days on it. The real culprit was stale overdraw. Higher-resolution geometry from a previous frame was still living in the buffer tail after the chunk downscaled to a lower LOD. The draw range was still set to the old, larger vertex count. Once we clipped the draw range to the active vertex count reported by the compute shader's atomic counter, the &quot;mystery seam issue&quot; disappeared.</p>
<p>That was a great reminder that rendering bugs often masquerade as meshing bugs. The geometry was correct the whole time. The draw call was just reading past the end of valid data.</p>
<p>By Spike 22 we were testing hybrid fallback where chunks could switch from marching cubes to heightmap mode under specific conditions, like when the chunk contains no volumetric edits and sits far enough from the camera. This gave us a more practical path than an all-or-nothing policy. Near-field edited chunks use MC for volumetric freedom. Far-field unedited chunks use heightmaps for efficiency.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/22-gpu-mc-heightmap-fallback/" title="Spike 22 MC Heightmap Fallback" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/22-gpu-mc-heightmap-fallback/" target="_blank">Open Spike 22 in a new tab ↗</a> · <a href="/blog/spike-source?spike=22-gpu-mc-heightmap-fallback">View source</a></p>
<p>This chapter was the steep drop of the roller coaster. Frustrating and productive at the same time. Many of the individual fixes were small, sometimes a single line changing a comparison operator or an offset. But the understanding they produced about how LOD transitions, buffer management, and draw ranges interact wasn't small at all.</p>
<p>In part 11 we cover the stabilization layer that came after the chaos: policy-based chunk modes and the transition from reactive bug-fixing to explicit system rules.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>Dual-LOD marching cubes.</strong> Running marching cubes at two resolution levels simultaneously, with transition cells stitching the boundary. The challenge is that a single chunk's neighbors can be at different resolutions on different faces, requiring independent transition cell generation per face. Each face's transition cells are appended to the vertex buffer without overwriting the others. Atomic counters track the total active vertex count across all faces.</p>
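<p>The append-without-overwriting pattern reduces to reserving a range through one shared counter. A hedged single-threaded sketch (on the GPU the reservation would be an <code>atomicAdd</code> in the compute shader; names here are ours):</p>

```javascript
// Append one face's transition vertices into the shared mesh buffer.
// Reserving `base` before writing is what keeps faces from clobbering
// each other; on the GPU: base = atomicAdd(&counter, faceVerts.length).
function appendFace(mesh, faceVerts) {
  const base = mesh.count;
  mesh.count += faceVerts.length;
  for (let i = 0; i < faceVerts.length; i++) {
    mesh.vertices[base + i] = faceVerts[i];
  }
  return base;
}
```
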
<p><strong>Winding order.</strong> The vertex order within each triangle determines which side is the &quot;front&quot; face. Consistent winding (typically counter-clockwise when viewed from outside) is required for backface culling. When transition cells emit triangles, the winding must match the main mesh's convention. Getting it backwards causes backface culling to eat valid seam triangles, which looks like missing surfaces from certain camera angles. A common debugging technique is toggling <code>side: THREE.DoubleSide</code> on the material to confirm whether artifacts are winding issues or genuine topology gaps.</p>
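<p>The consistency check itself is small: the vertex order implies a face normal, and comparing it against an outward reference direction tells you whether to swap. An illustrative helper, not our emission code:</p>

```javascript
// Return `tri` reordered so its implied normal agrees with `outward`.
function enforceWinding(tri, outward) {
  const [a, b, c] = tri;
  const ab = b.map((v, i) => v - a[i]);
  const ac = c.map((v, i) => v - a[i]);
  // Cross product: the face normal implied by the current vertex order.
  const n = [
    ab[1] * ac[2] - ab[2] * ac[1],
    ab[2] * ac[0] - ab[0] * ac[2],
    ab[0] * ac[1] - ab[1] * ac[0],
  ];
  const dot = n[0] * outward[0] + n[1] * outward[1] + n[2] * outward[2];
  // Swapping any two vertices flips the winding.
  return dot < 0 ? [a, c, b] : [a, b, c];
}
```
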
<p><strong>Heightmap-to-MC fallback.</strong> A hybrid chunk mode where far or unedited chunks use heightmap terrain (cheap, flat surface) while near or edited chunks use marching cubes (volumetric, supports caves). The fallback decision depends on distance from camera and whether the chunk contains SDF edits. The seam between a heightmap chunk and an MC chunk requires its own transition geometry, similar to Transvoxel but bridging two different representations rather than two LOD levels. See <a href="/guides/landscape-generation-browser.html#hybrid-heightmap-base--volumetric-overlays">hybrid heightmap + volumetric overlays</a>.</p>
<p><strong>Draw range and atomic counters.</strong> In GPU-driven mesh generation, the compute shader writes vertices into a buffer and increments an atomic counter to track how many vertices were emitted. The draw call must use this counter as the vertex count, not the buffer capacity. If the draw range isn't clipped to the active count, stale vertices from previous frames (still living in the buffer tail) produce ghost geometry: thin slivers and flickering triangles that look like topology errors but are actually rendering artifacts from reading past valid data.</p>
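<p>The fix is one line of discipline: the draw range comes from the counter, never from capacity. A minimal sketch assuming a Three.js-style <code>geometry.setDrawRange</code> and a counter value already read back or bound indirectly (the helper name is ours):</p>

```javascript
// Clip the draw call to the vertices actually emitted this frame, so stale
// data in the buffer tail is never rendered.
function clipDrawRange(geometry, activeVertexCount, bufferCapacity) {
  // Also defend against a counter that overran the buffer on a heavy dispatch.
  const count = Math.min(activeVertexCount, bufferCapacity);
  geometry.setDrawRange(0, count);
  return count;
}
```
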
<hr>
<p>Part 10 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-09-transvoxel-first-cut.html">Part 9 - Transvoxel started with a scaffold</a><br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-11-policy-modes.html">Part 11 - Policy mode, not hardcoded mode</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 11: Policy mode, not hardcoded mode]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-11-policy-modes</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-11-policy-modes</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Spike 23 shifted us from reactive fixes to explicit policy. Distance-driven LOD, edit-driven HM/MC switching, and better seam debugging controls.]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-11-policy-mode-not-hardcoded-mode" tabindex="-1">Building an open world in the browser, part 11: Policy mode, not hardcoded mode <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-11-policy-mode-not-hardcoded-mode" aria-label="Permalink to &quot;Building an open world in the browser, part 11: Policy mode, not hardcoded mode&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>After the seam chaos chapter, we needed to stop reacting and start governing.</p>
<p>Spike 23 replaced ad-hoc behavior with explicit policy rules. Instead of chunks doing whatever their local state suggested, a central policy system now made the decisions. Which LOD level does this chunk get? Is it rendered as heightmap or marching cubes? Does it need transition cells, and on which faces? The answers came from a policy function that evaluated distance to camera, edit history, and neighbor resolution states.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/23-policy-chunk-modes/" title="Spike 23 Policy Chunk Modes" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/23-policy-chunk-modes/" target="_blank">Open Spike 23 in a new tab ↗</a> · <a href="/blog/spike-source?spike=23-policy-chunk-modes">View source</a></p>
<p>The distance-based LOD assignment used concentric rings around the camera, similar to the clipmap concept but applied to the chunk grid. Ring 0 chunks get full-resolution MC. Ring 1 gets half-resolution MC. Ring 2 and beyond get heightmap mode. The adjacency constraint was critical: if two neighboring chunks differ by more than one LOD level, the policy forces the lower-detail chunk to upgrade. This prevents seam cases that the Transvoxel tables can't handle, since transition cells are defined for a 2:1 resolution ratio only.</p>
<p>The HM/MC switching logic checked each chunk's edit bitmap. If a chunk had any volumetric edits (caves, tunnels, terrain sculpts), it stayed in MC mode regardless of distance. Unedited chunks could drop to heightmap mode when they moved far enough away. This hybrid approach gave us volumetric freedom where it mattered and efficiency where it didn't.</p>
<p>This spike looked smaller from the outside than some earlier ones. In practice it was a major quality-of-life improvement for development.</p>
<p>When your system can explain why a chunk switched mode, you spend less time guessing. We added color-coded overlays: green for heightmap chunks, blue for MC chunks, orange for transition-active faces. When seam visibility has dedicated render controls, visual debugging stops being guesswork. When draw ranges are explicitly tied to active vertex counts from the policy system, stale geometry ghosts stop wasting your afternoon.</p>
<p>We also overhauled camera behavior in this spike. Earlier spikes had simple orbit controls, fine for screenshots but useless for reproducing bugs. Spike 23 added a WASD fly camera with configurable speed, an altitude lock toggle, and a position readout. That sounds minor. It meant the difference between &quot;I saw a bug somewhere near that ridge&quot; and &quot;the bug appears at position (142, 12, -67) facing northwest.&quot;</p>
<p>The key insight from this chapter is that policy didn't reduce complexity. It organized complexity. The same number of edge cases existed. But now each edge case had a name, a trigger condition, and a place in the code where you could set a breakpoint. That's a different kind of win, and it's what determines whether a system can keep evolving or collapses under its own weight.</p>
<p>By the end of Spike 23, we had a near-field behavior layer that was predictable enough to connect to a far-field clipmap ring strategy without constant fear of interaction bugs.</p>
<p>In part 12 we cover Spike 24, where ring transitions, skybox fog, and Three.js version-level shader integration close out this chapter of the project.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>Distance-based LOD policy.</strong> A central function that assigns LOD level and rendering mode to each chunk based on distance to camera, edit history, and neighbor states. Concentric distance rings determine the base LOD: ring 0 = full-res MC, ring 1 = half-res MC, ring 2+ = heightmap mode. The policy function runs per frame as the camera moves and triggers chunk transitions. This replaces ad-hoc per-chunk decisions with a predictable, debuggable rule system. See <a href="/guides/landscape-generation-browser.html#gpu-driven-lod-with-webgpu">GPU-driven LOD selection</a> for the compute shader equivalent.</p>
<p><strong>Adjacency constraints.</strong> The Transvoxel algorithm handles 2:1 resolution ratios only. If two neighboring chunks differ by more than one LOD level (e.g., LOD 0 next to LOD 2), the transition tables can't produce valid seam geometry. The policy system enforces this by upgrading the lower-detail chunk when the LOD difference exceeds 1. This constraint propagation can cascade: upgrading one chunk may force its neighbors to upgrade as well. The implementation is a simple iterative pass that converges in 2-3 iterations for typical grid configurations.</p>
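<p>The iterative pass is easiest to see on a 1D strip of chunks. A hedged sketch of the constraint propagation (our grid version does the same thing over face neighbors):</p>

```javascript
// Clamp LODs so adjacent chunks differ by at most 1, always upgrading
// (lowering the LOD of) the coarser side. Repeats until a pass makes no
// change, which is how one upgrade cascades to its neighbors.
function enforceAdjacency(lods) {
  const out = lods.slice();
  let changed = true;
  while (changed) {
    changed = false;
    for (let i = 0; i + 1 < out.length; i++) {
      const diff = out[i + 1] - out[i];
      if (diff > 1) { out[i + 1] = out[i] + 1; changed = true; }
      else if (diff < -1) { out[i] = out[i + 1] + 1; changed = true; }
    }
  }
  return out;
}
```

<p>Note how a single LOD 0 chunk drags a whole run of coarse neighbors down one step each, which is the cascade the glossary entry describes.</p>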
<p><strong>Edit bitmap for mode selection.</strong> Each chunk maintains a bitmap recording whether it contains volumetric SDF edits (caves, tunnels, sculpts). Chunks with any edits stay in marching cubes mode regardless of distance, preserving the creator's modifications. Unedited chunks drop to heightmap mode when far enough from the camera, saving compute and memory. The bitmap is a single flag per chunk but can be extended to track edit density for more granular mode decisions.</p>
<p><strong>Debug visualization overlays.</strong> Color-coded chunk rendering where green = heightmap mode, blue = MC mode, orange = transition-active faces. Per-chunk overlays with LOD level numbers, mode labels, and wireframe toggles. These are development tools, not shipped features, but they pay for themselves repeatedly when debugging LOD transitions and seam artifacts. Combined with a WASD fly camera that reports exact world position, they turn &quot;I saw a bug somewhere&quot; into &quot;the bug appears at (142, 12, -67) with this LOD configuration.&quot;</p>
<hr>
<p>Part 11 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-10-seam-chaos.html">Part 10 - Seam chaos and the corner boss fight</a><br>
Next: <a href="/blog/2026-02-25-open-world-browser-part-12-lessons.html">Part 12 - Rings, sky fog, and what we would do again</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an open world in the browser, part 12: Rings, sky fog, and what we would do again]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-12-lessons</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-part-12-lessons</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Spike 24 closed this chapter with clipmap ring transitions, skybox fog integration, and one long list of hard-earned lessons.]]></description>
            <content:encoded><![CDATA[<h1 id="building-an-open-world-in-the-browser-part-12-rings-sky-fog-and-what-we-would-do-again" tabindex="-1">Building an open world in the browser, part 12: Rings, sky fog, and what we would do again <a class="header-anchor" href="#building-an-open-world-in-the-browser-part-12-rings-sky-fog-and-what-we-would-do-again" aria-label="Permalink to &quot;Building an open world in the browser, part 12: Rings, sky fog, and what we would do again&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>New here? Use the <a href="/blog/2026-02-25-open-world-browser-series-guide.html">series guide</a>. It explains what a spike is and links all parts.</p>
<p>Spike 24 was supposed to be &quot;add clipmap rings to the terrain.&quot; It became a full finale that touched rendering, shaders, module infrastructure, and visual integration all at once.</p>
<p>The core terrain task was generating concentric clipmap rings in the vertex shader. Each ring is a flat grid mesh centered on the camera, with vertices displaced by heightmap samples. The inner ring uses full resolution. Each subsequent ring doubles the vertex spacing and covers a larger area. The tricky part is the boundary between rings: where a high-resolution ring meets a low-resolution ring, edge vertices on the finer mesh need to snap to the midpoint of the coarser mesh's edge. We did 2:1 edge morphing by detecting boundary vertices (those whose grid coordinate is odd along the ring edge) and interpolating their height between the two neighboring even vertices. This produces watertight seams without transition geometry.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0;border:1px solid rgba(255,255,255,0.12)">
<iframe src="/spikes/24-gpu-clipmap-rings/" title="Spike 24 Clipmap Rings and Sky Fog" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0;background:#000" loading="lazy" allowfullscreen></iframe>
</div>
<p><a href="/spikes/24-gpu-clipmap-rings/" target="_blank">Open Spike 24 in a new tab ↗</a> · <a href="/blog/spike-source?spike=24-gpu-clipmap-rings">View source</a></p>
<p>Then came the fog and sky integration. We wanted distant terrain to fade into the actual sky color, not a flat constant. That meant the fog shader needed to know what color the sky would be in the direction of each fragment. We loaded an equirectangular HDR skybox texture and sampled it in the fragment shader using the view direction from camera to fragment, converted to equirectangular UV coordinates via TSL's <code>equirectUV</code> node. The fog factor was distance-based, using <code>positionView.z.negate()</code> for camera-space depth and a <code>smoothstep</code> ramp between a near and a far distance.</p>
<p>Module wiring turned out to be more annoying than any of the geometry. We upgraded to Three.js 0.183.1, which restructured the build outputs. The <code>three/tsl</code> import needed to resolve to <code>three.tsl.js</code>, and TSL internally imported <code>three/webgpu</code> as a bare specifier. Both mappings had to be explicit in the HTML import map. Missing either one produced cryptic &quot;does not provide an export&quot; or &quot;failed to resolve module specifier&quot; errors with no indication of which mapping was wrong. Once both were in the import map, the shader graph loaded correctly.</p>
<p>We also had a skybox orientation issue where the texture rendered upside down. The fix was <code>flipY = true</code> on the equirectangular texture, which is the Three.js default for loaded textures but was set to <code>false</code> in our initial code.</p>
<p>The original fog implementation sampled the sky at a near-constant direction, producing a thin horizon-colored band instead of a natural gradient. The fix was computing the actual camera-to-fragment world direction per pixel using <code>positionWorld.sub(cameraPosition).normalize()</code> and passing that into <code>equirectUV</code> for the fog color lookup. This made terrain fragments fade into the sky color that's actually behind them, which looks correct from any camera angle.</p>
<p>Underneath all the individual fixes, the core outcome held. We now have a terrain system that combines near-field volumetric editing (marching cubes with Transvoxel seams), mid-field heightmap chunks, and far-field clipmap rings, all governed by a policy layer that decides mode, LOD, and transition behavior.</p>
<p>If I had to name the patterns I'd repeat on the next project, they'd be these:</p>
<ul>
<li><strong>Start with risk spikes before feature work.</strong> Spike 1 killed the &quot;can we even render fast enough&quot; question before we invested in content pipelines.</li>
<li><strong>Freeze known-good baselines before integration jumps.</strong> Spikes 13 and 14 saved us days of bisecting regressions.</li>
<li><strong>Force policy and observability before optimization marathons.</strong> Spike 23 turned mystery bugs into named conditions with trigger rules.</li>
<li><strong>Test under motion, not screenshots.</strong> Clipmap pops, seam flicker, and streaming hitches all hide in still frames.</li>
<li><strong>Measure frame-time cost per feature, not average FPS.</strong> Averages hide the spikes that users actually feel.</li>
<li><strong>Publish the messy parts.</strong> The wrong turns, the stale-buffer ghost hunts, the two days spent blaming transition logic when the draw range was wrong. Those are the parts people can actually learn from.</li>
</ul>
<h2 id="external-reality-check-vuntra-city-devlogs" tabindex="-1">External reality check: Vuntra City devlogs <a class="header-anchor" href="#external-reality-check-vuntra-city-devlogs" aria-label="Permalink to &quot;External reality check: Vuntra City devlogs&quot;"></a></h2>
<p>After finishing this series, we reviewed the <code>@VuntraCity</code> devlogs as an external implementation check against our own open-world assumptions. It's a native UE5 project, not a browser stack, but the system patterns map well enough that the comparison is useful.</p>
<p>The first signal is that traversal speed has to be treated as a streaming control, not just gameplay. In Vuntra City, high-speed transit is intentionally routed above most interiors, and detail range scales with movement speed to avoid spawn churn and stalls (<a href="https://www.youtube.com/watch?v=KKeBElJS6-M" target="_blank" rel="noreferrer">transport system</a>, <a href="https://www.youtube.com/watch?v=aPuYXyJet38" target="_blank" rel="noreferrer">performance techniques</a>). That matches our policy-layer direction: movement mode should directly influence chunk radius, interior activation, and allowed work per frame.</p>
<p>The second signal is architecture. Their maps and address system required separating world topology from rendered objects so global queries can run for unloaded regions (<a href="https://www.youtube.com/watch?v=6dLn1GQpu2c" target="_blank" rel="noreferrer">maps and addresses</a>). That's the same separation we need in browser for world search, quest routing, moderation scans, and POI indexing without forcing render-bound data paths.</p>
<p>The third signal is simulation tiering. Their million-NPC design keeps coarse schedule state cheap and global, then spends expensive behavior budget only near the player (<a href="https://www.youtube.com/watch?v=nBV0yAAJUf0" target="_blank" rel="noreferrer">million-NPC overview</a>, <a href="https://www.youtube.com/watch?v=eUi7DB1ar3s" target="_blank" rel="noreferrer">system deep dive</a>). That reinforces our own AOI-first simulation model, where near-field fidelity and far-field determinism are separate concerns with separate budgets.</p>
<p>And the fourth signal is design quality, not raw scale. Their strongest exploration moments come from weighted distributions, rare outliers, and diegetic navigation clues instead of constant UI overlays (<a href="https://www.youtube.com/watch?v=4MZ5-KQW3pc" target="_blank" rel="noreferrer">procedural environment notes</a>, <a href="https://www.youtube.com/watch?v=ixR1hqZJlv4" target="_blank" rel="noreferrer">no minimap loop</a>). For us, this is a reminder that technical systems should be tuned to produce discoverable variation, not just maximal throughput.</p>
<h2 id="technology-referenced-in-this-chapter" tabindex="-1">Technology referenced in this chapter <a class="header-anchor" href="#technology-referenced-in-this-chapter" aria-label="Permalink to &quot;Technology referenced in this chapter&quot;"></a></h2>
<p><strong>Clipmap ring geometry.</strong> Each ring is a flat grid mesh centered on the camera with vertices displaced by heightmap samples. The inner ring uses full resolution. Each subsequent ring doubles vertex spacing and covers a larger area. The tricky part is the boundary: where a high-res ring meets a low-res ring, edge vertices on the finer mesh snap to the midpoint of the coarser mesh's edge. The technique originates from Losasso and Hoppe's SIGGRAPH 2004 paper (<a href="https://hhoppe.com/geomclipmap.pdf" target="_blank" rel="noreferrer">PDF</a>) and is detailed in <a href="https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-2-terrain-rendering-using-gpu-based-geometry" target="_blank" rel="noreferrer">GPU Gems 2, Chapter 2</a>. See our <a href="/guides/landscape-generation-browser.html#geometry-clipmaps">landscape guide on geometry clipmaps</a>.</p>
<p><strong>2:1 edge morphing.</strong> At the boundary between two clipmap rings, the finer ring has vertices at positions the coarser ring doesn't share. Boundary vertices whose grid coordinate is odd along the ring edge are detected and their height is interpolated between the two neighboring even vertices. This produces watertight seams without dedicated transition geometry. The interpolation runs in the vertex shader: <code>morphedHeight = mix(heightLeft, heightRight, 0.5)</code> for boundary vertices, using the same geomorphing framework described in <a href="/guides/landscape-generation-browser.html#geomorphing-pop-free-lod-transitions">our guide</a>.</p>
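<p>A toy CPU-side version of that boundary rule, with assumed names (the real pass runs in the vertex shader):</p>

```javascript
// Toy CPU version of 2:1 edge morphing (assumed names). 'edgeHeights'
// holds the finer ring's heights along one boundary edge, indexed by
// grid coordinate along that edge.
function morphEdgeHeights(edgeHeights) {
  const out = edgeHeights.slice();
  for (let i = 1; i < edgeHeights.length - 1; i += 2) {
    // Odd vertices have no counterpart on the coarser ring, so snap
    // them to the midpoint of their even neighbors: mix(left, right, 0.5).
    out[i] = 0.5 * (edgeHeights[i - 1] + edgeHeights[i + 1]);
  }
  return out;
}
```

<p>With the odd vertices pinned to those midpoints, the finer edge becomes piecewise identical to the coarser edge, which is exactly why no dedicated transition geometry is needed.</p>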
<p><strong>Equirectangular skybox mapping.</strong> A single 2D image that maps the full sphere of sky directions using longitude-latitude projection. The horizontal axis covers 0-360 degrees, the vertical axis covers 0-180 degrees. In Three.js, setting <code>texture.mapping = EquirectangularReflectionMapping</code> with <code>SRGBColorSpace</code> enables this as a scene background. In TSL, <code>equirectUV(direction)</code> converts a 3D view direction into the 2D UV coordinates for sampling the texture.</p>
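<p>The direction-to-UV conversion is plain spherical math. Here is a CPU reference that mirrors what <code>equirectUV</code> computes; the exact Three.js node may differ in detail, so treat this as a sketch:</p>

```javascript
// CPU sketch of the equirect direction-to-UV mapping; 'dir' must be a
// normalized {x, y, z} direction.
function equirectUV(dir) {
  // Longitude: angle around Y, remapped from [-PI, PI] to [0, 1].
  const u = Math.atan2(dir.z, dir.x) / (2 * Math.PI) + 0.5;
  // Latitude: angle above the horizon, remapped from [-PI/2, PI/2] to [0, 1].
  const v = Math.asin(Math.max(-1, Math.min(1, dir.y))) / Math.PI + 0.5;
  return { u, v };
}
```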
<p><strong>Per-fragment fog color from sky.</strong> Standard fog blends fragments toward a single constant color. For a scene with a detailed skybox, this looks wrong because the sky color varies by direction. The fix is to compute the camera-to-fragment world direction per pixel (<code>positionWorld.sub(cameraPosition).normalize()</code>) and sample the skybox at that direction for the fog color. Each fragment fades toward the sky color that's actually behind it, producing correct blending from any camera angle. The fog factor uses <code>smoothstep(nearDist, farDist, viewDepth)</code> with <code>positionView.z.negate()</code> for camera-space depth.</p>
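<p>Put together, the per-fragment blend reduces to a few lines. This is a CPU sketch of the same math under assumed names; in the real shader, <code>skyColor</code> comes from sampling the skybox at the camera-to-fragment direction:</p>

```javascript
// GLSL-style smoothstep: cubic ramp clamped to [0, 1].
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// CPU sketch of the per-fragment fog blend (assumed names). skyColor
// stands in for the skybox sample behind the fragment; viewDepth
// stands in for positionView.z.negate().
function applySkyFog(terrainColor, skyColor, viewDepth, near, far) {
  const f = smoothstep(near, far, viewDepth);
  return terrainColor.map((c, i) => c + (skyColor[i] - c) * f);
}
```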
<p><strong>Import maps for ES modules.</strong> A browser-native mechanism (<code>&lt;script type=&quot;importmap&quot;&gt;</code>) that maps bare module specifiers (like <code>three/tsl</code>) to actual URLs. When Three.js 0.183.1 restructured its build outputs, <code>three/tsl</code> needed to resolve to <code>three.tsl.js</code> and TSL internally imported <code>three/webgpu</code> as a bare specifier. Both mappings had to be explicit in the import map, or the browser produced &quot;does not provide an export&quot; or &quot;failed to resolve module specifier&quot; errors.</p>
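<p>For reference, the shape of the mapping looks like this. The <code>/vendor/...</code> URLs are placeholders; the actual paths depend on where you host the Three.js build files:</p>

```html
<!-- Illustrative import map; the /vendor/... paths are placeholders. -->
<script type="importmap">
{
  "imports": {
    "three": "/vendor/three-0.183.1/three.module.js",
    "three/webgpu": "/vendor/three-0.183.1/three.webgpu.js",
    "three/tsl": "/vendor/three-0.183.1/three.tsl.js"
  }
}
</script>
```

<p>Both of the last two entries matter: leaving out <code>three/webgpu</code> breaks TSL's own internal import even if your application code never imports it directly.</p>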
<h2 id="further-reading" tabindex="-1">Further reading <a class="header-anchor" href="#further-reading" aria-label="Permalink to &quot;Further reading&quot;"></a></h2>
<p>For deeper coverage of the technologies used throughout this series, see our companion guides:</p>
<ul>
<li><a href="/guides/landscape-generation-browser.html">Landscape Generation with Dynamic LOD and Streaming for Browser Open Worlds</a> covers heightmaps, SDFs, marching cubes, Transvoxel, geometry clipmaps, geomorphing, streaming architecture, terrain materials, and vegetation rendering.</li>
<li><a href="/guides/browser-3d-open-world-tech.html">Browser 3D Open World Tech for Multiplayer Creator Worlds</a> covers rendering stacks, WebGPU, physics, networking, multiplayer architecture, and lessons from Skyrim, The Witcher 3, Breath of the Wild, and GTA V.</li>
</ul>
<p>Thank you for following this twelve-part ride.</p>
<p>Part 1: <a href="/blog/2026-02-25-open-world-browser-part-01-risk-first.html">We started by trying to break it</a><br>
Part 2: <a href="/blog/2026-02-25-open-world-browser-part-02-worker-physics.html">Worker physics and the input lag fear</a><br>
Part 3: <a href="/blog/2026-02-25-open-world-browser-part-03-the-unflashy-spikes.html">The unflashy spikes that saved us</a><br>
Part 4: <a href="/blog/2026-02-25-open-world-browser-part-04-streaming-before-fancy.html">Streaming before fancy terrain</a><br>
Part 5: <a href="/blog/2026-02-25-open-world-browser-part-05-budgeting-the-pretty.html">Budgeting the pretty stuff</a><br>
Part 6: <a href="/blog/2026-02-25-open-world-browser-part-06-clipmaps.html">Clipmaps changed the plot</a><br>
Part 7: <a href="/blog/2026-02-25-open-world-browser-part-07-marching-cubes.html">Marching cubes and the first real caves</a><br>
Part 8: <a href="/blog/2026-02-25-open-world-browser-part-08-webgpu-integration.html">Integration without losing our baseline</a><br>
Part 9: <a href="/blog/2026-02-25-open-world-browser-part-09-transvoxel-first-cut.html">Transvoxel started with a scaffold</a><br>
Part 10: <a href="/blog/2026-02-25-open-world-browser-part-10-seam-chaos.html">Seam chaos and the corner boss fight</a><br>
Part 11: <a href="/blog/2026-02-25-open-world-browser-part-11-policy-modes.html">Policy mode, not hardcoded mode</a></p>
<hr>
<p>Part 12 of 12.<br>
Previous: <a href="/blog/2026-02-25-open-world-browser-part-11-policy-modes.html">Part 11 - Policy mode, not hardcoded mode</a><br>
Series guide: <a href="/blog/2026-02-25-open-world-browser-series-guide.html">/blog/2026-02-25-open-world-browser-series-guide</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Open world in browser series guide]]></title>
            <link>https://app.cinevva.com/blog/2026-02-25-open-world-browser-series-guide</link>
            <guid>https://app.cinevva.com/blog/2026-02-25-open-world-browser-series-guide</guid>
            <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Reading order for the 12-part open world browser series, plus a plain-English spike glossary and live demo map.]]></description>
            <content:encoded><![CDATA[<h1 id="open-world-in-browser-series-guide" tabindex="-1">Open world in browser series guide <a class="header-anchor" href="#open-world-in-browser-series-guide" aria-label="Permalink to &quot;Open world in browser series guide&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO and Co-Founder of Cinevva</em></p>
<p>This page is the navigation hub for the full series.</p>
<p>If you arrived from one random part, start here.</p>
<p>March 2026 update: we added an external implementation cross-check from the Vuntra City devlogs in <a href="/blog/2026-02-25-open-world-browser-part-12-lessons.html">Part 12</a> and expanded the companion <a href="/guides/browser-3d-open-world-tech.html">browser open world tech guide</a> with the same findings.</p>
<h2 id="what-spike-means-in-this-series" tabindex="-1">What &quot;spike&quot; means in this series <a class="header-anchor" href="#what-spike-means-in-this-series" aria-label="Permalink to &quot;What &quot;spike&quot; means in this series&quot;"></a></h2>
<p>A spike is a short, focused experiment.</p>
<p>Each spike tests one risky question. We keep scope tight, measure what matters, and decide what to do next. Think of it as a learning instrument, not a polished feature.</p>
<p>In this project, spikes are mostly standalone pages under <code>/spikes/*</code> that you can run live.</p>
<h2 id="read-the-series-in-order" tabindex="-1">Read the series in order <a class="header-anchor" href="#read-the-series-in-order" aria-label="Permalink to &quot;Read the series in order&quot;"></a></h2>
<ol>
<li><a href="/blog/2026-02-25-open-world-browser-part-01-risk-first.html">Part 1: We started by trying to break it</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-02-worker-physics.html">Part 2: Worker physics and the input lag fear</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-03-the-unflashy-spikes.html">Part 3: The unflashy spikes that saved us</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-04-streaming-before-fancy.html">Part 4: Streaming before fancy terrain</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-05-budgeting-the-pretty.html">Part 5: Budgeting the pretty stuff</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-06-clipmaps.html">Part 6: Clipmaps changed the plot</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-07-marching-cubes.html">Part 7: Marching cubes and the first real caves</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-08-webgpu-integration.html">Part 8: Integration without losing our baseline</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-09-transvoxel-first-cut.html">Part 9: Transvoxel started with a scaffold</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-10-seam-chaos.html">Part 10: Seam chaos and the corner boss fight</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-11-policy-modes.html">Part 11: Policy mode, not hardcoded mode</a></li>
<li><a href="/blog/2026-02-25-open-world-browser-part-12-lessons.html">Part 12: Rings, sky fog, and what we would do again</a></li>
</ol>
<h2 id="live-spike-map" tabindex="-1">Live spike map <a class="header-anchor" href="#live-spike-map" aria-label="Permalink to &quot;Live spike map&quot;"></a></h2>
<p>Start with these if you want to click through the technical timeline directly.</p>
<p><a href="/spikes/01-terrain/" target="_blank">Spike 1</a> terrain and instancing · <a href="/blog/spike-source?spike=01-terrain">source</a><br>
<a href="/spikes/02-rapier-worker/" target="_blank">Spike 2</a> worker physics · <a href="/blog/spike-source?spike=02-rapier-worker">source</a><br>
Spike 3 Durable Object broadcast load test ran as a service script, not a standalone page<br>
Spike 4 mobile quality constraints, tested via Spike 1 query param<br>
Spike 5 LLM behavior reliability, ran as a script, not a standalone page<br>
<a href="/spikes/06-chunk-streaming/" target="_blank">Spike 6</a> chunk load and swap · <a href="/blog/spike-source?spike=06-chunk-streaming">source</a><br>
<a href="/spikes/07-gpu-vegetation/" target="_blank">Spike 7</a> vegetation from density maps · <a href="/blog/spike-source?spike=07-gpu-vegetation">source</a><br>
<a href="/spikes/08-terrain-material/" target="_blank">Spike 8</a> terrain material cost · <a href="/blog/spike-source?spike=08-terrain-material">source</a><br>
<a href="/spikes/09-csm-shadows/" target="_blank">Spike 9</a> cascaded shadows budget · <a href="/blog/spike-source?spike=09-csm-shadows">source</a><br>
<a href="/spikes/10-clipmap-geomorph/" target="_blank">Spike 10</a> clipmaps and geomorphing · <a href="/blog/spike-source?spike=10-clipmap-geomorph">source</a><br>
<a href="/spikes/11-chunk-streaming/" target="_blank">Spike 11</a> heightmap chunk streaming · <a href="/blog/spike-source?spike=11-chunk-streaming">source</a><br>
<a href="/spikes/12-webgpu-marching-cubes/" target="_blank">Spike 12</a> WebGPU marching cubes · <a href="/blog/spike-source?spike=12-webgpu-marching-cubes">source</a><br>
<a href="/spikes/13-threejs-webgpu/" target="_blank">Spike 13</a> WebGPU integration baseline · <a href="/blog/spike-source?spike=13-threejs-webgpu">source</a><br>
<a href="/spikes/14-threejs-webgpu-incremental/" target="_blank">Spike 14</a> incremental hardening · <a href="/blog/spike-source?spike=14-threejs-webgpu-incremental">source</a><br>
<a href="/spikes/15-transvoxel-seam/" target="_blank">Spike 15</a> seam scaffold · <a href="/blog/spike-source?spike=15-transvoxel-seam">source</a><br>
<a href="/spikes/16-transvoxel-face/" target="_blank">Spike 16</a> first Transvoxel face · <a href="/blog/spike-source?spike=16-transvoxel-face">source</a><br>
<a href="/spikes/17-dual-mc-lod/" target="_blank">Spike 17</a> dual MC LOD · <a href="/blog/spike-source?spike=17-dual-mc-lod">source</a><br>
<a href="/spikes/18-transvoxel-heightmap-seam/" target="_blank">Spike 18</a> heightmap seam · <a href="/blog/spike-source?spike=18-transvoxel-heightmap-seam">source</a><br>
<a href="/spikes/19-transvoxel-corner-grid/" target="_blank">Spike 19</a> mixed-resolution corner · <a href="/blog/spike-source?spike=19-transvoxel-corner-grid">source</a><br>
<a href="/spikes/20-gpu-transvoxel-corner/" target="_blank">Spike 20</a> GPU corner seam · <a href="/blog/spike-source?spike=20-gpu-transvoxel-corner">source</a><br>
<a href="/spikes/21-gpu-mc-transvoxel-corner/" target="_blank">Spike 21</a> GPU MC corner seam · <a href="/blog/spike-source?spike=21-gpu-mc-transvoxel-corner">source</a><br>
<a href="/spikes/22-gpu-mc-heightmap-fallback/" target="_blank">Spike 22</a> MC to HM fallback · <a href="/blog/spike-source?spike=22-gpu-mc-heightmap-fallback">source</a><br>
<a href="/spikes/23-policy-chunk-modes/" target="_blank">Spike 23</a> policy-based chunk modes · <a href="/blog/spike-source?spike=23-policy-chunk-modes">source</a><br>
<a href="/spikes/24-gpu-clipmap-rings/" target="_blank">Spike 24</a> clipmap rings and sky fog · <a href="/blog/spike-source?spike=24-gpu-clipmap-rings">source</a></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Breaker Belt: Snake meets Arkanoid, vibe coded in three days]]></title>
            <link>https://app.cinevva.com/blog/2026-02-18-a-breaker-belt</link>
            <guid>https://app.cinevva.com/blog/2026-02-18-a-breaker-belt</guid>
            <pubDate>Wed, 18 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A cosmic serpent breaks bricks across 50 waves with reactive music, AI narration, and 23 brick types. Two people. Three days. Web, mobile, and PC.]]></description>
            <content:encoded><![CDATA[<h1 id="a-breaker-belt-snake-meets-arkanoid-vibe-coded-in-three-days" tabindex="-1">A Breaker Belt: Snake meets Arkanoid, vibe coded in three days <a class="header-anchor" href="#a-breaker-belt-snake-meets-arkanoid-vibe-coded-in-three-days" aria-label="Permalink to &quot;A Breaker Belt: Snake meets Arkanoid, vibe coded in three days&quot;"></a></h1>
<p><em>By <a href="/about.html">Mariana Muntean</a>, CEO of Cinevva</em></p>
<img src="https://cdn.cinevva.com/blog/breaker-belt-cover.webp" alt="A Breaker Belt gameplay: a neon cosmic serpent breaking bricks in space" style="width:100%;border-radius:8px;margin:1.5rem 0" />
<p>We made a game where your snake is the paddle and the bricks fight back. It shipped on web, mobile, and PC. It took two of us about three days. On and off, not crunching. And it's genuinely fun to play.</p>
<p>That last part is the thing worth paying attention to.</p>
<div style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;margin:1.5rem 0">
<iframe src="https://www.youtube.com/embed/VaFkfCT3OuU" style="position:absolute;top:0;left:0;width:100%;height:100%;border:0" allow="accelerometer;autoplay;clipboard-write;encrypted-media;gyroscope;picture-in-picture" allowfullscreen></iframe>
</div>
<h2 id="the-mashup-nobody-asked-for" tabindex="-1">The mashup nobody asked for <a class="header-anchor" href="#the-mashup-nobody-asked-for" aria-label="Permalink to &quot;The mashup nobody asked for&quot;"></a></h2>
<p>Snake is about growth and spatial awareness. Arkanoid is about reflexes and angle prediction. They come from completely different design philosophies, and mashing them together sounds like the kind of pitch that gets politely declined.</p>
<p>But <a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">A Breaker Belt</a> makes it work. You're a cosmic serpent, a living arc of neon current stitched from drift-metal vertebrae and comet silk, threading through an asteroid field of breakable blocks. Your head is the paddle. Your growing tail is both your greatest weapon and your most constant threat. The orbs ricochet off your body to shatter bricks, but one wrong turn into your own tail and you're done.</p>
<p>It's the kind of weird cross-genre experiment that usually dies before anyone gets to play it, because the development cost of finding out whether a weird idea works has traditionally been measured in months. Here it was measured in afternoons.</p>
<h2 id="what-actually-shipped" tabindex="-1">What actually shipped <a class="header-anchor" href="#what-actually-shipped" aria-label="Permalink to &quot;What actually shipped&quot;"></a></h2>
<p>The scope is what makes this interesting. This isn't a game jam prototype with placeholder rectangles and no sound.</p>
<p>The game runs 50 waves deep. That's not 50 variations of the same brick wall. The formations evolve from gentle onboarding arcs into fortress rings that demand angled shots through side gaps, then into layered diagonal mazes with one-tile-wide openings that require precision steering. Explosive bricks blow their neighbors apart. Phantom bricks flicker in and out of existence. Regenerating bricks heal back after you break them, forcing you to prioritize targets instead of sweeping left to right. Portal bricks teleport your orbs across the arena. Gravity wells bend your shots into slow, curving hymns. Laser emitters sketch red lines across the void. Mimic bricks look harmless until they decide they're not. By wave 15, you're navigating something that feels less like a puzzle and more like a living system that's learning your habits.</p>
<p>The music isn't a loop. It's a reactive soundtrack that builds with your gameplay. The underlying system runs a bass line, lead synth, pad, kick, snare, and hi-hat in E minor at 140 BPM, and as the action intensifies, additional percussion and synth layers fade in. When things calm down between waves, ambient pads take over. The music breathes with you. A dedicated composer would spend weeks tuning that kind of responsiveness. Here it was part of the creative flow.</p>
<p>The sound effects aren't samples pulled from a free pack. Every brick shatter, power-up pickup, orb bounce, and collision is synthesized in real time. Different pitch for head contacts versus tail contacts. Warm reverb for the spacey feel. Lowpass filtering to keep things satisfying without getting harsh. When you chain a combo, the audio tells you before the screen does.</p>
<p>And then there's the storytelling. Each wave opens with a narrative beat delivered by AI-generated voice. The writing has genuine personality. Wave one: &quot;They call it the Breaker Belt: a ribbon of engineered debris that circles the old star like a warning.&quot; By wave 38: &quot;The Belt stops feeling like a wall and starts feeling like a mind. It tests not your reflexes, but your habits. You break the habit. The Belt notices.&quot; It's 50 chapters of cosmic mythology that makes you care about why you're a snake breaking bricks in space.</p>
<p>The backgrounds evolve too. Early waves are calm indigo starfields with soft meteor rain. By mid-game, aurora bands and nebula clouds appear. Late game goes full Ion Storm with sharp cyan streaks against near-black space. Wave 15 and beyond, you're in Event Horizon territory: deep void punctuated by slow-moving nebula filaments and faint black hole lensing. The game communicates progression through atmosphere as much as difficulty.</p>
<p>All of this runs on keyboard, gamepad, or touchscreen. Published to web, mobile, and PC from one codebase. No separate builds. No porting.</p>
<h2 id="the-team-that-wasn-t-needed" tabindex="-1">The team that wasn't needed <a class="header-anchor" href="#the-team-that-wasn-t-needed" aria-label="Permalink to &quot;The team that wasn't needed&quot;"></a></h2>
<p>Here's the part that should make any game developer pause.</p>
<p>A game with this depth of content would typically need a game designer, a couple of programmers, a 2D artist, a sound designer, a music composer, a level designer, a narrative writer, and QA. That's eight or nine people. At a modest indie pace, you're looking at three to six months of coordinated work. Standups, Jira tickets, asset pipelines, platform-specific debugging.</p>
<p>Two of us made this in a long weekend on the <a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">Cinevva Engine</a>.</p>
<p>The talent required didn't change. The ratio between creative intent and implementation overhead did. The time was spent deciding what the game should feel like, not fighting tools to make it happen.</p>
<h2 id="why-this-matters-if-you-make-things" tabindex="-1">Why this matters if you make things <a class="header-anchor" href="#why-this-matters-if-you-make-things" aria-label="Permalink to &quot;Why this matters if you make things&quot;"></a></h2>
<p>The interesting question isn't whether AI tools can help make games faster. That's been answered. The interesting question is what happens to ideas that used to be too risky to try.</p>
<p>&quot;Snake but you're the paddle in an Arkanoid arena&quot; is not something a producer greenlights. A few indie devs have tried variations of this mashup on itch.io (<a href="https://newdron.itch.io/breaksnake" target="_blank" rel="noreferrer">BreakSnake</a>, <a href="https://neop87.itch.io/snakeout" target="_blank" rel="noreferrer">SnakeOut</a>, <a href="https://merrak.itch.io/snake-break" target="_blank" rel="noreferrer">Snake Break</a>), but they're all small game jam experiments. The genre fusion has never been given a real production pass with reactive music, AI narration, dozens of brick types, and 50 waves of escalating level design. In traditional development, that kind of polish on a risky concept gets killed in a meeting before anyone writes a line of code.</p>
<p>When trying something weird costs an afternoon instead of a quarter's budget, the strange ideas get built. Some of them turn out to be genuinely good. A Breaker Belt is one of those.</p>
<h2 id="play-it-or-make-your-own" tabindex="-1">Play it. Or make your own. <a class="header-anchor" href="#play-it-or-make-your-own" aria-label="Permalink to &quot;Play it. Or make your own.&quot;"></a></h2>
<p><a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">A Breaker Belt</a> is playable right now on web, mobile, and PC. If it makes you want to build something, the <a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">Cinevva Engine</a> is free to use. Describe what you want, iterate on what comes back, ship when it's ready. The <a href="https://app.cinevva.com/tools/music" target="_blank" rel="noreferrer">music</a>, <a href="https://app.cinevva.com/tools/sfx" target="_blank" rel="noreferrer">sound effects</a>, <a href="https://app.cinevva.com/tools/flux" target="_blank" rel="noreferrer">art</a>, and <a href="https://app.cinevva.com/tools/hunyuan3d" target="_blank" rel="noreferrer">3D models</a> are all built in.</p>
<p>Your weird game idea might be three days away from existing.</p>
<hr>
<p><em><a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">Play A Breaker Belt</a> | <a href="https://app.cinevva.com/engine" target="_blank" rel="noreferrer">Build your own game</a> | <a href="https://cinevva.com/charts" target="_blank" rel="noreferrer">Browse community games</a></em></p>
<p><strong>Related:</strong></p>
<ul>
<li><a href="/signals/2026-03-13-vibe-coding-new-game-jam.html">Vibe coding is the new game jam</a> — how AI tools collapse the gap between idea and prototype</li>
<li><a href="/guides/game-jams-hackathons.html">Game Jams &amp; Hackathons</a> — the jam format that makes weird mashups possible</li>
<li><a href="/guides/web-game-engines-comparison.html">Web Game Engines Comparison</a> — engines for shipping to web, mobile, and PC</li>
</ul>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[We didn't expect to build a radio station]]></title>
            <link>https://app.cinevva.com/blog/2026-02-17-we-didnt-expect-a-radio-station</link>
            <guid>https://app.cinevva.com/blog/2026-02-17-we-didnt-expect-a-radio-station</guid>
            <pubDate>Tue, 17 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[We have a music generator on Cinevva so game creators can make soundtracks. People started making music just because they could. 362 tracks later, Cinevva Radio is live.]]></description>
            <content:encoded><![CDATA[<h1 id="we-didn-t-expect-to-build-a-radio-station" tabindex="-1">We didn't expect to build a radio station <a class="header-anchor" href="#we-didn-t-expect-to-build-a-radio-station" aria-label="Permalink to &quot;We didn't expect to build a radio station&quot;"></a></h1>
<p><em>By <a href="/about.html">Mariana Muntean</a>, CEO of Cinevva</em></p>
<img src="https://cdn.cinevva.com/blog/cinevva-radio-lofi-cafe.png" alt="Cinevva Community Radio with 8 stations including Soundstage, Voltage, Lo-Fi Cafe, Main Stage, Ivory Tower, and Electric Dreams, currently playing Quiet Flute Drone" style="width:100%;border-radius:8px;margin:1.5rem 0" />
<p>We have a <a href="https://app.cinevva.com/tools/music" target="_blank" rel="noreferrer">music generator</a> on the Cinevva platform so game creators can make game soundtracks and SFX without licensing headaches. What happened next, we did not plan for. People started making music because they wanted to make music. Not for games or projects. Just music production.</p>
<h2 id="bella-bay-made-over-100-tracks-in-two-weeks" tabindex="-1">Bella Bay made over 100 tracks in two weeks <a class="header-anchor" href="#bella-bay-made-over-100-tracks-in-two-weeks" aria-label="Permalink to &quot;Bella Bay made over 100 tracks in two weeks&quot;"></a></h2>
<p>I'm not exaggerating. One of our creators, Bella Bay, generated over 100 tracks in about two weeks. When you listen through them on <a href="https://cinevva.com/radio" target="_blank" rel="noreferrer">Cinevva Radio</a>, you can hear the progression. The early tracks are experiments. The later ones sound like someone who found their style. That kind of creative acceleration doesn't happen when you're fighting your tools. It happens when the tools get out of the way.</p>
<p>Bella Bay isn't an outlier in spirit, just in volume. All across the platform, people started treating the music generator not as a utility for game assets but as a creative instrument. Producers experimenting with genres they'd never tried. People iterating on a sound until it clicked. Bedroom artists who'd never had access to a full production toolkit suddenly had one and went all in.</p>
<h2 id="so-we-built-a-radio" tabindex="-1">So we built a radio <a class="header-anchor" href="#so-we-built-a-radio" aria-label="Permalink to &quot;So we built a radio&quot;"></a></h2>
<p>It had to be done.</p>
<p>We had all these tracks sitting in people's accounts with no collective space. No way to discover what others were making. No way to stumble onto a track that changes your afternoon.</p>
<p><a href="https://cinevva.com/radio" target="_blank" rel="noreferrer">Cinevva Radio</a> is now live. Eight stations. 362 community-created tracks and growing every day. Every single song was made by someone on the platform. You open it and something is playing. You don't have to search. You don't have to decide. You just listen.</p>
<p><strong><a href="https://cinevva.com/radio#cinematic" target="_blank" rel="noreferrer">Soundstage</a></strong> is our biggest station. 111 tracks of cinematic and epic scores. I honestly didn't see that coming. People love making dramatic, film-style music.</p>
<p><strong><a href="https://cinevva.com/radio#mixed" target="_blank" rel="noreferrer">Discovery</a></strong> catches everything that doesn't fit neatly into a genre. 78 tracks. Latin beats next to Afrobeats next to country next to something that defies description. This station is wild.</p>
<p><strong><a href="https://cinevva.com/radio#pop" target="_blank" rel="noreferrer">Main Stage</a></strong> is pop and vocal tracks. 56 tracks with full lyrics and vocals. People are writing love songs, breakup songs, worship music, ballads. Some of it is genuinely catchy.</p>
<p><strong><a href="https://cinevva.com/radio#rock" target="_blank" rel="noreferrer">Voltage</a></strong> is rock and metal. 53 tracks. Guitar riffs, punk energy, grunge, hard rock. I keep this one on while working.</p>
<p><strong><a href="https://cinevva.com/radio#lofi" target="_blank" rel="noreferrer">Lo-Fi Cafe</a></strong> is exactly what it sounds like. 25 chill ambient tracks for studying, working, or just vibing.</p>
<p><strong><a href="https://cinevva.com/radio#electronic" target="_blank" rel="noreferrer">Electric Dreams</a></strong> has 22 electronic and synth tracks. Synthwave, techno, EDM, drum and bass.</p>
<p><strong><a href="https://cinevva.com/radio#classical" target="_blank" rel="noreferrer">Ivory Tower</a></strong> is classical and orchestral. 10 tracks. Piano pieces, string arrangements, symphonic stuff. Small but growing.</p>
<p><strong><a href="https://cinevva.com/radio#hiphop" target="_blank" rel="noreferrer">The Cipher</a></strong> is hip-hop and rap. 7 tracks so far. Newest station, still finding its voice.</p>
<h2 id="what-you-see-when-you-listen" tabindex="-1">What you see when you listen <a class="header-anchor" href="#what-you-see-when-you-listen" aria-label="Permalink to &quot;What you see when you listen&quot;"></a></h2>
<p>When you open <a href="https://cinevva.com/radio" target="_blank" rel="noreferrer">Cinevva Radio</a>, it's a live stream. Think of it less like a playlist app and more like an actual radio station. Tracks play continuously. You can react in real time, chat with other listeners, and see who made each track.</p>
<p>Creators get credited on screen while their track plays. Their name, their prompt, the genre. If you hear something you like, you know who made it. It's community-created content with a discovery layer on top. No algorithms deciding what's worthy. No gatekeepers. You made a track, it goes on the station that matches its genre, and people hear it.</p>
<h2 id="this-is-bigger-than-a-feature" tabindex="-1">This is bigger than a feature <a class="header-anchor" href="#this-is-bigger-than-a-feature" aria-label="Permalink to &quot;This is bigger than a feature&quot;"></a></h2>
<p>For decades, music production required expensive software, years of training, and access to equipment most people couldn't afford. What we're seeing is a different kind of creator. People who think in descriptions and feelings rather than notes and time signatures.</p>
<p>&quot;Rainy night jazz with a broken piano&quot; is a creative direction. The person who wrote that prompt made an artistic choice. They directed the mood, the instrumentation, the emotional register. They just don't happen to play piano.</p>
<p>I don't think that makes their output less valid. I think it means the definition of &quot;musician&quot; is stretching. And watching someone like Bella Bay go from zero to a hundred tracks in two weeks tells me the creative drive was always there. The tools just weren't.</p>
<h2 id="what-s-next" tabindex="-1">What's next <a class="header-anchor" href="#what-s-next" aria-label="Permalink to &quot;What's next&quot;"></a></h2>
<p>We're working on community profiles where producers can showcase their catalog and build a following. The <a href="https://cinevva.com/charts" target="_blank" rel="noreferrer">charts page</a> already shows community creations across all tools, and we want to bring that same energy specifically to music.</p>
<p>But right now, the radio is live. <a href="https://cinevva.com/radio" target="_blank" rel="noreferrer">Go listen</a>. And if you want to make your own tracks, the <a href="https://app.cinevva.com/tools/music" target="_blank" rel="noreferrer">music generator</a> is free to use. Your track might end up on the air.</p>
<hr>
<p><em><a href="https://cinevva.com/radio" target="_blank" rel="noreferrer">Listen to Cinevva Radio</a> | <a href="https://app.cinevva.com/tools/music" target="_blank" rel="noreferrer">Make music</a> | <a href="https://cinevva.com/charts" target="_blank" rel="noreferrer">Browse community creations</a></em></p>
<p><strong>Related:</strong></p>
<ul>
<li><a href="/guides/frontier-gen-ai-models.html">Frontier Open-Source Gen AI Models</a> — the AI music and audio models behind the scenes</li>
<li><a href="/tutorials/web-audio-api-games.html">Web Audio API for games</a> — building interactive audio in the browser</li>
<li><a href="/signals/2026-03-13-vibe-coding-new-game-jam.html">Vibe coding is the new game jam</a> — how describing intent becomes the primary creative input</li>
</ul>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The intuitive mind in an age of AI]]></title>
            <link>https://app.cinevva.com/blog/2026-02-10-the-intuitive-mind</link>
            <guid>https://app.cinevva.com/blog/2026-02-10-the-intuitive-mind</guid>
            <pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Jensen Huang says raw math skills are now a commodity. Here's why that resonates with someone who failed the SAT, ignored the rules, and built something people actually wanted.]]></description>
            <content:encoded><![CDATA[<h1 id="the-intuitive-mind-in-an-age-of-ai" tabindex="-1">The intuitive mind in an age of AI <a class="header-anchor" href="#the-intuitive-mind-in-an-age-of-ai" aria-label="Permalink to &quot;The intuitive mind in an age of AI&quot;"></a></h1>
<p><em>By <a href="/about.html">Mariana Muntean</a>, CEO of Cinevva</em></p>
<img src="https://cdn.cinevva.com/blog/game-jam-houston-2018.png" alt="Mariana Muntean with classmates during a 48 hours game jam in Houston 2018" style="width:100%;border-radius:8px;margin:1.5rem 0" />
<small>With classmates during a 48-hour game jam in Houston, 2018</small>
<p>AI has essentially commoditized raw logic and computational ability. The skills we used to worship, the mental math, the pattern recognition, the ability to grind through complex algorithms, AI does all of that now. Faster. Better. Without getting tired. Do we still need to know them? Absolutely. But perceptions change, and everyone now has to adjust to a new era: an era of intuitive interaction and outcomes.</p>
<p>According to <a href="https://business.columbia.edu/insights/digital-future/nvidia-ceo-jensen-huang-reveals-keys-ai-and-leadership" target="_blank" rel="noreferrer">Jensen Huang</a>, NVIDIA's CEO, what matters now is the ability to sense a &quot;vibe&quot; and see around corners before the data appears. The intersection of technical literacy and deep empathy. The intuitive understanding that silicon can't touch.</p>
<p>I've never heard anyone with his credibility say something that validated my entire life trajectory quite so directly.</p>
<h2 id="the-sat-and-the-system-that-wasn-t-built-for-me" tabindex="-1">The SAT and the system that wasn't built for me <a class="header-anchor" href="#the-sat-and-the-system-that-wasn-t-built-for-me" aria-label="Permalink to &quot;The SAT and the system that wasn't built for me&quot;"></a></h2>
<p>A few years back I took the SAT and failed. I'm not proud of it, but I also didn't try again. Something felt fundamentally wrong about the whole thing, and I couldn't shake that feeling no matter how much people told me to just study harder and take it again.</p>
<p>Looking back, I think I was right.</p>
<p>The SAT is designed for the American education system. That sounds obvious, but the implications run deep. American high schools teach to specific patterns, question types, and ways of framing problems. Students grow up marinating in that style of standardized testing from elementary school onward. By the time they hit the SAT, they've internalized the rhythm.</p>
<p>International students don't have that advantage. We come from systems with different educational philosophies. European schools often emphasize depth over breadth, essay-based examination over multiple choice, oral defense over bubble sheets. Asian systems have their own standardized tests, but they measure different things in different ways. South American, African, Middle Eastern educational traditions each carry their own logic.</p>
<p>When you drop an international student into the SAT, you're not just testing their knowledge. You're testing how quickly they can adapt to a foreign testing culture while simultaneously demonstrating mastery of content. You're testing cultural fluency as much as academic ability.</p>
<p>I was expected to prep for this over a summer and ace it. Learn an entire testing culture, unlearn my own educational instincts, and perform at a level that would impress American admissions officers. All in a few months.</p>
<p>I chose not to.</p>
<p>When you're young, you're smart in different ways. Intuitive ways. I was clearing a path for myself, even if I couldn't articulate exactly why at the time. Something in me knew this system wasn't mine.</p>
<h2 id="following-the-creative-thread-instead" tabindex="-1">Following the creative thread instead <a class="header-anchor" href="#following-the-creative-thread-instead" aria-label="Permalink to &quot;Following the creative thread instead&quot;"></a></h2>
<p>I studied game development and design because I loved the idea of using creativity and visual effects to build virtual worlds people could play and interact with. I loved the intersection of art and technology, storytelling and interactivity.</p>
<p>What I found shocked me.</p>
<p>Games require serious technical depth. Physics simulations, collision detection, vector math, lighting calculations, optimization. I knew that going in. The math and engineering aren't obstacles to game development. They're part of what makes games work.</p>
<p>The problem was the gap between creative vision and implementation. The engines and tools dominating the industry were designed by engineers for engineers. Everything ran on &quot;ifs&quot; and &quot;thens&quot; and node-based blueprints. You wanted a vortex effect? Learn shader programming. A specific lighting mood? Dig into material graphs. Character movement that feels right? Debug your character controller or physics parameters for hours.</p>
<p>If you're a visual person, if you think in vivid colors and moving images, if ideas come to you as fully formed scenes with sound and texture and emotional weight, you had to translate all of that into technical language before you could build any of it. You see a world in your mind, complete with lighting and atmosphere and the way characters move through space. Then you sit down at your computer and spend the next six hours debugging why your character falls through the floor.</p>
<p>The technical foundation matters. But the tools forced creators to live in implementation details instead of abstracting that complexity away. Game creation should feel like telling a story or directing a movie where people get to participate. That's the magic of the medium. Instead it felt like taking an engineering exam before you could even start.</p>
<p>Movie directors don't spend years learning physics engines before they can express their vision. They get a budget and a team that handles the technical execution. But in indie game development, you rarely have a budget. What you have is time and access to tools. And if you're a technical person, you can build a killer game. But only if.</p>
<p>This is how we got titles like Limbo, made by a small team with a singular artistic vision and the technical chops to execute it. Or Undertale, created largely by one person who happened to have the right combination of creative instinct and programming ability. Or Stardew Valley, where Eric Barone spent years teaching himself everything from pixel art to music composition to C# programming.</p>
<p>These games succeeded against enormous odds. But for every Limbo there are millions of creative visions that died because the tools demanded technical fluency their creators couldn't provide. Less than 3% of indie game developers ever achieve meaningful success. How many brilliant games never got made because their creators hit a wall of &quot;ifs&quot; and &quot;thens&quot; and gave up?</p>
<p>The barrier to entry wasn't creativity. It was technical gatekeeping built into the tools themselves.</p>
<h2 id="building-what-should-have-existed" tabindex="-1">Building what should have existed <a class="header-anchor" href="#building-what-should-have-existed" aria-label="Permalink to &quot;Building what should have existed&quot;"></a></h2>
<p>So five years ago I started building something different.</p>
<p>The pitch was simple: game development should be accessible to anyone with a creative vision. You shouldn't need a computer science degree to express yourself through interactive media. The tools should adapt to how creative people actually think, not the other way around.</p>
<p>VCs from Sequoia, Pear, Draper, and dozens of other firms told me it wouldn't work. It's B2C. The market isn't there. Indies don't pay for anything. You can't simplify game development without sacrificing capability. Gamers want complex games, and complex games require complex tools. Millions of excuses dressed up as market analysis.</p>
<p>People told me I was crazy. Maybe I was. But I kept coming back to the same question: why should multimillion dollar budgets be a prerequisite for creative expression? Movie directors and celebrity game producers have teams and resources. Everyone else gets a code editor and a prayer. I wanted to build the thing that closes that gap. You describe what you want, and it happens in front of your eyes. Every part of me knew this was right. I could feel it in every cell of my body.</p>
<p>Today thousands of people use Cinevva daily for 3D game assets, games, music, levels, and experiences. Projects created so far have drawn millions of views. Growing every single day. A two-minute pitch to Sand Hill Road isn't exactly the format for &quot;I failed the SAT but trust my intuition.&quot;</p>
<h2 id="what-intuition-actually-means" tabindex="-1">What intuition actually means <a class="header-anchor" href="#what-intuition-actually-means" aria-label="Permalink to &quot;What intuition actually means&quot;"></a></h2>
<p><a href="https://singjupost.com/transcript-jensen-huangs-interview-cisco-ai-summit-2026/" target="_blank" rel="noreferrer">Huang</a> wasn't just making a philosophical point. He was describing a real shift in what constitutes valuable intelligence.</p>
<p>For decades, we optimized for the wrong things. We built educational systems that rewarded memorization and calculation. We designed standardized tests that measured pattern-matching against previously seen problems. We hired people based on credentials that proved they could survive four years of academic gatekeeping. AI just made all of that less special.</p>
<p>What AI can't do, at least not yet, is sense what's missing. Feel when something is off. Intuit what people need before they can articulate it themselves. Read a room. Understand context that isn't captured in any dataset.</p>
<p>I poured my time, international life experience, money, and intuitive knowledge into building Cinevva. That's a mix that's hard to obtain in college. Hard to test for on the SAT. Hard to capture in any credential system designed before AI made raw cognitive horsepower abundant.</p>
<p>Sixteen or twenty years ago, computer scientists decided what tools should look like and how they should work. They built for themselves, for people who thought like them. The rest of us were expected to adapt. That era is ending. The people who will shape what comes next are the ones who understand what humans actually want. Who can feel when something is wrong and when something is right. Who build for people instead of for technical elegance.</p>
<p>I trusted something in myself that the system told me was worthless. And I was right.</p>
<hr>
<p><strong>Related:</strong></p>
<ul>
<li><a href="/blog/2026-01-18-skills-over-degrees.html">The job market is transforming — from credentials to skills</a></li>
<li><a href="/guides/game-dev-courses.html">Online Game Development Courses</a> — skills-first paths that bypass traditional gatekeeping</li>
<li><a href="/signals/2026-03-04-everyone-wants-ai-game-engine.html">Everyone wants to be the AI game engine now</a> — the industry shift toward tools that adapt to how people think</li>
</ul>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AI controversy, trust, and the post‑AI economy for games]]></title>
            <link>https://app.cinevva.com/blog/2026-01-18-ai-controversy-and-post-ai-economy</link>
            <guid>https://app.cinevva.com/blog/2026-01-18-ai-controversy-and-post-ai-economy</guid>
            <pubDate>Sun, 18 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A practical view: AI isn't the point—trust is. Why labeling, filters, and fair distribution matter more than 'pro' vs 'anti' AI debates.]]></description>
            <content:encoded><![CDATA[<h1 id="ai-controversy-trust-and-the-post‐ai-economy-for-games" tabindex="-1">AI controversy, trust, and the post‑AI economy for games <a class="header-anchor" href="#ai-controversy-trust-and-the-post‐ai-economy-for-games" aria-label="Permalink to &quot;AI controversy, trust, and the post‑AI economy for games&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO of Cinevva</em></p>
<p>AI in games turned into a genuine minefield somewhere around mid-2024. Jobs disappearing. People convinced they're next. Copyright questions that not even the lawyers can untangle. Endless aesthetic debates that loop back on themselves. Steam drowning in stuff nobody asked for. And underneath everything, this nagging worry every creator has now: &quot;am I actually making something here, or just... typing prompts?&quot;</p>
<p>We run a platform. You learn one thing fast doing that.</p>
<p><strong>Nobody likes feeling tricked.</strong></p>
<h2 id="these-numbers-surprised-us" tabindex="-1">These numbers surprised us <a class="header-anchor" href="#these-numbers-surprised-us" aria-label="Permalink to &quot;These numbers surprised us&quot;"></a></h2>
<p>We figured AI adoption would grow. Not like this though. Here's where things landed by late 2025:</p>
<table tabindex="0">
<thead>
<tr>
<th>What we tracked</th>
<th>2024</th>
<th>2025</th>
<th>The shift</th>
</tr>
</thead>
<tbody>
<tr>
<td>Steam games disclosing AI</td>
<td>~1,000</td>
<td>7,818</td>
<td>7x jump</td>
</tr>
<tr>
<td>New Steam releases using AI</td>
<td>~3%</td>
<td>~20%</td>
<td>One in five</td>
</tr>
<tr>
<td>Devs who think AI hurts quality</td>
<td>34%</td>
<td>47%</td>
<td>Thirteen points higher</td>
</tr>
<tr>
<td>Devs who think AI helps quality</td>
<td>—</td>
<td>11%</td>
<td>Not many</td>
</tr>
<tr>
<td>Revenue from AI-disclosed games</td>
<td>—</td>
<td>$660M</td>
<td>Twelve games broke 8 figures</td>
</tr>
</tbody>
</table>
<p>Sources: Tom's Hardware / Totally Human Media analysis, GDC 2025 Developer Survey, Unity 2025 Gaming Report</p>
<h2 id="what-actually-happened-to-make-players-this-skeptical" tabindex="-1">What actually happened to make players this skeptical <a class="header-anchor" href="#what-actually-happened-to-make-players-this-skeptical" aria-label="Permalink to &quot;What actually happened to make players this skeptical&quot;"></a></h2>
<p>This didn't come from nowhere. Real things went wrong. And people remember.</p>
<h3 id="voice-actors-found-out-they-d-been-cloned-—-on-launch-day" tabindex="-1">Voice actors found out they'd been cloned — on launch day <a class="header-anchor" href="#voice-actors-found-out-they-d-been-cloned-—-on-launch-day" aria-label="Permalink to &quot;Voice actors found out they'd been cloned — on launch day&quot;"></a></h3>
<p><em>Tomb Raider 4-6 Remastered</em> shipped with AI-generated versions of the original voice performances. The actors learned about it the same moment everyone else did. Launch day. The publisher eventually patched the voices out after legal pressure built up. That eleven-month SAG-AFTRA strike? This exact scenario was driving it.</p>
<h3 id="ai-slop-became-a-thing-people-say-now" tabindex="-1">&quot;AI slop&quot; became a thing people say now <a class="header-anchor" href="#ai-slop-became-a-thing-people-say-now" aria-label="Permalink to &quot;&quot;AI slop&quot; became a thing people say now&quot;"></a></h3>
<p><em>Call of Duty: Black Ops 6</em> quietly added AI disclosure to its Steam page. After it was already out. Players had noticed the weirdness — visual glitches scattered everywhere. One loading screen character had six fingers. &quot;AI slop&quot; turned into shorthand for anything that looks... off. Hollow. Like nobody cared enough to check.</p>
<h3 id="an-award-nomination-vanished" tabindex="-1">An award nomination vanished <a class="header-anchor" href="#an-award-nomination-vanished" aria-label="Permalink to &quot;An award nomination vanished&quot;"></a></h3>
<p><em>Clair Obscur: Expedition 33</em> got its Game of the Year nomination pulled at the Indie Game Awards. The shipped game didn't use AI though. The problem was AI placeholders during development — internal stuff that never made it to players. Final assets were entirely human-made. Didn't matter. Weeks of arguing about where exactly the line should be.</p>
<h3 id="teams-that-never-touched-ai-had-to-prove-it" tabindex="-1">Teams that never touched AI had to prove it <a class="header-anchor" href="#teams-that-never-touched-ai-had-to-prove-it" aria-label="Permalink to &quot;Teams that never touched AI had to prove it&quot;"></a></h3>
<p><em>Chessplus</em> and <em>Peak</em> both got hit with AI accusations. Neither used any. Both were award nominees. Both development teams ended up digging through old screenshots and layer files just to demonstrate their work was handmade. The <em>Peak</em> team put it best: &quot;We might be slop, but we're human-made, locally-sourced artisanal slop.&quot;</p>
<h3 id="studios-went-in-completely-opposite-directions" tabindex="-1">Studios went in completely opposite directions <a class="header-anchor" href="#studios-went-in-completely-opposite-directions" aria-label="Permalink to &quot;Studios went in completely opposite directions&quot;"></a></h3>
<p><strong>SNK</strong> — Discord moderators walked out after AI-looking visuals showed up in a <em>Fatal Fury: City of the Wolves</em> trailer. <strong>Games Workshop</strong> — banned AI entirely across all Warhammer properties. <strong>Larian Studios</strong> — said they used AI only for very early concept sketches on Baldur's Gate 3, nothing in the final game. <strong>Tim Sweeney</strong> — declared Steam's AI labels worthless and said to scrap them. <strong>Valve</strong> — shot back that devs complaining about AI labels usually worry their work looks &quot;low effort.&quot;</p>
<h2 id="finding-games-turned-into-a-trust-problem" tabindex="-1">Finding games turned into a trust problem <a class="header-anchor" href="#finding-games-turned-into-a-trust-problem" aria-label="Permalink to &quot;Finding games turned into a trust problem&quot;"></a></h2>
<p>Stuff gets made faster than anyone can properly evaluate it now. The questions changed:</p>
<p>What am I even looking at here?
Who made this?
Am I going to regret the time I spend on it?
Does any of this match what the trailer promised?</p>
<p>There's probably no grand resolution coming. No definitive ruling on AI versus human-made.</p>
<p>What's more likely: <strong>filters, real transparency, and incentive structures that push toward honesty</strong>.</p>
<h2 id="small-studios-stuck-in-the-middle-of-all-this" tabindex="-1">Small studios stuck in the middle of all this <a class="header-anchor" href="#small-studios-stuck-in-the-middle-of-all-this" aria-label="Permalink to &quot;Small studios stuck in the middle of all this&quot;"></a></h2>
<p>Indie developers landed in a strange spot.</p>
<p><strong>The upside is genuinely hard to dismiss:</strong>
You iterate faster. Costs drop when four people are making everything. Solo devs can actually finish things now. Localization stops eating your entire budget.</p>
<p><strong>The downside is just as real:</strong>
Output that feels generic. Training data and IP concerns that lawyers won't touch yet. Shipping systems you don't fully understand — some people call it comprehension debt. Getting grouped in with the flood of low-effort releases. Nearly half of surveyed developers think AI makes games worse overall.</p>
<p>Unity's 2025 report says 79% of developers feel positive about AI tools. Sounds pretty definitive. The reality is messier. Teams doing this well tend to use AI for the boring parts — grunt work, rough passes, QA tedium. Creative direction? That stays human.</p>
<h2 id="where-cinevva-lands-neutral-on-ai-strict-on-honesty" tabindex="-1">Where Cinevva lands: neutral on AI, strict on honesty <a class="header-anchor" href="#where-cinevva-lands-neutral-on-ai-strict-on-honesty" aria-label="Permalink to &quot;Where Cinevva lands: neutral on AI, strict on honesty&quot;"></a></h2>
<p>We don't turn games away for using AI. We don't give them special treatment either.</p>
<p>The rule is simple: <strong>if AI was involved, say so.</strong> Players decide what they care about. Filters actually work then.</p>
<ul>
<li><a href="/ai-content.html">AI-generated content policy</a></li>
</ul>
<h2 id="filters-work-better-than-arguments" tabindex="-1">Filters work better than arguments <a class="header-anchor" href="#filters-work-better-than-arguments" aria-label="Permalink to &quot;Filters work better than arguments&quot;"></a></h2>
<p>The AI debate in games isn't reaching consensus anytime soon. Probably never will. But individual preferences? Those are clear enough.</p>
<p>Some players specifically want human-directed art. Writing that came from a person. Visible craft.</p>
<p>Others genuinely don't care. Fun is fun.</p>
<p>Filters let both groups find what they're looking for. Nobody has to win.</p>
<h2 id="payment-models-matter-more-than-anyone-s-opinion" tabindex="-1">Payment models matter more than anyone's opinion <a class="header-anchor" href="#payment-models-matter-more-than-anyone-s-opinion" aria-label="Permalink to &quot;Payment models matter more than anyone's opinion&quot;"></a></h2>
<p>When revenue ties to <strong>playtime</strong> instead of unit sales, you make money by:</p>
<p>Getting players hooked fast. Keeping their attention. Delivering what your marketing said you would.</p>
<p>Quality becomes the obvious path. How you made it matters less.</p>
<p>Overpromise in your trailer? Players leave immediately. Retention tanks. Show them exactly what they're getting? They stick around. Playtime builds. Revenue comes. The economics sort themselves out.</p>
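<p>A minimal sketch of how such a model could behave, assuming a shared revenue pool split in proportion to playtime. The function and numbers are hypothetical illustrations, not Cinevva's actual payout mechanics:</p>

```python
# Toy illustration (hypothetical, not a real platform formula):
# each game earns a slice of a revenue pool proportional to the
# hours players spent in it, so retention directly drives revenue.

def playtime_payouts(pool: float, hours_by_game: dict[str, float]) -> dict[str, float]:
    """Split a revenue pool across games in proportion to total playtime."""
    total = sum(hours_by_game.values())
    if total == 0:
        return {game: 0.0 for game in hours_by_game}
    return {game: pool * hours / total for game, hours in hours_by_game.items()}

# A game that overpromises and loses players early earns a smaller
# slice than one that keeps a comparable audience engaged.
payouts = playtime_payouts(10_000.0, {"honest_trailer": 750.0, "overpromised": 250.0})
```

<p>Under this kind of split, a misleading trailer that tanks retention cuts revenue directly, regardless of how the game was made.</p>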
<h2 id="what-2026-probably-brings" tabindex="-1">What 2026 probably brings <a class="header-anchor" href="#what-2026-probably-brings" aria-label="Permalink to &quot;What 2026 probably brings&quot;"></a></h2>
<p><strong>More rules</strong> — EU AI Act keeps expanding. US protections for voice and likeness are growing, especially after SAG-AFTRA.</p>
<p><strong>Specialized tools</strong> — AI built specifically for small teams. Designed to keep humans steering.</p>
<p><strong>Smarter labels</strong> — &quot;Made with AI&quot; is too crude. Expect distinctions between AI-assisted workflows and AI-generated final assets.</p>
<p><strong>Audiences splitting</strong> — Some players will deliberately seek out traditionally made games. Others won't give it a thought. Both groups are large enough to build for.</p>
<p><strong>Platform competition</strong> — How storefronts handle transparency and discovery becomes a real differentiator.</p>
<h2 id="what-actually-moves-things-forward" tabindex="-1">What actually moves things forward <a class="header-anchor" href="#what-actually-moves-things-forward" aria-label="Permalink to &quot;What actually moves things forward&quot;"></a></h2>
<p>Arguing won't settle the AI question. What does:</p>
<ol>
<li><strong>Players</strong> picking based on what genuinely matters to them</li>
<li><strong>Creators</strong> being honest about how they work</li>
<li><strong>Platforms</strong> building tools that help both groups find each other</li>
</ol>
<p>That's the post-AI economy. Not warring camps. Trust holding the whole thing together.</p>
<hr>
<p><strong>Related:</strong></p>
<ul>
<li><a href="/ai-content.html">AI-generated content policy</a></li>
<li><a href="/creators.html">For game creators</a></li>
<li><a href="/faq.html">FAQ</a></li>
<li><a href="/guides/frontier-gen-ai-models.html">Frontier Open-Source Gen AI Models</a> — the specific models and how they work</li>
<li><a href="/signals/2026-03-06-open-source-ai-pollution.html">Open source has an AI pollution problem</a> — what happens when AI output floods open-source projects</li>
</ul>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The job market is transforming — from credentials to skills]]></title>
            <link>https://app.cinevva.com/blog/2026-01-18-skills-over-degrees</link>
            <guid>https://app.cinevva.com/blog/2026-01-18-skills-over-degrees</guid>
            <pubDate>Sun, 18 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[90% of companies now prioritize skills over degrees. Here's what this means for fresh graduates entering tech and game development.]]></description>
            <content:encoded><![CDATA[<h1 id="the-job-market-is-transforming-—-from-credentials-to-skills" tabindex="-1">The job market is transforming — from credentials to skills <a class="header-anchor" href="#the-job-market-is-transforming-—-from-credentials-to-skills" aria-label="Permalink to &quot;The job market is transforming — from credentials to skills&quot;"></a></h1>
<p><em>By <a href="/about.html">Oleg Sidorkin</a>, CTO of Cinevva</em></p>
<p>I watched my cousin graduate with honors in 2023. Computer science degree from a good state school. Solid GPA. He spent fourteen months applying to jobs before landing one. Meanwhile, a friend who dropped out sophomore year to build indie games had three offers within a month of deciding to look.</p>
<p>That's not an anomaly anymore. That's the pattern.</p>
<h2 id="the-numbers-finally-caught-up-to-what-we-ve-been-feeling" tabindex="-1">The numbers finally caught up to what we've been feeling <a class="header-anchor" href="#the-numbers-finally-caught-up-to-what-we-ve-been-feeling" aria-label="Permalink to &quot;The numbers finally caught up to what we've been feeling&quot;"></a></h2>
<p>By 2025, <strong>90% of HR leaders</strong> reported hiring outside traditional four-year degrees (<a href="https://fortune.com/2025/07/01/90-percent-hr-leaders-looking-to-hire-outside-of-traditional-college-degrees-as-they-prioritize-skills/" target="_blank" rel="noreferrer">Fortune</a>). A quarter of U.S. companies dropped bachelor's requirements entirely (<a href="https://www.hrdive.com/news/employer-eliminate-degree-requirements-2025/748998/" target="_blank" rel="noreferrer">HR Dive</a>). And here's the part that stings if you just finished paying off loans: <strong>94% of employers say skills-based hires outperform those hired on credentials</strong> (<a href="https://www.forbes.com/sites/cynthiapong/2024/12/26/90-of-companies-make-better-hires-based-on-skills-over-degrees/" target="_blank" rel="noreferrer">Forbes</a>).</p>
<p>Verizon now says 99% of their roles don't require a degree. Sergey Brin admitted Google hires &quot;tons&quot; of people without bachelor's degrees. IBM made half their U.S. openings degree-optional through their &quot;New Collar&quot; program.</p>
<p>This isn't companies being charitable. They figured out that degrees weren't predicting who'd actually be good at the job.</p>
<h2 id="why-the-old-system-broke" tabindex="-1">Why the old system broke <a class="header-anchor" href="#why-the-old-system-broke" aria-label="Permalink to &quot;Why the old system broke&quot;"></a></h2>
<p>The honest answer? Degrees became a lazy filter.</p>
<p>When you're hiring and you've got 400 applications, requiring a bachelor's degree cuts the pile in half. It doesn't tell you who can actually do the work. It tells you who had the money, the time, and the family stability to sit in classrooms for four years. That correlation with ability was always weaker than we pretended.</p>
<p>Then three things happened at once:</p>
<p><strong>The skills moved too fast.</strong> Game engines, AI tools, web frameworks. By the time a curriculum committee approves a course on something, that thing is already outdated. A four-year degree teaches you theory from four years ago. In tech, that's ancient history.</p>
<p><strong>Portfolios became undeniable.</strong> Why guess whether someone can code when you can look at their GitHub? Why wonder if they can ship a game when their itch.io page has five finished projects with player reviews?</p>
<p><strong>Companies got desperate.</strong> The talent shortage is real. Excluding everyone without a degree means excluding people who might be exactly what you need. Some hiring managers figured this out the hard way, after watching self-taught developers run circles around their credentialed hires.</p>
<h2 id="the-game-industry-saw-this-coming" tabindex="-1">The game industry saw this coming <a class="header-anchor" href="#the-game-industry-saw-this-coming" aria-label="Permalink to &quot;The game industry saw this coming&quot;"></a></h2>
<p>I think games have been ahead of the curve here, and it's worth understanding why.</p>
<p>Studios never really cared where you went to school. They cared what you shipped. A polished 48-hour jam game tells a hiring manager more than a four-year game design degree ever could. It proves you can finish things under pressure. It proves you made hard decisions about scope. It proves the game is playable, not just theoretical.</p>
<p>If you're trying to break into games in 2026, here's what actually matters (<a href="https://www.dice.com/career-advice/aspiring-video-game-designers-in-2025-what-you-need-to-know" target="_blank" rel="noreferrer">Dice</a>, <a href="https://combinegr.com/2025-global-gaming-employment-outlook-trends-talent-strategy/" target="_blank" rel="noreferrer">CombineGR</a>): two or three polished, playable demos. Case studies explaining what you did and why. Evidence that you finish things. Fluency with at least one major engine, shown through real projects.</p>
<p>Hiring managers aren't looking for potential. They're looking for proof.</p>
<h2 id="the-uncomfortable-part-nobody-talks-about" tabindex="-1">The uncomfortable part nobody talks about <a class="header-anchor" href="#the-uncomfortable-part-nobody-talks-about" aria-label="Permalink to &quot;The uncomfortable part nobody talks about&quot;"></a></h2>
<p>Here's something that frustrated me when I dug into the research.</p>
<p>A Harvard/Burning Glass study found that companies dropping degree requirements often didn't actually hire more non-degreed candidates. The increase was only about 3.5 percentage points (<a href="https://www.forbes.com/sites/jenamcgregor/2024/02/14/companies-are-dropping-diploma-requirements-for-more-jobs-but-hiring-few-non-degreed-workers-to-fill-them/" target="_blank" rel="noreferrer">Forbes</a>). The policy changed. The hiring practices lagged.</p>
<p>That means if you're going the non-traditional route, you still have to work harder. The door is more open than it used to be, but you're not walking through on equal footing yet. Your portfolio needs to be undeniable. Your projects need to speak louder than someone else's credential.</p>
<p>It's not fair. But knowing it helps you prepare.</p>
<h2 id="what-i-d-tell-someone-starting-out-right-now" tabindex="-1">What I'd tell someone starting out right now <a class="header-anchor" href="#what-i-d-tell-someone-starting-out-right-now" aria-label="Permalink to &quot;What I'd tell someone starting out right now&quot;"></a></h2>
<p>If you're in school, don't stop. But understand that the degree alone isn't enough anymore. Build things on the side. Do game jams. Get a <a href="https://grow.google/certificates/" target="_blank" rel="noreferrer">Google Career Certificate</a> or Unity certification. Treat the degree as one credential among several, not <em>the</em> credential.</p>
<p>If you're not pursuing a degree, you have a different path but not necessarily a harder one. Build aggressively. Ship things. Document everything. Your <a href="https://github.com" target="_blank" rel="noreferrer">GitHub</a>, your <a href="https://itch.io" target="_blank" rel="noreferrer">itch.io</a> page, your personal site. That's your credential now.</p>
<p>Either way, practice explaining your work. Not just what you built, but why you made the choices you made. What you'd do differently. What you learned. Interviewers remember people who can articulate their thinking.</p>
<h2 id="this-is-bigger-than-hiring" tabindex="-1">This is bigger than hiring <a class="header-anchor" href="#this-is-bigger-than-hiring" aria-label="Permalink to &quot;This is bigger than hiring&quot;"></a></h2>
<p>What's happening here isn't just a shift in how companies fill roles. It's a shift in what we collectively value.</p>
<p>For decades, credentials served as a filter. Expensive, time-consuming, but legible. If someone had a degree, you could assume certain things. It was a signal, not a direct measure, but it was good enough.</p>
<p>That signal is breaking now. Technology makes it possible to see what someone can actually do. Remote work proved that outputs matter more than where you sat. AI is making theoretical knowledge less valuable than practical application.</p>
<p>I find this genuinely hopeful, even if the transition is messy. The question used to be &quot;where did you study?&quot; Now it's &quot;what can you build?&quot;</p>
<p>That's a better question.</p>
<hr>
<h2 id="sources" tabindex="-1">Sources <a class="header-anchor" href="#sources" aria-label="Permalink to &quot;Sources&quot;"></a></h2>
<ul>
<li><a href="https://fortune.com/2025/07/01/90-percent-hr-leaders-looking-to-hire-outside-of-traditional-college-degrees-as-they-prioritize-skills/" target="_blank" rel="noreferrer">Fortune: 90% of HR leaders hiring outside traditional degrees</a></li>
<li><a href="https://www.forbes.com/sites/cynthiapong/2024/12/26/90-of-companies-make-better-hires-based-on-skills-over-degrees/" target="_blank" rel="noreferrer">Forbes: 90% of companies make better hires based on skills</a></li>
<li><a href="https://www.forbes.com/sites/jenamcgregor/2024/02/14/companies-are-dropping-diploma-requirements-for-more-jobs-but-hiring-few-non-degreed-workers-to-fill-them/" target="_blank" rel="noreferrer">Forbes: Companies dropping degree requirements but hiring few non-degreed workers</a></li>
<li><a href="https://www.hrdive.com/news/employer-eliminate-degree-requirements-2025/748998/" target="_blank" rel="noreferrer">HR Dive: Employers eliminating degree requirements</a></li>
<li><a href="https://www.computerworld.com/article/1623286/no-degree-no-problem-tech-firms-move-away-from-college-requirement-for-new-hires.html" target="_blank" rel="noreferrer">Computerworld: Tech firms move away from college requirement</a></li>
<li><a href="https://www.dice.com/career-advice/aspiring-video-game-designers-in-2025-what-you-need-to-know" target="_blank" rel="noreferrer">Dice: Aspiring video game designers in 2025</a></li>
<li><a href="https://combinegr.com/2025-global-gaming-employment-outlook-trends-talent-strategy/" target="_blank" rel="noreferrer">CombineGR: 2025 Global Gaming Employment Outlook</a></li>
</ul>
<hr>
<p><strong>Related:</strong></p>
<ul>
<li><a href="/guides/game-jams-hackathons.html">How to succeed in game jams</a></li>
<li><a href="/guides/game-dev-courses.html">Online Game Development Courses</a> — skills-first learning paths for every budget</li>
<li><a href="/tutorials/agentic-code-tools.html">Agentic AI code tools</a> — the tools reshaping what skills matter</li>
<li><a href="/blog/2026-02-10-the-intuitive-mind.html">The intuitive mind</a> — why intuition matters more than credentials</li>
</ul>
]]></content:encoded>
        </item>
    </channel>
</rss>