A short list of rendering techniques used in modern AAA games
By Oleg Sidorkin, CTO and Co-Founder of Cinevva

If you crack open the rendering pipeline of a 2026 AAA game, most of what you'll find traces back to a small set of techniques that ship across studios. The names change between engines, but the ideas are the same. Here's a short list of what's actually doing the work on screen, with a few deep technical references for each.
1. Physically based rendering (PBR)
Materials are described by albedo, roughness, metallic, normal, and ambient occlusion textures, and lit by energy-conserving shaders (Cook-Torrance specular, Lambertian or Disney diffuse). It's the baseline every modern AAA engine assumes. If a surface looks consistent under sunlight, lamp light, and a flashlight, PBR is why.
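As a concrete anchor, here is a minimal sketch of the Cook-Torrance specular term with a GGX distribution, Schlick Fresnel, and a Schlick-GGX geometry term. The scalar dot products stand in for full vector shading, and exact term choices vary per engine:

```python
import math

def fresnel_schlick(cos_theta, f0):
    # Schlick's approximation to Fresnel reflectance.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def ggx_ndf(n_dot_h, roughness):
    # GGX / Trowbridge-Reitz normal distribution function.
    a2 = (roughness * roughness) ** 2
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def smith_g1(n_dot_x, roughness):
    # Schlick-GGX geometry term for one direction (UE4's k remapping).
    k = (roughness + 1.0) ** 2 / 8.0
    return n_dot_x / (n_dot_x * (1.0 - k) + k)

def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    # Specular BRDF: D * G * F / (4 * NdotL * NdotV).
    d = ggx_ndf(n_dot_h, roughness)
    g = smith_g1(n_dot_l, roughness) * smith_g1(n_dot_v, roughness)
    f = fresnel_schlick(v_dot_h, f0)
    return d * g * f / max(4.0 * n_dot_l * n_dot_v, 1e-6)
```

Note the energy behavior the paragraph describes: reflectance climbs toward 1.0 at grazing angles, and lower roughness concentrates the lobe around the half vector.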

Deep dives:
- Burley, Physically-Based Shading at Disney (the original "Disney BRDF" course notes, SIGGRAPH 2012).
- Karis, Real Shading in Unreal Engine 4 (SIGGRAPH 2013, the canonical UE4 PBR talk).
- Lagarde and de Rousiers, Moving Frostbite to Physically Based Rendering 3.0 (SIGGRAPH 2014, full Frostbite pipeline).
- Real-Time Rendering, 4th edition, chapter 9 (the textbook reference).
2. Deferred and visibility-buffer shading
Geometry first writes attributes (normals, material IDs, depth) into a G-buffer or visibility buffer. Lighting runs as a fullscreen pass that reads those buffers and shades each pixel once. Visibility buffers (used by Nanite and similar) push this further by storing only triangle IDs and resolving material parameters per pixel later, which keeps overdraw cheap on dense geometry.
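A toy version of the two-pass split, with a 1D "screen" and scalar attributes standing in for real G-buffer channels (the function names and fragment layout here are illustrative, not any engine's API):

```python
# Tiny deferred pipeline: a geometry pass writes a G-buffer, then a
# single lighting pass shades each covered pixel exactly once.

def geometry_pass(fragments, width):
    # Each fragment: (x, depth, albedo, n_dot_l). The depth test keeps
    # the nearest fragment per pixel; shading is deferred.
    gbuffer = [None] * width
    for x, depth, albedo, n_dot_l in fragments:
        if gbuffer[x] is None or depth < gbuffer[x][0]:
            gbuffer[x] = (depth, albedo, n_dot_l)
    return gbuffer

def lighting_pass(gbuffer, light_intensity):
    # Fullscreen pass: reads the buffers, shades each pixel once.
    return [0.0 if px is None else px[1] * max(px[2], 0.0) * light_intensity
            for px in gbuffer]
```

The point of the split: two overlapping fragments cost one depth compare in the geometry pass, but the (expensive) lighting math runs once per pixel, not once per fragment.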

Deep dives:
- Engel, Deferred Shading (NVIDIA, the foundational talk).
- Burns and Hunt, The Visibility Buffer: A Cache-Friendly Approach to Deferred Shading (JCGT 2013, the original visibility buffer paper).
- Wihlidal, "Optimizing the Graphics Pipeline with Compute" (GDC 2016, Frostbite's compute-based deferred path).
- Karis, Stubbe, Wihlidal, A Deep Dive into Nanite Virtualized Geometry (SIGGRAPH 2021, includes the Nanite visibility buffer in detail).
3. Virtualized geometry (Nanite-style)
Meshes are pre-built into a hierarchy of clusters. At runtime the GPU streams and selects clusters at the resolution that matches each pixel, so you get sub-pixel-accurate detail without manual LODs. Unreal's Nanite is the most visible example. Other engines now ship their own variants. The practical result is film-quality assets in real time without LOD authoring.
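The selection criterion boils down to "refine until the projected error drops below about a pixel". This toy version models a mesh as a list of LOD levels with a world-space geometric error each, a simplification of Nanite's per-cluster DAG:

```python
import math

def projected_error_px(geometric_error, distance, fov_y, screen_height):
    # Perspective-project a cluster's world-space error to pixels.
    scale = screen_height / (2.0 * math.tan(fov_y / 2.0))
    return geometric_error / distance * scale

def select_cut(cluster_errors, distance, fov_y, screen_height, threshold_px=1.0):
    # Walk coarse to fine; stop at the first level whose projected
    # error is already below the threshold. cluster_errors: world-space
    # geometric error per level, coarsest first.
    for level, err in enumerate(cluster_errors):
        if projected_error_px(err, distance, fov_y, screen_height) <= threshold_px:
            return level
    return len(cluster_errors) - 1  # finest level available
```

Close up, the test falls through to the finest level; far away, the coarsest cluster already projects under a pixel and wins, which is exactly why manual LODs become unnecessary.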

Deep dives:
- Karis, Stubbe, Wihlidal, A Deep Dive into Nanite Virtualized Geometry (SIGGRAPH 2021).
- Brian Karis, Nanite GDC 2021 talk (an accessible video walkthrough).
- Liktor, Geometry Rendering Pipeline Architecture at Activision (cluster-based rendering, 2021).
- Haar and Aaltonen, GPU-Driven Rendering Pipelines (SIGGRAPH 2015, the cluster-culling groundwork this builds on).
4. Real-time ray tracing for shadows, reflections, and AO
Hardware ray tracing (DXR, Vulkan RT) traces shadow rays, mirror and glossy reflection rays, and ambient occlusion rays against a BVH built each frame. Even a few rays per pixel beat what screen-space techniques can do, especially for off-screen reflections and contact shadows. Most games use it surgically, not for everything.
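The shadow-ray logic reduces to "does any occluder intersect the segment between the shaded point and the light". A minimal CPU sketch against analytic spheres (real engines trace a triangle BVH through DXR or Vulkan RT; this only shows the query shape):

```python
import math

def ray_hits_sphere(origin, direction, center, radius, t_max):
    # Ray-sphere intersection; direction must be normalized.
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    t = -b - math.sqrt(disc)
    return 1e-4 < t < t_max  # epsilon avoids self-intersection

def in_shadow(point, light_pos, occluders):
    # One shadow ray from the shaded point toward the light; any hit
    # before the light means the point is occluded.
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = [v / dist for v in to_light]
    return any(ray_hits_sphere(point, direction, c, r, dist)
               for c, r in occluders)
```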

Deep dives:
- Microsoft, DirectX Raytracing (DXR) Functional Spec (the API reference).
- Wyman, Introduction to DirectX Raytracing (the SIGGRAPH course notes, very approachable).
- Boksansky and Marrs, Ray Tracing Gems II, chapters 17-19 (free PDF, modern DXR techniques).
- Stachowiak and Uludag, Stochastic Screen-Space Reflections (Frostbite, the bridge from SSR to RT).
5. Software ray tracing (Lumen-style)
Not every player has an RTX card, so engines also ship distance-field or surface-cache fallbacks. Unreal's Lumen, for instance, traces against signed distance fields and surface caches for cheap diffuse GI, and only escalates to hardware rays when needed. It's how AAA games hit "ray-traced look" on consoles.
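Tracing a distance field means sphere tracing: step along the ray by the distance-field value, which is always a safe step and shrinks near surfaces. A minimal sketch against one analytic SDF (Lumen traces per-mesh distance fields plus a surface cache, which this omits):

```python
import math

def scene_sdf(p):
    # Signed distance to a unit sphere at the origin, standing in for
    # a mesh distance field.
    return math.sqrt(sum(v * v for v in p)) - 1.0

def sphere_trace(origin, direction, max_steps=64, eps=1e-3, t_max=100.0):
    # March along the ray by the SDF value until we are within eps of
    # a surface (hit) or leave the scene (miss).
    t = 0.0
    for _ in range(max_steps):
        p = [o + t * d for o, d in zip(origin, direction)]
        d = scene_sdf(p)
        if d < eps:
            return t          # hit distance
        t += d
        if t > t_max:
            break
    return None               # miss
```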

Deep dives:
- Wright et al., Lumen: Real-Time Global Illumination in Unreal Engine 5 (SIGGRAPH 2022, the Lumen paper).
- Epic Games, Lumen Technical Details (official engine docs).
- Wright, Radiance Caching for Real-Time Global Illumination (SIGGRAPH 2021).
- Wright, Lumen GDC 2022 talk (video version).
6. ReSTIR and reservoir sampling
For direct and indirect lighting with thousands of light sources, ReSTIR (reservoir-based spatiotemporal importance resampling) reuses light samples across pixels and frames. It's how games like Cyberpunk 2077 with Path Tracing keep noise low at one or two rays per pixel. Expect to see it in more engines as path tracing becomes the high-end target.
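The core primitive is a single-sample weighted reservoir: stream candidate light samples through it and it keeps one with probability proportional to weight, in O(1) memory. A minimal sketch (full ReSTIR adds temporal and spatial reservoir merging plus visibility reuse on top of this):

```python
import random

class Reservoir:
    # Single-sample weighted reservoir, the building block of ReSTIR.
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0
        self.count = 0

    def update(self, sample, weight, rng):
        # Keep the new candidate with probability weight / w_sum.
        self.w_sum += weight
        self.count += 1
        if self.w_sum > 0.0 and rng.random() < weight / self.w_sum:
            self.sample = sample
```

Streamed over a frame's light candidates, each pixel ends up holding one light chosen roughly in proportion to its contribution, which is what makes one or two shadow rays per pixel enough.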

Deep dives:
- Bitterli et al., Spatiotemporal Reservoir Resampling for Real-Time Ray Tracing with Dynamic Direct Lighting (SIGGRAPH 2020, the original ReSTIR paper).
- Ouyang et al., ReSTIR GI: Path Resampling for Real-Time Path Tracing (HPG 2021, indirect-illumination ReSTIR).
- Lin et al., Generalized Resampled Importance Sampling (SIGGRAPH 2022, the math foundations).
- NVIDIA, Cyberpunk 2077 Path Tracing Tech Deep Dive (engineering blog).
7. Volumetric clouds, fog, and atmosphere
Skies are ray-marched through 3D noise and density volumes. Atmosphere uses precomputed scattering tables (Bruneton-style) for sun and moon transitions. Fog is a froxel grid (think a 3D texture aligned to the view frustum) that captures local lighting. Together they give you "weather as a system" instead of a skybox.
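The ray march at the heart of both clouds and froxel fog accumulates in-scattered light attenuated by Beer-Lambert transmittance. A single-scattering sketch with a caller-supplied density function (phase function and light shadowing omitted):

```python
import math

def march_fog(density_at, ray_length, steps, sigma_t, light):
    # Accumulate in-scattered light along a ray through a
    # heterogeneous medium, attenuated by running transmittance.
    dt = ray_length / steps
    transmittance = 1.0
    radiance = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        sigma = sigma_t * density_at(t)
        radiance += transmittance * sigma * light * dt
        transmittance *= math.exp(-sigma * dt)
    return radiance, transmittance
```

For uniform density this converges to the analytic answer: transmittance exp(-sigma * length), radiance (1 - transmittance) for unit light.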

Deep dives:
- Schneider, The Real-Time Volumetric Cloudscapes of Horizon Zero Dawn (SIGGRAPH 2015, the canonical clouds reference).
- Hillaire, A Scalable and Production Ready Sky and Atmosphere Rendering Technique (EGSR 2020, the modern Bruneton successor used in UE5).
- Wronski, Volumetric Fog: Unified Compute Shader Based Solution to Atmospheric Scattering (SIGGRAPH 2014, Assassin's Creed 4 froxel fog).
- Hillaire, Physically Based and Unified Volumetric Rendering in Frostbite (SIGGRAPH 2015).
8. Cascaded shadow maps and virtual shadow maps
For sun shadows, cascaded shadow maps split the frustum into ranges and render each at appropriate resolution. Virtual shadow maps go further: a single huge shadow map is split into pages and only the pages visible from the camera get rendered. It's how AAA games keep crisp shadows near the player without a giant memory bill.
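Cascade split distances are usually a blend of uniform and logarithmic schemes, the "practical split scheme" from the parallel-split shadow maps literature. A sketch:

```python
def cascade_splits(near, far, count, blend=0.75):
    # Blend logarithmic splits (dense near the camera, where shadow
    # detail matters) with uniform splits (stable coverage far away).
    splits = []
    for i in range(1, count + 1):
        f = i / count
        uniform = near + (far - near) * f
        log = near * (far / near) ** f
        splits.append(blend * log + (1.0 - blend) * uniform)
    return splits
```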

Deep dives:
- Dimitrov, Cascaded Shadow Maps (NVIDIA whitepaper, the standard reference).
- Microsoft, Common Techniques to Improve Shadow Depth Maps (DirectX docs).
- Wright, Virtual Shadow Maps in Fortnite Battle Royale Chapter 4 (SIGGRAPH 2023, the UE5 VSM talk).
- Epic Games, Virtual Shadow Maps documentation.
9. Screen-space effects (SSAO, SSR, SSGI, SSSSS)
Reading the depth and normal buffers cheaply gives you ambient occlusion (SSAO), reflections (SSR), one-bounce global illumination (SSGI), and subsurface scattering for skin (SSSSS). They miss off-screen detail, which is why ray tracing is taking over, but they're still everywhere as a fast baseline.
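The common shape of all of these: read the depth/normal buffers near the pixel and integrate. A deliberately crude 1D SSAO over a depth buffer shows the idea (real SSAO samples a 2D or hemispherical kernel with range checks; everything here is simplified):

```python
def ssao_1d(depth, x, radius, samples, bias=0.02):
    # Count nearby texels whose stored depth is closer to the camera
    # than the center texel: potential occluders.
    center = depth[x]
    occluded = 0
    taken = 0
    for i in range(1, samples + 1):
        offset = i * radius // samples
        for sx in (x - offset, x + offset):
            if 0 <= sx < len(depth):
                taken += 1
                if depth[sx] < center - bias:
                    occluded += 1
    return 1.0 - occluded / max(taken, 1)   # 1.0 = fully unoccluded
```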

Deep dives:
- Mittring, Finding Next Gen: CryENGINE 2 (SIGGRAPH 2007 course, where SSAO was introduced).
- McGuire et al., Scalable Ambient Obscurance (HPG 2012, modern SSAO).
- Stachowiak and Uludag, Stochastic Screen-Space Reflections (Frostbite, the SSR reference).
- Jimenez, Separable Subsurface Scattering (the technique used for skin in most AAA engines).
- Mara et al., Deep G-Buffers for Stable Global Illumination Approximation (NVIDIA 2016, an SSGI lineage).
10. Temporal anti-aliasing and ML upscaling (DLSS, FSR, XeSS)
The frame is rendered at a lower internal resolution and reconstructed using motion vectors, depth, and history. ML-based upscalers (DLSS 3/4, FSR 3, XeSS) add frame generation on top, interpolating intermediate frames from optical flow. Most AAA titles now ship assuming an upscaler is on, which changes how you budget the rest of the frame.
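The reconstruction core is: reproject history, clamp it to the current frame's local neighborhood to reject stale values, then blend in a small fraction of the new frame. A scalar sketch of that resolve step (real TAA works on color in YCoCg and reprojects with motion vectors):

```python
def taa_resolve(history, current, neighborhood, alpha=0.1):
    # Neighborhood clamp: a history value outside the current frame's
    # local min/max is probably disoccluded or stale, so clamp it
    # before the exponential blend.
    lo, hi = min(neighborhood), max(neighborhood)
    clamped = min(max(history, lo), hi)
    return clamped + alpha * (current - clamped)
```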

Deep dives:
- Karis, High Quality Temporal Supersampling (SIGGRAPH 2014, the canonical TAA talk).
- Salvi, An Excursion in Temporal Supersampling (NVIDIA, on the path to DLSS).
- Edelsten, Truly Next-Gen: Adding Deep Learning to Games and Graphics (GDC 2019, DLSS architecture).
- AMD, FidelityFX Super Resolution 3 technical details (official documentation).
- Intel, XeSS technical paper.
11. GPU-driven rendering and mesh shaders
Culling, LOD selection, and draw submission all run on the GPU. Mesh shaders replace the vertex/geometry/tessellation pipeline with a more flexible compute-style stage that emits meshlets. Combined with multi-draw indirect, this keeps the CPU out of the per-object hot loop entirely.
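What the culling compute shader does, in spirit: test each instance's bounding sphere against the frustum planes and stream-compact the survivors into a draw list. On the GPU the compaction is an atomic append; the tuple layout here is illustrative:

```python
def cull_and_compact(instances, planes):
    # instances: (center, radius) bounding spheres.
    # planes: (normal, d) with inward-facing normals, so a sphere is
    # visible when its signed distance to every plane >= -radius.
    draws = []
    for idx, (center, radius) in enumerate(instances):
        visible = all(
            sum(n * c for n, c in zip(normal, center)) + d >= -radius
            for normal, d in planes)
        if visible:
            draws.append(idx)   # atomic-append on a real GPU
    return draws
```

The compacted index list then feeds multi-draw indirect, so the CPU never touches per-object visibility at all.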

Deep dives:
- Haar and Aaltonen, GPU-Driven Rendering Pipelines (SIGGRAPH 2015, the foundational Assassin's Creed Unity talk).
- Wihlidal, Optimizing the Graphics Pipeline with Compute (GDC 2016, Frostbite GPU-driven culling).
- Kubisch, Introduction to Turing Mesh Shaders (NVIDIA, the mesh shader primer).
- Pesce, A Whirlwind Tour of Mesh Shaders (engineering blog; see also Adrian Courrèges' frame tear-downs of shipped games).
12. Hair, cloth, and skin rendering
Hair uses Marschner-style anisotropic shading with strand-based geometry (NVIDIA HairWorks, AMD TressFX, or engine-native systems). Cloth is simulated on the GPU with position-based dynamics and rendered with anisotropic specular. Skin uses screen-space subsurface scattering plus pre-integrated wrap lighting. These three are usually where you spot the budget gap between AAA and indie.
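The cloth side rests on position-based dynamics, whose building block is the distance-constraint projection: move two particles along their axis until they are rest length apart. An equal-mass sketch of one projection step:

```python
def solve_distance_constraint(p0, p1, rest_length, stiffness=1.0):
    # Project two equal-mass particles onto the constraint
    # |p1 - p0| == rest_length, splitting the correction evenly.
    delta = [b - a for a, b in zip(p0, p1)]
    dist = sum(d * d for d in delta) ** 0.5
    if dist == 0.0:
        return p0, p1
    corr = stiffness * (dist - rest_length) / dist * 0.5
    return ([a + corr * d for a, d in zip(p0, delta)],
            [b - corr * d for b, d in zip(p1, delta)])
```

A cloth solver iterates this over every edge of the mesh (plus bending and collision constraints) a few times per frame; the iteration count, not a stiffness matrix, controls how stretchy the cloth feels.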

Deep dives:
- Marschner et al., Light Scattering from Human Hair Fibers (SIGGRAPH 2003, the foundational hair model).
- Chiang et al., A Practical and Controllable Hair and Fur Model for Production Path Tracing (Disney 2016, used widely in real-time approximations).
- Müller et al., Position Based Dynamics (the standard cloth simulation reference).
- Jimenez et al., Separable Subsurface Scattering and Real-Time Realistic Skin Translucency.
13. Decals, virtual texturing, and material layering
Surface variation comes from layered decals (bullet holes, dirt, blood, grime) projected onto the depth buffer, plus virtual textures that stream high-resolution detail just-in-time. Material layering blends multiple PBR sets per pixel using masks and triplanar projection, which is how a single rock looks like five rocks.
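Triplanar projection blends three planar texture projections using weights derived from the surface normal; a sharpness exponent tightens the transition bands. The weight computation:

```python
def triplanar_weights(normal, sharpness=4.0):
    # Per-axis blend weights from the surface normal; raising the
    # exponent narrows the regions where two projections mix.
    w = [abs(n) ** sharpness for n in normal]
    total = sum(w)
    return [v / total for v in w]
```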

Deep dives:
- Pranckevičius, Deferred Decals and follow-ups (Aras' classic blog series).
- Mittring, The Technology Behind the "Unreal Engine 4 Elemental Demo" (SIGGRAPH 2012, includes virtual texturing details).
- van Waveren, id Tech 5 Challenges: From Texture Virtualization to Massive Parallelization (SIGGRAPH 2009, the MegaTexture talk).
- Neubelt and Pettineo, Crafting a Next-Gen Material Pipeline for The Order: 1886 (layered PBR materials).
14. Order-independent transparency
Hair, foliage, particles, and glass don't sort cleanly. AAA engines use techniques like weighted blended OIT, depth peeling, or per-pixel linked lists to render them correctly without a CPU sort step. It's quietly one of the most expensive parts of the frame on a foliage-heavy scene.
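Weighted blended OIT replaces sorting with commutative accumulation: weighted premultiplied color, weighted coverage, and a revealage product, resolved in one composite. A scalar-color sketch with a simplified depth weight (McGuire and Bavoil's paper proposes several alternatives; the one here is illustrative):

```python
def wboit_composite(fragments, background):
    # fragments: (color, alpha, depth in [0, 1]) in ANY order.
    accum_color = 0.0
    accum_weight = 0.0
    revealage = 1.0                 # running product of (1 - alpha)
    for color, alpha, depth in fragments:
        w = max(1e-2, 3000.0 * (1.0 - depth) ** 3)  # favor near fragments
        accum_color += color * alpha * w
        accum_weight += alpha * w
        revealage *= 1.0 - alpha
    if accum_weight == 0.0:
        return background
    avg = accum_color / accum_weight
    return avg * (1.0 - revealage) + background * revealage
```

Because every per-fragment operation is a sum or a product, the result is identical whatever order the GPU rasterizes the transparent fragments in, which is the whole trick.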

Deep dives:
- McGuire and Bavoil, Weighted Blended Order-Independent Transparency (JCGT 2013, the WBOIT paper).
- Bavoil and Myers, Order Independent Transparency with Dual Depth Peeling (NVIDIA whitepaper).
- Yang et al., Real-Time Concurrent Linked List Construction on the GPU (the per-pixel linked list reference).
- Wyman, Exploring and Expanding the Continuum of OIT Algorithms (HPG 2016, comparison survey).
15. Neural radiance caching and ML denoisers
The newest layer. NVIDIA's neural radiance cache is a small MLP trained online, while the game renders, to predict indirect radiance; the path tracer queries it instead of tracing longer paths. ML denoisers (OptiX, Intel Open Image Denoise, custom in-house) clean up sparse ray-traced signals in milliseconds. Expect this category to grow fast over the next two years.

Deep dives:
- Müller et al., Real-Time Neural Radiance Caching for Path Tracing (SIGGRAPH 2021, the NRC paper).
- Schied et al., Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination (HPG 2017, SVGF).
- Chaitanya et al., Interactive Reconstruction of Monte Carlo Image Sequences using a Recurrent Denoising Autoencoder (SIGGRAPH 2017, the first recurrent ML denoiser).
- Intel, Open Image Denoise documentation (open-source production denoiser).
What this means for the browser
We've shipped several of these techniques in WebGPU for our open-world browser engine. Cascaded shadow maps, GPU-driven instancing, triplanar PBR, screen-space fog, and clipmap-based virtualized terrain all run at 120 FPS in a tab. The rest (hardware ray tracing, mesh shaders, ML upscaling) is coming to the web as the WebGPU spec catches up. For deeper dives, see our guides on browser open-world tech and landscape generation.
Further reading across the whole stack
If you want to read one book, Real-Time Rendering, 4th edition is the standard reference covering most of the topics above. For ongoing research, the SIGGRAPH "Advances in Real-Time Rendering in Games" course archive (advances.realtimerendering.com) has free PDFs of the AAA-engine deep dives going back to 2006. For production tear-downs of how specific games render their frame, Adrian Courrèges' GPU profiling articles are required reading.