Post
Rendering at a lower resolution and using AI models or hand-tuned algorithms to reconstruct a higher-resolution image that looks nearly as sharp as native.
Upscaling technology renders the game at a fraction of the target resolution, then uses intelligent reconstruction to fill in the missing detail. NVIDIA's DLSS uses a neural network trained on high-resolution images to predict what the missing pixels should look like. AMD's FSR achieves similar results without dedicated AI hardware: FSR 1 is a spatial algorithm, while FSR 2 and later reconstruct detail temporally from previous frames. Intel's XeSS offers a middle ground, running an AI model fastest on Arc's dedicated XMX units while falling back to a slower general-purpose path on other GPUs. The key insight is that rendering at, say, half the resolution on each axis means shading only a quarter of the pixels, which is far cheaper than rendering natively, and modern upscalers are so good that the difference from native rendering is often imperceptible. This effectively gives you free performance or lets you enable expensive features like ray tracing that would otherwise tank your frame rate.
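The arithmetic behind those savings can be sketched as follows. The preset names and per-axis scale factors mirror NVIDIA's published DLSS presets (Quality ≈ 2/3, Balanced ≈ 0.58, Performance = 0.5), but treat the exact values as illustrative rather than authoritative:

```python
# Illustrative sketch: internal render resolution and pixel-cost savings
# for common upscaler quality presets. Scale factors are per axis, so
# pixel count shrinks with the *square* of the factor.

PRESETS = {
    "quality": 2 / 3,        # e.g. 4K output rendered at 2560x1440
    "balanced": 0.58,
    "performance": 0.50,     # half per axis -> one quarter of the pixels
    "ultra_performance": 1 / 3,
}

def internal_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    """Resolution the GPU actually shades before reconstruction."""
    s = PRESETS[preset]
    return round(out_w * s), round(out_h * s)

def pixel_savings(out_w: int, out_h: int, preset: str) -> float:
    """Fraction of pixels the GPU no longer has to shade."""
    w, h = internal_resolution(out_w, out_h, preset)
    return 1 - (w * h) / (out_w * out_h)

w, h = internal_resolution(3840, 2160, "performance")
print(w, h)                                            # 1920 1080
print(f"{pixel_savings(3840, 2160, 'performance'):.0%} fewer pixels shaded")  # 75%
```

The quadratic relationship is why even the modest-sounding Quality preset helps so much: shading at two-thirds resolution per axis already cuts the pixel workload by more than half.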
Example
DLSS in Alan Wake 2 is practically mandatory because the game's path tracing is so demanding that even high-end GPUs cannot run it at native 4K. The upscaled result is nearly indistinguishable from native while running at roughly double the frame rate. FSR 2 in God of War on PC made the game accessible to a much wider range of hardware. Baldur's Gate 3 uses DLSS and FSR to maintain performance during its dense, particle-heavy combat scenes. Console games increasingly ship with temporal upscaling built in: the PS5 and Xbox Series X render many titles at lower internal resolutions and upscale to 4K output.
Why it matters
Upscaling technology has fundamentally changed the value proposition of GPU hardware. A mid-range GPU with good upscaling can now deliver visual quality that previously required top-tier hardware at native resolution. It also extends the useful lifespan of existing hardware, since each improvement in upscaling algorithms makes the same GPU effectively more powerful.
Related concepts