Post
A technique that creates real-time reflections using only what is already visible on screen, cheap but imperfect.
Screen Space Reflections (SSR) work by ray-marching through the depth buffer, essentially asking "if I trace a reflection ray from this shiny surface, does it hit anything that is currently on screen?" If yes, it samples that pixel's color as the reflection. The result is surprisingly convincing reflections of nearby objects on wet floors, windows, and metallic surfaces. The catch is that SSR can only reflect things the camera can see. Look at a puddle reflecting a building, then turn away from the building, and the reflection vanishes because that data no longer exists in the frame buffer.
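The core loop is simple enough to sketch. The following is a toy illustration, not production shader code: the function name, the tiny hand-built depth buffer, and the step sizes are all made up for demonstration. A real implementation runs per-pixel on the GPU with view-space rays and hierarchical or jittered stepping, but the logic is the same: step along the ray in screen space, and report a hit when the ray's depth passes behind the depth buffer's value at that pixel.

```python
# Minimal sketch of SSR ray-marching against a depth buffer.
# All names and the toy scene here are illustrative, not from any engine.

def ssr_march(depth, start, step, max_steps=64):
    """March a screen-space ray through `depth` (rows of depth values,
    0.0 = near, 1.0 = far). `start` = (x, y, z); `step` = per-step delta.
    Returns the (x, y) pixel the ray hits, or None if the ray leaves the
    screen or never passes behind a visible surface."""
    h, w = len(depth), len(depth[0])
    x, y, z = start
    for _ in range(max_steps):
        x += step[0]; y += step[1]; z += step[2]
        px, py = int(x), int(y)
        if not (0 <= px < w and 0 <= py < h):
            return None          # ray left the screen: SSR has no data here
        if z >= depth[py][px]:   # ray went behind the visible surface: a hit
            return (px, py)
    return None

# Toy 4x8 depth buffer: a "wall" fills the right half at depth 0.3,
# everything else is far away (depth 1.0).
depth = [[1.0] * 4 + [0.3] * 4 for _ in range(4)]

# Ray starting at pixel (0, 2), moving right and deeper into the scene.
print(ssr_march(depth, start=(0.0, 2.0, 0.0), step=(1.0, 0.0, 0.1)))
# → (4, 2): the ray hits the wall, so that pixel is reflected.
```

The `return None` branch is exactly the failure mode described above: when the ray marches off-screen without a hit, there is no pixel data to sample, which is why engines fall back to cube maps or simply fade the reflection out near screen edges.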
Example
Battlefield V uses SSR extensively for its wet and rainy environments, with puddles reflecting nearby explosions and gunfire. The Last of Us Part II combines SSR with cube maps so that when SSR fails at screen edges, a fallback reflection still exists. Spider-Man Remastered on PS5 lets you toggle between SSR and ray-traced reflections to see the quality difference firsthand.
Why it matters
SSR is the workhorse reflection technique for current-gen games because it is dramatically cheaper than ray-traced reflections: it reuses data already rendered for the frame instead of tracing rays into full scene geometry. Understanding its limitations explains those moments when reflections glitch out or disappear at screen edges. As ray tracing becomes more accessible, SSR will gradually take a back seat, but it remains essential on performance-constrained platforms.
Related concepts