This document summarizes techniques for rendering water and frozen surfaces in CryEngine 2. It discusses procedural shaders for simulating water waves, caustics, god rays, shore foam, and frozen surface effects. It also covers techniques for water reflection, refraction, physics interaction, and camera interaction with water surfaces. Optimization strategies are discussed for minimizing draw calls and rendering costs.
3. CryEngine 2: Shading Overview
• Shader model 2.0 minimum, up to 4.0
• Completely dynamic lighting and shadows
• Up to 4 point light sources per-pass
• Wide range of known and DIY shading models
• Some other fancy features
• Deferred mix with multi-pass rendering approach
• Average of 2K drawcalls per frame (~2M tris)
5. Water and Underwater Rendering
• Rendering believable looking water
• Underwater light-scattering [1]
• Water surface animation & tessellation
• Reflections/Refraction
• Shore/Foam
• Caustics and God-rays
• Camera and objects interaction with water
• Particles
• How to do all this efficiently in a very complex
and open-ended world in a game like Crysis?
6. No more flat water!
• 3D waves
• Used statistical Tessendorf animation model [2]
• Computed on CPU for a 64x64 grid
• Upload results into a FP32 texture
• Vertex displacement on GPU
• Lower HW specs used a simple sum of sine waves
• Additionally, 4 moving normal-map layers
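The low-spec fallback can be illustrated with a small CPU-side sketch. The wave parameters below (amplitude, direction, wavelength, speed) are made-up values for illustration, not the shipped ones:

```python
import math

def water_height(x, z, t, waves):
    """Sum-of-sines height for a point on the water plane.

    Each wave is (amplitude, dir_x, dir_z, wavelength, speed).
    Direction vectors are assumed normalized.
    """
    h = 0.0
    for amp, dx, dz, wavelength, speed in waves:
        k = 2.0 * math.pi / wavelength            # wave number
        phase = k * (dx * x + dz * z) + speed * t # travelling wave phase
        h += amp * math.sin(phase)
    return h

# Four layered waves, large swell down to small ripples (illustrative).
WAVES = [
    (0.50,  1.0,    0.0,    20.0, 1.0),
    (0.25,  0.7071, 0.7071,  9.0, 1.7),
    (0.10,  0.0,    1.0,     4.0, 2.3),
    (0.05, -0.7071, 0.7071,  1.5, 3.1),
]
```

The GPU vertex shader would evaluate the same sum per vertex; the amplitudes bound the total displacement (0.9 here).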
7. Surface Tessellation
Screen Space Grid Projection
Extreme detail nearby
Problems
Screen edges
Aliasing at distance
Dropped this approach in the end
8. Surface Tessellation
Camera aligned grid
Keep detail nearby and in front of camera
Problems
Camera roll
Balancing nearby vs. far-away detail
Kept this approach in the end
10. Physics Interaction
• CPU animation is shared with Physics/Game
• For low-spec machines, did the same math as the
vertex shader on the CPU
• Physics samples the best-fitting water plane under each object
11. Reflection
Per frame, we had an avg of 2K drawcalls (~ 2M tris)
• This really needed to be cheap – and look good
• Problem: Reflections added about 1500 drawcalls
• Draw calls minimization
• Final average of 300 drawcalls for reflections
• Total internal reflection also simulated
• Half screen resolution RT
12. Reflection Update Trickery
Update Dependencies
• Time
• Camera orientation/position difference from
previous camera orientation/position
• Surface visibility ratio using HW occlusion queries
• Multi-GPU systems need extra care to avoid out of
sync reflection texture
14. Refraction
• No need to render scene again [3]
• Use current back-buffer as input texture
• Mask out everything above water surface
• Water depth > World depth = leaking
• Don’t allow refraction texture offset for this case
• Chromatic dispersion approx. for interesting look
• Scaled offset for R, G and B differently
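A minimal sketch of the leak masking and chromatic dispersion described above; the per-channel scale factors are illustrative assumptions, not the shipped values:

```python
def refraction_offset(water_depth, world_depth, base_offset):
    """Per-channel refraction UV offsets with leak masking.

    If the water surface lies farther than the scene behind it
    (water_depth > world_depth), the offset would fetch geometry
    above the surface, so it is suppressed entirely.
    """
    if water_depth > world_depth:
        return (0.0, 0.0, 0.0)   # avoid above-water leaking
    # Chromatic dispersion: slightly different offset per channel.
    return (base_offset * 1.00,  # R
            base_offset * 1.02,  # G
            base_offset * 1.04)  # B
```

In the shader the three offsets drive three back-buffer fetches, one per color channel.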
17. Procedural Caustics
• Extra rendering pass, can handle
opaque/transparent
• Based on real sun direction projection
• Procedural composed using 3 layers
• Chromatic dispersion approximation
• Darken slightly to simulate wet surface
19. Shore and Foam
• Soft water intersection
• Shore blended based on surface depth distance to
world depth
• Waves foam blended based on current height
distance to maximum height
• Foam is composed of 2 moving layers, with offsets
perturbed by the water bump map
• Acceptable quality
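The two blend factors above can be sketched as simple depth/height ratios; the `softness` range and the height normalization are assumptions for illustration, not the shipped math:

```python
def saturate(x):
    return max(0.0, min(1.0, x))

def shore_blend(world_depth, surface_depth, softness):
    """Soft intersection: fade the water out where the scene depth
    is close to the water surface depth; softness sets the fade range."""
    return saturate((world_depth - surface_depth) / softness)

def foam_amount(height, max_height):
    """Foam weight from current wave height relative to the maximum
    wave height (full foam at the crest, none in the trough)."""
    return saturate(height / max_height)
```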
21. Underwater God-Rays [4]
• Essentially the same procedural caustics shader
• Based on real sun direction projection
• Projected into multiple planes in front of camera
• Rendered into an RT 4x smaller than the screen
• Finally add to frame-buffer
23. Camera/Particles Interaction
• How to handle the case where the camera intersects an
animated water surface?
• Water droplets effect when coming out of water
• When inside water used a subtle distortion
• Water particles similar to soft-particles
24. Things for the Future
• Rolling waves didn't make it into the final game
• Special animated water decal geometry
• Water splashes
• Surface interaction with shoreline
• Dynamic surface interaction
• Maybe in a future project?
26. Frozen Surfaces
• Huge Headache
• Haven’t found previous research on the subject
• Unique Alien Frozen World: How to make it?
• Should it look realistic?
• Or an “artistic” flash-frozen world?
• Make everything Frozen?
• Dynamically?
• Custom frozen assets ?
• Reuse assets ?
• Took us 4 iterations until final result
31. Lessons learned
Final iteration
• Main focus was to make it visually interesting
• Used real world images as reference this time
• Minimize the amount of artist work as much as possible
• Impossible to make every single kind of object look
realistically frozen (and good) with a single unified
approach
• 1 week before hitting feature lock/alpha (gulp)
33. Putting all together
• Accumulated snow on top
• Blend in snow depending on WS/OS normal z
• Frozen water droplets accumulated on side
• 2 layers using different uv and offset bump scales to give
impression of volume
• 3D Perlin noise used for blending variation
• 3 octaves and offsets warping to avoid repetitive patterns
• Glittering
• Used a 2D texture with random noise vectors
• pow( saturate( dot( noise, viewVector ) ), big value )
• If result > threshold, multiply by big value for hdr to kick in
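A sketch of the glitter term; the exponent, threshold and HDR boost are placeholder values standing in for the slide's "big value":

```python
def saturate(x):
    return max(0.0, min(1.0, x))

def dot3(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def glitter(noise_vec, view_vec, exponent=64.0, threshold=0.5, boost=8.0):
    """Sparkle intensity from a random per-texel noise vector.

    A tight specular-like pow() lobe against the view vector;
    results above the threshold are boosted so HDR bloom kicks in."""
    g = saturate(dot3(noise_vec, view_vec)) ** exponent
    return g * boost if g > threshold else g
```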
38. Procedural Frozen
• Reused assets
• Dynamic freezing possibility and with nice
transition
• Didn't give artists more control than required
• Artists just press button to enable frozen
• Relatively cheap, rendering cost wise
• Visually interesting results
• Only looks believable under good lighting
conditions
40. Post-Effects Overview
Post Effects Mantra:
• Final rendering “make-up”
• Minimal aliasing (for very-hi specs)
• Never sacrifice quality for speed
• Unless you’re doing really crazy expensive stuff!
• Make it as subtle as possible
• But not less - or else average gamer will not notice it
42. Screen-space velocities
• Render a sphere around camera
• Use previous/current camera transformation to
compute delta vector
• Lerp between previous/current transformation by a
shutter speed ratio ( n / frame delta ), to get correct
previous camera matrix
• From previous/current positions compute velocity vector
• Can already accumulate N samples along velocity
direction
• But will get constant blurring everywhere
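The shutter-ratio correction can be sketched on screen positions instead of full camera matrices (a simplification; the talk lerps the previous/current camera transforms on the CPU):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def screen_velocity(prev_pos, curr_pos, shutter_ratio):
    """Per-pixel screen-space velocity for camera motion blur.

    prev_pos/curr_pos: the pixel's 2D screen position under the
    previous and current camera transforms. The previous position is
    first blended toward the current one by the shutter ratio
    (shutter_speed / frame_delta), so the blur length is
    frame-rate independent.
    """
    eff_prev = tuple(lerp(p, c, 1.0 - shutter_ratio)
                     for p, c in zip(prev_pos, curr_pos))
    return tuple(c - p for c, p in zip(curr_pos, eff_prev))
```

With a shutter ratio of 1 the full frame-to-frame delta is used; with 0 the velocity collapses to zero.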
43. Velocity Mask
• Used depth to generate velocity mask
• We let camera rotation override this mask
• Depth is used to mask out nearby geometry
• If current pixel depth < nearby threshold write 0
• Value used for blending out blur from first person
arms/weapons
• Velocity mask is used later as a scale for motion
blurring velocity offsets
• Blurring amount scales at distance now
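A toy version of the mask logic above, assuming depth is already normalized to [0, 1] so it can double as the distance-based blur scale:

```python
def saturate(x):
    return max(0.0, min(1.0, x))

def velocity_mask(pixel_depth, near_threshold, camera_rotating):
    """Depth-based velocity mask (illustrative thresholds).

    Nearby geometry (first-person arms/weapons) gets mask 0 so it is
    never blurred; camera rotation overrides the mask so fast turns
    blur everything; otherwise the mask scales with depth."""
    if camera_rotating:
        return 1.0
    if pixel_depth < near_threshold:
        return 0.0              # blend out blur on nearby geometry
    return saturate(pixel_depth)  # blur amount grows with distance
```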
48. Optimization strategies
• If no movement skip camera motion blur entirely
• Compare previous camera transformation with current
• Estimate required sample count based on camera
translation/orientation velocity
• If sample count below certain threshold, skip
• Adjust pass/sample count accordingly
• This gave a nice performance boost
• Average case at 1280x1024 runs at ~ 1 ms on a G80
50. DX9 HW Skinning limitation
• 256 vertex constant registers limit
• Our characters have an average of 70 bones per
drawcall
• Each bone uses 2 registers = 140 registers
• For motion blur we need previous frame bones
transformations
• 2 x 140 = 280 registers, bummer..
• Decided for DX10 only solution
51. Step 1: Velocity pass
• Render screen space velocity, surface depth and
instance ID into a FP16 RT
• If no movement, skip rigid/skinned geometry
• Compare previous transformation matrix with current
• Compare previous bone transformations with current
• If per-pixel velocity below certain threshold write 0
to RT
• Can use this data for optimizing further
53. Step 2: Velocity Mask
• Used an 8x smaller render target
• Apply Gaussian blur to velocity length
• Result is a reasonably fast estimation of the screen
space needed for dilation and motion blurring
• Mask is used to efficiently skip pixels during
dilation passes
54. Step 3: Velocity Dilation
• Edge dilation
• Done using separate vertical and horizontal offset passes
• 4 passes total (2 for horizontal, 2 for vertical)
• If center offset has velocity or velocity mask is 0
skip processing entirely
• Dilate if world depth > surface depth
57. Final Step: Motion Blurring
• Accumulate N samples along velocity direction
• Can use surface ID to avoid leaking
• Extra lookup..
• Clamp velocity to acceptable range
• Very important to avoid ugly results, especially with jerky
animations
• Divide accumulated results by sample count and
lerp between blurred/non blurred
• Using alpha channel blend mask
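The accumulate-and-lerp step might look like the sketch below; `sample_fn` is a hypothetical stand-in for a texture fetch, and the surface-ID leak test is omitted for brevity:

```python
def motion_blur(sample_fn, uv, velocity, n_samples, blend_mask):
    """Accumulate n_samples along the velocity direction, then lerp
    between unblurred and blurred color by the alpha blend mask.

    sample_fn(u, v) -> (r, g, b); velocity is a 2D screen-space
    vector, assumed already clamped to an acceptable range.
    """
    base = sample_fn(*uv)
    acc = [0.0, 0.0, 0.0]
    for i in range(n_samples):
        t = i / max(n_samples - 1, 1)  # 0..1 along the velocity vector
        c = sample_fn(uv[0] + velocity[0] * t,
                      uv[1] + velocity[1] * t)
        acc = [a + ci for a, ci in zip(acc, c)]
    blurred = [a / n_samples for a in acc]
    # lerp(base, blurred, blend_mask)
    return tuple(b0 + (b1 - b0) * blend_mask
                 for b0, b1 in zip(base, blurred))
```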
61. Object Motion Blur
• Object motion blur with per-pixel dilation
Not perfect, but good quality results for most cases
Geometry independent
Problematic with alpha blending
• Future improvements / challenges
Self-motion blurring
Multiple overlapping geometry + motion blurring
Could use adaptive sample count for blurring
63. Screen Space Sun Shafts
• Generate depth mask
• Mask = 1 - normalized scene depth
• Perform local radial blur on depth mask
• Compute sun position in screen space
• Blur vector = sun position - current pixel position
• Iterative quality improvement
• Used 3 passes (virtually = 512 samples)
• RGB = sun-shafts, A = volumetric fog shadow approx.
• Compose with scene
• Sun-shafts = additive blending
• Fog-shadows = soft-blend [5]
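A 1D toy of the iterative radial blur; the real pass blurs a 2D depth mask toward the sun's screen position, and the tap count and spread here are illustrative. The key idea is that repeating a cheap N-tap pass compounds, so 3 passes of 8 taps behave like 8³ = 512 samples:

```python
def radial_blur_pass(mask, center, n_taps, spread):
    """One radial-blur pass over a 1D mask toward 'center' (an index).
    Each output element averages n_taps samples stepping toward the
    sun position."""
    out = []
    for i in range(len(mask)):
        acc = 0.0
        for t in range(n_taps):
            # Sample positions move fractionally toward the center.
            s = i + (center - i) * (t / n_taps) * spread
            s = max(0, min(len(mask) - 1, s))
            acc += mask[int(round(s))]
        out.append(acc / n_taps)
    return out

def sun_shafts(mask, center, passes=3, n_taps=8, spread=0.5):
    """Iterative quality improvement: rerun the cheap pass."""
    for _ in range(passes):
        mask = radial_blur_pass(mask, center, n_taps, spread)
    return mask
```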
66. Color Grading
• Artist control for final image touches
• Depending on “time of day” in game
• Night mood != Day mood, etc
• Usual saturation, contrast, brightness controls and
color levels like in Far Cry [6]
• Image sharpening through extrapolation [10]
• Selective color correction
• Limited to a single color correction
• Photo Filter
• Grain filter
68. Selective Color Correction
• Color range based on Euclidean distance
• ColorRange = saturate(1 - length( src - col.xyz) );
• Color correction done in CMYK color space [8]
• c = lerp( c, clamp(c + dst_c, -1, 1), ColorRange);
• Finally blend between original and corrected color
• Orig =lerp( Orig, CMYKtoRGB( c), ColorRange);
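The three slide formulas assembled in Python; the RGB↔CMYK conversion is a naive Photoshop-like assumption, since the talk does not spell it out:

```python
def saturate(x):
    return max(0.0, min(1.0, x))

def lerp(a, b, t):
    return a + (b - a) * t

def rgb_to_cmyk(rgb):
    # Naive RGB -> CMYK with full black extraction (assumed).
    k = 1.0 - max(rgb)
    if k >= 1.0:
        return (0.0, 0.0, 0.0, 1.0)
    return tuple((1.0 - ch - k) / (1.0 - k) for ch in rgb) + (k,)

def cmyk_to_rgb(cmyk):
    c, m, y, k = cmyk
    return tuple((1.0 - ch) * (1.0 - k) for ch in (c, m, y))

def selective_correct(src, key_color, cmyk_offset):
    """Selective color correction.

    ColorRange = saturate(1 - |src - key_color|); the CMYK offset is
    applied, clamped, and blended in by ColorRange, then the corrected
    color is blended with the original by the same range."""
    dist = sum((s - kc) ** 2 for s, kc in zip(src, key_color)) ** 0.5
    rng = saturate(1.0 - dist)
    c = rgb_to_cmyk(src)
    corrected = tuple(max(-1.0, min(1.0, ch + off))
                      for ch, off in zip(c, cmyk_offset))
    c = tuple(lerp(a, b, rng) for a, b in zip(c, corrected))
    return tuple(lerp(s, o, rng) for s, o in zip(src, cmyk_to_rgb(c)))
```

Pixels far from the key color (range 0) pass through unchanged.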
69. Photo Filter
• Blend entire image into a different color mood
• Artist defines “mood color”
• cMood = lerp(0, cMood, saturate( fLum * 2.0 ) );
• cMood = lerp(cMood, 1, saturate( fLum - 0.5 ) * 2.0 );
• Final color is a blend between mood color and
backbuffer based on luminance and user ratio
• final= lerp(cScreen, cMood , saturate( fLum * fRatio));
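The photo-filter formulas, assembled with assumed Rec. 709 luma weights (the talk does not state which luminance is used):

```python
def saturate(x):
    return max(0.0, min(1.0, x))

def lerp(a, b, t):
    return a + (b - a) * t

def photo_filter(screen_rgb, mood_rgb, ratio):
    """Blend the frame toward an artist-chosen mood color, weighted
    by pixel luminance: shadows stay dark, highlights wash toward
    white, mid-tones take the mood color."""
    lum = (0.2126 * screen_rgb[0] +
           0.7152 * screen_rgb[1] +
           0.0722 * screen_rgb[2])          # assumed Rec. 709 luma
    mood = tuple(lerp(0.0, m, saturate(lum * 2.0)) for m in mood_rgb)
    mood = tuple(lerp(m, 1.0, saturate(lum - 0.5) * 2.0) for m in mood)
    return tuple(lerp(s, m, saturate(lum * ratio))
                 for s, m in zip(screen_rgb, mood))
```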
74. Conclusion
• Awesome time to be working in real time computer
graphics
• Hollywood movie image quality still far away
• Some challenges before we can reach it
• Need much more GPU power (Crysis is GPU bound)
• Order independent alpha blending – for real world cases
• Good reflection/refraction
• We still need to get rid of aliasing – everywhere
75. Special Thanks
• To entire Crytek team
• All work presented here wouldn’t have been
possible without your support
76. References
• [1] Wenzel, “Real time atmospheric effects”, GDC 2007
• [2] Tessendorf, “Simulating Ocean Water”, 1999
• [3] Sousa, “Generic Refraction Simulation”, GPU Gems 2, 2004
• [4] Jensen et al., “Deep Water Animation and Rendering”, 2001
• [5] Nishita et al., “A Fast Rendering Method for Shafts of Light in Outdoor Scene”, 2006
• [6] Gruschel, “Blend Modes”, 2006
• [7] Bjorke, “Color Controls”, GPU Gems 1
• [8] Green, “OpenGL Shader Tricks”, 2003
• [9] Ford et al., “Colour Space Conversions”, 1998
• [10] Haeberli et al., “Image Processing by Interpolation and Extrapolation”, 1994
Introduce myself – state title, etc Agenda: Introduction Shading Overview - Water / Underwater rendering - Frozen surfaces Post-effects Overview - Camera and per-object motion blur - Screen space sun-shafts - Artist directed color grading Conclusion
- I personally think we’re still at the dawn of what can be done, and especially photorealism is still very far away for real time – but we’re getting closer. What is a Next-Gen effect ? - Something never done before, or done at a higher quality bar than existing games - Harmonious composition of art and technology - ”Wow” factor. Creating an effect - “Don’t reinvent the wheel” - Visual references and concept art help a lot - Might take several iterations
CryEngine 2: Shading Overview Shader model 2.0 minimum, up to 4.0 Completely dynamic lighting and shadows Up to 4 point light sources per-pass Wide range of known shading models used: Blinn, Phong, Cook-Torrance, Ward, Kajiya-Kay … Plus custom DIY shading: Skin, Eyes, Cloth, Glass, Vegetation, Frost, Ice … Other fancy features: Parallax Occlusion Mapping, Detail Bump Mapping, Screen Space Ambient Occlusion, 64bit HDR rendering … Deferred mix with multi-pass rendering approach Z-Pass, Opaque-Pass, Transparent/Refractive passes, Material layers, Shadows passes … Average of 2k drawcalls per frame ~ 2 million triangles per frame
Show water intro vid – talk about subjects as video goes
3D waves = no more flat water! Used statistical Tessendorf animation model Waterworld, Titanic, Pirates of the Caribbean 3… Good tileable results for a variety of usage: calm, choppy… Computed on CPU for a 64x64 grid Resolution was enough for us and very fast to compute Pirates of the Caribbean 3 used 2k x 2k Upload results into a R32G32B32A32F texture 64x64 size Vertex displacement on GPU 1 bilinear fetch on DX10 (4 fetches for DX9) Lower HW specs used a simple sum of sine waves Additionally normal-map translation – 4 normal-map layers Hi frequency / low frequency Animated along water flow direction
So, in order to have nice water surface animation, the first step is to have good water tessellation Our first try: Screen Space Grid Projection 100k triangle grid, generated only once Project grid on water plane (GPU) Good: Very detailed nearby Bad: Unacceptable aliasing (shading and position) at distance and with slow camera movements In an FPS it’s extremely distracting Problematic on screen edges Had to attenuate displacement (xyz) on edges Which limited displacement size Dropped this approach in the end
Camera-aligned grid 100k triangles grid, generated only once Grid edges extruded into horizon Attenuate displacement at grid edges Idea/Purpose was to keep most triangles nearby and in front of camera Minimizes aliasing with camera rotation Eliminated translation aliasing through snapping Problematic with camera roll Trouble balancing nearby VS far away detail But, kept this approach in the end
Picture showing from a top view. Btw: Lines are from degenerate triangles used for connecting triangle strips
Physics Interaction With animation computed on CPU we can directly share results with physics engine For lower specs case, we did exact same math from vertex shader on CPU For each object in ocean, physics engine samples ocean below object and computes a plane that represents ocean surface best This allowed player/objects/vehicles to be affected by water surface animation
Reflection Overview Per frame, we had an avg of 2K drawcalls (~2M tris) This really needed to be cheap, but still look good Problem: Reflections added about 1500 drawcalls Decreased draw call count by: Decreasing view distance Set near plane to nearest terrain sector Reflection update ratio trickery Final average of 300 drawcalls for reflections Underwater total internal reflection is also simulated Phenomenon that happens within a medium like water/glass where rays of light get completely reflected – happens when the angle of incidence is greater than a certain limiting angle (= critical angle) But limited due to player complaints about lack of visibility Reflection texture is half screen resolution
Reflection Update Trickery Update Dependencies Time Camera orientation/position difference from previous camera orientation/position Surface mesh visibility ratio using HW occlusion queries If barely visible, decrease update ratio significantly Multi-GPU systems need extra care to avoid out of sync reflection texture Update only for main GPU, use same reflection camera for the following N frames (N way Multi-GPU)
Blur final reflection texture vertically Blur ratio = camera distance to water plane (nearest = blurriest/biggest offset scale) As an extra, it helps minimizing reflection aliasing
Refraction Use current backbuffer as input texture - No need to render scene again - Mask out everything above water surface instead Check “Generic Refraction Simulation”, GPU Gems 2 Extended refraction masking concept with a more general/simpler approach using depth instead - Water depth > World depth, then don’t allow refraction texture offset - Using chromatic dispersion means doing this 3 times - Alpha-blend works since we store 2 render lists, before/after water Chromatic dispersion used for interesting look Scaled offset for R, G and B differently Incidentally reflection/refraction done in HDR
Procedural Caustics Very important for realistic water look Extra rendering pass, can handle opaque/transparent – could have done in deferred way, but we lacked some data like per pixel normals to give interesting look Based on real sun direction projection Procedural composed using 3 layers Chromatic dispersion used Darken slightly to simulate wet surface Stencil-pre pass helps a little bit on performance
Shore and Foam Very important to make water look believable Soft water intersection Shore blended based on surface depth distance to world depth Waves foam blended based on current height distance to maximum height Foam is composed of 2 moving layers with offsets perturbation by water bump Acceptable quality
Camera Interaction How to handle case where camera intersects an animated water surface ? Reflections are planar = > Reflection plane adjusted based on water height at camera position Fog is computed for a plane, not for animated surface => Did same thing for fog plane Water surface soft-intersection with camera Water droplets effect when coming out of water volume When inside water volume used a subtle distortion Helps feeling inside water volume And also helps disguising fact that shadows are not refracted Water particles similar to soft-particles Soft-particles = use particle depth and world depth to soften intersection with opaque geometry For water particles we also do soft-intersection with water plane Fake specular highlights
Show into video (walking around, weapon frost, glittering, etc) while talking about subject Video with ICE level, show disabling/enabling frozen layer
Now we’ll go into the first case study: the frozen surfaces - Also known as The Biggest Headache – ever. - This part of the lecture is kind of a mini post-mortem of the frozen surfaces in Crysis, and hopefully will be helpful for someone trying to make a frozen world. - So, this was a big challenge, partially due to not having found any previous research on the subject, at least I did not find any relevant information - And because we had no clear idea/direction Should it look realistic? Or an “artistic” flash-frozen world? Make everything frozen? Should we be able to make any object look frozen? Dynamically? Custom frozen assets? Reuse assets? Several iterations until final result (4?)
1st try ! Artists made custom frozen textures/assets A lot of work for artists No dynamic freezing possibility Resource hog to have frozen and non-frozen on same level Not very good results
2nd try: We made a specific shader for simulating frozen surfaces Reused assets Dynamic freezing possibility and with nice transition Artists wanted too many parameters exposed Which resulted in a lot of work for artists Expensive rendering-wise: extra drawcall, blending, too many parameters = too many constants Not very good results Show 2nd gen video
For the 3rd iteration we tried to get the best of both worlds Reused most assets Dynamic freezing possibility Cheaper than fully procedural approach Some cases looked nice – for example the palm-tree leaves Artists wanted too many parameters/controls Which resulted in a lot of work for artists Although better than previous tries, results were still visually uninteresting
Until we reached our final iteration, the one showed on the video previously
What we learned from previous approaches Main focus was now to make it visually interesting Used real world images as reference this time Minimize the amount of artist work as much as possible Impossible to make every single kind of object look realistically frozen (and good) with a single unified approach Additionally we had 1 week before hitting the feature lock/alpha milestone (gulp) – to make this happen
These are actually the real references we used for the 4th and final generation. From references we can see: Accumulated snow on top Frozen water droplets accumulated on side Subtle reflection We need variation and no visible patterns Glittering
Putting it all together This final iteration ended up being the simplest concept-wise Accumulated snow on top Blend in snow depending on WS/OS normal z Frozen water droplets accumulated on side 2 layers using different uv and offset bump scales to give impression of volume Normal map texture used was really a water droplets texture btw 3D Perlin noise used for blending variation (using volumetric texture) 3 octaves and offset warping to avoid repetitive patterns Glittering Used a 2D texture with random noise vectors pow( saturate( dot( noise, viewVector ) ), big value ) If result > threshold, multiply by big value for HDR to kick in – this trick only worked nicely because we do blooming and image rescaling carefully to minimize any aliasing – therefore when we have a single pixel on screen we don’t see any shimmering
Procedural Frozen Reused assets Dynamic freezing possibility and with nice transition Don’t give artists more control than they really need Artists just press button to enable frozen If required they can still make final material tweaks Relatively cheap, rendering cost wise – frozen layer replaces general pass for case with no frost transition Visually interesting results Only looks believable under good lighting conditions
Post-Effects Overview Post Effects Mantra: Final rendering “make-up” As little aliasing as possible for this hardware generation (for very-hi specs) = No shimmering, no color banding, no visible sampling artifacts, etc For example, as I mentioned in the frost glitter case, for blooming we went the “extra mile” to make sure no shimmering was visible at all Never sacrifice quality for speed Unless you’re doing really crazy expensive stuff! Find instead cheaper ways of doing the same work, for less cost Make it as subtle as possible We’ve all seen Uber-Glow in a lot of games – or overuse of brown/desaturated color palettes But not less - or else the average gamer will not notice it
Motion Blur: we have 2 techniques, one for camera motion blur and the other for object motion blur. We’ll start with camera motion blur Done in HDR for very hi specs Render a sphere around camera Used scene-depth for velocity scaling Mask out nearby geometry (first person arms/weapons) Maximizing quality through iterative sampling Optimization strategies
Screen-space velocities Render a sphere around camera Use previous/current camera transformation for computing previous sphere vertices positions But we need framerate independent motion blur results Lerp between previous/current transformation by a shutter speed ratio ( n / frame delta ), to get correct previous camera matrix – this is done on CPU From previous/current positions compute velocity vector Can already accumulate N samples along velocity direction And divide accumulation by sample count But will get constant blurring everywhere
Velocity Mask Used depth to generate velocity mask Stored in scene RT alpha channel We let camera rotation override this mask For fast movements feels more “cinematic” Depth is used to mask out nearby geometry If current pixel depth < nearby threshold write 0 Value used for blending out blur from first person arms/weapons Velocity mask is used later as a scale for motion blurring velocity offsets Blurring amount scales at distance now
Optimization strategies If no movement, skip camera motion blur entirely Compare previous camera transformation with current Estimate required sample count based on camera translation/orientation velocity If sample count below certain threshold, skip Adjust pass/sample count accordingly This gave a nice performance boost Average case at 1280x1024 runs at ~1 ms on a G80, vs. 5 ms without Final note regarding camera motion blur Beware that most hardcore gamers dislike motion blur and will disable it Giving gamers the option to control motion blur strength (shutter speed) helps Crysis has this since patch 1
Done in HDR DX9 limitations for characters Render Velocity/Depth/ID into a RT Velocity Mask Per-pixel dilation Motion blur using final dilated velocities Quality improvement through iterative sampling Further improvements
DX9 HW Skinning limitation Limit of 256 vertex constant registers Big problem when doing HW skinning Our characters have an average of 70 bones per drawcall Each bone uses 2 registers = 140 registers For motion blur we need previous frame bone transformations So… 2 x 140 = 280 registers And let’s not forget all other data needed = Bummer Decided for DX10-only solution Could have skipped problematic skinned geometry, but would not look consistent Or making it work on DX9 would have increased drawcall count significantly (subdivide mesh) Could have used special mesh geometry/shell with fewer bones = Cumbersome
Step 1: Velocity pass Render screen space velocity, surface depth and instance ID into a R16G16B16A16F RT In our case this was a shared screen-size RT, which gets resized afterwards into a 2x smaller RT With a forward rendering approach, it is not a good idea to do this for every single object in the world So if no movement, skip rigid/skinned geometry Compare previous transformation matrix with current Compare previous bone transformations with current, this is a simple abs( dot(prev, curr) ) accumulation If per-pixel velocity below certain threshold, write 0 to RT When time to use the RT comes, just skip pixel processing entirely
More natural/visually appealing result Dilation possibilities? Dilate geometry itself Check “OpenGL Shader Tricks” Could just apply a simple blur to velocity map Very fast – not very good results though Do dilation per pixel as post process
Final Step: Motion Blurring Accumulate 8 samples along velocity direction Blend mask in alpha channel Can use surface ID to avoid leaking Extra lookup Not really noticeable during high speed gaming – but might be very important for some specific cases/games – with some kind of bullet time or for some slow-motion cutscene Clamp velocity to an acceptable range Very important to avoid ugly results, especially with jerky animations Divide accumulated results by sample count and lerp between blurred/non blurred Using alpha channel blend mask
This is the final blend mask Only tagged pixels get processed further
Object Motion Blur Object motion blur with per-pixel dilation Not perfect, but high quality results for most cases Geometry “independent” – no need for geometry expansion Problematic with alpha blending – how to handle? Similar to refraction problem. We used nearest alpha blended surface, but stuff behind will not work. Could go crazy, and after every alpha blended surface re-use backbuffer and do motion blur. Will never be 100% with <= DX10. ~2.5 ms at 1280x1024 on a G80 Future improvements / challenges - Self-motion blurring - Multiple overlapping geometry + motion blurring - How to handle alpha blended geometry nicely? - Could use adaptive sample count for blurring, like in camera motion blur case (low speed = fewer samples, high speed = more samples) - If still time, show video frame by frame using VirtualDub – show artifacts with fast camera movement due to masking, and with overlapping geometry due to not doing dilation on occluding geometry
Sun Shafts aka Crepuscular Rays/God Rays/Sun beams/Ropes of Maui/ GodLight / Rays of Buddha/Light Shafts…
Very simple trick Generate depth mask Math Perform local radial blur on depth mask Compute sun position in screen space Blur vector = sun position - current pixel position Use blur vector for blurring copy of backbuffer RGB = sun-shafts, A = volumetric shadows approx. Iterative quality improvement Used 3 passes (512 samples) Compose with scene Correct way would be computing how much each pixel is in shadow and attenuating fog density - but this cheap blending approximation already delivered nice results
Sun Shafts Works reasonably well if sun onscreen or nearby Screen edges problematic: - to minimize strength is attenuated based on view angle with sun direction - additionally could attenuate edges
Color Grading Artist control for final image touches Depending on “time of day” in game Night mood != Day mood != Rainy mood, etc Usual saturation, contrast, brightness controls and color levels like in Far Cry Image sharpening through extrapolation Selective color correction Limited to a single color correction Photo Filter Grain filter Simple random noise blended in by a small ratio
Image sharpening through extrapolation Simply lerp between low-res blurred image with original image by a ratio bigger than 1 Sharp = lerp(blurred, orig, bigger than 1 ratio)
Selective Color Correction This was very useful for us, since it allowed artists to fix individual colors at the end of Crysis without having to re-tweak lighting (which takes much, much longer) Color range based on Euclidean distance ColorRange = saturate(1 - length( src - col.xyz) ); Color correction done in CMYK color space Similar to Photoshop = Happy artist c = lerp( c, clamp(c + dst_c, -1, 1), ColorRange); Finally blend between original and corrected color Orig = lerp( Orig, CMYKtoRGB( c ), ColorRange);
Photo Filter (need real images) Blend entire image into a different color mood – so artists were able to give a warmer/colder color mood – again without having to re-tweak entire lighting settings Artist defines “mood color” Final mood color is based on luminance cMood = lerp(0, cMood, saturate( fLum * 2.0 ) ); cMood = lerp(cMood, 1, saturate( fLum - 0.5 ) * 2.0 ); Final color is a blend between mood color and backbuffer based on luminance and user ratio final = lerp(cScreen, cMood, saturate( fLum * fRatio)); Example: left – original, right – 50% mix with an orange-ish color – colors feel much warmer
Conclusion Awesome time to be working in real time computer graphics Hollywood movie image quality still far away Some challenges before we can reach it Need much more GPU power (Crysis is GPU bound) Order independent alpha blending – for real world cases Good reflection/refraction: cube maps only take us so far, we need a generalized solution; imagine a river flowing down a mountain – impossible to make it look good with cubemaps, impossible with planar reflection Good GI: IL / AO / SSS We still need to get rid of aliasing – everywhere – no popping, no ugly sharp edges, no shimmering, no shadow aliasing on self-shadowing, etc, etc – lots of room for improvement here – we don’t see any of this when watching a movie or in real life, right? I guess probably everyone has their own private list