Game Engine Rendering Pipeline
Overview

    Game Engine

    The Renderer

    Rendering Workflow

    Coordinate System

    Basic Graphic Pipeline Flow

    Culling Overview

    Geometry

    Rendering/Rasterization

    Lighting

    Shading

    Anti-Aliasing

    Shaders

    Textures

    Compression Format

    Memory Usage

    Physics Engine

    In-Game Effects

    Sound

    Networking

    Scripting

    A.I.

    User Interface (U.I.)
Engine v/s Game
Many people confuse the engine with the entire game. That would be like confusing an automobile
engine with an entire car. You can take the engine out of the car, and build another shell around it,
and use it again.

The game part would be all the content (models, animations, sounds, AI, and physics) which are called
'assets', and the code required specifically to make that game work, like the AI, or how the controls work.

A game engine is the core software component of a video game or other interactive application with
real-time graphics.

Core functionality typically provided by a game engine:

  Rendering engine (2D or 3D graphics)

  Physics engine (Collision detection and collision response)

  Sound

  Scripting

  Animation

  Artificial intelligence

  Networking

  Streaming

  Memory management

  Threading, and a

  Scene graph.
Game Engine
Game engines provide a suite of visual development tools in addition to reusable software components.
These tools are generally provided in an integrated development environment* to enable simplified, rapid
development of games in a data-driven manner.


     *An integrated development environment (IDE) is a software application that provides comprehensive
      facilities to computer programmers for software development. An IDE normally consists of a source
      code editor, a compiler and/or interpreter, build automation tools, and (usually) a debugger.
The Renderer
It visualizes the scene for the player / viewer so he or she can make appropriate decisions based upon what's
displayed.

In a general sense, the renderer's job is to create the visual flair that will make a game stand apart from the herd.

3D graphics is essentially the art of creating the most while doing the least, since additional 3D processing is
often expensive both in terms of processor cycles and memory bandwidth.


The business of getting pixels on screen these days involves 3D accelerator cards, API's, three-dimensional
math, and an understanding of how 3D hardware works.

For consoles, the same kind of knowledge is required, but at least with consoles you aren't trying to hit a moving
target. A console's hardware configuration is a frozen "snapshot in time", and unlike the PC, it doesn't change at
all over the lifetime of the console.



The renderer is where over 50% of the CPU's processing
time is spent, and where game developers will often be
judged the most harshly.
Rendering Workflow
Because there are so many calculations to be done and volumes of data to be handled, the entire process is broken
down into component steps, sometimes called stages.

One of the art-forms in 3D graphics is to elegantly reduce visual detail in a scene so as to gain better performance,
but do it in such a way that the viewer doesn't notice the loss of quality.

With the number of steps involved and their complexity, the ordering of these stages of the pipeline can vary
between implementations.


  3D Pipeline - High-Level Overview

  1. Application/Scene

     * Scene/Geometry database traversal
     * Movement of objects, and aiming and movement of view camera
     * Animated movement of object models
     * Description of the contents of the 3D world
     * Object Visibility Check including possible Occlusion Culling
     * Select Level of Detail (LOD)
Rendering Workflow
2. Geometry
   * Transforms (rotation, translation, scaling)
   * Transform from Model Space to World Space (Direct3D)
   * Transform from World Space to View Space
   * View Projection
   * Trivial Accept/Reject Culling
   * Back-Face Culling (can also be done later in Screen Space)
   * Lighting
   * Perspective Divide - Transform to Clip Space
   * Clipping
   * Transform to Screen Space

3. Triangle Setup
   * Back-face Culling (or can be done in view space before lighting)
   * Slope/Delta Calculations
   * Scan-Line Conversion

4. Rendering / Rasterization
   * Shading
   * Texturing
   * Fog
   * Alpha Translucency Tests
   * Depth Buffering
   * Antialiasing (optional)
   * Display
Coordinate System
Working with Space
            In a 3D rendering system, multiple Cartesian coordinate systems (x- (left/right), y- (up/down)
and z-axis (near/far)) are used at different stages of the pipeline.

While used for different though related purposes, each coordinate system provides a precise
 mathematical method to locate and represent objects in the space. And not surprisingly, each
of these coordinate systems is referred to as a "space."

Model Space: where each model is in its own coordinate system, whose origin is some
             point on the model

World Space: where models are placed in the actual 3D world, in a unified world coordinate system.

View Space (also called Camera Space): in this space, the view camera is positioned by the application
             (through the graphics API) at some point in the 3D world coordinate system, if it is being used.

The view volume is actually created by a projection, which as
the name suggests, "projects the scene" in front of the camera.
In this sense, it's a kind of role reversal in that the camera now
becomes a projector, and the scene's view volume is defined in
relation to the camera.
Coordinate System
Deeper into Space

Clip Space: Similar to View Space, but the frustum is now "squished" into a unit cube, with the x and y
            coordinates normalized to a range between –1 and 1, and z is between 0 and 1, which simplifies
            clipping calculations.

Screen Space: where the 3D image is converted into x and y 2D screen coordinates for 2D display.
               'z' coordinates are still retained by the graphics systems for depth/Z-buffering and
                back-face culling before the final render.
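A minimal sketch of the model -> world -> view -> clip -> screen chain described above, in C++. The Mat4/Vec4 types, the column-vector convention, and the viewport mapping are illustrative assumptions; the actual matrices would be supplied by the application and the projection setup.

    #include <array>

    using Mat4 = std::array<std::array<float, 4>, 4>;   // row-major storage, column-vector convention
    struct Vec4 { float x, y, z, w; };

    // v' = M * v
    Vec4 mul(const Mat4& m, const Vec4& v) {
        return {
            m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
            m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
            m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
            m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w,
        };
    }

    // Model space -> world -> view -> clip -> screen, following the "spaces" above.
    Vec4 toScreen(Vec4 p, const Mat4& model, const Mat4& view, const Mat4& projection,
                  float screenW, float screenH) {
        Vec4 world = mul(model, p);        // model space -> world space
        Vec4 eye   = mul(view, world);     // world space -> view (camera) space
        Vec4 clip  = mul(projection, eye); // view space  -> clip space

        // Perspective divide: x and y end up in [-1,1]; z is kept for depth buffering.
        float invW = 1.0f / clip.w;
        float ndcX = clip.x * invW, ndcY = clip.y * invW, ndcZ = clip.z * invW;

        // Viewport transform: normalized coordinates -> 2D screen (pixel) coordinates.
        return { (ndcX * 0.5f + 0.5f) * screenW,
                 (1.0f - (ndcY * 0.5f + 0.5f)) * screenH,   // flip y: screen origin at top-left
                 ndcZ, 1.0f };
    }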
3D Pipeline Data Flow
It is useful to note that most operations in the application/scene stage and the early geometry stage of the pipeline
are done per vertex, whereas culling and clipping are done per triangle, and rendering operations are done per
pixel.

Another advantage of pipelining is that because no data is passed from one vertex to another in the geometry stage
or from one pixel to another in the rendering stage, chipmakers have been able to implement multiple pixel pipes
and gain considerable performance boosts using parallel processing of these independent entities.



Stage1. Application/Scene

    The renderer traverses the geometry database to gather the object
    information (including object movement, animated movement, and aiming
    and movement of the camera object) that is going to change in the next frame
    of animation.

  "occlusion culling", a visibility test that determines whether an object is
 partially or completely occluded (covered) by some object in front of it.
If it is, the occluded object, or the part of it that is occluded is discarded.
Culling Overview
CULLING
           Visibility culling algorithms reduce the number of polygons sent down the rendering pipeline
           based on the simple principle that if something is not seen, it does not have to be drawn.

The simplest approach to culling is to divide the world up into sections, with each section having a list of other
sections that can be seen. That way you only display what's possible to be seen from any given point.

How you create the list of possible view sections is the tricky bit. Again, there are many ways to do this, using
BSP trees, Portals and so on.

View Frustum Culling
View volume is usually defined by six planes, namely the front, back, left, right, top, and bottom clipping planes,
which together form a truncated pyramid (the frustum).

Front and back clipping planes may be defined to lie at the viewer point and
infinity, respectively. If a polygon is entirely outside the pyramid, it cannot
be visible and can be discarded. If it is partially inside, it is clipped to the
planes so that its outside parts are removed.
Culling Overview
Back-face culling
This primitive form of culling is based on the observation that if all objects in the world are closed, then the
polygons which don't face the viewer cannot be seen.

This translates directly to the angle between the polygon's normal and the vector from the
polygon towards the viewer: if that angle is more than 90 degrees, the polygon faces away
from the viewer and can be discarded.
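A minimal back-face test in C++ following the dot-product formulation above; the Vec3 type and the point/normal inputs are illustrative assumptions.

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

    // A polygon faces away from the camera when the angle between its normal and the
    // vector from the polygon towards the viewer is greater than 90 degrees,
    // i.e. when their dot product is negative.
    bool isBackFacing(const Vec3& polygonPoint, const Vec3& polygonNormal, const Vec3& cameraPos) {
        Vec3 toViewer = sub(cameraPos, polygonPoint);
        return dot(polygonNormal, toViewer) < 0.0f;   // discard the polygon if true
    }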

Cell-based occlusion culling
Cell-based occlusion culling methods are based on the assumption that the game
world can be divided into cells which are connected to each other using portals.

Clearly, if a portal is not seen from a given point of view, then none of the cells
behind the portal can be seen and they can thus be culled away. There are two
dominating forms of cell-based engines in use today: BSP and "portal" engines.

 Binary Space Partitioning (BSP)- Space is split by a plane into two half-spaces,
which are again recursively split. This can be used to force a strict back-to-front
drawing order.
3D Pipeline Data Flow

    Level of Detail (LOD) involves decreasing the complexity
    of a 3D object's representation as it moves away from the viewer
    or according to other metrics such as object importance, eye-space
    speed or position.

The statue's distance (in our case) to the view camera will dictate
which LOD level gets used. If it's very near, the highest resolution
LOD gets used, but if it's just barely visible and far from the view
camera, the lowest resolution LOD model would be used, and for
distances between the two, the other LOD levels would be used.

Level of detail techniques increase the efficiency of rendering by decreasing the workload on graphics
pipeline stages, usually vertex transformations.
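A distance-based LOD pick, sketched in C++; the idea of a per-model array of LOD meshes and the switch-distance thresholds are assumptions for illustration.

    #include <cstddef>
    #include <vector>

    // lods[0] is the highest-resolution mesh, lods.back() the lowest.
    // switchDistances[i] is the camera distance beyond which LOD level i+1 is used.
    std::size_t selectLod(float distanceToCamera, const std::vector<float>& switchDistances) {
        std::size_t level = 0;
        while (level < switchDistances.size() && distanceToCamera > switchDistances[level])
            ++level;
        return level;  // index into the model's LOD mesh array
    }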
Geometry
Stage2. Geometry

 Objects get moved from frame to frame to create the illusion of movement, and in a 3D
world, objects can be moved or manipulated using three operations broadly referred to as
transforms.
  
    Translation
  
    Rotation
  
    Scaling

 Space to Space- For the final rendering of models/geometry, coordinates are
translated from object space to world space and then to view space.
  After the transform to view space, many interesting things begin to happen.

Trivial Matters

Viewing frustum
View frustum is the region of space in the modeled world that may appear on the screen

 The first step in reducing the working set of triangles to be processed (rendered) is to
cull those that are completely outside of the view volume.
This is known as View frustum culling.

The next operation is called back-face culling (BFC), which as the name
suggests, is an operation that discards triangles that have surfaces that are
facing away from the view camera.
Bounding Volumes
Getting Clipped and Gaining Perspective

      Clipping- the operation that discards the parts of triangles
                 that partially or fully fall outside
                 the view volume (the camera's field of view).

  Good clipping strategy is important in the development of video games
in order to maximize the game's frame rate and visual quality.

Bounding Volume Hierarchies (BVHs)
      Useful for numerous tasks - including efficient culling and speeding up collision detection between objects.

Examples of tests where BVs are applied (a minimal bounding-sphere sketch follows this list):
• testing if a point is inside an object.
• testing an object for intersection with a line (ray).
• testing if an object intersects a plane or lies above/below.
• testing an object for intersection with and/or inclusion within a volume.
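Two of the listed tests, sketched in C++ for a bounding sphere (point containment and ray intersection); the sphere representation and the normalized ray direction are assumptions.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Sphere { Vec3 center; float radius; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

    // Test: is a point inside the bounding volume?
    bool containsPoint(const Sphere& s, const Vec3& p) {
        Vec3 d = sub(p, s.center);
        return dot(d, d) <= s.radius * s.radius;
    }

    // Test: does a ray (origin + t*dir, dir normalized, t >= 0) intersect the bounding volume?
    bool intersectsRay(const Sphere& s, const Vec3& origin, const Vec3& dir) {
        Vec3 oc = sub(s.center, origin);
        float t  = dot(oc, dir);            // parameter of closest approach along the ray
        float d2 = dot(oc, oc) - t * t;     // squared distance from sphere center to the ray
        return (t >= 0.0f || dot(oc, oc) <= s.radius * s.radius)   // sphere ahead, or origin inside it
            && d2 <= s.radius * s.radius;
    }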
Setting the Table
Stage3. Triangle Setup
Think of triangle setup as the prelude to the rendering stage of the pipeline, because it "sets the table" for the
 rendering operations that will follow.

    First off, the triangle setup operation computes the slope (or steepness) of a triangle edge using the vertex
    information at each of the edge's two endpoints.

(The slope is often called delta x/delta y, dx/dy, Dx/Dy, or literally change in x/change in y).

    Using the slope information, an algorithm called a digital differential analyzer (DDA) can calculate x,y
    values to see which pixels each triangle side (line segment) touches.

        What it really does is determine how much the x value of the pixel touched by a given triangle side
        changes per scan line, and increments it by that value on each subsequent scan-line.

        For every single-scanline step along the y-axis, the x value of the triangle
        edge is incremented by Dx/Dy (see the sketch after this list).

    Color and Depth values are interpolated for each pixel.

    In addition, the texture coordinates are calculated for use during texture mapping.
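A minimal sketch of the DDA edge walk described above, stepping one scan line at a time in C++; the edge is assumed to be sorted top to bottom (y0 < y1), and the printout stands in for whatever the rasterizer does with the edge position.

    #include <cmath>
    #include <cstdio>

    // Walk a triangle edge from (x0,y0) to (x1,y1), one scan line at a time,
    // incrementing x by the slope dx/dy on every step (the DDA idea above).
    // Assumes y0 < y1 (edge sorted top to bottom).
    void walkEdge(float x0, float y0, float x1, float y1) {
        int yStart = static_cast<int>(std::ceil(y0));
        int yEnd   = static_cast<int>(std::ceil(y1));   // exclusive
        if (yEnd <= yStart) return;                     // degenerate or horizontal edge

        float dxdy = (x1 - x0) / (y1 - y0);             // slope: change in x per scan line
        float x    = x0 + (yStart - y0) * dxdy;         // x at the first covered scan line

        for (int y = yStart; y < yEnd; ++y) {
            std::printf("scan line %d: edge at x = %f\n", y, x);
            x += dxdy;                                  // step to the next scan line
        }
    }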
Rendering/Rasterization
Stage4. Rendering/Rasterization

Lighting
It's one of those things that when it works, you don't notice it, but when it doesn't, you notice it all too much.

It usually happens once the 3D scene has been transformed into view space.

Game lighting is based upon very simplified lighting and reflection models, which often have little to do with
how lights behave in the real world, but the net effect is deemed sufficient for the purposes of real-time 3D.
  
      "per-vertex" and "per-pixel" lighting

  The advantage of per-pixel lighting is its granularity, especially true in low triangle count scenes with specular
  reflections where the realism of per-vertex lighting can diminish considerably.

   The obvious downside to per-pixel lighting is its considerably larger computational workload.
Rendering/Rasterization

Vertex Lighting

 Determine which polygons touch a given vertex, then take the mean of those polygons'
orientations (their normals) and assign that averaged normal to the vertex.


    Each vertex of a given polygon will point in a slightly different direction, so you wind
    up gradating or interpolating light colors across the polygon, in order to get smoother
    lighting.





    Advantage: Hardware can often help do this in a faster manner using hardware
               transform and lighting (T&L).


    Drawback: It doesn't produce shadowing. For instance, both arms on a model will
              be lit the same way, even if the light is on the right side of the model,
              and the left arm should be left in shadow cast by the body.
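A sketch in C++ of the two ideas on this slide: averaging adjacent face normals into a vertex normal, and a simple per-vertex diffuse (Lambert) term whose result would then be interpolated across the polygon. The mesh layout and the directional-light model are illustrative assumptions.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    static Vec3  add(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  normalize(const Vec3& v) {
        float len = std::sqrt(dot(v, v));
        return { v.x/len, v.y/len, v.z/len };
    }

    // Vertex normal = normalized mean of the normals of the polygons touching that vertex.
    Vec3 vertexNormal(const std::vector<Vec3>& adjacentFaceNormals) {
        Vec3 sum{0, 0, 0};
        for (const Vec3& n : adjacentFaceNormals) sum = add(sum, n);
        return normalize(sum);
    }

    // Per-vertex diffuse intensity for a directional light; the rasterizer would
    // interpolate this value across the polygon (Gouraud-style).
    float diffuseAtVertex(const Vec3& vertexNormal, const Vec3& directionToLight) {
        return std::max(0.0f, dot(vertexNormal, directionToLight));
    }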
Lighting Thoughts
Per Pixel Normal Map based Lighting

 Encode tangent-space normals for the surface in a texture to compute the lighting equation at each pixel, rather than at each
vertex.

    Object space normal maps are also possible and are generally used to light dynamic objects.

    There is also a variation on normal map lighting called parallax mapping which encodes an additional height
    map value into the normal texture in order to simulate the parallax effect.


Performing normal map lighting is a three-step approach:

    A normal map is created, applied to the model and exported with tangent space information.

    A tangent matrix must be created to transform all positional lighting information into tangent space. The tangent
    space matrix is a 3x3 matrix made up of the vertex's tangent, binormal and normal vectors.

    The color contribution of each light is calculated in the pixel shader using the normal information fetched from
    the normal map and the tangent space lighting vectors computed from data transformed on the vertex shader.

Space

  Normal maps are usually stored as a representation in one of two spaces – either in model space, or in the local tangent space of
each triangle.

  Normal maps specified in model space must generally store three components (x, y and z) since all directions
must be representable.

  Normal maps in tangent space can be specified with only two components – since the tangent space of the triangle describes a
hemispherical region, the third component can be derived in a pixel shader.
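A small C++ sketch of deriving that third component from a two-component tangent-space normal (in a real engine this is done in the pixel shader); the [0,1] texture encoding remapped to [-1,1] is an assumption.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Rebuild a unit tangent-space normal from the two components stored in the texture.
    // nx and ny are assumed to be stored in [0,1] and remapped to [-1,1] here; z can be
    // derived because tangent-space normals always point out of the surface (z >= 0).
    Vec3 decodeTangentNormal(float nx, float ny) {
        float x = nx * 2.0f - 1.0f;
        float y = ny * 2.0f - 1.0f;
        float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y));
        return { x, y, z };
    }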
Let There be Light!
Limitations of existing lighting models
Interpolated lighting (Vertex lighting)

   Because vertex lighting works by interpolating the colors attained at each vertex, a point light centered over a
  simple two-triangle quad would result in the quad being equally lit across its entire surface (or not lit if the
  point light doesn't reach the edges).

     In order to get around this problem, the quad would have to be tessellated in order to achieve a falloff from
    the center to the edges. This is counterproductive for the art team and is a problem that can be easily
    rectified using a per-pixel lighting approach.




                               [Figures: two-face (quad) model vs. tessellated model]


Light count restrictions ( Normal map lighting)

  All lighting calculations must be performed in the same coordinate system.

    The number of light sources that a surface can be lit by is limited to the number of registers the vertex shader
    can pass to the pixel shader.
Let There be Light!
Unified per-pixel lighting solution
Interpolate normals, not colors

  Instead of calculating the color for the vertex, we will now simply transform
  the normal into world space, and then place it into a register alongside our
  emissive color for treatment on the pixel shader.

    If doing point lighting, we will also need to send the world space position
    of the vertex across the shader so we can get an interpolated world space
    pixel value. We then simply perform the lighting calculation on the pixel
    shader much the same way that we did it on the vertex shader.

    [Figure: point light on a low-poly surface (4 vertices) with per-pixel lighting]



Perform normal map lighting in world space

  Instead of using a matrix to convert data into tangent space, we compute the inverse tangent space
  matrix and multiply that with the world space matrix. This allows us to transform the tangent space
  normal into a world space normal that we can then use to perform all of our lighting calculations.

    If a scenario comes up where a set of vertices share a normal, but not tangent information, shading
    seams will show up on the geometry.
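A CPU-side sketch of the transform described above, taking a tangent-space normal into world space using the vertex's world-space tangent, binormal and normal (in practice this runs on the vertex/pixel shaders); the TBN layout is an assumption.

    struct Vec3 { float x, y, z; };

    // Tangent space -> world space: treat the vertex's world-space tangent (T), binormal (B)
    // and normal (N) as the columns of a 3x3 basis and multiply the tangent-space normal by it.
    // All lighting can then be carried out in world space, as described above.
    Vec3 tangentToWorld(const Vec3& tsNormal, const Vec3& T, const Vec3& B, const Vec3& N) {
        return {
            tsNormal.x * T.x + tsNormal.y * B.x + tsNormal.z * N.x,
            tsNormal.x * T.y + tsNormal.y * B.y + tsNormal.z * N.y,
            tsNormal.x * T.z + tsNormal.y * B.z + tsNormal.z * N.z,
        };
    }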
Shadows Issues
Anatomy of shadow
    
        First, a per-vertex or per-pixel light source must exist
    
        Next, the light source must strike an object that casts a shadow, called the occluder.
    
        Finally, there is the surface onto which the shadow is cast, called the receiver.

Shadows themselves have two parts:
  
    umbra, which is the inner part of the shadow
  
    penumbra, which is the outer and border portion of the shadow.

The penumbra creates the difference between hard and soft shadows. With hard shadows, the shadow ends abruptly,
and looks unnatural, whereas with soft shadows, the penumbra transitions from the color of the shadow
(usually black) to the adjacent pixel color, creating a more realistic shadow.

Shadow techniques

    Projected Shadows
    
        Created by having a light source act as a projector, which then "projects" a shadow onto the receiver surface.
    
     One downside to this method is that the receiver has to be a planar (flat) surface, or serious rendering errors can
occur.
    
       One speedup technique is to render the projected shadow polygon into a shadow texture, which can be applied
    to the receiver surface, and subsequently reused, providing that neither the light source nor the occluder moves,
    as shadows are not view- or scene-dependent.
Shadows Issues
Shadow Volumes
 
     Stencil buffer is an area of video memory that contains one to eight bits of
     information about each pixel of the scene, and this information can be used to
     mask certain areas of the scene to create shadow effects.

 
     Shadow volumes create a separate frustum, and place the point light source
     at the top of the frustum and project into it. The resulting intersection of the
     shadow frustum and the view frustum creates a cylindrical volume inside
     the view frustum. Polygons that fall within this cylindrical volume will cast
     shadows upon receiver objects (of any shape) that are aligned with the
     direction of the light rays being cast from the shadow-generating object.

 
     This technique casts shadows onto objects of any shape rather than just flat surfaces.
Shading Techniques

    Made in the Shade
        The rendering engine will shade the models based on various shading algorithms. These shading calculations
       can range in their demand from fairly modest (Flat and Gouraud), to much more demanding (Phong).

        Flat Shading: Takes the color values from a triangle's three vertices and averages those values.
                 The average value is then used to shade the entire triangle.
     This method is very inexpensive in terms of computations, but this method's visual cost is that individual triangles are
     clearly visible, and it disrupts the illusion of creating a single surface out of multiple triangles.

        Gouraud Shading: Takes the lighting values at each of a triangle's three vertices, then interpolates those values
                    across the surface of the triangle.
     One of the main advantages to Gouraud is that it smoothes out triangle edges on mesh surfaces, giving objects a more
     realistic appearance.

        Phong Shading: A shading normal (also called a vertex normal) is an average of the
             surface normals of the surrounding triangles. Phong shading uses these shading
             normals, which are stored at each vertex, to interpolate the shading normal at
             each pixel in the triangle.
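To contrast flat and Gouraud shading, here is a C++ sketch that computes a flat color as the average of the three vertex colors, and a Gouraud color as the barycentric interpolation of those colors at a pixel; the color type and weights are illustrative assumptions.

    struct Color { float r, g, b; };

    // Flat shading: one color for the whole triangle, the average of its vertex colors.
    Color flatShade(const Color& c0, const Color& c1, const Color& c2) {
        return { (c0.r + c1.r + c2.r) / 3.0f,
                 (c0.g + c1.g + c2.g) / 3.0f,
                 (c0.b + c1.b + c2.b) / 3.0f };
    }

    // Gouraud shading: interpolate the vertex colors across the triangle.
    // w0, w1, w2 are the pixel's barycentric weights (w0 + w1 + w2 == 1).
    Color gouraudShade(const Color& c0, const Color& c1, const Color& c2,
                       float w0, float w1, float w2) {
        return { c0.r * w0 + c1.r * w1 + c2.r * w2,
                 c0.g * w0 + c1.g * w1 + c2.g * w2,
                 c0.b * w0 + c1.b * w1 + c2.b * w2 };
    }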
Anti Aliasing
Aliasing
Aliasing is the staircase effect at the edge of a line or area of color when it's
displayed by an array of discrete pixels.
Aliasing occurs due to an insufficient sampling rate.
Sampling Rate- Defines the number of samples per second (or per other unit)
taken from a continuous signal to make a discrete signal.

Antialiasing
   Smoothes the staircase effect that occurs when diagonal or curved lines or borders
are drawn on raster displays consisting of square or rectangular pixels.

Super-Sampling Techniques
  
     Ordered Grid super-sampling (OGSS)
               Sub-sample positions within a given pixel. The extra samples are
  positioned in an ordered grid shape. The sub-samples are aligned horizontally and
  vertically, creating a matrix of points. These sub-samples are thus located inside
  the original pixel in a regular pattern.

  
      Jittered Grid Super Sampling (JGSS)
                Similar to Ordered Grid Super-Sampling in that extra samples are stored per pixel, but the
      difference between the two is the position of the sub-samples. The sub-sample grid is "jittered," or
      shifted, off of the axis.
Overdraw Issue
While rendering a game, a pixel we have drawn may be overdrawn by a pixel that is closer to the camera, and this
 can happen several times before the closest pixel is finally determined once the entire scene has been drawn.
In other words, a single pixel can be filled several times each frame. This is known as the overdraw issue.

Possible solutions
1. Depth Complexity

   Refers to the number of pixels that compete, via the depth test, to be written to a particular entry in the back
   buffer.

   Depth complexity can be used to do performance analysis and indicate which pixels need to be specially rendered.

2. Z-buffering
   An algorithm used in 3-D graphics to determine which objects, or
parts of objects, are visible and which are hidden behind other objects.

With Z-buffering, the graphics processor stores the Z-axis value of
each pixel in a special area of memory called the Z-buffer. Different
objects can have the same x- and y-coordinate values, but with
different z-coordinate values. The object with the lowest z-coordinate
value is in front of the other objects, and therefore that's the one that's
displayed
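A minimal Z-buffered pixel write in C++ following the description above; the framebuffer layout and the "lower z is closer" convention are assumptions.

    #include <cstdint>
    #include <limits>
    #include <vector>

    struct Framebuffer {
        int width = 0, height = 0;
        std::vector<uint32_t> color;   // one RGBA value per pixel
        std::vector<float>    depth;   // the Z-buffer: one depth value per pixel

        Framebuffer(int w, int h)
            : width(w), height(h),
              color(w * h, 0),
              depth(w * h, std::numeric_limits<float>::max()) {}  // start "infinitely far"

        // Write the pixel only if the incoming fragment is closer than what is stored.
        void plot(int x, int y, float z, uint32_t rgba) {
            int i = y * width + x;
            if (z < depth[i]) {        // depth test (lower z = closer, per the convention above)
                depth[i] = z;
                color[i] = rgba;
            }
        }
    };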
Shaders
Vertex and Pixel Shaders

A vertex shader is a graphics processing function used to add special effects to objects in a 3D environment by
  performing mathematical operations on the objects' vertex data before submitting them to the card to render.

Vertices may also be defined by colors, coordinates, textures, and lighting characteristics. Vertex Shaders don't
  actually change the type of data; they simply change the values of the data, so that a vertex emerges with a
  different color, different textures, or a different position in space.




Pixel shaders are routines that are executed for each pixel as geometry is rasterized, defining how that pixel
  will look when it is rendered.

This allows you to do everything from simple pixel effects (making distant textures appear out of focus, adding heat
  haze, and creating internal reflections for water) to complex ones that simulate bump mapping, shadows,
  specular highlights, translucency and other complex phenomena.

The pixel shader is executed for each pixel rendered, and independently from the other pixels. Taken in isolation, a
  pixel shader alone can't produce very complex effects, because it operates only on a single pixel, without any
  knowledge of the scene's geometry or neighbouring pixels.
Textures

    Multiple textures can take up a lot of memory, and it helps to manage their size with various techniques.

 Texture compression is one way of making texture data smaller, while retaining the picture information.
Compressed textures take up less space on the game CD, and more importantly, in memory and on your 3D card.

MIP Mapping

Another technique used by game engines to reduce the memory footprint
and bandwidth demands of textures is to use MIP maps. The technique of
MIP mapping involves preprocessing a texture to create multiple copies,
where each successive copy is one-half the size of the prior copy.
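A sketch in C++ of the size progression of a MIP chain, where each level halves the width and height of the previous one down to 1x1; the uncompressed 4-bytes-per-texel figure is an illustrative assumption.

    #include <algorithm>
    #include <cstdio>

    // Print each MIP level and the total memory of the chain, assuming an uncompressed
    // 32-bit (4 bytes per texel) texture. The full chain adds roughly one third extra memory.
    void printMipChain(int width, int height) {
        long long totalBytes = 0;
        for (int level = 0; width > 0 && height > 0; ++level) {
            long long bytes = static_cast<long long>(width) * height * 4;
            totalBytes += bytes;
            std::printf("level %d: %dx%d = %lld bytes\n", level, width, height, bytes);
            if (width == 1 && height == 1) break;
            width  = std::max(1, width  / 2);
            height = std::max(1, height / 2);
        }
        std::printf("total: %lld bytes\n", totalBytes);
    }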




Multiple Textures and Bump Mapping
Single texture maps make a large difference in overall 3D graphics realism, but using multiple textures can achieve
even more impressive effects. Bump mapping is an old technology that is all about creating a texture that shows the
 way light falls on a surface, to show bumps or crevices in that surface.
Light Map
A lightmap is a 3D engine light data structure which contains the brightness of surfaces in a video game.
Lightmaps are precomputed and used for static objects.



    The most common methods of lightmapping are:
        
          Precompute vertex lighting by using distance from each vertex to a light,
        
          Multi-texturing to apply a second texture which contains the lumel data.



 But if you have dynamic lights then you will have to regenerate the light maps every frame, modifying them
according to how your dynamic lights may have moved.


Pre-processed lights don't affect the animated models correctly--they take their overall light value for the whole
model from the polygon they are standing on--and dynamic lights will be applied to give the right effect.



  Using a hybrid lighting approach is a tradeoff that most people don't notice, but it usually gives an effect that
looks "right".
Cache it Right
Cache Thrash = Bad Thing

    Texture cache management is vital to making game engines go fast.


 If you get into a situation where you've got textures being swapped in and out of your graphics card's memory,
you've got yourself a case of texture cache thrashing.


 Often APIs will dump every texture when this happens, resulting in every one of them having to be reloaded next
frame, and that's time consuming and wasteful. To the gamer, this will cause frame rate stutters as the API reloads
the texture cache.
Cache Management
 API Instruction: Code the API (rendering engine) to upload and store a texture on the card once instead of
swapping it in and out many times. An API like OpenGL usually handles texture caching itself, deciding which
textures are stored on the card and which are left in main memory, based on rules such as how often each
texture is accessed.

 Another texture cache management technique is texture compression.


Introduction to Texture Compression
The texture images used in today's games are higher in resolution, more numerous, and heavier in detail.
Techniques like bump mapping, normal mapping, etc. further complicate things because:

1) the images are large in resolution and therefore in size
2) normal maps have to be huge in order to fully cover a model object with a decent level of quality.

By compressing texture images we can:

1) reduce the amount of memory that each image requires
2) increase the visual quality of normal map images (In some cases)
3) boost an application's performance (less data to process)
Compression Formats
Texture Compression Algorithms
  * DXT (S3TC)         *3Dc        * A8L8

DXT Compression Format
  
      Use a lossy compression that can reduce an image's size by a ratio of 4:1 or 6:1.
  
      Standard part of the Direct3D API and are available for the OpenGL API through the
      ARB_texture_compression and GL_EXT_texture_compression_s3tc extensions.
  
      Good to use on decal texture images, especially high resolution.
  
      They were not designed with normal maps in mind, and can have horrible results.
  
      The DXT compression formats are made up of DXT1, DXT2,
      DXT3, DXT4 and DXT5.

DXT1 Format
 
   The DXT1 format compresses RGB images by a factor of
   6:1, for an average of 4 bits per texel.
  
      DXT1 does not usually produce good results for normal maps.
                                                                         [Figures: without compression vs. with DXT1 compression]
Compression Formats
DXT2/3 Format
 
   DXT2/3 is the same as DXT1 but it uses an additional 4-bits for the alpha channel, thus doubling the size
   of the image.
 
   In the DXT2 format the data is pre-multiplied by the alpha channel while in the DXT3 it is not.


DXT4/5 Format
 
   The DXT5 format compresses an RGBA image by a factor of 4:1, using a byte per texel on average.
 
   The advantage of DXT5 over DXT1 is that it supports an alpha channel.
 
   Interpolate the alpha data when compressing the images.


  Format           Description            Alpha Pre-multiplied?     Compression ratio     Texture Type

  DXT1             1-bit Alpha / Opaque             N/A                   8:1             Simple non-alpha
  DXT2             Explicit alpha                   Yes                   4:1             Sharp alpha
  DXT3             Explicit alpha                   No                    4:1             Sharp alpha
  DXT4             Interpolated alpha               Yes                   4:1             Gradient alpha
  DXT5             Interpolated alpha               No                    4:1             Gradient alpha
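A small C++ calculation of texture sizes under the bit rates implied above (uncompressed 32-bit, DXT1 at 4 bits per texel, DXT5 at 8 bits per texel); MIP levels are ignored, and real encoders round dimensions up to multiples of the 4x4 block size.

    #include <cstdio>

    // Bytes for one texture at the given bits-per-texel rate.
    long long textureBytes(int width, int height, int bitsPerTexel) {
        return static_cast<long long>(width) * height * bitsPerTexel / 8;
    }

    int main() {
        int w = 1024, h = 1024;
        std::printf("uncompressed 32-bit: %lld bytes\n", textureBytes(w, h, 32)); // 4,194,304
        std::printf("DXT1 (4 bpp):        %lld bytes\n", textureBytes(w, h, 4));  //   524,288
        std::printf("DXT5 (8 bpp):        %lld bytes\n", textureBytes(w, h, 8));  // 1,048,576
        return 0;
    }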
3D Card Memory Usage

     Most 3D cards these days handle 32-bit color, which is 8 bits for red, 8 for blue, 8 for green, and 8 for
    transparency of any given pixel.

    That's 256 shades of red, blue, and green in combination, which allows for 16.7 million colors-- that's
    pretty much all the colors you and I are going to be able to see on a monitor.

What is the need for 64-bit color? If we can't see the difference, what's the point?
 Let's say we have a point on a model where several lights are falling, all of different colors. We take
 the original color of the model and then apply one light to it, which changes the color value. Then we
 apply another light, which changes it further. The problem here is that with only 8 bits to play with,
 after applying 4 lights, the 8 bits just aren't enough to give us a good resolution and representation of
 the final color.

Card Memory v/s Texture Memory (Why is card memory too important?)
Running your game using a 32-bit screen at 1280x1024 with a 32-bit Z-buffer means:
    1280x1024 = 1,310,720 pixels
    1,310,720 pixels x 8 bytes = 10,485,760 bytes (4 bytes per pixel for the screen, 4 bytes per pixel for the Z-
    buffer)
    1280x1024x12 (front buffer + back buffer + Z-buffer) = 15,728,640 bytes, or 15MB (spelled out in the snippet below).
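The same arithmetic, written out as a small C++ snippet; the 12-bytes-per-pixel figure is front buffer + back buffer + Z-buffer at 4 bytes each.

    #include <cstdio>

    int main() {
        const long long width = 1280, height = 1024;
        const long long pixels = width * height;      // 1,310,720 pixels

        const long long front = pixels * 4;           // 32-bit front buffer
        const long long back  = pixels * 4;           // 32-bit back buffer
        const long long zbuf  = pixels * 4;           // 32-bit Z-buffer

        std::printf("front + z        = %lld bytes\n", front + zbuf);            // 10,485,760
        std::printf("front + back + z = %lld bytes (~%lld MB)\n",
                    front + back + zbuf, (front + back + zbuf) / (1024 * 1024)); // 15,728,640, ~15 MB
        return 0;
    }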

Obviously if you drop the textures down to 16-bit instead of 32-bit, you could push twice as many textures at lower
  color resolution across AGP. Also, if you ran at a lower color resolution per pixel, then more memory is
  available on the card for keeping often used textures around (caching textures). But you can never
  actually predict how users will set up their system. If they have a card that runs at high resolutions and
  color depths, then chances are they'll set their cards that way.
Physics Engine
A physics engine is a computer program that simulates Newtonian physics models, using variables such
  as mass, velocity, friction and wind resistance. It can simulate and predict effects under different
  conditions that would approximate what happens in real life or in a fantasy world.
Physics engines have two core components:

  Collision detection system,

  Physics simulation component responsible for solving the forces affecting the simulated objects.

There are three major paradigms for physics simulation:
  * Penalty methods, where interactions are commonly modeled as mass-spring systems. This type of
    engine is popular for deformable, or soft-body physics.
  * Constraint based methods, where constraint equations are solved that estimate physical laws.
  * Impulse based methods, where impulses are applied to object interactions.




       [Figures: soft-body physics, constraint-based physics, impulse-based interaction]
Physics in Games
Implementation of Physics
    
        Character animation and collision
    
        Bullet projectiles and similar systems
    
        Water/Particle simulation
    
        Cloth/Fur simulation

3D objects in a game are represented by two separate meshes or shapes:
    
        Base mesh-highly detailed
    
        Highly simplified mesh to represent the base mesh, known as Collision Mesh



    Collision meshes are used to simplify physics calculations and speed up the FPS.
    This may be a bounding box, sphere, or convex hull.

    Generally a bounding box is used for broad phase collision detection to narrow down the number of possible
    collisions before costly mesh on mesh collision detection is done in the narrow phase of collision detection.
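A typical broad-phase check sketched in C++: an axis-aligned bounding box overlap test used to reject pairs before any expensive mesh-on-mesh (narrow-phase) test; the AABB representation is an assumption.

    struct Vec3 { float x, y, z; };

    // Axis-aligned bounding box: min and max corners in world space.
    struct AABB { Vec3 min, max; };

    // Two AABBs overlap only if their intervals overlap on all three axes.
    // Only pairs that pass this test are handed to the narrow-phase mesh test.
    bool overlaps(const AABB& a, const AABB& b) {
        return a.min.x <= b.max.x && a.max.x >= b.min.x &&
               a.min.y <= b.max.y && a.max.y >= b.min.y &&
               a.min.z <= b.max.z && a.max.z >= b.min.z;
    }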


Constraints in a Physics Engine

    Numerical precision used to represent the position of an object and the forces acting on that object

    Framerate, or the number of moments in time per second when physics is calculated
In-game Effects
FOG
Based on the camera distance, far away objects become more and more 'milky', meaning they get
  brighter or darker with increasing distance. This is known as depth cueing.

Particle System
Physically correct particle systems (PS) are designed to add essential effects to the virtual world,
  such as explosions, heat shimmer, waterfalls, etc.
Particle systems are limited by:
  
     the fill rate of the graphics hardware (GPU), or
  
     the transfer bandwidth of particle data from the simulation on the CPU to the rendering on the GPU

Two ways to simulate particles:

  Stateless PS- require a particle's data to be computed from its birth to its death by a closed-form
  function which is defined by a set of start values and the current time.

    State-preserving PS- Allow using numerical, iterative integration methods to compute the particle
    data from previous values and a changing environmental description (e.g. moving collider objects).
Simulation and Rendering Algorithm
The algorithm consists of six basic steps (a minimal sketch follows the list):
  1. Process birth and death
  2. Update velocities
  3. Update positions
  4. Sort for alpha blending (optional)
  5. Transfer texture data to vertex data
  6. Render particles
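The six steps above, sketched as a state-preserving CPU particle update in C++; the particle fields, the gravity constant, and the lifetime handling are illustrative assumptions.

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct Particle {
        Vec3  position{};
        Vec3  velocity{};
        float age = 0.0f;
        float lifetime = 2.0f;   // seconds
    };

    // One simulation step: birth/death handling, velocity update (gravity), position update (Euler).
    // Sorting for alpha blending and handing the data to the renderer would follow.
    void updateParticles(std::vector<Particle>& particles, float dt) {
        const Vec3 gravity{0.0f, -9.8f, 0.0f};

        // 1. Process birth and death (here: just remove dead particles).
        for (std::size_t i = 0; i < particles.size(); ) {
            if (particles[i].age >= particles[i].lifetime) {
                particles[i] = particles.back();
                particles.pop_back();
            } else {
                ++i;
            }
        }

        for (Particle& p : particles) {
            // 2. Update velocities.
            p.velocity.x += gravity.x * dt;
            p.velocity.y += gravity.y * dt;
            p.velocity.z += gravity.z * dt;
            // 3. Update positions.
            p.position.x += p.velocity.x * dt;
            p.position.y += p.velocity.y * dt;
            p.position.z += p.velocity.z * dt;
            p.age += dt;
        }
        // 4.-6. Sort for alpha blending, transfer to vertex data, render (renderer-specific).
    }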
Sound System in Games
Sound and music in games have become increasingly important in recent years, due both to advances in
  the game genres people are playing and to advances in technology.

In the previous generation of consoles, memory was sparse enough to impact the sample rates, or
   ‘perceived quality’, of the audio content. Now, with advancements in audio compression,
   sample rates can rival those of a film soundtrack on the PS3 and Xbox 360.

To add to this, with the limited RAM sound memory of previous-generation consoles, the amount of
  sounds that could physically be loaded and played at any one time was strictly limited. It is also now
  the case that more sounds can be played simultaneously than they could on previous generation
  consoles (more voices available) and around ten times more sounds can be loaded into RAM.

Music and sounds in a game can be :

  2D sound - mono channel converted to stereo channel and passed on to both the speakers (Left and
  Right)

  3D sound- Separate sound samples for both the channels (Left and Right) and can vary in intensity

  3D surround sound- Having 4 or more speakers attached to the game and the intensity/pitch vary
  according to the placement of sound object and the camera distance.

Audio Design Document

   OUTLINE/OBJECTIVES
It should contain a statement or two describing the goals and purpose of the audio for the game. It should
   also describe the audio's proposed style and execution, as compared to the setting of the game.
Sound System in Games

  RESEARCH
This section can prove to be valuable in looking back at past experiments on the project. Items that can be
  included in this section are file formats tested and used, in-game audio experiments, and especially
  any proprietary audio research.


   IMPLEMENTATION
Define a set of rules: permutations, and boundaries (limits) as to how the audio works on a more detailed
  level. The subsections of this section should include at least A. Sound and B. Music Implementation.
There should never be more than (8) unique sound effects playing at once, not including dialogue or
  music.

Eg. of a Sound Engine:
Layer0- An ambient sound effect layer.
Layer1- A 3D sound engine, where monaural sounds can travel through a four-speaker system, complete
          with a parameter list defining each sound's path, traveling speed, special effects, and so on.
Layer2- Ambient sounds generated from terrain and buildings, which play only occasionally, and are
          randomly selected.
Layer3-Civilian, animal, monster, and all combat-related sounds are played here, including disaster
         sounds, user interface sounds, and so on.
Layer4- Interactive music categories based on game events
Layer5- All narration and in-game dialogue for all characters.
Sound System in Games

   CONTENT LIST
In any case, it's a good idea to include a general outline of content, well before there is enough detail to
   have an actual, formal list. Here is a generic example:

A. SOUND DESIGN
    1. Action sounds
      a. Explosions: 5-10, varying from small to large
      b. Weapons: unknown, will know before 4/20/01 -- possibly 30-50 unique sounds for 15-25 weapons
      c. Characters
        i. human -- military: 5 unit types, 3-5 sounds each
        ii. human -- civilian: unknown
        iii. alien -- misc.: unknown
    2. User interface

B. MUSIC
    1. Setup mode
    2. Mission panel
    3. In-game music
    4. win/lose music

C. DIALOGUE
    1. in-game characters
    2. narration

D. ADDITIONAL AUDIO-FOR-VIDEO CONTENT (marketing promotions, in-game animations)
Sound System in Games



  SCHEDULE
Roughly scheduling the task to be done including:
    
      Setting milestones
    
      Who will do what
    
      Simple deadlines
    
      What's done and what's not


Ultimately, your game's audio design should translate into a rewarding, interactive experience, one that
  blends effortlessly into the gameplay, graphics, and other components of the product. The real trick of
  course, is how you specify it in your audio design document.
Coding Sound

    Voice usage and Memory utilization issues are a sore point between the composer and coder.

    The composer often has no way to accurately know how many voices are being used and the coder is
    bitter that the composer is wasting voices and increasing overhead on the audio driver.

     The composer needs to be aware of the limits of the materials they use to create the audio and the
    coder needs to present all necessary information clearly and not hinder the creative process.

Voice Usage
Some methods to reduce voice utilization are volume culling, sound sphere radius reduction,
voice stealing by priority, instance capping and sub-mixing.

1) Volume Culling
Volume culling involves shutting down a voice when its volume reaches a certain threshold near zero. Done correctly, this
  has the desirable side effect of clearing up the mix and reducing processor overhead. Done incorrectly, it can introduce
  voice “popping” where it is easy to hear when voices get culled and possible thrashing when a voice hovers around the
  threshold and is repeatedly stopped and started due to the culling algorithm. To reduce the possibility of clicks, the
  voice is enveloped before it is stopped.

2) Sound Sphere Reduction
Sound sphere radius reduction should be simple given individual control over 3D sound volumes. The composer should
  be able to reduce various groups and individual 3D sound sources in the world building tool.
Coding Sound

3) Voice Stealing
Voice stealing by priority requires a driver which supports this functionality and some pre-planning on the side of the
  composer. This works by defining good priority values for each sample and allowing the sound driver to decide how to
  steal a voice if none are available to play a new voice.

4) Instance Capping
Instance capping causes the sound driver to steal the voice of the oldest instance of a sample group when the maximum
   number of instances is reached. An example is in button sounds, where the number of instances is set such that it
   doesn’t sound like menu sounds are being cut off as well as not allowing the user to trigger enough voices to distort
   the game platform’s audio output.

5) Sub-Mixing
Sub-mixing is likely the easiest. If two samples are commonly used together and rarely separately then the composer can
  mix the two sounds together, save on memory as well as reduce the number of voices used.

6) “Bang for the Buck”
Obviously, common sense must also be used when acting upon the results of the table, since a large, rarely used sample
  doesn’t automatically mean it is unimportant and can be thrown away! However, all samples and their associated
  memory sizes should be periodically re-evaluated to justify their weighting. An additional offline tool might allow
  tweaking memory use parameters in real-time. This way, the composer can interactively change the sampling rate,
  sample length and other parameters to see how optimizing would affect the resulting memory map and check the mix
  with the results.
Game Networking

    Game Networking exists where multiple people can play the same game and interact with each other on their own system.


    Client/server is where effectively one machine is running the game, and everyone else is just a terminal that accepts input from
    the player, and renders whatever the server tells it to render.


    The advantage of client/server is that every machine will be showing the same game, since all processing is done in one place, not
    across multiple machines where you can get out of sync with each other. The drawback is that the server itself needs to have some
    serious CPU time available for the processing of each of the connected clients, as well as a decent network connection to ensure
    each client receives its updates in a timely fashion.

What are TCP/IP, UDP/IP?

    TCP/IP and UDP/IP are two levels of communication protocol systems.
    The IP figures out the transmission of packets of data to and from the Internet.
    UDP or TCP hands it a big fat packet of data, and IP splits it up into sub packets,
    puts an envelope around each packet, and figures out the IP address of its
    destination, and how it should get there, and then sends the packet out to your ISP
    (Internet Service provider like BSNL), or however you are connected to the Net.


    It's effectively like writing down what you want to send on a postcard, stamping
    it, addressing it, and stuffing it in a mailbox, and off it goes.


    UDP and TCP are higher layers that accept the packet of data from you the coder,
    or the game, and decide what to do with it.


    The difference between UDP and TCP is that TCP guarantees delivery of the packets, in order, and UDP doesn't.
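A minimal UDP send in C++ using POSIX sockets, matching the "stuff it in a mailbox" description above: no connection, no ordering or delivery guarantee. The address, port, and payload are placeholders, and on Windows the Winsock equivalents would apply.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // UDP: a connectionless datagram socket. Nothing guarantees the packet arrives or in what order.
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { std::perror("socket"); return 1; }

        sockaddr_in dest{};
        dest.sin_family = AF_INET;
        dest.sin_port   = htons(27015);                     // placeholder game port
        inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);    // placeholder server address

        const char payload[] = "player_input:forward";      // keep packets small (avoid packet bloating)
        ssize_t sent = sendto(sock, payload, sizeof(payload), 0,
                              reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
        if (sent < 0) std::perror("sendto");

        close(sock);
        return 0;
    }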
Game Networking

Problem with TCP/IP

  In order to be sure that packets that are sent via the Internet get there intact, TCP expects an Acknowledgment to
  be sent back from the destination for every packet it sends. If it doesn't get an ACK within a certain time, then it
  holds up sending any new packets, resends the one that was lost, and will continue to do so until the destination
  responds.


    This is actually such a problem that almost no games use TCP as their main Internet protocol of choice, unless
    it's not a real-time action game. Most games use UDP--they can't guarantee order or delivery, but it sure is fast -
    or at least faster than TCP/IP usually ends up.


Point to Remember: Network programing

    Use of Protocol- TCP/IP is a strict no-no because of the obvious issues discussed above.

    Packet Bloating- Be careful to transmit only the data that is required. The larger the packet you give to the UDP
    system, the more you are asking the network to handle.

    Packet Frequency- Are you expecting packets to be sent faster than the communications infrastructure can
    really handle?

    Handling Packets- If you don't handle it right, you end up with missing events, missing entities, missing
    effects, and sometimes a completely broken game.
Scripting System

Scripting

    Allows game objects to execute code that is not compiled as part of the object.

    Gives objects the ability to react to the world around them.

    Scripting is where you take complete control over a given scene, setting up events that the player almost always
    has no control over, where the gamer is "on a rail" in order to move through to a given plot point, or set up a
    situation the player needs to resolve.


    There are a variety of scripting languages currently being implemented in games, such as Lua and Python.

Type of scripting system

  The first is the simple text-based, single-threaded style, just like we programmers are used to coding.


    Then there's the complicated stuff--allowing multiple threads, and actually allowing variable situations. Variable
    situations are those where you don't actually know for sure who's around when the script starts, but you have to
    write the script in such a way that it will work with whoever is around.
Artificial Intelligence
Artificial Intelligence
Game artificial intelligence refers to techniques used in computer and video games to produce the illusion
  of intelligence in the behaviour of non-player characters (NPCs).


Use of A.I.
   . Control of any NPCs in the game. A.I. can provide functionality in opponents/NPCs such as:
        Predicting Player Behavior
        Reaction based on Players action
        Taking logical decision on their own
        Reinforce learning through Knowledge Base

    Pathfinding is another common use for AI, widely seen in real-time strategy games; a minimal sketch
    appears at the end of this section.
    [Pathfinding is the method for determining how to get an NPC from one point on a map to another,
    taking into consideration the terrain, obstacles and possibly "fog of war". ]

   Game AI is also involved with dynamic game balancing.

Emergent A.I.
   The AI method where the opponents are able to "learn" from actions taken by the player and their
    behavior is modified accordingly.
   While these choices are taken from a limited pool, it does often give the desired illusion of an
    intelligence on the other side of the screen.
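A minimal breadth-first grid search in C++, as a stand-in for the pathfinding use described above (real games typically use A* with terrain costs); the grid format (0 = walkable, 1 = obstacle) and the (x, y) pair convention are assumptions.

    #include <queue>
    #include <utility>
    #include <vector>

    // Breadth-first search on a 2D grid: returns the number of steps from start to goal,
    // or -1 if no path exists. 0 = walkable cell, 1 = obstacle. Coordinates are (x, y).
    int pathLength(const std::vector<std::vector<int>>& grid,
                   std::pair<int,int> start, std::pair<int,int> goal) {
        const int h = static_cast<int>(grid.size());
        const int w = h ? static_cast<int>(grid[0].size()) : 0;
        std::vector<std::vector<int>> dist(h, std::vector<int>(w, -1));

        std::queue<std::pair<int,int>> open;
        dist[start.second][start.first] = 0;
        open.push(start);

        const int dx[4] = {1, -1, 0, 0};
        const int dy[4] = {0, 0, 1, -1};

        while (!open.empty()) {
            auto [x, y] = open.front();
            open.pop();
            if (x == goal.first && y == goal.second) return dist[y][x];
            for (int i = 0; i < 4; ++i) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                    grid[ny][nx] == 0 && dist[ny][nx] == -1) {
                    dist[ny][nx] = dist[y][x] + 1;   // one step further from the start
                    open.push({nx, ny});
                }
            }
        }
        return -1;   // goal unreachable
    }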
User Interface
User Interface (U.I.)
  In general it is the means by which the users (gamers) interact with the system - a particular machine,
  device, computer program or other complex tools. The user interface provides means
  of:
  * Input, allowing the users to manipulate a system
  * Output, allowing the system to produce the effects of the users' manipulation.

   The UI is a medium between the Core Game Mechanics and the gamers

User Interface Defines Game Play
Game play is a vague concept and hard to describe. One possible way to define it is by the way gamers interact
  with games; that is, through the UI and the interactions associated with it.

By doing that, we will be able to analyze game play, not as an abstract concept, but as something concrete that can be
  described, measured, and studied.

   The UI not only helps gamers play games successfully, but also controls the pace at which the internal
    mechanisms of the game are revealed to gamers.
User Interface

The form factors of the hardware, especially the I/O devices, greatly affect the design of UI.


User Interfaces of PC and Console Games
    PC games heavily rely on the combination of keyboard and mouse. Whereas console games have
    controllers, which are more refined, although limited, input devices.

   Most gamers realize that they are interacting with some kinds of user interfaces when they are playing
    PC games. But gamers playing console games sometimes don’t realize that they are interacting with
    the UIs. UIs of console games are hidden deeply in the careful calculation of the usage of controllers,
    which is not obvious from the screen.

   We can call the UIs of PC games software-oriented interfaces, which means they utilize lots of
    conventional GUI elements to represent actions. The UIs of console games, on the other hand, are
    hardware-oriented interfaces, which means they are designed around the form factors of controllers
    and use relatively fewer GUI elements.
User Interface

Desirable Properties of User Interfaces for Games
    Attractiveness, enjoyability.
       E.g. quality of graphics, sound, animation, etc.
       Strong influence on marketability.

    Usability
      How easy is it to learn?
          Can be learned quickly and easily, either for the long term, or for immediate, "walk up and
           use" purposes
      How easy is it to use?
        Can be used to accomplish work rapidly, accurately, and without undue effort

   If a system is difficult to learn or to use, customers are likely to be dissatisfied eventually,
    even if it is a market success at first.
UI Design Guidelines

   User Interface Shouldn’t Be Distracting
       UI should have aesthetic taste, but more importantly, it should be simple,
    efficient, easy to use, and consistent with the whole game environment.

   User Interface Should Provide Enough Visual Affordance
       A good UI should use its visual appearance to suggest its functions. Gamers
    should be able to understand and act easily by just looking at the UI.
   User Interface Should Be Balanced
       The visual elements (buttons, labels) in a UI should be properly arranged,
    sized, and aligned to reinforce logical relationships among them and ensure
    the UI is stable on the screen.

   User Interface Should Be Transparent
       A good UI should be transparent. Gamers should be able to forget about the
    menus, buttons, icons, and windows and immerse into the virtual world that the
    game creates. In that sense, the UI designer's job is to design UIs that are transparent
    – UIs that can be used so naturally by gamers that nobody would even
    notice their existence.
End Of Presentation


                      SHARAD MITRA
                      Sr. Technical Artist
                      Exigent Studios

HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
 
Visibility Optimization for Games
Visibility Optimization for GamesVisibility Optimization for Games
Visibility Optimization for Games
 
Visibility Optimization for Games
Visibility Optimization for GamesVisibility Optimization for Games
Visibility Optimization for Games
 
"High-resolution 3D Reconstruction on a Mobile Processor," a Presentation fro...
"High-resolution 3D Reconstruction on a Mobile Processor," a Presentation fro..."High-resolution 3D Reconstruction on a Mobile Processor," a Presentation fro...
"High-resolution 3D Reconstruction on a Mobile Processor," a Presentation fro...
 
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The BasicsHA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics
 
Introduction occlusion
Introduction occlusionIntroduction occlusion
Introduction occlusion
 
1604.08848v1
1604.08848v11604.08848v1
1604.08848v1
 
Advanced Game Development with the Mobile 3D Graphics API
Advanced Game Development with the Mobile 3D Graphics APIAdvanced Game Development with the Mobile 3D Graphics API
Advanced Game Development with the Mobile 3D Graphics API
 
3 d video streaming for virtual exploration of planet surface
3 d video streaming for virtual exploration of planet surface3 d video streaming for virtual exploration of planet surface
3 d video streaming for virtual exploration of planet surface
 
3 d graphics with opengl part 2
3 d graphics with opengl  part 23 d graphics with opengl  part 2
3 d graphics with opengl part 2
 
Point cloud mesh-investigation_report-lihang
Point cloud mesh-investigation_report-lihangPoint cloud mesh-investigation_report-lihang
Point cloud mesh-investigation_report-lihang
 
Overview of Graphics System
Overview of Graphics SystemOverview of Graphics System
Overview of Graphics System
 
unit1_updated.pptx
unit1_updated.pptxunit1_updated.pptx
unit1_updated.pptx
 
Gpu presentation
Gpu presentationGpu presentation
Gpu presentation
 

Game Engine Overview

Rendering Workflow

Because there are so many calculations to be done and volumes of data to be handled, the entire process is broken down into component steps, sometimes called stages.

One of the art forms in 3D graphics is to elegantly reduce visual detail in a scene so as to gain better performance, but to do it in such a way that the viewer doesn't notice the loss of quality.

With the number of steps involved and their complexity, the ordering of these stages of the pipeline can vary between implementations.

3D Pipeline - High-Level Overview

1. Application/Scene
   * Scene/geometry database traversal
   * Movement of objects, and aiming and movement of the view camera
   * Animated movement of object models
   * Description of the contents of the 3D world
   * Object visibility check, including possible occlusion culling
   * Select Level of Detail (LOD)
Rendering Workflow

2. Geometry
   * Transforms (rotation, translation, scaling)
   * Transform from Model Space to World Space (Direct3D)
   * Transform from World Space to View Space
   * View Projection
   * Trivial Accept/Reject Culling
   * Back-Face Culling (can also be done later in Screen Space)
   * Lighting
   * Perspective Divide - Transform to Clip Space
   * Clipping
   * Transform to Screen Space

3. Triangle Setup
   * Back-Face Culling (or can be done in View Space before lighting)
   * Slope/Delta Calculations
   * Scan-Line Conversion

4. Rendering / Rasterization
   * Shading
   * Texturing
   * Fog
   * Alpha Translucency Tests
   * Depth Buffering
   * Antialiasing (optional)
   * Display
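As a rough illustration of the transform chain in the geometry stage above, the sketch below (plain C++, not from the original presentation) pushes a single vertex from model space through world, view, and clip space and then applies the perspective divide. The minimal matrix type, the identity placeholder matrices, and all names are assumptions for illustration only; a real engine would build these matrices from object placement, the camera, and the projection parameters.

```cpp
#include <array>
#include <cstdio>

// Minimal 4x4 matrix and 4D vector; column-major storage (m[column][row]).
struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<std::array<float, 4>, 4>;

Vec4 transform(const Mat4& m, const Vec4& v) {
    return {
        m[0][0]*v.x + m[1][0]*v.y + m[2][0]*v.z + m[3][0]*v.w,
        m[0][1]*v.x + m[1][1]*v.y + m[2][1]*v.z + m[3][1]*v.w,
        m[0][2]*v.x + m[1][2]*v.y + m[2][2]*v.z + m[3][2]*v.w,
        m[0][3]*v.x + m[1][3]*v.y + m[2][3]*v.z + m[3][3]*v.w,
    };
}

int main() {
    // Identity placeholders standing in for real model/view/projection matrices.
    Mat4 model{}, view{}, proj{};
    for (int i = 0; i < 4; ++i) { model[i][i] = view[i][i] = proj[i][i] = 1.0f; }

    Vec4 v{1.0f, 2.0f, 3.0f, 1.0f};          // vertex in model space
    Vec4 world = transform(model, v);         // model space  -> world space
    Vec4 eye   = transform(view,  world);     // world space  -> view space
    Vec4 clip  = transform(proj,  eye);       // view space   -> clip space

    // Perspective divide: clip space -> normalized device coordinates.
    Vec4 ndc{clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f};
    std::printf("NDC: %f %f %f\n", ndc.x, ndc.y, ndc.z);
}
```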
Coordinate System

Working with Space
In a 3D rendering system, multiple Cartesian coordinate systems (x-axis (left/right), y-axis (up/down) and z-axis (near/far)) are used at different stages of the pipeline. While used for different though related purposes, each coordinate system provides a precise mathematical method to locate and represent objects in space, and each of these coordinate systems is referred to as a "space."

Model Space: each model is in its own coordinate system, whose origin is some point on the model.

World Space: models are placed in the actual 3D world, in a unified world coordinate system.

View Space (also called Camera Space): the view camera is positioned by the application (through the graphics API) at some point in the 3D world coordinate system. The view volume is created by a projection which, as the name suggests, "projects the scene" in front of the camera. In this sense it is a kind of role reversal: the camera becomes a projector, and the scene's view volume is defined in relation to the camera.
Coordinate System

Deeper into Space

Clip Space: similar to View Space, but the frustum is now "squished" into a unit cube, with the x and y coordinates normalized to a range between -1 and 1 and z between 0 and 1, which simplifies clipping calculations.

Screen Space: the 3D image is converted into x and y 2D screen coordinates for 2D display. The z coordinates are still retained by the graphics system for depth/Z-buffering and back-face culling before the final render.
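A minimal sketch of that last mapping from normalized device coordinates into screen space, assuming the Direct3D-style ranges described above (x and y in -1..1, z in 0..1) and a hypothetical viewport size; the y flip reflects the usual top-left screen origin. This is illustrative code, not from the original slides.

```cpp
#include <cstdio>

struct ScreenPos { float x, y, depth; };

// Map a post-perspective-divide vertex (NDC) to pixel coordinates.
// Assumes x,y in [-1,1], z in [0,1], and a top-left screen origin.
ScreenPos toScreen(float ndcX, float ndcY, float ndcZ, int width, int height) {
    ScreenPos p;
    p.x = (ndcX * 0.5f + 0.5f) * static_cast<float>(width);
    p.y = (1.0f - (ndcY * 0.5f + 0.5f)) * static_cast<float>(height); // flip y
    p.depth = ndcZ;  // kept for the Z-buffer
    return p;
}

int main() {
    ScreenPos p = toScreen(0.25f, -0.5f, 0.75f, 1280, 1024);
    std::printf("pixel (%.1f, %.1f) depth %.2f\n", p.x, p.y, p.depth);
}
```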
3D Pipeline Data Flow

It is useful to note that most operations in the application/scene stage and the early geometry stage of the pipeline are done per vertex, whereas culling and clipping are done per triangle, and rendering operations are done per pixel.

Another advantage of pipelining is that, because no data is passed from one vertex to another in the geometry stage or from one pixel to another in the rendering stage, chipmakers have been able to implement multiple pixel pipes and gain considerable performance boosts through parallel processing of these independent entities.

Stage 1. Application/Scene
   * The renderer traverses the geometry database to gather the object information (object movement, animated movement, and aiming and movement of the camera) that is going to change in the next frame of animation.
   * "Occlusion culling" is a visibility test that determines whether an object is partially or completely occluded (covered) by some object in front of it. If it is, the occluded object, or the part of it that is occluded, is discarded.
Culling Overview

Visibility culling algorithms reduce the number of polygons sent down the rendering pipeline based on the simple principle that if something is not seen, it does not have to be drawn.

The simplest approach to culling is to divide the world up into sections, with each section having a list of other sections that can be seen, so that you only display what can possibly be seen from any given point. How you create the list of potentially visible sections is the tricky bit. There are many ways to do this, using BSP trees, portals and so on.

View Frustum Culling
The view volume is usually defined by six planes, namely the front, back, left, right, top, and bottom clipping planes, which together form a truncated pyramid (the frustum). The front and back clipping planes may be defined to lie at the viewer's position and at infinity, respectively. If a polygon is entirely outside the pyramid, it cannot be visible and can be discarded. If it is partially inside, it is clipped against the planes so that its outside parts are removed.
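A minimal sketch of view frustum culling applied to a bounding sphere, assuming the six planes are stored in normalized (a, b, c, d) form with their normals pointing into the view volume; the types and names are illustrative, not from the original presentation.

```cpp
#include <array>

struct Plane  { float a, b, c, d; };          // normalized: a*x + b*y + c*z + d = 0
struct Sphere { float x, y, z, radius; };

enum class CullResult { Outside, Intersecting, Inside };

// Test a bounding sphere against the six frustum planes.
// Plane normals are assumed to point into the view volume.
CullResult testSphere(const std::array<Plane, 6>& frustum, const Sphere& s) {
    bool intersecting = false;
    for (const Plane& p : frustum) {
        float dist = p.a * s.x + p.b * s.y + p.c * s.z + p.d;  // signed distance
        if (dist < -s.radius) return CullResult::Outside;       // fully behind one plane
        if (dist <  s.radius) intersecting = true;              // straddles this plane
    }
    return intersecting ? CullResult::Intersecting : CullResult::Inside;
}
```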
Back-Face Culling
This primitive form of culling is based on the observation that if all objects in the world are closed, then the polygons which don't face the viewer cannot be seen. This translates directly to the angle between the direction the viewer is facing and the normal of the polygon: if that angle is less than 90 degrees (the normal points away from the viewer), the polygon can be discarded.

Cell-Based Occlusion Culling
Cell-based occlusion culling methods are based on the assumption that the game world can be divided into cells which are connected to each other through portals. Clearly, if a portal is not seen from a given point of view, then none of the cells behind the portal can be seen, and they can be culled away. There are two dominant forms of cell-based engines in use today: BSP and "portal" engines.

Binary Space Partitioning (BSP): space is split by a plane into two half-spaces, which are again recursively split. This can be used to force a strict back-to-front drawing order.
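A small illustrative back-face culling test, assuming counter-clockwise winding for front faces; instead of the abstract "view direction" in the description above, it uses the vector from the camera position to the triangle, which amounts to the same dot-product sign test and is also valid for perspective views. Names and conventions are assumptions, not the presentation's own code.

```cpp
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3  cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Returns true if the triangle (v0, v1, v2, counter-clockwise front face)
// faces away from the camera and can be discarded.
bool isBackFacing(const Vec3& v0, const Vec3& v1, const Vec3& v2,
                  const Vec3& cameraPos) {
    Vec3 normal     = cross(sub(v1, v0), sub(v2, v0));  // outward face normal
    Vec3 toTriangle = sub(v0, cameraPos);               // camera -> triangle
    return dot(normal, toTriangle) > 0.0f;              // normal points away
}
```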
3D Pipeline Data Flow

Level of Detail (LOD) involves decreasing the complexity of a 3D object's representation as it moves away from the viewer, or according to other metrics such as object importance, eye-space speed or position.

The object's distance to the view camera (the statue in this example) dictates which LOD level gets used. If it is very near, the highest-resolution LOD is used; if it is just barely visible and far from the view camera, the lowest-resolution LOD model is used; and for distances in between, the intermediate LOD levels are used.

Level of detail techniques increase rendering efficiency by decreasing the workload on graphics pipeline stages, usually vertex transformations.
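A minimal sketch of distance-based LOD selection as described above; the level table, distance thresholds and mesh handles are hypothetical.

```cpp
#include <cstddef>
#include <vector>

struct LodLevel {
    float maxDistance;   // use this level while the object is closer than this
    int   meshId;        // handle to the mesh resolution for this level
};

// Pick an LOD level from a list sorted nearest (highest detail) to farthest
// (lowest detail); falls back to the coarsest level beyond the last range.
std::size_t selectLod(const std::vector<LodLevel>& levels, float distanceToCamera) {
    for (std::size_t i = 0; i < levels.size(); ++i) {
        if (distanceToCamera < levels[i].maxDistance) return i;
    }
    return levels.empty() ? 0 : levels.size() - 1;
}
```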
Geometry

Stage 2. Geometry
   * Objects get moved from frame to frame to create the illusion of movement. In a 3D world, objects are moved or manipulated using operations broadly referred to as transforms:
        Translation
        Rotation
        Scaling
   * Space to space: for the final rendering of models/geometry, the coordinates are transformed from object space to world space and then to view space. After the transform to view space, many interesting things begin to happen.

Trivial Matters

Viewing Frustum
The view frustum is the region of space in the modeled world that may appear on the screen. The first step in reducing the working set of triangles to be processed (rendered) is to cull those that are completely outside of the view volume; this is known as view frustum culling. The next operation is back-face culling (BFC), which, as the name suggests, discards triangles whose surfaces face away from the view camera.
Bounding Volumes

Getting Clipped and Gaining Perspective
Clipping is the operation that discards only the parts of triangles that partially or fully fall outside the view volume (the camera's field of view). A good clipping strategy is important in the development of video games in order to maximize the game's frame rate and visual quality.

Bounding Volume Hierarchies (BVHs)
Useful for numerous tasks, including efficient culling and speeding up collision detection between objects. Examples of tests where bounding volumes are applied:
   * testing if a point is inside an object
   * testing an object for intersection with a line (ray)
   * testing if an object intersects a plane, or lies above/below it
   * testing an object for intersection with, and/or inclusion within, a volume
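Two of the tests listed above, sketched for an axis-aligned bounding box (one common bounding volume); this is illustrative code under assumed names, not taken from the presentation.

```cpp
struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

// Point-inside-volume test, as in the first bullet above.
bool containsPoint(const AABB& b, float x, float y, float z) {
    return x >= b.minX && x <= b.maxX &&
           y >= b.minY && y <= b.maxY &&
           z >= b.minZ && z <= b.maxZ;
}

// Volume-versus-volume overlap test, useful for broad-phase collision checks.
bool overlaps(const AABB& a, const AABB& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}
```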
Setting the Table

Stage 3. Triangle Setup
Think of triangle setup as the prelude to the rendering stage of the pipeline, because it "sets the table" for the rendering operations that follow.
   * First, the triangle setup operation computes the slope (steepness) of a triangle edge using the vertex information at each of the edge's two endpoints. (The slope is often called delta x/delta y, dx/dy, or Dx/Dy - literally, change in x over change in y.)
   * Using the slope information, an algorithm called a digital differential analyzer (DDA) can calculate the x values at which each triangle side (line segment) touches each scan line. What it really does is determine how much the x value of the pixel touched by a given triangle side changes per scan line, and increment x by that value on each subsequent scan line: for every step of one along the y-axis (one scan line), the edge's x value is incremented by Dx/Dy.
   * Color and depth values are interpolated for each pixel.
   * In addition, the texture coordinates are calculated for use during texture mapping.
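A minimal sketch of the DDA-style edge walk described above: x is advanced by the inverse slope once per scan line rather than recomputed from scratch. Coordinates and the downward y direction are illustrative assumptions.

```cpp
#include <cstdio>

// Walk one triangle edge from (x0, y0) down to (x1, y1), reporting the x
// coordinate where the edge crosses each scan line.
void walkEdge(float x0, int y0, float x1, int y1) {
    if (y1 <= y0) return;                       // assume y increases downward
    float dxPerScanline = (x1 - x0) / static_cast<float>(y1 - y0);
    float x = x0;
    for (int y = y0; y < y1; ++y) {
        std::printf("scanline %d: edge at x = %.2f\n", y, x);
        x += dxPerScanline;                     // incremental update, no per-line divide
    }
}

int main() { walkEdge(10.0f, 0, 30.0f, 10); }
```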
Rendering/Rasterization

Stage 4. Rendering/Rasterization

Lighting
Lighting is one of those things that, when it works, you don't notice it, but when it doesn't, you notice it all too much. It usually happens once the 3D scene has been transformed into view space. Geometric lighting is based upon very simplified lighting and reflection models, which often have little to do with how light behaves in the real world, but the net effect is deemed sufficient for the purposes of real-time 3D.

"Per-vertex" and "per-pixel" lighting
The advantage of per-pixel lighting is its granularity. This is especially true in low-triangle-count scenes with specular reflections, where the realism of per-vertex lighting can diminish considerably. The obvious downside to per-pixel lighting is its considerably larger computational workload.
Vertex Lighting
   * Determine how many polygons touch a vertex, take the mean of those polygons' orientations (their normals), and assign that averaged normal to the vertex.
   * Each vertex of a given polygon will point in a slightly different direction, so you wind up gradating or interpolating light colors across the polygon, in order to get smoother lighting.
   * Advantage: hardware can often accelerate this using hardware transform and lighting (T&L).
   * Drawback: it doesn't produce shadowing. For instance, both arms of a model will be lit the same way, even if the light is on the right side of the model and the left arm should be left in the shadow cast by the body.
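A minimal sketch of the first step above: building per-vertex normals by averaging the face normals of the triangles that touch each vertex. The mesh representation is a hypothetical indexed triangle list; summing un-normalized face normals (which weights larger triangles more heavily) is one common choice, not necessarily the presentation's.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Triangle { int i0, i1, i2; };   // indices into the vertex array

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return len > 0.0f ? Vec3{v.x/len, v.y/len, v.z/len} : v;
}

// Average the face normal of every triangle touching a vertex into that
// vertex's normal -- the "mean of the surrounding polygons" step above.
std::vector<Vec3> computeVertexNormals(const std::vector<Vec3>& positions,
                                       const std::vector<Triangle>& tris) {
    std::vector<Vec3> normals(positions.size());
    for (const Triangle& t : tris) {
        Vec3 faceNormal = cross(sub(positions[t.i1], positions[t.i0]),
                                sub(positions[t.i2], positions[t.i0]));
        const int idx[3] = {t.i0, t.i1, t.i2};
        for (int k = 0; k < 3; ++k) {
            normals[idx[k]].x += faceNormal.x;
            normals[idx[k]].y += faceNormal.y;
            normals[idx[k]].z += faceNormal.z;
        }
    }
    for (Vec3& n : normals) n = normalize(n);
    return normals;
}
```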
Lighting Thoughts

Per-Pixel Normal-Map-Based Lighting
   * Encode tangent-space normals for the surface in a texture, and compute the lighting equation at each pixel rather than at each vertex.
   * Object-space normal maps are also possible and are generally used to light dynamic objects.
   * There is also a variation on normal-map lighting called parallax mapping, which encodes an additional height-map value into the normal texture in order to simulate the parallax effect.

Performing normal-map lighting is a three-step approach:
   * The normal map is created, applied to the model, and exported with tangent-space information.
   * A tangent matrix must be created to transform all positional lighting information into tangent space. The tangent-space matrix is a 3x3 matrix made up of the vertex's tangent, binormal and normal vectors.
   * The color contribution of each light is calculated in the pixel shader, using the normal fetched from the normal map and the tangent-space lighting vectors computed from data transformed in the vertex shader.

Space
   * Normal maps are usually stored in one of two spaces: either model space, or the local tangent space of each triangle.
   * Normal maps specified in model space must generally store all three components of the normal, since all directions must be representable.
   * Normal maps in tangent space can be specified with only two components: since the tangent space of the triangle describes a hemispherical region, the third component can be derived in the pixel shader.
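The sketch below illustrates the core of the three-step approach in plain C++ standing in for the vertex/pixel shader pair: build the tangent-space transform from the vertex's tangent, binormal and normal (assumed orthonormal here), move the light direction into tangent space, and evaluate a diffuse term against the normal fetched from the map. All names are illustrative assumptions.

```cpp
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Rows of the 3x3 tangent-space matrix are the vertex's tangent, binormal and
// normal.  Multiplying a world-space vector by this matrix expresses it in
// tangent space, where it can be compared against a tangent-space normal map.
Vec3 worldToTangentSpace(const Vec3& v, const Vec3& tangent,
                         const Vec3& binormal, const Vec3& normal) {
    return { dot(v, tangent), dot(v, binormal), dot(v, normal) };
}

// Per-pixel diffuse term: the normal unpacked from the map (already remapped
// to the -1..1 range) dotted with the tangent-space light direction.
float diffuse(const Vec3& normalFromMap, const Vec3& lightDirTangentSpace) {
    float d = dot(normalFromMap, lightDirTangentSpace);
    return d > 0.0f ? d : 0.0f;
}
```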
Let There be Light!

Limitations of existing lighting models

Interpolated lighting (vertex lighting)
   * Because vertex lighting works by interpolating the colors attained at each vertex, a point light aimed at the middle of a simple two-triangle quad results in the quad being equally lit across its entire surface (or not lit at all if the point light doesn't reach the edges).
   * To get around this problem, the quad would have to be tessellated in order to achieve a falloff from the center to the edges. This is counterproductive for the art team, and is a problem that can easily be rectified using a per-pixel lighting approach. (Illustrated on the slide: a two-face model versus a tessellated model.)

Light count restrictions (normal-map lighting)
   * All lighting calculations must be performed in the same coordinate system.
   * The number of light sources that a surface can be lit by is limited to the number of registers the vertex shader can pass to the pixel shader.
Let There be Light!

Unified per-pixel lighting solution

Interpolate normals, not colors
   * Instead of calculating the color at the vertex, we simply transform the normal into world space and place it into a register alongside our emissive color, for treatment in the pixel shader.
   * If doing point lighting, we also need to send the world-space position of the vertex across to the shader so we can get an interpolated world-space position per pixel. We then perform the lighting calculation in the pixel shader much the same way that we did it in the vertex shader. (Illustrated on the slide: a point light on a low-poly, 4-vertex surface with per-pixel lighting.)

Perform normal-map lighting in world space
   * Instead of using a matrix to convert data into tangent space, we compute the inverse tangent-space matrix and multiply it with the world matrix. This allows us to transform the tangent-space normal into a world-space normal that we can then use to perform all of our lighting calculations.
   * If a scenario comes up where a set of vertices share a normal but not tangent information, shading seams will show up on the geometry.
Shadow Issues

Anatomy of a shadow
   * First, a per-vertex or per-pixel light source must exist.
   * Next, the light source must strike an object that casts a shadow, called the occluder.
   * Finally, there is the surface onto which the shadow is being cast, which is the receiver.

Shadows themselves have two parts:
   * umbra - the inner part of the shadow
   * penumbra - the outer, border portion of the shadow

The penumbra creates the difference between hard and soft shadows. With hard shadows, the shadow ends abruptly and looks unnatural, whereas with soft shadows the penumbra transitions from the color of the shadow (usually black) to the adjacent pixel color, creating a more realistic shadow.

Shadow techniques

Projected Shadows
   * Created by having a light source act as a projector, which then "projects" a shadow onto the receiver surface.
   * One downside to this method is that the receiver has to be a planar (flat) surface, or serious rendering errors can occur.
   * One speedup technique is to render the projected shadow polygon into a shadow texture, which can be applied to the receiver surface and subsequently reused, provided that neither the light source nor the occluder moves, as such shadows are not view- or scene-dependent.
Shadow Issues

Shadow Volumes
   * The stencil buffer is an area of video memory that contains one to eight bits of information about each pixel of the scene, and this information can be used to mask certain areas of the scene to create shadow effects.
   * Shadow volumes create a separate frustum, with the point light source placed at the top of the frustum and projecting into it. The intersection of the shadow frustum and the view frustum creates a volume inside the view frustum. Polygons that fall within this volume cast shadows upon receiver objects (of any shape) that lie along the direction of the light rays being cast from the shadow-generating object.
   * This technique can cast shadows on any object, rather than just on flat surfaces.
Shading Techniques - Made in the Shade

The rendering engine shades the models using various shading algorithms. These shading calculations range from fairly modest in their demands (Flat and Gouraud) to much more demanding (Phong).

Flat Shading: takes the color values from a triangle's three vertices and averages them. The average value is then used to shade the entire triangle. This method is very inexpensive in terms of computation, but its visual cost is that individual triangles are clearly visible, which disrupts the illusion of a single surface built from multiple triangles.

Gouraud Shading: takes the lighting values at each of a triangle's three vertices, then interpolates those values across the surface of the triangle. One of the main advantages of Gouraud is that it smooths out triangle edges on mesh surfaces, giving objects a more realistic appearance.

Phong Shading: interpolates the shading normals, which are stored at each vertex, across each pixel in the triangle, and evaluates the lighting per pixel. A shading normal (also called a vertex normal) is an average of the surface normals of its surrounding triangles.
Anti-Aliasing

Aliasing
Aliasing is the staircase effect at the edge of a line or area of color when it is displayed by an array of discrete pixels. Aliasing occurs because of an insufficient sampling rate.

Sampling rate: the number of samples per second (or per other unit) taken from a continuous signal to make a discrete signal.

Anti-aliasing
Smooths the staircase effect that occurs when diagonal or curved lines or borders are drawn on raster displays consisting of square or rectangular pixels.

Super-Sampling Techniques
   * Ordered Grid Super-Sampling (OGSS): sub-sample positions within a given pixel. The extra samples are positioned in an ordered grid shape; the sub-samples are aligned horizontally and vertically, creating a matrix of points located inside the original pixel in a regular pattern.
   * Jittered Grid Super-Sampling (JGSS): similar to ordered grid super-sampling in that extra samples are stored per pixel, but the difference between the two is the position of the sub-samples. The sub-sample grid is "jittered," or shifted, off the axis.
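A minimal sketch of the resolve step for ordered-grid super-sampling: the scene is rendered at n times the resolution in each direction, and each final pixel is the average of its n x n block of sub-samples. The data layout and names are assumptions for illustration.

```cpp
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

// Average each n x n block of sub-samples down to one output pixel.
std::vector<Color> resolveOGSS(const std::vector<Color>& samples,
                               int outWidth, int outHeight, int n) {
    std::vector<Color> out(static_cast<std::size_t>(outWidth) * outHeight);
    int srcWidth = outWidth * n;
    for (int y = 0; y < outHeight; ++y) {
        for (int x = 0; x < outWidth; ++x) {
            Color sum{0, 0, 0};
            for (int sy = 0; sy < n; ++sy) {
                for (int sx = 0; sx < n; ++sx) {
                    const Color& s = samples[(y * n + sy) * srcWidth + (x * n + sx)];
                    sum.r += s.r; sum.g += s.g; sum.b += s.b;
                }
            }
            float inv = 1.0f / static_cast<float>(n * n);
            out[static_cast<std::size_t>(y) * outWidth + x] =
                { sum.r * inv, sum.g * inv, sum.b * inv };
        }
    }
    return out;
}
```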
Overdraw Issue

While rendering a game scene, a pixel we have drawn may be overdrawn by a pixel that is closer to the camera, and this can happen several times before the closest pixel is finally determined once the entire scene has been drawn. A single screen pixel can therefore be filled several times each frame; this is known as the overdraw issue.

Possible solutions

1. Depth Complexity
   * Refers to the number of pixels that compete, via the depth test, to be written to a particular entry in the back buffer.
   * Depth complexity can be used for performance analysis and to indicate which pixels need to be rendered specially.

2. Z-Buffering
An algorithm used in 3D graphics to determine which objects, or parts of objects, are visible and which are hidden behind other objects. With Z-buffering, the graphics processor stores the Z-axis value of each pixel in a special area of memory called the Z-buffer. Different objects can have the same x- and y-coordinate values but different z-coordinate values. The object with the lowest z-coordinate value is in front of the other objects, and therefore that is the one that is displayed.
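A minimal sketch of the Z-buffer test described above, using the convention that smaller z means closer to the camera; fragments that fail the test are exactly the overdraw that never reaches the screen. The structure and names are illustrative.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct DepthBuffer {
    int width, height;
    std::vector<float> depth;   // one depth value per pixel

    DepthBuffer(int w, int h)
        : width(w), height(h),
          depth(static_cast<std::size_t>(w) * h,
                std::numeric_limits<float>::max()) {}

    // Returns true if the fragment at (x, y) is closer than anything drawn
    // there so far; if so, its depth is recorded and its color should be written.
    bool testAndWrite(int x, int y, float z) {
        float& stored = depth[static_cast<std::size_t>(y) * width + x];
        if (z < stored) { stored = z; return true; }
        return false;   // occluded: this write would be overdraw
    }
};
```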
Shaders

Vertex and Pixel Shaders

A vertex shader is a graphics processing function used to add special effects to objects in a 3D environment by performing mathematical operations on the objects' vertex data before submitting them to the card to render. Vertices may also be defined by colors, coordinates, textures, and lighting characteristics. Vertex shaders don't actually change the type of data; they simply change the values of the data, so that a vertex emerges with a different color, different textures, or a different position in space.

Pixel shaders are routines executed for each pixel when the texture is rendered, and they define how those pixels will look. This allows you to do simple pixel effects, from making textures in the distance look out of focus, adding heat haze, and creating internal reflection for water, through to complex effects that simulate bump mapping, shadows, specular highlights, translucency and other phenomena.

The pixel shader is executed for each rendered pixel, independently of the other pixels. Taken in isolation, a pixel shader alone can't produce very complex effects, because it operates only on a single pixel, without any knowledge of the scene's geometry or of neighbouring pixels.
Textures

Multiple textures can take up a lot of memory, and it helps to manage their size with various techniques. Texture compression is one way of making texture data smaller while retaining the picture information. Compressed textures take up less space on the game CD and, more importantly, in memory and on your 3D card.

MIP Mapping
Another technique used by game engines to reduce the memory footprint and bandwidth demands of textures is MIP mapping. MIP mapping involves preprocessing a texture to create multiple copies, where each successive copy is one-half the size of the prior copy.

Multiple Textures and Bump Mapping
A single texture map makes a large difference to overall 3D graphics realism, but using multiple textures can achieve even more impressive effects. Bump mapping is an older technique for creating a texture that shows the way light falls on a surface, to bring out bumps or crevices in that surface.
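A small sketch of the MIP chain implied above: each level halves both dimensions until a 1x1 image is reached, so a 256x256 base texture has 9 levels. The function names are illustrative.

```cpp
#include <cstdio>

// Number of MIP levels for a texture: keep halving until both dimensions hit 1.
int mipLevelCount(int width, int height) {
    int levels = 1;
    while (width > 1 || height > 1) {
        width  = width  > 1 ? width  / 2 : 1;
        height = height > 1 ? height / 2 : 1;
        ++levels;
    }
    return levels;
}

int main() {
    int w = 256, h = 256;
    for (int level = 0; level < mipLevelCount(256, 256); ++level) {
        std::printf("level %d: %dx%d\n", level, w, h);
        w = w > 1 ? w / 2 : 1;
        h = h > 1 ? h / 2 : 1;
    }
}
```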
Light Maps

A lightmap is a 3D-engine data structure which contains the precomputed brightness of surfaces in a video game. Lightmaps are precomputed and used for static objects.

The most common methods of lightmapping are:
   * precomputing vertex lighting by using the distance from each vertex to a light, and
   * multi-texturing, applying a second texture which contains the lumel data.

If you have dynamic lights, you have to regenerate the light maps every frame, modifying them according to how your dynamic lights have moved.

Pre-processed lights don't affect animated models correctly - they take their overall light value for the whole model from the polygon they are standing on - so dynamic lights are applied to those models to give the right effect. Using a hybrid lighting approach like this is a trade-off that most people don't notice, but it usually gives an effect that looks "right".
Cache it Right

Cache Thrash = Bad Thing
   * Texture cache management is vital to making game engines go fast.
   * If you get into a situation where textures are being swapped in and out of your graphics card's memory, you've got yourself a case of texture cache thrashing.
   * Often APIs will dump every texture when this happens, resulting in every one of them having to be reloaded next frame, and that's time consuming and wasteful. To the gamer, this will cause frame-rate stutters as the API reloads the texture cache.
Cache Management
   * API instruction: code the API (rendering engine) to upload a texture to the card and keep it there, instead of swapping it in and out many times. An API like OpenGL usually handles texture caching itself, deciding which textures are stored on the card and which are left in main memory based on rules such as how often each texture is accessed.
   * Another texture cache management technique is texture compression.

Introduction to Texture Compression
The texture images used in today's games are higher in resolution, greater in number and heavier in detail than ever. Techniques like bump mapping and normal mapping further complicate things because:
   1. the images are large in size, and
   2. normal maps have to be huge in order to fully cover a model with a decent level of quality.

By compressing texture images we can:
   1. reduce the amount of memory that each image requires,
   2. increase the visual quality of normal-map images (in some cases), and
   3. boost an application's performance (less data to process).
Compression Formats

Texture compression algorithms: DXT (S3TC), 3Dc, A8L8

DXT Compression Format
   * Uses lossy compression that can reduce an image's size by a ratio of 4:1 or 6:1.
   * A standard part of the Direct3D API, and available to the OpenGL API through the ARB_texture_compression and GL_EXT_texture_compression_s3tc extensions.
   * Good to use on decal texture images, especially at high resolution.
   * The formats were not designed with normal maps in mind, and can give poor results on them.
   * The DXT compression formats are made up of DXT1, DXT2, DXT3, DXT4 and DXT5.

DXT1 Format
   * The DXT1 format compresses RGB images by a factor of 6:1, for an average of 4 bits per texel.
   * DXT1 does not usually produce good results for normal maps. (Comparison images on the slide: without compression vs. with DXT1 compression.)
Compression Formats

DXT2/3 Format
   * DXT2/3 is the same as DXT1 but uses an additional 4 bits per texel for the alpha channel, thus doubling the size of the image.
   * In the DXT2 format the color data is pre-multiplied by the alpha channel, while in DXT3 it is not.

DXT4/5 Format
   * The DXT5 format compresses an RGBA image by a factor of 4:1, using one byte per texel on average.
   * The advantage of DXT5 over DXT1 is that it supports a full alpha channel.
   * DXT4/5 interpolate the alpha data when compressing the image.

Name   Description           Alpha pre-multiplied?   Compression ratio   Texture type
DXT1   1-bit alpha / opaque  N/A                     8:1                 Simple non-alpha
DXT2   Explicit alpha        Yes                     4:1                 Sharp alpha
DXT3   Explicit alpha        No                      4:1                 Sharp alpha
DXT4   Interpolated alpha    Yes                     4:1                 Gradient alpha
DXT5   Interpolated alpha    No                      4:1                 Gradient alpha
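The ratios above follow directly from the block sizes of the formats: DXT1 stores a 4x4 texel block in 8 bytes, and DXT2-5 use 16 bytes per block. A small illustrative calculation (not from the slides):

```cpp
#include <cstdio>

// Size in bytes of one mip level compressed with a DXT block format.
// Dimensions are rounded up to whole 4x4 blocks.
unsigned dxtLevelSize(unsigned width, unsigned height, unsigned bytesPerBlock) {
    unsigned blocksX = (width  + 3) / 4;
    unsigned blocksY = (height + 3) / 4;
    return blocksX * blocksY * bytesPerBlock;
}

int main() {
    unsigned w = 1024, h = 1024;
    std::printf("1024x1024 RGBA8 uncompressed: %u bytes\n", w * h * 4);
    std::printf("1024x1024 DXT1:               %u bytes (8:1)\n", dxtLevelSize(w, h, 8));
    std::printf("1024x1024 DXT5:               %u bytes (4:1)\n", dxtLevelSize(w, h, 16));
}
```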
3D Card Memory Usage
   * Most 3D cards these days handle 32-bit color, which is 8 bits for red, 8 for green, 8 for blue, and 8 for the transparency of any given pixel.
   * That's 256 shades each of red, green, and blue, which in combination allow for 16.7 million colors - pretty much all the colors you and I are going to be able to see on a monitor.

What is the need for 64-bit color? If we can't see the difference, what's the point?
Say we have a point on a model where several lights of different colors are falling. We take the original color of the model and then apply one light to it, which changes the color value. Then we apply another light, which changes it further. The problem is that with only 8 bits per channel to play with, after applying four lights the 8 bits just aren't enough to give us a good resolution and representation of the final color.

Card Memory vs. Texture Memory (why card memory matters)
Running your game with a 32-bit screen at 1280x1024 with a 32-bit Z-buffer means:
   1280 x 1024 = 1,310,720 pixels
   1,310,720 pixels x 8 bytes = 10,485,760 bytes (4 bytes per pixel for the screen, 4 bytes per pixel for the Z-buffer)
   Adding a 32-bit back buffer: 1280 x 1024 x 12 bytes = 15,728,640 bytes, or 15 MB.

Obviously, if you drop the textures down to 16-bit instead of 32-bit, you can push twice as many textures across AGP for the same bandwidth. Also, if you run at a lower color resolution per pixel, more memory is left on the card for keeping often-used textures around (caching textures). But you can never actually predict how users will set up their system: if they have a card that runs at high resolutions and color depths, chances are they'll set their cards that way.
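The same arithmetic as a small, illustrative helper (assumed names, not from the presentation): three 32-bit buffers at 1280x1024 come to the 15 MB quoted above.

```cpp
#include <cstdio>

// Bytes needed for a set of full-screen buffers at a given resolution.
unsigned long frameBufferBytes(unsigned width, unsigned height,
                               unsigned bytesPerPixel, unsigned bufferCount) {
    return static_cast<unsigned long>(width) * height * bytesPerPixel * bufferCount;
}

int main() {
    // 32-bit front buffer + 32-bit back buffer + 32-bit Z-buffer = 12 bytes/pixel.
    unsigned long bytes = frameBufferBytes(1280, 1024, 4, 3);
    std::printf("1280x1024, three 32-bit buffers: %lu bytes (%.1f MB)\n",
                bytes, bytes / (1024.0 * 1024.0));
}
```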
Physics Engine

A physics engine is a computer program that simulates Newtonian physics models, using variables such as mass, velocity, friction and wind resistance. It can simulate and predict effects under different conditions that approximate what happens in real life or in a fantasy world.

Physics engines have two core components:
   * a collision detection system, and
   * a physics simulation component responsible for solving the forces affecting the simulated objects.

There are three major paradigms for physics simulation:
   * Penalty methods, where interactions are commonly modeled as mass-spring systems. This type of engine is popular for deformable, or soft-body, physics.
   * Constraint-based methods, where constraint equations are solved that estimate physical laws.
   * Impulse-based methods, where impulses are applied to object interactions.

(Illustrations on the slide: soft-body physics, constraint-based physics, impulse-based interaction.)
Physics in Games

Uses of physics
   * Character animation and collision
   * Bullet projectiles and similar systems
   * Water/particle simulation
   * Cloth/fur simulation

3D objects in a game are represented by two separate meshes or shapes:
   * the base mesh, which is highly detailed, and
   * a highly simplified mesh that stands in for the base mesh, known as the collision mesh.

Collision meshes are used to simplify physics calculations and improve the frame rate. The collision shape may be a bounding box, a sphere, or a convex hull. Generally a bounding box is used for broad-phase collision detection, to narrow down the number of possible collisions before costly mesh-on-mesh collision detection is done in the narrow phase.

Constraints in a physics engine
   * the numerical precision of the values representing the position of an object and the forces acting on that object
   * the frame rate, or the number of moments in time per second at which physics is calculated
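A minimal sketch of the "moments in time per second" idea: one fixed physics step using semi-implicit Euler integration (one common choice, not necessarily the presentation's), with the timestep independent of the rendering frame rate. Types and names are illustrative.

```cpp
struct Vec3 { float x, y, z; };

struct Body {
    Vec3  position{0, 0, 0};
    Vec3  velocity{0, 0, 0};
    float mass = 1.0f;
};

// One fixed physics step: velocity is updated from the accumulated force
// first, then position from the new velocity.  dt is the fixed timestep
// (e.g. 1/60 s).
void integrate(Body& b, const Vec3& force, float dt) {
    Vec3 accel{ force.x / b.mass, force.y / b.mass, force.z / b.mass };
    b.velocity.x += accel.x * dt;
    b.velocity.y += accel.y * dt;
    b.velocity.z += accel.z * dt;
    b.position.x += b.velocity.x * dt;
    b.position.y += b.velocity.y * dt;
    b.position.z += b.velocity.z * dt;
}
```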
In-Game Effects

Fog
Based on the distance from the camera, far-away objects become more and more "milky": they get brighter, or darker, with increasing distance. This is known as depth cueing.

Particle Systems
Physically correct particle systems (PS) are designed to add essential properties to the virtual world such as explosions, heat shimmer, waterfalls, etc.

A particle system is limited by:
   * fill rate, and
   * the CPU-to-GPU transfer bandwidth for moving particle data from the simulation on the CPU to the rendering on the GPU.

Two ways to simulate particles:
   * Stateless PS: a particle's data is computed from its birth to its death by a closed-form function defined by a set of start values and the current time.
   * State-preserving PS: numerical, iterative integration methods compute the particle data from previous values and a changing environmental description (e.g. moving collider objects).

Simulation and Rendering Algorithm
The algorithm consists of six basic steps:
   1. Process birth and death
   2. Update velocities
   3. Update positions
   4. Sort for alpha blending (optional)
   5. Transfer texture data to vertex data
   6. Render particles
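A minimal, state-preserving sketch covering steps 1-3 of the algorithm above (death handling, velocity update under gravity only, position update); sorting, transfer to vertex data and rendering would follow. The data layout is an assumption for illustration.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Particle {
    Vec3  position;
    Vec3  velocity;
    float life;          // remaining lifetime in seconds
};

void updateParticles(std::vector<Particle>& particles, float dt) {
    const Vec3 gravity{0.0f, -9.81f, 0.0f};

    // 1. Process birth and death (death only in this sketch):
    //    swap dead particles with the last element and shrink the array.
    for (std::size_t i = 0; i < particles.size(); ) {
        particles[i].life -= dt;
        if (particles[i].life <= 0.0f) {
            particles[i] = particles.back();
            particles.pop_back();
        } else {
            ++i;
        }
    }
    // 2. Update velocities.   3. Update positions.
    for (Particle& p : particles) {
        p.velocity.y += gravity.y * dt;
        p.position.x += p.velocity.x * dt;
        p.position.y += p.velocity.y * dt;
        p.position.z += p.velocity.z * dt;
    }
}
```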
Sound System in Games

Sound and music in games have become increasingly important in recent years, due to advances both in the game genres people are playing and in technology. On the previous generation of consoles, memory was sparse enough to limit the sample rates, and hence the perceived quality, of the audio content. Now, with advances in audio compression, sample rates can rival those of a film soundtrack on the PS3 and Xbox 360. In addition, where the limited sound RAM of previous-generation consoles strictly limited how many sounds could be loaded and played at any one time, more sounds can now be played simultaneously (more voices are available) and around ten times more sounds can be loaded into RAM.

Music and sounds in a game can be:
   * 2D sound - a mono channel converted to stereo and passed to both speakers (left and right)
   * 3D sound - separate sound samples for the two channels (left and right) that can vary in intensity
   * 3D surround sound - four or more speakers, with intensity/pitch varying according to the placement of the sound object and its distance from the camera

Audio Design Document

OUTLINE/OBJECTIVES
It should contain a statement or two describing the goals and purpose of the audio for the game. It should also describe the audio's proposed style and execution, as compared to the setting of the game.
Sound System in Games

RESEARCH
This section can prove valuable when looking back at past experiments on the project. Items that can be included here are file formats tested and used, in-game audio experiments, and especially any proprietary audio research.

IMPLEMENTATION
Define a set of rules, permutations, and boundaries (limits) for how the audio works on a more detailed level. The subsections should include at least A. Sound and B. Music implementation. For example: there should never be more than eight (8) unique sound effects playing at once, not including dialogue or music.

Example of a layered sound engine:
   Layer 0 - An ambient sound-effect layer.
   Layer 1 - A 3D sound engine, where monaural sounds can travel through a four-speaker system, complete with a parameter list defining each sound's path, traveling speed, special effects, and so on.
   Layer 2 - Ambient sounds generated from terrain and buildings, which play only occasionally and are randomly selected.
   Layer 3 - Civilian, animal, monster, and all combat-related sounds, including disaster sounds, user-interface sounds, and so on.
   Layer 4 - Interactive music categories based on game events.
   Layer 5 - All narration and in-game dialogue for all characters.
Sound System in Games

CONTENT LIST
In any case, it is a good idea to include a general outline of content, well before there is enough detail to have an actual, formal list. Here is a generic example:

A. SOUND DESIGN
   1. Action sounds
      a. Explosions: 5-10, varying from small to large
      b. Weapons: unknown, will know before 4/20/01 -- possibly 30-50 unique sounds for 15-25 weapons
      c. Characters
         i. human -- military: 5 unit types, 3-5 sounds each
         ii. human -- civilian: unknown
         iii. alien -- misc.: unknown
   2. User interface
B. MUSIC
   1. Setup mode
   2. Mission panel
   3. In-game music
   4. Win/lose music
C. DIALOGUE
   1. In-game characters
   2. Narration
D. ADDITIONAL AUDIO-FOR-VIDEO CONTENT (marketing promotions, in-game animations)
Sound System in Games

SCHEDULE
Roughly schedule the tasks to be done, including:
   * setting milestones
   * who will do what
   * simple deadlines
   * what's done and what's not

Ultimately, your game's audio design should translate into a rewarding, interactive experience, one that blends effortlessly into the gameplay, graphics, and other components of the product. The real trick, of course, is how you specify it in your audio design document.
Coding Sound
   * Voice usage and memory utilization are a sore point between the composer and the coder.
   * The composer often has no way to know accurately how many voices are being used, and the coder is bitter that the composer is wasting voices and increasing overhead on the audio driver.
   * The composer needs to be aware of the limits of the materials they use to create the audio, and the coder needs to present all necessary information clearly and not hinder the creative process.

Voice Usage
Some methods to reduce voice utilization are volume culling, sound-sphere radius reduction, voice stealing by priority, instance capping and sub-mixing.

1) Volume Culling
Volume culling involves shutting down a voice when its volume reaches a certain threshold near zero. Done correctly, this has the desirable side effect of clearing up the mix and reducing processor overhead. Done incorrectly, it can introduce voice "popping", where it is easy to hear when voices get culled, and possible thrashing when a voice hovers around the threshold and is repeatedly stopped and started by the culling algorithm. To reduce the possibility of clicks, the voice is enveloped before it is stopped.

2) Sound Sphere Reduction
Sound-sphere radius reduction should be simple, given individual control over 3D sound volumes. The composer should be able to reduce various groups and individual 3D sound sources in the world-building tool.
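A minimal sketch of volume culling with a pair of thresholds (hysteresis) so that a voice hovering around a single cutoff doesn't thrash on and off, as warned above. The threshold values and structure are illustrative assumptions.

```cpp
struct Voice {
    float volume  = 1.0f;   // current volume, 0..1
    bool  playing = true;
};

// Stop a voice only when it falls below a low threshold; restart it only once
// it rises above a higher one.
void cullByVolume(Voice& v, float stopBelow = 0.01f, float resumeAbove = 0.05f) {
    if (v.playing && v.volume < stopBelow) {
        // A real driver would apply a short fade-out envelope before stopping
        // the voice, to avoid an audible click.
        v.playing = false;
    } else if (!v.playing && v.volume > resumeAbove) {
        v.playing = true;
    }
}
```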
Coding Sound

3) Voice Stealing
Voice stealing by priority requires a driver which supports this functionality and some pre-planning on the side of the composer. It works by defining good priority values for each sample and allowing the sound driver to decide how to steal a voice if none are available to play a new one.

4) Instance Capping
Instance capping causes the sound driver to steal the voice of the oldest instance of a sample group when the maximum number of instances is reached. An example is button sounds, where the number of instances is set so that menu sounds don't audibly cut each other off, while also not allowing the user to trigger enough voices to distort the game platform's audio output.

5) Sub-Mixing
Sub-mixing is likely the easiest. If two samples are commonly used together and rarely separately, the composer can mix the two sounds together, saving memory as well as reducing the number of voices used.

6) "Bank-for-the-Buck"
Common sense must also be used when acting upon the results of the table, since a large, rarely used sample doesn't automatically mean it is unimportant and can be thrown away. However, all samples and their associated memory sizes should be periodically re-evaluated to justify their weighting. An additional offline tool might allow tweaking memory-use parameters in real time; this way, the composer can interactively change the sampling rate, sample length and other parameters to see how optimizing would affect the resulting memory map, and check the mix with the results.
Game Networking
   * Game networking lets multiple people play the same game and interact with each other, each on their own system.
   * Client/server is where effectively one machine runs the game and everyone else is just a terminal that accepts input from the player and renders whatever the server tells it to render.
   * The advantage of client/server is that every machine shows the same game, since all processing is done in one place rather than across multiple machines that can get out of sync with each other. The drawback is that the server itself needs serious CPU time available to process each of the connected clients, as well as a decent network connection to ensure that each client receives its updates in a timely fashion.

What are TCP/IP and UDP/IP?
   * TCP/IP and UDP/IP are two layered communication protocol stacks. IP handles the transmission of packets of data to and from the Internet: UDP or TCP hands it a big fat packet of data, and IP splits it up into sub-packets, puts an envelope around each one, figures out the IP address of its destination and how it should get there, and then sends the packet out to your ISP (Internet Service Provider, like BSNL) or however you are connected to the Net.
   * It's effectively like writing down what you want to send on a postcard, stamping it, addressing it, stuffing it in a mailbox, and off it goes.
   * UDP and TCP are higher layers that accept the packet of data from you the coder, or the game, and decide what to do with it.
   * The difference between UDP and TCP is that TCP guarantees delivery of the packets, in order, and UDP doesn't.
Game Networking

The problem with TCP/IP
   * In order to be sure that packets sent over the Internet arrive intact, TCP expects an acknowledgment (ACK) to be sent back from the destination for every packet it sends. If it doesn't get an ACK within a certain time, it holds up sending any new packets, resends the one that was lost, and will continue to do so until the destination responds.
   * This is enough of a problem that almost no games use TCP as their main Internet protocol unless they are not real-time action games. Most games use UDP: it can't guarantee order or delivery, but it is fast - or at least faster than TCP usually ends up being.

Points to remember in network programming
   * Choice of protocol: TCP/IP is a strict no-no for real-time traffic because of the issues discussed above.
   * Packet bloat: be careful to transmit only the data that is required. The larger the packet you give to the UDP system, the more you are asking the network to handle.
   * Packet frequency: are you expecting packets to be sent faster than the communications infrastructure can really handle?
   * Handling packets: if you don't handle incoming packets correctly, you end up with missing events, missing entities, missing effects, and sometimes a completely broken game.
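As an illustration of the "transmit only what is required" point, the sketch below packs one entity's state update into 12 bytes by quantizing positions and orientation instead of sending full floats. The message layout, field names and quantization factors are hypothetical, and real code would also deal with byte order between platforms.

```cpp
#include <cstdint>
#include <vector>

// A deliberately small per-entity state update: positions quantized to 16-bit
// fixed point (world units * 64), yaw packed into 16 bits, one byte each for
// animation state and flags.
#pragma pack(push, 1)
struct EntityUpdate {
    std::uint16_t entityId;
    std::int16_t  posX, posY, posZ;   // world position * 64
    std::uint16_t yaw;                // 0..65535 maps to 0..360 degrees
    std::uint8_t  animationState;
    std::uint8_t  flags;
};
#pragma pack(pop)

// Append one update to the outgoing UDP payload being assembled for this tick.
void appendUpdate(std::vector<std::uint8_t>& packet, const EntityUpdate& u) {
    const auto* bytes = reinterpret_cast<const std::uint8_t*>(&u);
    packet.insert(packet.end(), bytes, bytes + sizeof(EntityUpdate));
}
```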
Scripting System

Scripting
   * Allows game objects to execute code that is not compiled as part of the object.
   * Gives objects the ability to react to the world around them.
   * Scripting is where you take complete control over a given scene, setting up events that the player almost always has no control over, where the gamer is "on a rail" in order to move through to a given plot point, or where you set up a situation the player needs to resolve.
   * A variety of scripting languages are currently used in games, such as Lua and Python.

Types of scripting system
   * The first is the simple text-based, single-threaded style, just like the code we programmers are used to writing.
   * Then there's the complicated stuff: allowing multiple threads, and actually allowing variable situations. Variable situations are those where you don't know for sure who's around when the script starts, but you have to write the script in such a way that it will work with whoever is around.
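A minimal sketch of how an engine might host one of the languages mentioned above, using the Lua 5.3-style C API. The script file "door_trigger.lua", its on_trigger() function and the player id are hypothetical names for illustration; this is not the presentation's own integration.

```cpp
extern "C" {
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>
}

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    // Load the script once; it defines functions the engine can call later.
    if (luaL_dofile(L, "door_trigger.lua") != LUA_OK) {
        // lua_tostring(L, -1) would hold the load/parse error message here.
        lua_close(L);
        return 1;
    }

    // Game event: call the script's on_trigger(player_id) callback.
    lua_getglobal(L, "on_trigger");
    lua_pushinteger(L, 42);                    // hypothetical player id
    if (lua_pcall(L, 1, 0, 0) != LUA_OK) {
        // handle a runtime error raised inside the script
    }

    lua_close(L);
}
```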
Artificial Intelligence

Game artificial intelligence refers to techniques used in computer and video games to produce the illusion of intelligence in the behaviour of non-player characters (NPCs).

Uses of A.I.
   * Control of the NPCs in the game. A.I. can give opponents/NPCs capabilities such as:
        predicting player behavior
        reacting to the player's actions
        taking logical decisions on their own
        reinforcing learning through a knowledge base
   * Pathfinding is another common use for AI, widely seen in real-time strategy games. (Pathfinding is the method for determining how to get an NPC from one point on a map to another, taking into consideration the terrain, obstacles and possibly "fog of war".)
   * Game AI is also involved in dynamic game balancing.

Emergent A.I.
   * The AI method where the opponents are able to "learn" from actions taken by the player, and their behavior is modified accordingly.
   * While these choices are taken from a limited pool, this does often give the desired illusion of an intelligence on the other side of the screen.
User Interface

User Interface (U.I.)
In general, the UI is the means by which the users (gamers) interact with the system - a particular machine, device, computer program or other complex tool. The user interface provides the means of:
   * input, allowing the users to manipulate the system, and
   * output, allowing the system to produce the effects of the users' manipulation.

The UI is the medium between the core game mechanics and the gamers.

User Interface Defines Game Play
Game play is a vague concept and hard to describe. One possible way to define it is by the way gamers interact with games; that is, through the UI and the interactions associated with it. By doing that, we are able to analyze game play not as an abstract concept, but as something concrete that can be described, measured, and studied. The UI doesn't only help gamers play games successfully; it also controls the pace at which the game's internal mechanisms are revealed to the gamers.
User Interface

The form factors of the hardware, especially the I/O devices, greatly affect the design of the UI.

User Interfaces of PC and Console Games
   * PC games rely heavily on the combination of keyboard and mouse, whereas console games have controllers, which are more refined, although more limited, input devices.
   * Most gamers realize that they are interacting with some kind of user interface when they play PC games. But gamers playing console games sometimes don't realize that they are interacting with UIs: the UIs of console games are hidden deep in the careful calculation of how the controller is used, which is not obvious from the screen.
   * We can call the UIs of PC games software-oriented interfaces, which means they use lots of conventional GUI elements to represent actions. The UIs of console games, on the other hand, are hardware-oriented interfaces, which means they are designed around the form factors of controllers and use relatively few GUI elements.
User Interface

Desirable Properties of User Interfaces for Games

Attractiveness, enjoyability
E.g. the quality of graphics, sound, animation, etc. These have a strong influence on marketability.

Usability
How easy is it to learn?
   * Can it be learned quickly and easily, either for the long term or for immediate, "walk up and use" purposes?
How easy is it to use?
   * Can it be used to accomplish tasks rapidly, accurately, and without undue effort?
   * If a system is difficult to learn or to use, customers are likely to be dissatisfied eventually, even if it is a market success at first.
UI Design Guidelines

User Interface Shouldn't Be Distracting
The UI should have aesthetic taste, but more importantly it should be simple, efficient, easy to use, and consistent with the whole game environment.

User Interface Should Provide Enough Visual Affordance
A good UI should use its visual appearance to suggest its functions. Gamers should be able to understand and act easily just by looking at the UI.

User Interface Should Be Balanced
The visual elements (buttons, labels) in a UI should be properly arranged, sized, and aligned to reinforce the logical relationships among them and to ensure the UI is stable on the screen.

User Interface Should Be Transparent
A good UI should be transparent. Gamers should be able to forget about the menus, buttons, icons, and windows and immerse themselves in the virtual world that the game creates. In that sense, UI designers' job is to design UIs that are transparent - UIs that can be used so naturally by the gamers that nobody would even notice their existence.
End of Presentation

Sharad Mitra
Sr. Technical Artist
Exigent Studios