The Gaming Process: A Guide to Game Design and Development
1. Gaming Process
- SHARAD MITRA
Sr. Technical Artist
Exigent Studios
2. What is a GAME?
A 'game' is a structured activity, usually
undertaken for enjoyment and sometimes also
used as an educational tool.
A rule-based activity involving challenge to reach
a goal.
4. Types Of Video Games
Arcade Games
Computer Games
Console Games
Handheld Games
Mobile Games
Online Games
5. Genre Of Games
Action
Adventure
Arcade Style
Puzzle
Role Playing Games (RPG)
Strategy
Simulation (SIMS)
6. Action Games
An action game requires players to use quick reflexes and timing to overcome obstacles.
They are perhaps the most basic of gaming genres, and certainly one of the broadest.
Action games tend to have gameplay with emphasis on combat.
There are many subgenres of action games, such as fighting games and first-person
shooters.
Ball Games
Fighting
Maze
Pinball
Shooter
First Person Shooter (FPS)
Massive Multiplayer Online FPS
Third Person Shooter
7. Adventure Games
They normally require the player to solve various puzzles by
interacting with people or the environment, most often in a non-
confrontational way.
Prince Of Persia
8. Arcade Games
Arcade games often have very short levels, simple and intuitive
control schemes, and rapidly increasing difficulty.
9. Puzzle Games
Puzzle games require the player to solve logic puzzles or navigate
complex locations such as mazes.
This genre frequently crosses over with adventure and educational
games.
10. Role Playing Games (RPG)
Action/Adventure
Massive Multiplayer Online RPG
11. Strategy Games
Strategy video games focus on gameplay requiring careful and
skillful thinking and planning in order to achieve victory. In most
strategy video games, "the player is given a godlike view of the
game world, indirectly controlling the units under his command"
12. Simulation Games
A simulation game is a game that contains a mixture of skill, chance, and strategy
to simulate an aspect of reality, such as a stock exchange.
Construction and management simulations
City Building
Business simulation
GOD Games
Government simulations
Life simulations
Biological simulation
Social Simulation
Vehicle simulations
Flight
Racing
13. General Game Modelling Guidelines
Before starting to create a model in any 3D application study the concept/reference.
The reference/concept art speaks for itself in relation to the game it's for and
the game engine it's designed for. Keep the following points in mind
before opening a 3D application:
Art Style – Close to real or conceptual art
14. General Game Modelling Guidelines
Color Concept- It will define the loops and tris in the model
Platform (Console, PC, or other)- Some Concept art will
have a different modelling methodology for different Platform
because of variation of hardware availability. It usually happens
with models having high details
15. General Game Modelling Guidelines
Relative Size and Scale of the reference in real world and/or in relation to the
game
Overall Proportion of the model in 3D perspective
16. General Game Modelling Guidelines
Poly Limit- Decides the importance of every cut you place on the model
Texture Limit- Defines the important breakup in the mesh as per texture. We
can use a single texture sheet or multiple textures for the same model
17. General Game Modelling Guidelines
Re-usability- Instancing the model in the final scene at various locations
Camera Distance- Decides on the Vertex density and Texel Density/UV Layout
18. General Game Modelling Guidelines
Normal Map Generation- Major factor in deciding the mesh flow and density
What is a NORMAL?
A VERTEX NORMAL at a vertex of a polyhedron is the normalized
average of the surface normals of the faces that contain that vertex.
A NORMAL defines which way a face or vertex is pointing.
The direction of the normal indicates the front, or outer surface
of the face or vertex.
A Normal is used for Shading and Lighting distribution
on the model.
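As a rough sketch of the definition above, a vertex normal can be computed as the normalized average of the face normals of the faces sharing that vertex (plain Python, no particular 3D package assumed; function names are illustrative):

```python
import math

def face_normal(a, b, c):
    # Cross product of two edge vectors gives the face normal,
    # then normalize to unit length.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

def vertex_normal(face_normals):
    # Vertex normal = normalized average of the normals of all
    # faces that contain the vertex.
    s = [sum(n[i] for n in face_normals) for i in range(3)]
    length = math.sqrt(sum(x * x for x in s))
    return [x / length for x in s]
```

For example, a vertex shared by two faces whose normals point up and forward gets a normal halfway between the two.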
19. Normal Map Theory
Normal mapping is a technique for replacing the existing normals on a model.
It can be used to greatly enhance the appearance of a Low Poly model without using more polygons.
Normal( colored as light green ) is the vector that is perpendicular to the surface and thus describes the surface's
orientation. We calculate per-vertex normals on the poly model by averaging connected facet normals on each vertex.
20. Normal Map Theory
NORMALS CALCULATION
Casting rays from the low poly model to the high poly model along
the normal direction returns the normals of the high poly surface
where each ray hits it.
NORMALS GENERATION
Replace the normals on the origin of the ray with the returned
normals.
21. Normal Map Theory
NORMALS MAP CREATION
The whole process creates a Normal Map that stores normals directly in the RGB values of an image.
The RGB values of each texel in the normal map represent the x, y, z components of the normalized
mesh normal at that texel. Instead of using interpolated vertex normals to compute the lighting, the normals
from the normal map texture are used.
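The RGB storage described above boils down to remapping each normal component from [-1, 1] into [0, 255]. A minimal sketch (the usual convention; real engines may differ in axis orientation or precision):

```python
def normal_to_rgb(n):
    # Map each component of a unit normal from [-1, 1] to [0, 255].
    return tuple(round((c + 1.0) * 0.5 * 255) for c in n)

def rgb_to_normal(rgb):
    # Inverse mapping: texel channel in [0, 255] back to [-1, 1].
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
```

A flat, straight-up normal (0, 0, 1) encodes to roughly (128, 128, 255), which is why undisturbed normal maps look uniformly light blue.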
25. Basic Texturing Guide
What are Texture Maps?
A multidimensional image that is mapped to a multidimensional space.
The most common use for maps is to improve the appearance and realism of Materials
Texture mapping means the mapping of a function (Texture) onto a surface in 3-D.
The domain of the function can be one, two, or three-dimensional.
Uses of Texture Mapping
Surface Color - Diffuse/Color Map
Surface Irregularity - Bump Map
Specularity - Specular Map
Environmental Reflection - Reflection Map
Transparency - Transparency Map
Surface Distance - Height Map
Surface Displacement - Displacement Map
Normal Vector - Normal Map
Virtual Displacement - Parallax Map
26. Types of Map
Diffuse Maps
Diffuse maps represent the diffuse reflection and color of a surface.
In other words they define the color and intensity of light reflected back
when it strikes a surface.
Bump Maps
Bump mapping adds an illusion of depth and texture to images. It doesn't
actually alter geometry but rather affects the shading over a surface. There
are two different types of bump maps: Normal Maps and Height Maps
Normal Maps
Normal maps define the slope or normals of a surface. In other words,
they alter the direction a surface appears to be facing.
27. Types of Map
Height Maps
Height maps are grey-scale images that define the height of the individual
pixels of a surface. They adjust the visual depth of a texture.
The height of each pixel is defined by the brightness of the image:
a white pixel is as high as it gets, a black pixel as low, and grey levels in
between represent intermediate heights.
Specular Maps
Specular maps represent the specular intensity and color of highlights on
a surface. In other words, they define the "shininess" and color of specular reflections.
The brighter a specular map is, the more shine is applied to the final material.
Color applied to a specular map tints the color of highlights.
28. Types of Map
Reflection Maps
Reflection mapping is an efficient method of simulating a complex
mirroring surface by means of a precomputed texture image.
The texture is used to store the image of the environment surrounding
the rendered object.
There are several ways of storing the surrounding environment;
the most common ones are the
Spherical Environment Mapping in which a single texture
contains the image of the surrounding as reflected on a mirror ball
Cubic Environment Mapping in which the environment is unfolded
onto the six faces of a cube and stored therefore as six square textures.
29. Types of Map
Transparency/Alpha Maps
Alpha mapping is a technique where an image is mapped
(assigned) to a 3D object, and designates certain areas of the
object to be transparent or translucent.
The transparency can vary in strength, based on the image texture,
which can be greyscale, or the alpha channel of an RGBA image
texture.
Displacement Maps
Displacement mapping is an alternative technique to bump mapping, normal
mapping, and parallax mapping: a procedural texture or height map causes
the actual geometric position of points on the textured surface to be
displaced, often along the local surface normal, according to the value the
texture function evaluates to at each point on the surface.
30. Types of Map
Parallax Map
Parallax mapping (also called offset mapping or virtual displacement mapping) is an enhancement
of the bump mapping or normal mapping techniques applied to textures in 3D rendering applications.
Parallax mapping is implemented by displacing the texture coordinates at a point on the rendered
polygon by a function of the view angle in tangent space (the angle relative to the surface normal)
and the value of the height map at that point.
This means that textures such as stone walls will have more apparent depth and thus greater realism,
with less of an impact on the performance of the simulation.
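The texture-coordinate offset described above can be sketched as follows, assuming the view vector is already normalized and expressed in tangent space (the `scale` parameter is an illustrative tuning value, not from the source):

```python
def parallax_offset(uv, view_ts, height, scale=0.05):
    # uv:      (u, v) texture coordinates at the rendered point
    # view_ts: view vector in tangent space; z points toward the viewer
    # height:  height-map sample at this point, in [0, 1]
    # Shift the coordinates along the view direction, more at grazing
    # angles (small vz) and for taller height values.
    u, v = uv
    vx, vy, vz = view_ts
    return (u + vx / vz * height * scale,
            v + vy / vz * height * scale)
```

Viewed head-on (view vector straight down the normal) the offset vanishes, which matches the intuition that parallax is only visible at an angle.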
31. Types of Map
Ambient Occlusion Map
Ambient occlusion is a shading method which helps add realism to local reflection models by taking
into account attenuation of light due to occlusion. Unlike local methods like Phong shading, ambient
occlusion is a global method, meaning the illumination at each point is a function of other geometry
in the scene.
The soft appearance achieved by ambient occlusion alone is similar to the way an object appears on
an overcast day.
32. Seamless Textures
When painting your own textures, it is sometimes desirable to be able to seamlessly tile the image over a surface.
A seamless texture is one that can be tiled along both axes (x, y) without any visible seam
revealing that the texture has been repeated across the model.
Left image: the tile edge is clearly visible in the center. Right image: the tile edge is made invisible in the center.
33. Pixel
A pixel is generally thought of as the smallest single component of a digital image.
The pixels that compose an image are ordered
as a grid (columns and rows)
Each pixel consists of numbers representing
magnitudes of brightness and colour.
Pixels are normally arranged in a regular
2-dimensional grid, and are often represented
using dots, squares, or rectangles.
34. Resolution
Image resolution describes the detail an image holds in terms of number of pixels.
The more pixels an image has, the crisper and more detailed the image/texture will be.
In computer graphics, pixel density is measured in PPI (pixels per inch).
An image that is 2048 pixels in width and 1536 pixels in height has a total of
2048×1536 = 3,145,728 pixels or 3.1 megapixels.
35. Image File
Image files are standardized means of organizing and storing images.
Image files are composed of either pixel or vector (geometric) data that are rasterized
to pixels when displayed (with few exceptions) in a vector graphic display.
Image File size
Image file size—expressed as the number of bytes—increases with the number of pixels composing an image,
and the colour depth of the pixels.
The greater the number of rows and columns, the greater the image resolution, and the larger the file.
Also, each pixel of an image increases in size when its color depth increases.
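The relationship between pixel count, color depth, and uncompressed file size is simple arithmetic; a sketch using the 2048x1536 example (the helper name is just for illustration):

```python
def file_size_bytes(width, height, bits_per_pixel):
    # Uncompressed image size: pixel count x bytes per pixel.
    return width * height * bits_per_pixel // 8

# The 2048x1536 example from the previous slide, at 24-bit depth:
pixels = 2048 * 1536                      # 3,145,728 pixels (~3.1 MP)
size = file_size_bytes(2048, 1536, 24)    # 9,437,184 bytes (~9 MB)
```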
36. Color Depth
Color "depth" is defined by the number of bits per pixel that can be displayed on a computer screen.
Data is stored in bits. Each bit represents two colors because it has a value of 0 or 1.
The more bits per pixel, the more colors that can be displayed.
Examples of color depth are shown in the following table:
Color Depth     No. of Colors         Color Mode
1-bit color     2^1  = 2              Indexed Color
4-bit color     2^4  = 16             Indexed Color
8-bit color     2^8  = 256            Indexed Color
16-bit color    2^16 = 65,536         True Color
24-bit color    2^24 = 16,777,216     True Color
Example images: 1-bit, 4-bit, 8-bit, 24-bit
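The table follows directly from the bits-per-pixel rule, since each added bit doubles the number of representable colors; a one-line sketch:

```python
def color_count(bits_per_pixel):
    # Each bit can be 0 or 1, so n bits give 2^n distinct colors.
    return 2 ** bits_per_pixel
```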
37. True Color v/s Index Color
True Color
Images are known as "True Color" when each pixel is defined in terms of its actual RGB or CMYK values.
Every pixel in a true color image has 256 possible values for each of its red, green, and blue components
(in the RGB model) or its cyan, magenta, yellow, and black components (in the CMYK model). Because there
are 256 possible values for each component, RGB true color has a 24-bit color depth and CMYK true color
has a 32-bit color depth.
There are millions of possible colors for each pixel in a true color image. That's why it is called "True Color".
Index Color
Images which do not define colors in terms of their actual RGB or CMYK values, and which instead derive
their colors from a "palette", are known as "Indexed Color". The color palette of an indexed color image has
a fixed number of colors. Because the palette is limited to a maximum of 256 colors, an indexed image
cannot look as realistic as it can using RGB or CMYK. Hence, it is not true color.
This type of color is known as "Indexed Color" because colors in the palette are referenced by index
numbers which the computer uses to identify each color.
38. Image File Formats
Image file types vary based on the type and amount of compression they use. Some use lossless
compression and others lossy compression.
A lossless compression algorithm discards no information. It looks for more efficient ways to represent an
image, while making no compromises in accuracy.
In contrast, lossy algorithms accept some degradation in the image in order to achieve smaller file size.
File Types
PSD- (Photoshop Document)
- This is the native Photoshop file format created by Adobe. In this format, you can save
multiple alpha channels and paths along with your primary image.
PNG- (Portable Network Graphics)
- PNG is also a lossless storage format. However, in contrast with common TIFF usage, it looks for
patterns in the image that it can use to compress file size. The compression is exactly reversible,
so the image is recovered exactly.
JPG/JPEG- (Joint Photographic Experts Group)
- JPG is optimized for photographs and similar continuous tone images that contain many, many colors.
The degree of compression can be adjusted, allowing a selectable trade off between storage size and
image quality.
- JPEG typically achieves 10 to 1 compression with little perceivable loss in image quality.
39. Image File Formats
TGA/TARGA- (Truevision Advanced Raster Graphics Adapter)
- Most common in the video industry, this file format is also used by high-end paint and ray-tracing
programs. The TGA format has many variations and supports several types of compression.
DDS- (Direct Draw Surface)
- The DirectDraw Surface graphics file format was established by Microsoft for use with the
DirectX SDK. The format is specifically designed for use in real-time rendering applications,
such as 3D games. It can be used to store textures, cubemaps, mipmap levels, and allows for compression.
Due to the fact that most video cards natively support DXTn texture compression, use of this format can
save memory on the video card.
When saving to DDS with the mipmap levels option, the dimensions of the image you are saving should be
powers of two (128, 512, 1024, etc.).
DXT Format Compression Comparison
Format   Description            Alpha Pre-multiplied?   Compression Ratio   Texture Type
DXT1     1-bit alpha / opaque   N/A                     8:1                 Simple non-alpha
DXT2     Explicit alpha         Yes                     4:1                 Sharp alpha
DXT3     Explicit alpha         No                      4:1                 Sharp alpha
DXT4     Interpolated alpha     Yes                     4:1                 Gradient alpha
DXT5     Interpolated alpha     No                      4:1                 Gradient alpha
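The ratios in the table can be reproduced from the DXT block layout: the image is divided into 4x4 pixel blocks, each stored in 8 bytes for DXT1 or 16 bytes for DXT2-DXT5. A sketch that ignores mipmaps and header overhead:

```python
def dxt_size_bytes(width, height, fmt="DXT1"):
    # DXT compresses 4x4 pixel blocks: 8 bytes per block for DXT1,
    # 16 bytes per block for DXT2-DXT5.
    block_bytes = 8 if fmt == "DXT1" else 16
    blocks_w = max(1, (width + 3) // 4)
    blocks_h = max(1, (height + 3) // 4)
    return blocks_w * blocks_h * block_bytes
```

Against uncompressed 32-bit RGBA, a 512x512 texture compresses 8:1 with DXT1 and 4:1 with DXT5, matching the table.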
40. Character Specific Guidelines
Reference Material
It is really important that you have this reference material.
You may as well be modelling with your eyes closed if you
don't use anything to base your model on.
41. Starting Out
You should also find out as much about the character as possible -
* Is it an in game model or will it be used in pre-rendered cut scenes?
* You will need a rough polygon count limit.
* Finger count: will it have full fingers or a mitten-type hand with no individual fingers?
* Facial detail: will the character's face need to animate to show emotions and talk?
* How many levels of detail (LODs) are required?
42. Muscle Structure
Keep an eye on where the muscles lie on your character, placing
edges along the muscle lines will result in a much more natural
deformation as well as making your model look better.
43. Mesh Flow
It is really important to have the mesh flow (wire connections) right as per the
muscle structure to get the desired result. The muscle structure will depend on the
characteristic properties of the character which defines the type and variation of
animation required.
45. Face Details
Face topology is very important, depending on what you wish to achieve with your
model that is. A good rule, as with the body, is to stick strictly to the muscle
structure when placing your polygons and edges. Paying attention to how the face
creases, and constructing it accordingly, will give you a natural-looking face as well
as making the creation of blend shapes and animation easier and more fluid.
46. Finalizing the Model
Check the POLY LIMIT specified for that model
Do a Silhouette Check on the model to finalize its proportion and curves
47. Texturing Basics- Vertex
When a model is created as a polygon mesh using a 3D modeler, UV coordinates
can be generated for each vertex in the mesh for texturing the model.
A vertex is a corner point of a polygon; a node in a triangulated irregular network (TIN).
When vertices are moved or edited, the geometry they form is affected as well.
48. Texturing Basics- UV Coordinates
UV coordinates are 2D coordinates that are mapped onto a 3D model. UV
coordinates are a texture’s x and y coordinates and always range from 0 to 1.
The U, V, and W coordinates parallel the relative directions of X, Y, and Z
coordinates.
49. Texturing Basics- UVs
UVs are two-dimensional texture coordinates that reside with the vertex
component information for polygonal and subdivision surface meshes. UVs exist
to define a two-dimensional texture coordinate system, called UV texture space.
UV texture space uses the letters U and V to indicate the axes in 2D.
UV texture space facilitates the placement of image texture maps on a 3D
surface.
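As a small illustration of UV texture space, here is one common way to map a UV pair in [0, 1] to a pixel in an image; the V flip is a convention (image rows usually run top to bottom while V runs bottom to top), and engines and APIs vary on it:

```python
def uv_to_pixel(u, v, width, height):
    # Map (u, v) in [0, 1] to an (x, y) pixel index in the texture,
    # clamping so u == 1.0 or v == 1.0 stays inside the image.
    x = min(int(u * width), width - 1)
    y = min(int((1.0 - v) * height), height - 1)
    return x, y
```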
50. Texturing Basics- UV Mapping
The nifty thing about UV mapping is that it provides you with a very literal,
accurate template on which to paint your textures, which is particularly useful for uneven,
organically structured surfaces that can't be textured using standard planar
texturing methods.
You can compile the maps for several different pieces of your model into a single
UV template, so that you can texture all of them in one go with a single image. This is
particularly important for games.
51. Determining Texture Size
What size is this texture going to appear on-screen?
What level of detail do we need to include in this texture?
Object Size - small objects need small maps, while large objects need
large maps, as they will be clearly visible
on the screen
Camera Distance – Placement of object in relation to camera
52. Determining Texture Size
LODs - Levels of detail we need through texture (and not through model detail)
Tilability – Repetition of the texture on the model
53. Determining Texture Size
Texel Density
Texture quality is best described as 'texel density'.
A texel, or texture element (also texture pixel) is the
fundamental unit of texture space, used in computer
graphics. Textures are represented by arrays of texels,
just as pictures are represented by arrays of pixels.
What will be the Texel density also depends whether
we need tileable texture or one complete map on the
model.
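Texel density along one axis is simply texture resolution divided by the world-space size it covers; a sketch (function name and units are illustrative):

```python
def texel_density(texture_px, world_units):
    # Texels per world unit along one axis, e.g. a 512 px texture
    # stretched over a 4 m wall gives 128 texels per metre.
    return texture_px / world_units
```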
54. Power of 2 Texture Theory
Why are textures mostly created with power-of-2 dimensions, i.e. 64x64, 128x128, 256x256,
and so on?
All graphics libraries define the texture size standard as powers of 2
[D3DPTEXTURECAPS_POW2]
It is easy for any rendering engine to calculate and tile textures with POW2
The UV mapping technique in all 3D applications defines the texture space as a square (0,1)
Mipmapping is not supported for non-POW2 textures
Some game engines squeeze a non-square texture (e.g. 128x1024) back to a square texture,
resulting in inappropriate texture placement.
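Two checks that follow from the rules above, and that game tools commonly perform, can be sketched as:

```python
def is_power_of_two(n):
    # A power of two has exactly one bit set, so n & (n - 1) clears it.
    return n > 0 and (n & (n - 1)) == 0

def mip_levels(width, height):
    # Number of mipmap levels down to 1x1; each level halves each axis.
    levels = 1
    while width > 1 or height > 1:
        width, height = max(1, width // 2), max(1, height // 2)
        levels += 1
    return levels
```

For example, a 1024x1024 texture passes the POW2 check and carries 11 mipmap levels (1024 down to 1).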
55. Shaders and Materials
A shader is the algorithm that controls how a material responds to light. Shaders especially control how
highlights appear. They also provide a material's color components, and control its opacity, self-illumination,
and other settings. Shaders are often named for their inventors; they can also be named for the effect they provide.
Samples of different shading for a standard material
1. Anisotropic
2. Blinn
3. Metal
4. Multi-layer
5. Oren-Nayar-Blinn
6. Phong
7. Strauss
8. Translucent
56. Shaders and Materials
Anisotropic shader: creates surfaces with elliptical, "anisotropic" highlights. These highlights are good for
modeling hair, glass, or brushed metal.
Anisotropy measures the difference between sizes of the highlight as seen from two perpendicular directions.
When anisotropy is 0, there is no difference at all. The highlight is circular, as in Blinn or Phong shading.
When anisotropy is 100, the difference is at its maximum. In one direction the highlight is very sharp; in
the other direction it is controlled solely by Glossiness.
57. Shaders and Materials
Blinn shader: Blinn shading is a subtle variation on Phong shading.
The most noticeable difference is that highlights appear rounder.
With Blinn shading, you can obtain highlights produced
by light glancing off the surface at low angles.
Phong shader: Phong shading smooths the edges between faces
and renders highlights realistically for shiny, regular surfaces.
This shader interpolates intensities across a face based on the
averaged face normals of adjacent faces. It calculates the normal
for every pixel of the face.
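The Blinn/Phong difference described above can be sketched with the two specular terms. Phong reflects the light vector about the normal and compares it with the view vector; Blinn uses the half vector between light and view. For the same exponent the Blinn term produces a wider highlight, matching the "rounder highlights" observation (plain Python; light, view, and normal vectors are assumed normalized):

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_specular(n, l, v, shininess):
    # Reflect the light direction l about the normal n,
    # then compare the reflection with the view direction.
    r = [2 * dot(n, l) * n[i] - l[i] for i in range(3)]
    return max(0.0, dot(normalize(r), normalize(v))) ** shininess

def blinn_specular(n, l, v, shininess):
    # Use the half vector between light and view; cheaper to
    # compute and yields a slightly wider, rounder highlight.
    h = normalize([l[i] + v[i] for i in range(3)])
    return max(0.0, dot(n, h)) ** shininess
```

With the light off to one side, the Blinn term stays larger than the Phong term at the same exponent, which is why the highlight looks broader.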
58. Shaders and Materials
Other shaders like Metal, Oren-Nayar-Blinn, or Multi-layer are generally not used in games.
The most commonly used shader is Blinn because it is simple to calculate and fast to render.
There are various customized shaders created by companies, compatible with their game engines.
Viewport shaders/RealTime Shaders
Viewport shaders are designed primarily for games and interactive media development.
A Viewport Shader is a piece of code saved with the .fx extension, based exclusively on the DirectX 9
API and its high level shader language (HLSL). DirectX 9 (or higher) must be installed on your
computer; otherwise, you will not be able to see any shader rendering.
You can also have an OpenGL realtime shader visible in your viewport, provided your system
supports OpenGL and has the OpenGL graphics library installed.
Using realtime shaders allows artists to apply in-game shading and effects to characters and
environments directly within any 3D application and see them in the viewport, without having to
involve developers/programmers.
Your 3D graphics card must be Shader compliant, i.e. capable of supporting vertex and pixel
shaders. Make sure that your video card supports pixel shader version (2.0, 1.1, etc.)