The document discusses 3D matrix transformations in XNA game development. It explains that rendering a 3D scene requires a view (camera) matrix, a projection matrix, and a separate world matrix for each object. It provides details on creating view matrices with CreateLookAt, projection matrices with CreatePerspectiveFieldOfView or CreateOrthographic, and transforming objects using world matrices built from scaling, rotation, translation, and other transformations. Multiple transformations can be combined by multiplying the matrices together in order.
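The matrix composition the summary describes can be sketched with plain homogeneous matrices. This is an illustrative NumPy sketch (not XNA code) using the column-vector convention, where the transform applied first is written last: world = T @ R @ S. Note that XNA's Matrix type uses row vectors, so the same chain is written there as scale * rotation * translation.

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix (column-vector convention)."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scaling(s):
    """Uniform scale by factor s."""
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s
    return m

def rotation_y(theta):
    """Rotation about the Y axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[0, 0], m[0, 2] = c, s
    m[2, 0], m[2, 2] = -s, c
    return m

# Scale by 2, rotate by 0, then translate by (1, 0, 0).
world = translation(1, 0, 0) @ rotation_y(0.0) @ scaling(2)
p = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous point
print(world @ p)                      # -> [3. 0. 0. 1.]
```

The point (1, 0, 0) is first scaled to (2, 0, 0) and then translated to (3, 0, 0), which is why the multiplication order matters.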
The document discusses using 3D models, textures, lighting, fog, and animation in XNA game development. It explains how to load 3D models, apply the BasicEffect to set textures, lighting properties, and fog. It also demonstrates how to create simple animation by updating the model's position over time in the game's update loop and applying the transformation to the world matrix.
Collision detection determines whether two objects in a virtual world overlap and have collided. Accurate collision detection is fundamental to a solid game engine. XNA has two main types for implementing collision detection: bounding boxes and bounding spheres. Bounding boxes are better for rectangular objects while bounding spheres offer a better fit for rounded objects. Both bounding boxes and bounding spheres can be used to check for intersections and containment between game objects.
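The intersection tests behind XNA's BoundingSphere and BoundingBox types are simple to state. Below is a hedged Python sketch of the underlying math (plain tuples stand in for XNA's types; the function names are illustrative, not part of any API):

```python
def spheres_intersect(c1, r1, c2, r2):
    """True if two bounding spheres overlap: squared distance between
    centers is at most the squared sum of radii (no sqrt needed)."""
    dx, dy, dz = (a - b for a, b in zip(c1, c2))
    return dx*dx + dy*dy + dz*dz <= (r1 + r2) ** 2

def boxes_intersect(min1, max1, min2, max2):
    """True if two axis-aligned bounding boxes overlap on every axis."""
    return all(min1[i] <= max2[i] and min2[i] <= max1[i] for i in range(3))

print(spheres_intersect((0, 0, 0), 1, (1.5, 0, 0), 1))          # True: centers 1.5 apart, radii sum 2
print(boxes_intersect((0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3)))  # False: boxes are separated
```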
XNA L04 – Primitives, IndexBuffer and VertexBuffer, by Mohammad Shaker
This document discusses drawing 3D primitives and using vertex and index buffers in XNA game development. It begins with an overview of different primitive types like points, lines, and triangles. It then covers drawing triangles by defining vertex positions and colors. Next, it demonstrates creating a rotating tetrahedron using triangle lists. The document concludes by explaining how to create an icosahedron mesh using vertex and index buffers to store vertex data more efficiently for rendering. Key steps include generating vertex and index data, creating vertex and index buffers, and drawing indexed triangles.
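The space saving from index buffers can be shown without any graphics API. In this hedged Python sketch, plain lists stand in for XNA's VertexBuffer and IndexBuffer: two triangles sharing an edge need only 4 stored vertices plus 6 indices instead of 6 duplicated vertices.

```python
# A quad built from two triangles that share the edge 0-2.
vertices = [
    (0.0, 0.0, 0.0),  # 0: bottom-left
    (1.0, 0.0, 0.0),  # 1: bottom-right
    (1.0, 1.0, 0.0),  # 2: top-right
    (0.0, 1.0, 0.0),  # 3: top-left
]
indices = [0, 1, 2,   # first triangle
           0, 2, 3]   # second triangle, reusing vertices 0 and 2

# Expanding the index buffer recovers the full triangle list:
triangles = [tuple(vertices[i] for i in indices[t:t + 3])
             for t in range(0, len(indices), 3)]
print(len(vertices), len(indices), len(triangles))  # -> 4 6 2
```

For larger meshes such as the icosahedron in the slides, the saving grows, since most vertices are shared by five or more triangles.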
This document provides an overview of shaders in XNA game development. It discusses what shaders are and how they allow developers to program graphics pipelines rather than relying on fixed function pipelines. It also covers HLSL (High Level Shader Language) and how to define vertex formats, vertex shaders, and techniques in HLSL code files to render 3D objects with customized shaders. Specific topics covered include declaring effects, loading shader files, using techniques and passes, defining a custom vertex format structure, and writing a simple vertex shader to render colored triangles.
The document discusses experimenting with shaders in XNA game development. It describes replacing code in an HLSL file to output vertex color values directly from position data. This avoids color clipping issues. It also discusses passing unclipped position data to the pixel shader and interpolating color values properly. Examples of other shader techniques are briefly mentioned like texturing, lighting, shadow mapping, and post-processing effects.
The document discusses 3D rendering in WPF. It provides code examples for creating 3D models like a cuboid using triangles, adding lighting and cameras, and manipulating 3D objects. It also covers using the Viewport2DVisual3D control to display 2D UI elements in a 3D environment. Code is provided to construct the geometry and add 2D components like text blocks and buttons to the visual host.
The document discusses 2D graphics and particle engines in game development. It covers topics like SpriteBatches for drawing textures, acquiring fonts, texture atlases for animated sprites, rotating sprites by specifying a center point of rotation, and the anatomy of a 2D particle engine which includes particles, particle emitters that determine the location and number of particles created, and the engine itself. Code examples are provided for drawing sprites, text, and implementing animated sprites and rotating sprites.
The document provides an introduction and tutorial to Java 3D, a library for displaying three-dimensional graphics in Java. It covers installing Java 3D, creating a basic 3D program with a cube, adding lighting, positioning objects in 3D space using transformations, changing object appearances using materials and textures, and more. Examples of Java 3D code are provided throughout to demonstrate key concepts.
Harris corner detection is used to extract local features from images. It works by (1) computing the gradient at each point, (2) constructing a second moment matrix from the gradient, and (3) using the eigenvalues of this matrix to score how "corner-like" each point is. Points with a large, local maximum score are detected as corners. The Harris operator, which is a variant using the trace of the matrix, is commonly used due to its efficiency. Corners provide distinctive local features that can be matched between images.
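The three steps listed above can be sketched directly in NumPy. This is an illustrative implementation of the Harris response with a square summation window (the window size, k = 0.04, and the synthetic test image are all assumptions for the sketch, not values from the source):

```python
import numpy as np

def harris_response(img, k=0.04, win=2):
    """Harris corner response R = det(M) - k*trace(M)^2 at each pixel,
    where M is the second moment matrix summed over a (2*win+1) window."""
    Iy, Ix = np.gradient(img.astype(float))          # step 1: gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    H, W = img.shape
    R = np.zeros((H, W))
    for y in range(win, H - win):
        for x in range(win, W - win):
            # step 2: second moment matrix entries over the window
            Sxx = Ixx[y - win:y + win + 1, x - win:x + win + 1].sum()
            Syy = Iyy[y - win:y + win + 1, x - win:x + win + 1].sum()
            Sxy = Ixy[y - win:y + win + 1, x - win:x + win + 1].sum()
            # step 3: score via determinant and trace (Harris operator)
            det = Sxx * Syy - Sxy * Sxy
            trace = Sxx + Syy
            R[y, x] = det - k * trace * trace
    return R

# Synthetic image with a single corner at (6, 6):
img = np.zeros((12, 12))
img[6:, 6:] = 1.0
R = harris_response(img)
print(R[6, 6] > R[2, 2])   # True: corner scores above flat region
```

Flat regions score zero (both gradients vanish), edges score negative (one large eigenvalue dominates the trace), and only corners score strongly positive.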
Here is a function to calculate the factorial of an integer N using a for loop:
function fact = factorial(N)
    % Iterative factorial: multiplies 1 * 2 * ... * N.
    % Note: this definition shadows MATLAB's built-in factorial().
    fact = 1;
    for i = 1:N
        fact = fact * i;
    end
end
To test it:
N = 5;
result = factorial(N);
This function:
1. Initializes the factorial variable fact to 1 outside the loop
2. Uses a for loop from 1 to N to iterate over the integers
3. On each iteration, it multiplies the running fact variable by the current integer i
4. After the loop, fact will contain the final factorial
This is a complete JavaScript framework for building 3D games and experiences with HTML5, WebGL, WebVR, and Web Audio (https://www.babylonjs.com). BabylonJS's basic concepts are explained and illustrated through a hands-on lab provided by Mozilla.
This chapter discusses reflection and mirrors. Key points include:
1) Plane mirrors form images that are virtual, same size, and laterally inverted compared to the object. Spherical mirrors can form real or virtual images depending on the object distance.
2) Ray tracing diagrams can be used to determine the nature, size, and location of images formed by spherical mirrors based on the object distance and mirror parameters like radius of curvature and focal length.
3) Mirror equations relate the object and image distances (p and q), focal length (f), and magnification (M) of spherical mirrors. Real, inverted images form when the object is between the focal point and center of curvature. Virtual, erect images form when the object is inside the focal point.
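The mirror equation in point 3) can be checked numerically. A small Python sketch (distances in cm; sign convention assumed: positive q means a real image, negative M means inverted):

```python
def mirror_image(p, f):
    """Solve 1/p + 1/q = 1/f for the image distance q,
    and compute the magnification M = -q/p."""
    q = p * f / (p - f)
    M = -q / p
    return q, M

# Concave mirror with f = 10 cm; object at p = 15 cm,
# i.e. between the focal point F (10 cm) and center of curvature C (20 cm):
q, M = mirror_image(15, 10)
print(q, M)   # -> 30.0 -2.0  (real, inverted, enlarged)
```

The result matches the rule stated above: an object between F and C yields a real, inverted, enlarged image (here magnified two times).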
This document contains conceptual problems and their solutions related to optical images formed by mirrors and lenses. For concave mirrors, it discusses that the virtual image size depends on the object distance, and real images are possible. Convex mirrors never form real images. A concave mirror can form enlarged real images if the object is between the center of curvature and focal point. Plane mirrors form virtual images, and the eye location range to see the image is discussed. Spherical mirrors equations relate image and object distances. Refraction through a fish bowl or glass rod immersed in water is analyzed. A double concave lens problem applies lens equations to find the focal length, image location and size, and determines if the image is real/virtual and
Company of Heroes 2 (COH2) Rendering Technology: The cold facts of recreating..., by Daniel Barrero
Presentation at KGC2013 about the techniques developed for COH2 to reproduce the harsh winter conditions of the eastern front of World War II. It covers the technology developed for dynamic snow and ice rendering, including what worked and what didn't. It also covers the lighting changes and the conversion of the COH1 engine from a forward to a deferred renderer.
The document discusses a lecture on iPhone application development that covers views, drawing, and animation. It provides information on views including view fundamentals, the view hierarchy, view structures like frames and bounds, and creating and manipulating views. It also discusses drawing in views by overriding the drawRect method and using Core Graphics for drawing operations.
Animation involves rapidly displaying a sequence of images to create the illusion of movement. When developing mobile web games or animations, developers must consider resource management, object representation, animation techniques, and event processing to optimize performance. Hardware acceleration is also important, as it improves the performance of canvas and CSS3D transformations on mobile devices. The Collie library is designed to help with high performance animation across devices by supporting optimized rendering methods and detailed region detection of objects.
Shadow Mapping with Today's OpenGL Hardware, by Mark Kilgard
The document discusses shadow mapping, a technique for real-time shadow generation in 3D graphics. Shadow mapping works by rendering the scene from the point of view of the light to generate a depth map, then using that depth map to determine whether surfaces are in shadow during the main rendering pass from the camera's point of view. Hardware support for shadow mapping allows efficient shadow tests by comparing depth map values to fragment depths.
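The depth comparison at the heart of shadow mapping can be sketched without a GPU. In this hedged Python illustration, a 2D array stands in for the light-space depth map, and the test simply asks whether anything nearer to the light was recorded at the fragment's texel (the bias value is an assumption, added to avoid self-shadowing artifacts):

```python
import numpy as np

# Depth map "rendered" from the light: nearest depth per texel.
depth_map = np.full((8, 8), 10.0)   # a ground plane at depth 10
depth_map[2:6, 2:6] = 5.0           # an occluder quad at depth 5

def in_shadow(u, v, frag_depth, bias=0.05):
    """Shadow test: the fragment is shadowed if something strictly
    nearer to the light was recorded at its texel."""
    return frag_depth > depth_map[v, u] + bias

print(in_shadow(3, 3, 10.0))  # True: ground point behind the occluder
print(in_shadow(0, 0, 10.0))  # False: nothing nearer recorded here
```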
The document describes how to create water flow maps using Houdini. It explains that a tessellated grid is used to represent the water surface, which is then deformed using techniques like combing normals or magnet deformers to generate flow patterns. The deformed grid is then rendered to create a texture map representing the water flow. Specific nodes and techniques in Houdini like comb tools, metaballs, and attribute transfers are demonstrated for generating realistic yet controllable water flow maps.
The document describes a geometry shader-based approach to bump mapping that has several advantages over traditional CPU-based approaches. The geometry shader constructs an object-to-texture space mapping for each triangle, allowing lighting computations to be done efficiently in texture space in the pixel shader. It addresses issues like texture mirroring and lighting discontinuities. Examples and Cg source code are provided to illustrate the technique.
This document provides an overview of views, drawing, and animation in iPhone application development. It discusses view fundamentals like the view hierarchy, frames and bounds, and view-related structures. It covers drawing by overriding drawRect and using Core Graphics. It also discusses animating view properties.
This document provides an overview of graphics and animations in Android. It discusses the architecture including surfaces, views, and view groups. It covers graphics topics such as Skia, OpenGL, rendering scripts, surfaces, and drawing with canvases, paints, shaders, color filters, and bitmaps. It also discusses animations including the animation superclass, transformation, fading, sequence, cross-fading, and layout animations. It provides tips on performance and previews future property animation capabilities in Android.
The document provides instructions on creating and customizing Java applets. It explains that applets extend the JApplet class and must implement init(), start(), stop(), and paint() methods. It also lists Graphics methods that can be used to draw on applets and describes how to add mouse event handling.
2. reflection (solved example + exercise), by SameepSehgal1
This document contains 20 solved examples related to the concepts of reflection.
The examples cover topics like laws of reflection, image formation using plane and curved mirrors, relative motion of object and image in plane mirrors, and numerical problems to calculate angles of incidence and reflection, focal length of curved mirrors, position and nature of images. Detailed step-by-step solutions are provided for each example.
The examples range from basic to slightly complex, involving application of mirror equations, concept of virtual objects and images, and relative motion concepts to solve problems related to reflection of light.
Introduction to Game Programming Tutorial, by Richard Jones
The slides to accompany the Introduction to Game Programming tutorial I ran at LCA 2010. The tutorial ran over 90 minutes with the participants following along.
The Ring programming language version 1.5.3 book - Part 48 of 184, by Mahmoud Samir Fayed
This document provides documentation on creating a 2D game engine in Ring. It discusses organizing the project into layers, including the games layer, game engine classes layer, and interface to graphics library layer. It then describes the key classes in the game engine - Game, GameObject, Sprite, Text, Animate, Sound, and Map. It provides details on the attributes and methods for each class. It also provides an example of how to load the game engine library, create a Game object, and start drawing text to the screen. The document is intended to teach how to structure a 2D game engine project using different programming paradigms in Ring.
Maximizing performance of 3D user-generated assets in Unity, by WithTheBest
The document discusses optimizing 3D assets in Unity. It begins with an introduction and agenda, then covers optimization principles through examples from a trail renderer asset. The examples demonstrate reducing garbage collection by using queues instead of arrays, reusing components instead of creating new game objects, and comparing distances through dot products instead of taking square roots. Hands-on demonstrations are provided. Key takeaways are to profile for garbage collection, eliminate it by reusing objects when possible, and optimize comparisons.
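The squared-distance trick mentioned in the summary is worth spelling out: since sqrt is monotonic, comparing squared distances against a squared threshold gives the same answer without a square root per check. A hedged Python sketch (the talk's own examples are in Unity/C#):

```python
def within_range(a, b, max_dist):
    """Compare squared distances to avoid a square root per check.
    Equivalent to dist(a, b) <= max_dist, but cheaper."""
    dx, dy, dz = a[0] - b[0], a[1] - b[1], a[2] - b[2]
    return dx*dx + dy*dy + dz*dz <= max_dist * max_dist

print(within_range((0, 0, 0), (3, 4, 0), 5))    # True: distance is exactly 5
print(within_range((0, 0, 0), (3, 4, 0), 4.9))  # False
```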
The document discusses Microsoft's XNA game development platform. XNA provides tools for game development in a managed environment across Windows and Xbox 360. It includes the XNA Framework, content pipeline, and game studio. Developers need the .NET Framework and Visual Studio to use XNA. The framework provides graphics, audio, input and other functionality. Games are built using components and starter kits provided by XNA and the developer community.
Ultra Fast, Cross Genre, Procedural Content Generation in Games [Master Thesis], by Mohammad Shaker
In my MSc thesis, I revisit the problem of procedurally generating content for physics-based games, which I first investigated in my BSc graduation thesis. This time around I propose two novel methods: the first is projection-based, for faster generation of physics-based game content. The other, Progressive Generation, is a generic, wide-range, cross-genre method, customisable with a playability check, all bundled in a fast progressive approach. This new method is applied to two completely different games: NEXT and Cut the Rope.
The document discusses object cloning in C# programming. It explains shallow cloning versus deep cloning and demonstrates different approaches to cloning objects, including using the ICloneable interface and MemberwiseClone() method. It notes issues with these approaches. The fastest way to do a deep clone, it states, is to serialize an object to a stream and then deserialize it back, which performs a full deep copy. Code is provided to implement this serialization/deserialization cloning approach.
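The serialize-then-deserialize idea described for C# has a direct analogue in other languages. This hedged Python sketch uses pickle to round-trip an object graph through a byte stream, producing a fully independent deep copy (the Node class is invented for illustration; it is not from the source):

```python
import pickle

class Node:
    """Tiny linked structure to demonstrate deep copying."""
    def __init__(self, value, child=None):
        self.value = value
        self.child = child

original = Node(1, Node(2))
# Round-trip through a byte stream performs a full deep copy,
# mirroring the C# serialize/deserialize cloning approach:
clone = pickle.loads(pickle.dumps(original))

clone.child.value = 99
print(original.child.value, clone.child.value)  # -> 2 99 (independent copies)
```

A shallow copy (like C#'s MemberwiseClone) would have shared the child node, so mutating the clone would have changed the original too.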
This document provides an introduction to event-driven programming and forms using Delphi. It discusses various controls that can be used in forms like labels, edits, combo boxes, check boxes, group boxes, radio buttons, radio groups, and list boxes. It provides examples of how to use these controls and their properties. Tips are also provided like using auto-completion, differentiating between control properties, and changing the application icon. Functions for manipulating strings are also listed.
Utilizing Kinect Control for a More Immersive Interaction with 3D Environment, by Mohammad Shaker
Utilizing Kinect Control for a More Immersive Interaction with 3D Environment. Implemented by Saed Haj Ali, Kinda Tarboush, and Marah Halawah, and supervised by me, Dr. Noor Shaker, and Dr. Ammar Joukhadar.
The document discusses various web technologies including HTML5, CSS, JavaScript, jQuery, ASP.NET, MVC pattern, and more. It provides an overview of each topic with definitions and examples. It also includes a brief history and future directions of web standards.
This document provides an overview of various topics related to mobile application development including cloud computing, interaction design, Android, iOS, web technologies like HTML5 and JavaScript, programming languages like Java and Objective-C, frameworks, gaming, user experience design, and more. It discusses tools for Android development and covers basics of creating an Android app like setting up the IDE, creating the UI, adding interactivity, debugging, and referencing documentation.
The document discusses Windows Workflow Foundation (WF), a framework that enables users to create system or human workflows in applications. WF allows for workflows within line-of-business apps, user interface page flows, document workflows, and more. A WF project can be created to define workflows as classes using C# or XML. The document provides a link to an MSDN tutorial about a simple expense report workflow that routes approvals based on amount and uses controls like If, Assign, and Sequence.
WPF L01 - Layouts, Controls, Styles and Templates, by Mohammad Shaker
The document provides an overview of Windows Presentation Foundation (WPF) layouts, controls, and other UI elements. It includes code examples for common controls like text boxes, buttons, grids, menus, toolbars and dialog boxes. It also covers more advanced elements like tab controls, scroll viewers, expanders and popups. The document appears to be from a WPF starter course, aiming to introduce developers to the core concepts and building blocks of WPF applications.
The document describes code for implementing a client-server application using TCP sockets in C#. It includes code for initializing connections on both the client and server sides, with the client connecting to the server on a specific port and IP address. Event handler methods are used to handle connection events like accepting new client connections, receiving and sending data. The overall purpose is to create a chat application where the client can connect to the server and they can exchange messages.
Short, Matters, Love - Passioneers Event 2015Mohammad Shaker
Short, Matters, Love is a presentation I prepared for freshmen students at the Faculty of Information Technology in Damascus, Syria organised by Passioneers - 2015
C# Starter L06-Delegates, Event Handling and Extension MethodsMohammad Shaker
The document discusses delegates, events, and extension methods in C#. It explains that delegates allow functions to be passed as parameters and can point to methods. Events use delegates to call subscriber methods when an event is raised. Extension methods extend existing classes with new methods without modifying the original class. The document provides examples of how to use delegates to handle events, attach multiple event handlers, and create anonymous methods. It also demonstrates how to write an extension method to add new functionality to the string class.
This is my project in my third year of studying in the Faculty of Information Technology Engineering in Damascus, Syria, 2011 with Ismaeel Abo Abdalla, Zaher Wanli and Mhd Noor Alhamwi. The project simulates the physics of the car movement with/without Anti Brake-Lock System (ABS), Electronic Stability Program (ESP) and Global Positioning System (GPS) all in realtime.
The document provides an overview of various computer graphics and OpenGL concepts including cube maps, texture mapping, lighting, blending, shadowing, fog, blurring, cameras, clipping, reflection, particles systems, loaders for 3D objects, terrain generation, and sound engines. It also includes code snippets and explanations for implementing concepts like lighting, blending, shadow mapping, and simple particle systems in OpenGL. The document serves as a short introductory course covering essential topics for OpenGL graphics programming.
Teachers are supposed to teach how to study new ones.
Students are supposed to learn how to study new ones.
But,
teachers do not teach how to study new ones; and students do not learn how to study new ones.
This is the basis of all academic problems and raise of educational cost.
By reading this document, you will learn how to study new ones.
Quantitative Comparison of Artificial Honey Bee Colony Clustering and Enhance...idescitation
This paper introduces a comparison of two popular
clustering algorithms for breast DCE-MRI segmentation
purpose. Magnetic resonance imaging (MRI) is an advanced
medical imaging technique providing rich information about
the human soft tissue anatomy. The goal of breast magnetic
resonance image segmentation is to accurately identify the
principal mass or lesion structures in these image volumes.
There are many methods that exist to segment the breast
DCE-MR images. One of these, K-means clustering procedure
provides effective solutions in many science and engineering
fields. They are especially popular in the pattern classification
and signal processing areas and can segment the breast DCE-
MRI with high precision. The artificial bee colony (ABC)
algorithm is a new, very simple and robust population based
optimization algorithm that is inspired by the intelligent
behavior of honey bee swarms. This paper compares the
performance of two image segmentation techniques in the
subject of breast DCE-MR image. In the experiments, the
real dynamic contrast enhanced magnetic resonance images
(DCE- MRI) are used. Results show that artificial bee colony
algorithm performs better in terms of segmentation accuracy,
robustness and speed of computation.
ePatCon11: Miron-Shatz - Inserting the Human Factor in Advanced Technologye-Patient Connections
The document discusses factors that influence patient adherence to medical treatment plans. It notes that around a third of prescriptions are never filled and compliance rates are low even for prescriptions that are picked up. Barriers to adherence include lack of knowledge about diseases and medications, concerns about side effects and costs, and lack of motivation. The document advocates for addressing these barriers by providing personalized messages that create motivation, ensuring patients comprehend their conditions and treatments, and identifying what specific issues cause non-adherence for individual patients. Digital tools should be designed using these behavioral principles to increase long-term engagement.
NEDRA Big Data, Big Gifts: Social Donor Management EverTrue
This document summarizes a presentation given by Jesse Bardo of EverTrue on using social media data from LinkedIn and Facebook to improve donor management and fundraising. Some key points include: LinkedIn contains valuable career and industry data that can help segment donors, while Facebook "Likes" are correlated with increased donation rates. Analyzing these social graphs can provide insights to prospect identification, volunteer recruitment, and other areas to potentially increase fundraising returns.
The document discusses the graphics rendering pipeline for virtual reality displays. It covers topics such as the graphics pipeline, stereo rendering, coordinate space transformations, shaders, lens distortion, and using WebGL and three.js for 3D graphics rendering in web browsers. The graphics pipeline involves vertex processing, rasterization, and fragment processing to convert 3D scene descriptions into 2D images. Key steps include model, view, and projection transformations as well as vertex and fragment shaders. Stereo rendering and lens distortion are also covered to enable VR displays.
The document discusses creating and animating custom views in Android. It covers topics like why to use custom views, the View class hierarchy, drawing and styling custom views, and different techniques for animating views including using Runnables, ValueAnimators, and ObjectAnimators. Key points include how to subclass View, override drawing methods like onDraw(), apply XML styling attributes, and animate view properties over time through interpolation of values.
Structure from motion is a computer vision technique used to recover the three-dimensional structure of a scene and the camera motion from a set of images. It involves detecting feature points in multiple images, matching corresponding points across images, estimating camera poses and orientations, and reconstructing the 3D geometry of scene points. Large-scale structure from motion can reconstruct scenes from thousands of images but requires solving very large optimization problems. Applications include 3D modeling, surveying, robot navigation, virtual reality, augmented reality, and simultaneous localization and mapping.
Structure from motion is a computer vision technique used to recover the three-dimensional structure of a scene and the camera motion from a set of images. It can be used to build 3D models of scenes without any prior knowledge of the camera parameters or 3D locations of the scene points. Structure from motion involves detecting feature points in multiple images, matching the features between images, estimating the fundamental matrices between image pairs, and then optimizing a bundle adjustment problem to simultaneously compute the 3D structure and camera motion parameters. Some applications of structure from motion include 3D modeling, surveying, robot navigation, virtual and augmented reality, and visual effects.
SIFT extracts scale and rotation invariant features from images by using differences of Gaussians to identify keypoints and creating histograms of local gradients to describe keypoints. It achieves scale invariance through scale space analysis using Gaussian pyramids and difference of Gaussians, rotation invariance by assigning a consistent orientation to each keypoint based on local gradient histograms, and other invariances through the gradient-based descriptor. SIFT has been widely used for applications like image matching, object recognition and mosaicing due to its robustness to changes in scale, rotation, illumination and viewpoint.
SIFT extracts scale and rotation invariant features from images by using differences of Gaussians to identify keypoints and creating histograms of local gradients to describe keypoints. It achieves scale invariance through scale space analysis using Gaussian pyramids and difference of Gaussians, rotation invariance by assigning a consistent orientation to each keypoint based on local gradient histograms, and other invariances through the gradient-based descriptor. SIFT has been widely used for applications like image matching, object recognition and mosaicing due to its robustness to changes in scale, rotation, illumination and viewpoint.
OpenGL is a cross-language, cross-platform API for rendering 2D and 3D graphics via hardware acceleration. It uses shaders and programmable pipelines to process vertices and fragments. The rendering pipeline involves transforming vertices, assembling triangles, rasterization, applying textures, testing fragments, and writing pixels to the framebuffer. Key concepts include transformation matrices, lighting, and the vertex and fragment shaders that operate on data at each pipeline stage.
affine transformation for computer graphicsDrSUGANYADEVIK
Graphics toolkits take geometry as input and output pixel data. They handle rendering details. OpenGL is an open standard graphics toolkit that provides functions for modeling, rendering, and manipulating the framebuffer. It is portable, hardware supported, and simple to program. In the coming weeks, the document will cover the math and algorithms behind OpenGL and how to access them. Coordinate systems are fundamental to graphics and describe point locations. Transformations convert between systems.
Presented at the 2011 IEEE 7th International Conference on Intelligent Computer Communication and Processing (ICCP 2011), August 26th, 2011 in Cluj-Napoca, Romania.
Publication: http://bit.ly/x1OpFL
Abstract:
In this paper we introduce a system for semantic understanding of traffic scenes. The system detects objects in video images captured in real vehicular traffic situations, classifies them, maps them to the OpenCyc1 ontology and finally generates descriptions of the traffic scene in CycL or cvasi-natural language. We employ meta-classification methods based on AdaBoost and Random forest algorithms for identifying interest objects like: cars, pedestrians, poles in traffic and we derive a set of annotations for each traffic scene. These annotations are mapped to OpenCyc concepts and predicates, spatiotemporal rules for object classification and scene understanding are then asserted in the knowledge base. Finally, we show that the system performs well in understanding traffic scene situations and summarizing them. The novelty of the approach resides in the combination of stereo-based object detection and recognition methods with logic based spatio-temporal reasoning.
This is a primer on some of the foundations of 3D math used in computer graphics programming. This is the version of the talk from CocoaConf Chicago 2015.
This document provides an introduction to real-time rendering concepts. It discusses 3D mathematics including coordinate systems, primitives, and affine transformations. It then explains the graphics pipeline including programmable shaders and GPU architecture. Finally, it covers simulating light through reflection models and shading models. Key topics include primitive types and topologies, constructing polygon meshes, transforming objects, the view and projection transforms, and how shaders are used in the graphics pipeline to simulate lighting effects.
Technical presentation of the gesture based NUI I developed for the Aigaio smart conference room in IIT Demokritos
Demo In Greek:
https://www.youtube.com/watch?v=5C_p7MHKA4g
This legal document provides several notices and disclaimers regarding the information presented. Specifically:
- The presentation is for informational purposes only and Intel makes no warranties regarding the information or summaries of the information.
- Any performance claims depend on system configuration and hardware/software/service activation. Performance varies depending on system configuration.
- The sample source code is released under the Intel Sample Source Code License Agreement.
- Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries. Other names may belong to other owners.
- Copyright of the content is held by Intel Corporation and all rights are reserved.
This document discusses techniques for batching and rendering 2D quads efficiently on mobile using vertex and fragment shaders. It explains how to collect quads with the same state into one draw call, pack vertex data into buffers, and process the data in shaders using constants from the GPU. The techniques aim to batch up to 31 quads into one draw call. Real-world tests show performance improvements. It also provides suggestions for batching spine animations and links to related resources.
With cheap cameras becoming ubiquitous the camera has become probably the most
important sensor for many applications.
However extracting usable information from the images produced by cameras is
non-trivial. There have been many published successes in recent years using deep
learning (multi-layered convolutional neural networks) but it’s not always
necessary to apply such techniques to get useful results for many applications.
This talk will focus on “classical” machine vision using java and the OpenCV
library. We’ll start with a quick refresher on how image data is represented and
then cover topics such as determining if an image is blurred (and therefore
unusable) and then explore a number of techniques such as shape and face
detection.
This document discusses various techniques for 3D computer animation including modeling, representing objects, key frame animation, motion capture, and morphing. It covers modeling primitives like polygons, spline curves, and meshes. Representing objects involves transforming them from model to world coordinates. Key frame animation involves setting parameter values at key frames for the system to interpolate between. Motion capture involves attaching sensors to record live actor motion. Morphing smoothly shifts between images by warping control points. The future of animation may involve more realistic human characters and interactions through techniques like motion capture and tissue simulation.
This document is a presentation on algorithms, computer graphics, and mathematics for game developers and computer scientists. It covers topics like the Twelve-Marble Problem, Fibonacci sequences, 3D modeling with lathe modifiers, cameras and lights in Three.js, depth of field, and assigning homework on modeling a chess board and creating scenes with different lights and cameras. Homework is due on July 2nd.
This document discusses Unity3D and game development. It provides an overview of Unity3D and other game engines like Unreal Engine, comparing their features and costs. Examples are given of popular games made with each engine. The document also lists several games the author has made using Unity3D and provides some additional resources and references.
The document discusses various topics related to mobile application design including cloud interaction, Android touch and gesture interaction, UI element sizing, screen sizes, changing orientation, retaining objects during configuration changes, multi-device targeting, and wearables. It provides examples and guidelines for designing applications that can adapt to different devices and configurations.
The document discusses principles of interaction design, color theory, and game design. It covers topics like primary and secondary colors, color harmonies, using color to attract attention and set mood, the importance of white space and negative space in design, and how games like Journey, Fez, Luftrausers, Monument Valley, Ori and the Blind Forest, and Limbo effectively use techniques like the rule of thirds, establishing a sense of goal, and game feel.
This document discusses various topics related to typography including letter shapes like the letter "T", how words for concepts like water have evolved across languages, symbols for ideas like fish, and different writing styles such as styles that would be impossible to write. It examines typography from multiple perspectives like shapes, language evolution, symbols, and stylization.
Interaction Design L04 - Materialise and CouplingMohammad Shaker
This document discusses various aspects of coupling and interaction design in mobile applications. It addresses good and bad examples of coupling on Android and iOS, such as how apps are switched between. It also discusses using accurate text to represent backend processes, and using faster progress bars to reduce cognitive load on users. Visualizations are suggested to improve progress bars.
The document discusses various options for storing data in an Android application including SharedPreferences for simple key-value pairs, internal storage for private files, external storage for public files, SQLite databases for structured data, network connections for storing data on a web server, and ContentProviders for sharing data between applications. It provides details on using SharedPreferences, internal SQLite databases stored in the application's files, and ContentProviders for sharing Contacts data with other apps.
The document discusses various interaction design concepts in Android including toasts, notifications, threads, broadcast receivers, and alarms. It provides code examples for creating toasts, setting notification priorities, and scheduling alarms to fire at boot or at specific times using the AlarmManager. Broadcast receivers can be used to set alarms during device boot by listening for the BOOT_COMPLETED intent filter and implementing the onReceive callback.
This document provides an overview of various mobile development technologies and frameworks including Cloud, iOS, Android, iPad Pro, Xcode, Model-View-Controller (MVC), C, Objective-C, Foundation data types, functions calls, Swift, iOS Dev Center, coordinate systems, Windows Phone, .NET support, MVVM, binding, WebClient, and navigation. It also mentions tools like Expression Blend and frameworks like jQuery Mobile, PhoneGap, Sencha Touch, and Xamarin.
This document discusses various topics related to mobile app design including user experience (UX), user interface (UI), interaction design, user constraints like limited data/battery and screen size, and using context like location to improve the user experience. It provides examples of a pizza ordering app and making ATM machines smarter. It also covers design patterns and principles like focusing on user needs and testing designs through feedback.
This document discusses principles of visual organization and responsive grid systems for web design. It mentions laws of proximity, similarity, common fate, continuity, closure, and symmetry which help organize visual elements. It also discusses column-based and ratio-based grid systems as well as responsive grid systems that adapt to different screen widths, citing examples from Pinterest, Bootstrap, and the website www.mohammadshaker.com which demonstrates responsive design.
This document provides an overview comparison of key aspects of mobile app development for iOS and Android platforms. It discusses differences in app store policies, pricing, monetization options like ads and in-app purchases, development tools including engines like Unity and Unreal, and the publishing process. Key points mentioned include Android apps averaging over 2.5x the price of similar iOS apps, Apple's restrictive app review policies, the 70/30 revenue split in Google Play Store, and tools for user testing and publishing on both platforms. It also shares stats on the revenue and success of specific apps like Monument Valley.
The document discusses various ways to implement cloud functionality in Android applications using services like Parse and Android Backup. It provides code examples for backing up app data to the cloud using Android Backup, setting up a backend using Parse, pushing notifications with Parse, and performing analytics tracking with Parse.
This document discusses several topics related to developing Android apps including:
1. Adding markers to maps by setting an onMapClickListener and adding a MarkerOptions to the clicked location.
2. Signing into apps with Google accounts using the Google Identity API.
3. Following Material Design guidelines for visual style and user interfaces.
4. Maintaining multiple APK versions and using OpenGL ES for games.
This document discusses various techniques for styling Android applications including adding styles, overriding styles, using themes, custom backgrounds, nine-patch images, and animations. It provides links to tutorials and documentation on animating views with zoom animations and other motion effects.
This document provides information about various Android development topics including:
- ListAdapters and mapping models to UI using an MVVM-like pattern
- Creating custom lists
- Starting a new activity using an Intent and passing data between activities
- Understanding the Android activity lifecycle and methods like onPause() and onResume()
- Handling configuration changes that recreate the activity
- Working with permissions
The document discusses common patterns for working with lists, launching new screens, and handling activity state changes. It also provides code examples for starting a new activity, passing data between activities, and handling the activity lifecycle callbacks.
This document provides an overview of game development topics including types of games, game engines, platforms, and ratings systems. It compares the popular game engines Unity and Unreal Engine, noting their key features such as scripting languages, cost structures, and graphics capabilities. Examples of games built with Unity and Unreal are also mentioned. The document concludes with brief discussions of other game engines like Source and CryEngine, gamification techniques, and uses of games for serious purposes.
1. Mohammad Shaker discusses his rhythm-based game SyncSeven which uses procedural content generation to generate levels based on music.
2. He outlines the music-based content generation process which extracts notes, rhythm and beats from music to drive a game model through a mapper.
3. The talk describes implementing SyncSeven using C# and Unity3D, including features like ads, leaderboards through Google Play, and social sharing.
A software employing Domain-Specific Language (DSL) for Roboconf architecture for distributed cloud-based applications. HTML5 graphical representation along with ECA scaling rules are implemented.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
6. XNA Matrices
• Matrix.CreateRotationX, Matrix.CreateRotationY, and Matrix.CreateRotationZ: Each creates a rotation matrix around the corresponding axis.
• Matrix.CreateTranslation: Creates a translation matrix (along one or more axes).
• Matrix.CreateScale: Creates a scale matrix (along one or more axes).
• Matrix.CreateLookAt: Creates a view matrix that positions the camera, given the camera's 3D position, the 3D point it is facing, and which direction is "up" for the camera.
• Matrix.CreatePerspectiveFieldOfView: Creates a projection matrix that uses a perspective view.
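The composition order these helpers imply can be seen by working the math by hand. Below is a minimal Python sketch (a stand-in for the XNA calls, which need the framework to run) of a world matrix built as scale × rotation × translation, using XNA's documented row-vector layout; the helper names are mine:

```python
import math

def mat_mul(a, b):
    # 4x4 row-major product: result[i][j] = sum_k a[i][k] * b[k][j]
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def create_scale(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

def create_rotation_y(angle):
    c, s = math.cos(angle), math.sin(angle)
    # Row-vector layout, matching the matrix XNA documents for CreateRotationY
    return [[c, 0, -s, 0], [0, 1, 0, 0], [s, 0, c, 0], [0, 0, 0, 1]]

def create_translation(x, y, z):
    # Under the row-vector convention, translation sits in the last row
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [x, y, z, 1]]

def transform(point, m):
    # Treat the point as a row vector [x, y, z, 1] and multiply on the left
    row = [point[0], point[1], point[2], 1.0]
    return [sum(row[k] * m[k][j] for k in range(4)) for j in range(3)]

# world = scale * rotation * translation: scale first, then rotate, then move
world = mat_mul(mat_mul(create_scale(2.0), create_rotation_y(math.pi / 2)),
                create_translation(10.0, 0.0, 0.0))
p = transform((1.0, 0.0, 0.0), world)  # (1,0,0) -> (2,0,0) -> (0,0,-2) -> (10,0,-2)
print(p)
```

Reversing the order (translating before rotating) would swing the object around the origin instead of spinning it in place, which is why the multiplication order matters.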
11. RULE
TO SEE A 3D SCENE YOU SHOULD SET UP:
CAMERA
12. RULE
TO SEE A 3D SCENE YOU SHOULD SET UP:
CAMERA
PROJECTION
13. RULE
TO SEE A 3D SCENE YOU SHOULD SET UP:
CAMERA
PROJECTION
WORLD MATRIX
14. RULE
TO SEE A 3D SCENE YOU SHOULD SET UP:
CAMERA (Singleton, for all objects)
PROJECTION (Singleton, for all objects)
WORLD MATRIX (For each object separately)
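The CAMERA part of the rule is the view matrix that Matrix.CreateLookAt produces. A Python sketch of the standard right-handed look-at construction (the layout XNA documents, with translation in the last row; helper names are my own):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def create_look_at(eye, target, up):
    # Right-handed camera basis: the camera looks down its local -Z axis
    zaxis = normalize([e - t for e, t in zip(eye, target)])
    xaxis = normalize(cross(up, zaxis))
    yaxis = cross(zaxis, xaxis)
    return [[xaxis[0], yaxis[0], zaxis[0], 0],
            [xaxis[1], yaxis[1], zaxis[1], 0],
            [xaxis[2], yaxis[2], zaxis[2], 0],
            [-dot(xaxis, eye), -dot(yaxis, eye), -dot(zaxis, eye), 1]]

def transform(point, m):
    row = [point[0], point[1], point[2], 1.0]
    return [sum(row[k] * m[k][j] for k in range(4)) for j in range(3)]

# Camera at (0, 0, 10) looking at the origin, +Y up
view = create_look_at([0.0, 0.0, 10.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
p = transform((0.0, 0.0, 0.0), view)  # target ends up 10 units ahead, on -Z
print(p)
```

Because the camera is shared, this one matrix is computed once per frame and reused for every object, unlike the per-object world matrix.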
42. Orthographic Projections
•An orthographic projection can be created with the following code:
Matrix.CreateOrthographic(float width, float height, float zNearPlane, float zFarPlane);
43. Orthographic Projections
•Off-center orthographic projection:
Matrix.CreateOrthographicOffCenter(float left,
float right,
float bottom,
float top,
float zNearPlane, float zFarPlane);
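As an illustrative sketch (the viewport size and plane distances are made-up values), an orthographic projection matched to the screen might look like:

```csharp
// Illustrative: match the projection to an 800x600 back buffer so one
// world unit corresponds to one pixel.
Matrix orthoProjection =
    Matrix.CreateOrthographic(800f, 600f, 0.1f, 1000f);

// The off-center variant lets the origin sit at a screen corner
// (here top-left) instead of the center.
Matrix offCenter =
    Matrix.CreateOrthographicOffCenter(0f, 800f, 600f, 0f, 0.1f, 1000f);
```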
61. World Matrix
•Example
•Let’s assume that the coordinates of the triangle vertices are as follows:
62. World Matrix
•Example
•To translate 40 units along the y axis’s positive direction, all you need to do is add 40 to each y position, and you have the new coordinates for the vertices:
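That per-vertex addition is exactly what a translation matrix does for you. A sketch using real XNA calls (the vertex coordinates are illustrative):

```csharp
// Illustrative: move a vertex 40 units up the y axis with a matrix
// instead of editing each coordinate by hand.
Vector3 vertex = new Vector3(5f, 0f, 5f);
Matrix moveUp = Matrix.CreateTranslation(0f, 40f, 0f);
Vector3 moved = Vector3.Transform(vertex, moveUp); // (5, 40, 5)
```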
68. Transformations
•Create a matrix that rotates around the x-axis:
Matrix.CreateRotationX(float angleInRadians);
•Create a matrix that rotates around the y-axis:
Matrix.CreateRotationY(float angleInRadians);
•Create a matrix that rotates around the z-axis:
Matrix.CreateRotationZ(float angleInRadians);
Transformation order: Identity → Scale → Rotate → Orbit → Translate
69. Transformations
•Create a matrix that rotates points around an arbitrary axis:
Matrix.CreateFromAxisAngle(Vector3 axis, float angleInRadians);
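The Identity → Scale → Rotate → Orbit → Translate order can be sketched as a single world-matrix expression (all the specific scales, angles, and distances here are illustrative):

```csharp
// Illustrative I-S-R-O-T sketch: spin the object about its own center,
// orbit it around a point 20 units away, then place it in the scene.
Matrix world =
      Matrix.Identity
    * Matrix.CreateScale(1.5f)                          // Scale
    * Matrix.CreateRotationY(MathHelper.ToRadians(45))  // Rotate (self)
    * Matrix.CreateTranslation(20f, 0f, 0f)             // offset from orbit center
    * Matrix.CreateRotationY(MathHelper.ToRadians(30))  // Orbit
    * Matrix.CreateTranslation(0f, 5f, 0f);             // Translate into the scene
```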
90. Basic Matrices - A Final Example
Vector3 cameraPosition = new Vector3(30.0f, 30.0f, 30.0f);
Vector3 cameraTarget = new Vector3(0.0f, 0.0f, 0.0f); // Look back at the origin
float fovAngle = MathHelper.ToRadians(45); // convert 45 degrees to radians
float aspectRatio = (float)graphics.PreferredBackBufferWidth / graphics.PreferredBackBufferHeight; // cast to avoid integer division
float near = 0.01f; // the near clipping plane distance
float far = 100f; // the far clipping plane distance
Matrix world = Matrix.CreateTranslation(10.0f, 0.0f, 10.0f);
Matrix view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(fovAngle, aspectRatio, near, far);