The document discusses different methods for creating skyboxes and terrains in XNA game development. It provides code examples for creating a skydome using a model and single texture. It also discusses loading skybox textures, creating the skybox from six faces with different textures, and manipulating the terrain through a vertex buffer and height map. Websites referenced for additional skybox and terrain tutorials include Riemers, Rbwhitaker, Innovative Games, and PacktPub.
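The height-map idea in that summary can be sketched in plain Java. This is a conceptual illustration only (the class and method names below are made up, not XNA API): each sample in the height map becomes the y coordinate of one vertex in a regular grid, which would then be uploaded to a vertex buffer.

```java
// Turns a 2D height map into a flat array of (x, y, z) vertex positions,
// mirroring what the XNA terrain tutorials do before filling a VertexBuffer.
public class HeightMapTerrain {
    // heights[row][col] holds elevations sampled from a height-map image;
    // cellSize is the horizontal spacing between neighbouring vertices.
    public static float[] buildVertices(float[][] heights, float cellSize) {
        int rows = heights.length, cols = heights[0].length;
        float[] verts = new float[rows * cols * 3];
        int i = 0;
        for (int z = 0; z < rows; z++) {
            for (int x = 0; x < cols; x++) {
                verts[i++] = x * cellSize;  // x position
                verts[i++] = heights[z][x]; // y position (elevation)
                verts[i++] = z * cellSize;  // z position
            }
        }
        return verts;
    }
}
```

Manipulating the terrain then amounts to editing `heights` and regenerating (or partially updating) the vertex data.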
This document discusses core animation in iOS, including:
1. UIView animations and animatable properties like frame, bounds, center, and transform.
2. Using CALayer for core animation, including implicit layers associated with UIViews and creating explicit layers.
3. Animatable layer properties like position, bounds, contents and how to animate them using CAAnimation subclasses.
Bringing Virtual Reality to the Web: VR, WebGL and CSS – Together At Last! - FITC
Virtual Reality development has become very active recently, with the availability of low cost and high quality headsets, motion tracking equipment, and sensors. However, most VR app development is happening natively: users are stuck in the days of needing to download the right binary, trust a third party that its code isn't malicious, and fix compatibility issues themselves. Developers, meanwhile, must target multiple platforms, and so often ignore those with fewer users. Wouldn't it be great if high quality VR content could instead be delivered through the Web?
In this session, Vladimir Vukicevic will address additions to HTML, CSS, and WebGL that Mozilla is experimenting with which allow Web developers to create immersive VR experiences. Everything from pure VR WebGL content to responsive HTML and CSS that can shift from mobile to tablet to desktop to VR will be covered. Additionally, Vladimir will discuss delivering VR video via the Web, as well as how to mix WebGL and CSS content in a true 3D space.
OBJECTIVE
To show how VR and the Web work together, and the techniques for bringing VR content to the Web.
TARGET AUDIENCE
Web developers and designers
ASSUMED AUDIENCE KNOWLEDGE
Some knowledge of at least one of WebGL, CSS 3D Transforms, or modern 3D graphics would be helpful.
FIVE THINGS AUDIENCE MEMBERS WILL LEARN
An overview of current VR devices, their capabilities and how they can interface with the Web.
How to render WebGL content to a VR device.
How to create documents using HTML and CSS that can be projected in VR.
How to create responsive documents that can shift in and out of VR based on user choice.
How WebGL and CSS content can be mixed, providing interactive 3D graphics but with the full power of HTML for non-3D elements.
With Oculus, Samsung Gear, Google Cardboard, and more headsets rushing to market, it's an exciting time to enter the world of virtual reality. With frameworks from Mozilla WebVR, Unity, LeapMotion and others providing support, JavaScript developers can literally get into the game.
In this talk, we'll walk through a simple WebVR program to see:
* the ease of getting started
* the technical, design, and UX challenges faced
* the roadmap of things to come
This document discusses WebGL and WebVR. It provides an introduction and overview of WebGL 1.0 and 2.0, including key features and APIs. It also covers how to get VR devices and handle rendering for VR using WebVR, including handling eye parameters, view matrices, and timewarp. Code examples are provided for common VR rendering tasks. The document concludes by noting the rapid growth of VR and encourages developing with these web technologies.
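As a rough illustration of the per-eye rendering that summary mentions: each eye's view is the head view shifted sideways by half the interpupillary distance. A real WebVR app reads these offsets from the device's eye parameters rather than computing them; the helper below (names are my own, not WebVR API) just shows the arithmetic.

```java
// Conceptual per-eye offset for stereo rendering. In a full implementation
// this offset would be folded into each eye's view matrix as a translation
// before multiplying by the eye's projection matrix.
public class EyeOffsets {
    // Returns the x-axis translation applied to the view for one eye.
    public static float eyeOffsetX(boolean leftEye, float ipdMeters) {
        return (leftEye ? -1f : 1f) * ipdMeters / 2f;
    }
}
```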
This document discusses Box2D, a 2D physics engine, and how it can be used with libGDX, an open-source game development framework. It provides an overview of Box2D concepts like the world, bodies, fixtures, shapes, and joints. It also discusses how to set up a Box2D world in libGDX, create dynamic and static bodies, add fixtures to bodies, and render physics simulations. The document includes code examples for creating a Box2D world, bodies, and handling the physics step to update simulations over time.
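The physics-step handling that summary refers to typically follows the fixed-timestep pattern sketched below. This is a plain-Java conceptual sketch, not actual Box2D or libGDX API: the `world.step(...)` call is replaced by a counter so the snippet is self-contained.

```java
// Fixed-timestep stepping: accumulate frame time and advance the physics
// world in constant increments, so the simulation behaves the same
// regardless of frame rate.
public class PhysicsStepper {
    public static final float STEP = 1f / 60f; // 60 physics steps per second
    private float accumulator = 0f;
    public int steps = 0; // stands in for world.step(STEP, 6, 2) calls

    // frameDelta is the wall-clock duration of the last rendered frame.
    public void update(float frameDelta) {
        accumulator += Math.min(frameDelta, 0.25f); // clamp huge frame spikes
        while (accumulator >= STEP) {
            steps++; // a real game would step the Box2D world here
            accumulator -= STEP;
        }
    }
}
```

A frame that took 1/30 s therefore advances the world by exactly two 1/60 s steps, with any remainder carried into the next frame.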
- Tiled is an open-source editor for creating and editing tile maps for use in video games and other multimedia projects.
- LibGDX is a cross-platform game development framework that supports loading and rendering Tiled maps.
- Tiled maps can have multiple layers, tiles, and object layers for entities like the player.
- LibGDX provides classes for loading, rendering, and getting tile map data from Tiled maps to enable map navigation and collision detection in games.
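The collision-detection step above can be sketched in plain Java: convert world coordinates to tile indices and look up a "blocked" flag. A real libGDX game would typically query `TiledMapTileLayer.getCell(x, y)` instead of a boolean array; everything else here is illustrative.

```java
// Minimal tile-grid collision lookup in the spirit of the Tiled/libGDX
// summary: world position -> tile indices -> solidity check.
public class TileCollision {
    public static boolean isBlocked(boolean[][] blocked, float worldX,
                                    float worldY, float tileSize) {
        // floor() (not a plain int cast) so negative coordinates map to
        // negative indices and are rejected below.
        int col = (int) Math.floor(worldX / tileSize);
        int row = (int) Math.floor(worldY / tileSize);
        if (row < 0 || row >= blocked.length
                || col < 0 || col >= blocked[0].length) {
            return true; // treat out-of-bounds as solid
        }
        return blocked[row][col];
    }
}
```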
A mobile VR game requires a 3D scene, game characters, controllers for automatic movement, a stereoscopic camera, sound effects, and collision detection. The document discusses implementing these elements in A-Frame, including creating the 3D environment and objects, adding a first-person camera for controller input, integrating GUI elements, detecting collisions, and optimizing performance. Code snippets are provided as examples for building out these various components in an A-Frame VR game.
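The cheapest collision check used in simple 3D games like the one described is an axis-aligned bounding-box overlap test. A-Frame itself is JavaScript; the Java version below just shows the underlying math (the separating-axis idea on three axes).

```java
// Axis-aligned bounding-box intersection: two boxes overlap only if their
// intervals overlap on every axis.
public class Aabb {
    // minA/maxA and minB/maxB are {x, y, z} corner coordinates.
    public static boolean intersects(float[] minA, float[] maxA,
                                     float[] minB, float[] maxB) {
        for (int axis = 0; axis < 3; axis++) {
            // Separation along any single axis means no collision.
            if (maxA[axis] < minB[axis] || minA[axis] > maxB[axis]) {
                return false;
            }
        }
        return true;
    }
}
```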
Web Standards for AR workshop at ISMAR13 - Rob Manson
This work was presented at the Open Standards session at the IEEE ISMAR 2013 event. It provides a detailed overview and working examples that show exactly how far Augmented Reality and Computer Vision have come on the Web Platform.
This presentation also provides a detailed description of how to define exactly what the Augmented Web is.
Useful Tools for Making Video Games - XNA (2008) - Korhan Bircan
This document provides an overview of tools and techniques for creating 3D video games in XNA, including installing Visual Studio and XNA Game Studio, displaying 3D models by loading them and applying transformations, handling keyboard/mouse input, implementing a basic camera, adding a skybox, and creating animations using curves to interpolate between control points over time. Sample code implementations for many of these techniques can be found in the referenced ZIP files.
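The curve-based animation mentioned above boils down to evaluating keyframes. XNA's `Curve` class supports several tangent-based interpolation modes; the sketch below (illustrative names, not XNA API) shows only the simplest, linear case: find the surrounding control points and blend their values.

```java
// Linear keyframe evaluation: given parallel arrays of sorted times and
// values, return the interpolated value at time t.
public class Keyframes {
    public static float evaluate(float[] times, float[] values, float t) {
        if (t <= times[0]) return values[0];               // clamp before start
        int last = times.length - 1;
        if (t >= times[last]) return values[last];         // clamp after end
        int i = 1;
        while (times[i] < t) i++;                          // find right neighbour
        float alpha = (t - times[i - 1]) / (times[i] - times[i - 1]);
        return values[i - 1] + alpha * (values[i] - values[i - 1]);
    }
}
```

A camera flythrough, for example, would run one such curve per position component, sampled with the elapsed game time.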
This document provides a short introduction to HTML5, including:
- HTML5 is the 5th version of the HTML standard by the W3C and is still under development but supported by many browsers.
- HTML5 introduces new semantic elements, video and audio tags, 2D/3D graphics using <canvas>, and new JavaScript APIs for features like geolocation, offline web apps, and drag and drop.
- The document provides examples of using new HTML5 features like video playback, semantic elements, geolocation API, and drawing on a canvas with JavaScript.
WebGL is a JavaScript API for rendering interactive 3D and 2D graphics within any compatible web browser without the use of plug-ins. It can be used for data visualization, creative coding, art, 3D design environments, music videos, mathematical function graphing, 3D modeling, texture creation, physics simulations, and more. WebGL works by using JavaScript to interface with the GPU through WebGL API calls. Common libraries like Three.js simplify the use of WebGL. The basics of a WebGL app include setting up a 3D scene, camera, and rendering loop. Sample code is provided to load a 3D model and texture and allow interactive rotation. Resources and tutorials for learning more are also listed.
Peint is a JavaScript graphics engine for building HTML5 games. It uses a component-based and event-driven approach to handle positioning and rendering separately. Features include event-based rendering, object-based positioning, animation management, and a modular design to support additional rendering backends like CSS3 and WebGL. While currently focused on the canvas API, the developer plans to add support for additional rendering methods and primitives to better support mobile games.
Tricks to Making a Realtime SurfaceView Actually Perform in Realtime - Maarte... - DroidConTLV
SurfaceViews allow drawing to a separate thread to achieve realtime performance. Key aspects include:
- Driving the SurfaceView with a thread that locks and draws to the canvas in a loop.
- Using input buffering and object pooling to efficiently process touch/key events from the main thread.
- Employing various timing and drawing techniques like fixed scaling to optimize for performance.
- Managing the SurfaceView lifecycle to ensure the drawing thread starts and stops appropriately.
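The object-pooling point above can be sketched in a few lines of plain Java (the class and field names are illustrative): instead of allocating a fresh event object per touch event on the UI thread, recycled instances are handed back and forth, avoiding garbage-collection pauses in the drawing loop.

```java
import java.util.ArrayDeque;

// Minimal object pool for input events: obtain() reuses a recycled instance
// when one is available and only allocates when the pool is empty.
public class InputEventPool {
    public static class Event {
        public float x, y; // payload copied from the framework event
    }

    private final ArrayDeque<Event> free = new ArrayDeque<>();
    public int allocations = 0; // counts real allocations, for demonstration

    public Event obtain() {
        Event e = free.poll();
        if (e == null) {
            allocations++;
            e = new Event();
        }
        return e;
    }

    public void recycle(Event e) {
        free.push(e);
    }
}
```

In the SurfaceView pattern, the UI thread would `obtain()` and fill an event, queue it, and the drawing thread would `recycle()` it after processing.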
This document discusses creating a web-based rotoscoping tool using HTML5 canvas. It proposes allowing users to place shapes over video frames and edit them to create light saber-like effects. Key features would include acquiring video, drawing closed shapes frame-by-frame, reviewing the output, and exporting results. Technical approaches covered include using canvas drawings over video playback, saving frame data to localStorage, and potential improvements like a database backend.
The document discusses 3D web programming using WebGL and Three.js. It provides an overview of WebGL and how to set it up, then introduces Three.js as a library that wraps raw WebGL code to simplify 3D graphics creation. Examples are given for basic Three.js scene setup and adding objects like cubes and lights. The document concludes with suggestions for interactive workshops using these techniques.
I wanted to change the cloudsrectangles into an actuall image it do.pdf - feelinggifts
I wanted to change the clouds/rectangles into an actual image; it doesn't matter which image.
import javax.swing.*;
import java.awt.*;
/**
* Created by Thomas on 11/27/2016.
*/
public class Renderer extends JPanel{
//private static final long serialVersionUID = 1L;
protected void paintComponent(Graphics g) {
Main.main.repaint(g);
}
public static int clamp(int greenValue, int i, int j) {
// TODO Auto-generated method stub
return 0;
}
}
OTHER PART:
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;
import java.util.ArrayList;
import java.util.Random;
import javax.swing.*;
/**
* Created by Thomas on 11/27/2016.
*/
public class Main implements ActionListener, KeyListener{
public static Main main;
public final int WIDTH = 1400;
public final int HEIGHT = 600;
public HUD Hud;
public Renderer renderer;
public Rectangle character;
public ArrayList<Rectangle> cloud; // typed list; the raw type breaks the for-each in repaint()
public Random rand;
public boolean start = false, gameover = false;
public int tick;
public Main() {
JFrame jFrame = new JFrame();
Timer timer = new Timer(20, this);
renderer = new Renderer();
rand = new Random();
jFrame.setTitle("Example");
jFrame.add(renderer);
jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
jFrame.setSize(WIDTH, HEIGHT);
jFrame.addKeyListener(this);
jFrame.setVisible(true);
cloud = new ArrayList<>();
character = new Rectangle(200, 220, 20, 20);
addCloud(true);
addCloud(true);
addCloud(true);
addCloud(true);
addCloud(true);
addCloud(true);
addCloud(true);
addCloud(true);
timer.start();
}
public void repaint(Graphics g) {
g.setColor(Color.black);
g.fillRect(0,0, WIDTH, HEIGHT);
g.setColor(Color.blue);
g.fillRect(0, HEIGHT - 100, WIDTH, 100);
g.setColor(Color.green);
g.fillRect(character.x, character.y, character.width, character.height);
if (character.y >= HEIGHT - 100 || character.y < 0) {
gameover = true;
}
for (Rectangle rect : cloud) {
g.setColor(Color.white);
g.fillRect(rect.x, rect.y, rect.width, rect.height);
}
g.setColor(Color.WHITE);
g.setFont(new Font("Times New Roman", Font.BOLD, 100));
if (!start) {
g.drawString("Press to start!", 450, HEIGHT / 2);
}
else if (gameover) {
g.drawString("Game Over!", 450, HEIGHT / 2);
}
}
}
public void addCloud(boolean start) {
int width = 400;
int height = 200;
if (start) {
cloud.add(new Rectangle(WIDTH + width + cloud.size() * 300, rand.nextInt(HEIGHT-120),
80, 100));
}
else {
cloud.add(new Rectangle(cloud.get(cloud.size() - 1).x + 300, rand.nextInt(HEIGHT-120), 80,
100));
}
}
public void flap() {
if (gameover) {
character = new Rectangle(300, 400, 40, 40);
cloud.clear();
addCloud(true);
addCloud(true);
addCloud(true);
addCloud(true);
addCloud(true);
addCloud(true);
addCloud(true);
addCloud(true);
gameover = false;
}
if (!start) {
start = true;
}
else if (!gameover) {
character.y -= 70;
tick = 0;
}
}
@Override
public void actionPerformed(ActionEvent e) {
int speed = 15;
//System.out.println("Space");
if (start) {
for (int i = 0; i .
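The actual change the question asks for, drawing an image where the white cloud rectangles are, comes down to loading a `BufferedImage` once and replacing `g.fillRect(rect.x, ...)` in `repaint()` with `g.drawImage(...)`. In the real game you would load a file in the constructor, e.g. `cloudImage = ImageIO.read(new File("cloud.png"));` where `"cloud.png"` is a placeholder name. The sketch below generates a stand-in image so it runs without any file on disk.

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;

// Demonstrates the one-line change: g.drawImage(...) instead of g.fillRect(...).
public class CloudImageDemo {
    // Stand-in for ImageIO.read(new File("cloud.png")) — a solid white image.
    public static BufferedImage makeCloudPlaceholder(int w, int h) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics g = img.getGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, w, h);
        g.dispose();
        return img;
    }

    // Paints a black frame and draws the cloud image at (x, y), exactly as
    // repaint() would do for each Rectangle in the cloud list.
    public static BufferedImage drawCloudAt(BufferedImage cloud, int x, int y) {
        BufferedImage frame = new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);
        Graphics g = frame.getGraphics();
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, 200, 200);
        g.drawImage(cloud, x, y, null); // replaces g.fillRect(rect.x, rect.y, ...)
        g.dispose();
        return frame;
    }
}
```

To scale the image to each rectangle's size, the overload `g.drawImage(cloudImage, rect.x, rect.y, rect.width, rect.height, null)` does the stretching for you.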
Leaving Flatland: getting started with WebGL - gerbille
WebGL is a JavaScript API for rendering interactive 3D graphics within any compatible web browser without the use of plug-ins. It can be used for data visualization, creative coding, 3D modeling, games, and more. WebGL works by using JavaScript to interface with the GPU through WebGL APIs to run GLSL shaders that render 3D scenes. To get started, one needs to choose a WebGL library like Three.js, add a <canvas> element, and get the WebGL context. Sample code is provided to render a 3D model by loading geometry, adding lights and materials, and animating the scene render.
Advanced Game Development with the Mobile 3D Graphics API - Tomi Aarnio
This document provides an overview of the Mobile 3D Graphics API (M3G), which was designed for 3D graphics on mobile devices. It discusses why developers should use M3G and highlights some of its key features, including scene graphs, dynamic meshes, animation, textures, and more. The document also provides code examples for common tasks like setting up a camera, rendering a rotating cube, and creating animated keyframe sequences.
Introduction to open gl in android droidcon - slides - tamillarasan
This document provides an introduction and overview of OpenGL ES 2.0 for Android. It discusses setting up an OpenGL view, drawing basic shapes, animating objects, and applying textures. The key steps covered are initializing a GLSurfaceView, creating and linking shader programs, defining and drawing vertex data, setting up the model-view-projection matrix, and mapping texture coordinates. The goal is to provide everything needed to get started with basic 2D OpenGL graphics and animation on Android.
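The model-view-projection step mentioned above is just 4x4 matrix multiplication in column-major order, the layout OpenGL ES expects. On Android the real helper is `android.opengl.Matrix`; the standalone version below only shows the math, so it runs anywhere.

```java
// Column-major 4x4 matrix multiply, the operation behind building an MVP
// matrix as projection * view * model.
public class Mat4 {
    public static final float[] IDENTITY = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1
    };

    // result = a * b; element (row, col) lives at index col * 4 + row.
    public static float[] multiply(float[] a, float[] b) {
        float[] r = new float[16];
        for (int col = 0; col < 4; col++) {
            for (int row = 0; row < 4; row++) {
                float sum = 0f;
                for (int k = 0; k < 4; k++) {
                    sum += a[k * 4 + row] * b[col * 4 + k];
                }
                r[col * 4 + row] = sum;
            }
        }
        return r;
    }
}
```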
Flash, over the years, has been used to prop up the regular browser like a sad old man drinking alone in a pub.
Today, browsers ship with technology designed to rival Flash and shut it squarely out of the game.
Are browsers ready to rock without Flash?
The document discusses techniques for optimizing Android UI performance. It covers optimizing adapter views by reusing views, pre-scaling images to avoid runtime scaling, invalidating specific regions instead of entire views, using fewer views in layouts by combining views, and avoiding memory allocations in performance critical code. The document provides examples of using view holders, compound drawables, ViewStubs, merge tags, custom views and layouts to reduce view count. It also discusses caching objects using soft and weak references to avoid memory leaks.
This document provides tips and tricks for using the Canvas API, with a focus on game programming and bitmaps. It discusses setting up an animation loop using requestAnimationFrame, caching techniques like double buffering to improve performance, and manipulating pixel data directly using ImageData to implement features like hit detection and image filters. The document encourages profiling code and considers challenges in testing Canvas code.
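The direct pixel manipulation described above can be illustrated in plain Java: walk a packed-pixel array and rewrite each pixel, the same loop you would run over `ctx.getImageData(...).data` in JavaScript. The grayscale weights here are a cheap approximation, chosen for illustration.

```java
// ImageData-style pixel filter: pixels are packed 0xAARRGGBB ints, as in
// BufferedImage.TYPE_INT_ARGB. Converts each pixel to grayscale.
public class PixelFilter {
    public static int[] grayscale(int[] pixels) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            int r = (p >> 16) & 0xFF;
            int g = (p >> 8) & 0xFF;
            int b = p & 0xFF;
            int y = (r * 3 + g * 6 + b) / 10; // rough luma weighting
            // Keep the alpha channel, replace R, G, B with the luma value.
            out[i] = (p & 0xFF000000) | (y << 16) | (y << 8) | y;
        }
        return out;
    }
}
```

Hit detection works on the same array: read the alpha (or a color key) at the clicked pixel instead of rewriting it.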
Using the potential of WebGL in the web browser in a simple way with the three.js JavaScript library. Practical demonstration of a WebGL app developed for a Silicon Valley startup.
The document provides instructions and examples for making games using HTML5 canvas and JavaScript. It discusses using canvas to draw basic shapes and images. It introduces the concept of sprites as reusable drawing components and provides an example sprite class. It demonstrates how to create a game loop to continuously update and render sprites to animate them. It also provides an example of making a sprite respond to keyboard input to allow user control. The document serves as a tutorial for building the core components of a simple HTML5 canvas game.
Single Page Web Applications with CoffeeScript, Backbone and Jasmine - Paulo Ragonha
This document discusses using CoffeeScript, Backbone.js, and Jasmine BDD to build single page web applications. It begins by explaining why CoffeeScript is useful for cleaning up JavaScript code and avoiding errors. It then discusses how Backbone.js provides structure for single page apps by defining models, collections, views and routers. It notes that Backbone works well with CoffeeScript. Finally, it mentions that Jasmine BDD can be used for writing professional tests.
This document provides an overview of creating 3D graphics using the Three.js library. It describes the basic structure of a Three.js application including creating a scene, camera, and renderer. It also explains how to add objects like a cube to the scene, animate the cube by incrementally changing its rotation, and customize properties like color, size and rotation speed. The goal is to introduce the fundamentals needed to get started with Three.js and create simple animated 3D graphics in a web application.
The document discusses different techniques for animation and graphics rendering in web browsers, including CSS transforms and animations, Canvas, SVG, WebGL, and HTML5 video. It provides code examples and comparisons of performance between techniques like Canvas with JavaScript versus Flash. Key technologies mentioned are CSS transforms, requestAnimationFrame, Box2D physics engine, Raphael.js for vector graphics, and WebGL shaders.
This document provides an introduction to using the Three.js library for 3D graphics in web pages. It explains how to set up a basic Three.js application with a renderer, scene, and camera. It then demonstrates how to add 3D objects, textures, lighting, materials, load 3D models, and perform animations. The document also provides information on topics like cameras, textures, loading different 3D file formats, model conversion, and blending 3D content into HTML.
Ultra Fast, Cross Genre, Procedural Content Generation in Games [Master Thesis] - Mohammad Shaker
In my MSc thesis, I re-tackled the problem of procedurally generating content for physics-based games that I had previously investigated in my BSc graduation thesis. This time around I propose two novel methods. The first is projection-based, for faster generation of physics-based game content. The other, The Progressive Generation, is a generic, wide-ranging, cross-genre, customisable method with a playability check, all bundled in a fast progressive approach. This new method is applied to two completely different games: NEXT and Cut the Rope.
Single Page Web Applications with CoffeeScript, Backbone and JasminePaulo Ragonha
This document discusses using CoffeeScript, Backbone.js, and Jasmine BDD to build single page web applications. It begins by explaining why CoffeeScript is useful for cleaning up JavaScript code and avoiding errors. It then discusses how Backbone.js provides structure for single page apps by defining models, collections, views and routers. It notes that Backbone works well with CoffeeScript. Finally, it mentions that Jasmine BDD can be used for writing professional tests.
This document provides an overview of creating 3D graphics using the Three.js library. It describes the basic structure of a Three.js application including creating a scene, camera, and renderer. It also explains how to add objects like a cube to the scene, animate the cube by incrementally changing its rotation, and customize properties like color, size and rotation speed. The goal is to introduce the fundamentals needed to get started with Three.js and create simple animated 3D graphics in a web application.
The document discusses different techniques for animation and graphics rendering in web browsers, including CSS transforms and animations, Canvas, SVG, WebGL, and HTML5 video. It provides code examples and comparisons of performance between techniques like Canvas with JavaScript versus Flash. Key technologies mentioned are CSS transforms, requestAnimationFrame, Box2D physics engine, Raphael.js for vector graphics, and WebGL shaders.
This document provides an introduction to using the Three.js library for 3D graphics in web pages. It explains how to set up a basic Three.js application with a renderer, scene, and camera. It then demonstrates how to add 3D objects, textures, lighting, materials, load 3D models, and perform animations. The document also provides information on topics like cameras, textures, loading different 3D file formats, model conversion, and blending 3D content into HTML.
Ultra Fast, Cross Genre, Procedural Content Generation in Games [Master Thesis]Mohammad Shaker
In my MSc. thesis, I have re-tackled the problem of procedurally generating content for physics-based games I have previously investigated in my BSc. graduation thesis. This time around I propose two novel methods: the first is projection based for faster generation of physics-based games content. The other, The Progressive Generation, is a generic, wide-range, across genre, customisable with playability check method all bundled in a fast progressive approach. This new method is applied on two completely different games: NEXT And Cut the Rope.
Short, Matters, Love - Passioneers Event 2015Mohammad Shaker
Short, Matters, Love is a presentation I prepared for freshmen students at the Faculty of Information Technology in Damascus, Syria organised by Passioneers - 2015
This document discusses Unity3D and game development. It provides an overview of Unity3D and other game engines like Unreal Engine, comparing their features and costs. Examples are given of popular games made with each engine. The document also lists several games the author has made using Unity3D and provides some additional resources and references.
The document discusses various topics related to mobile application design including cloud interaction, Android touch and gesture interaction, UI element sizing, screen sizes, changing orientation, retaining objects during configuration changes, multi-device targeting, and wearables. It provides examples and guidelines for designing applications that can adapt to different devices and configurations.
The document discusses principles of interaction design, color theory, and game design. It covers topics like primary and secondary colors, color harmonies, using color to attract attention and set mood, the importance of white space and negative space in design, and how games like Journey, Fez, Luftrausers, Monument Valley, Ori and the Blind Forest, and Limbo effectively use techniques like the rule of thirds, establishing a sense of goal, and game feel.
This document discusses various topics related to typography including letter shapes like the letter "T", how words for concepts like water have evolved across languages, symbols for ideas like fish, and different writing styles such as styles that would be impossible to write. It examines typography from multiple perspectives like shapes, language evolution, symbols, and stylization.
Interaction Design L04 - Materialise and CouplingMohammad Shaker
This document discusses various aspects of coupling and interaction design in mobile applications. It addresses good and bad examples of coupling on Android and iOS, such as how apps are switched between. It also discusses using accurate text to represent backend processes, and using faster progress bars to reduce cognitive load on users. Visualizations are suggested to improve progress bars.
The document discusses various options for storing data in an Android application including SharedPreferences for simple key-value pairs, internal storage for private files, external storage for public files, SQLite databases for structured data, network connections for storing data on a web server, and ContentProviders for sharing data between applications. It provides details on using SharedPreferences, internal SQLite databases stored in the application's files, and ContentProviders for sharing Contacts data with other apps.
The document discusses various interaction design concepts in Android including toasts, notifications, threads, broadcast receivers, and alarms. It provides code examples for creating toasts, setting notification priorities, and scheduling alarms to fire at boot or at specific times using the AlarmManager. Broadcast receivers can be used to set alarms during device boot by listening for the BOOT_COMPLETED intent filter and implementing the onReceive callback.
This document provides an overview of various mobile development technologies and frameworks including Cloud, iOS, Android, iPad Pro, Xcode, Model-View-Controller (MVC), C, Objective-C, Foundation data types, functions calls, Swift, iOS Dev Center, coordinate systems, Windows Phone, .NET support, MVVM, binding, WebClient, and navigation. It also mentions tools like Expression Blend and frameworks like jQuery Mobile, PhoneGap, Sencha Touch, and Xamarin.
This document discusses various topics related to mobile app design including user experience (UX), user interface (UI), interaction design, user constraints like limited data/battery and screen size, and using context like location to improve the user experience. It provides examples of a pizza ordering app and making ATM machines smarter. It also covers design patterns and principles like focusing on user needs and testing designs through feedback.
This document discusses principles of visual organization and responsive grid systems for web design. It mentions laws of proximity, similarity, common fate, continuity, closure, and symmetry which help organize visual elements. It also discusses column-based and ratio-based grid systems as well as responsive grid systems that adapt to different screen widths, citing examples from Pinterest, Bootstrap, and the website www.mohammadshaker.com which demonstrates responsive design.
This document provides an overview comparison of key aspects of mobile app development for iOS and Android platforms. It discusses differences in app store policies, pricing, monetization options like ads and in-app purchases, development tools including engines like Unity and Unreal, and the publishing process. Key points mentioned include Android apps averaging over 2.5x the price of similar iOS apps, Apple's restrictive app review policies, the 70/30 revenue split in Google Play Store, and tools for user testing and publishing on both platforms. It also shares stats on the revenue and success of specific apps like Monument Valley.
The document discusses various ways to implement cloud functionality in Android applications using services like Parse and Android Backup. It provides code examples for backing up app data to the cloud using Android Backup, setting up a backend using Parse, pushing notifications with Parse, and performing analytics tracking with Parse.
This document discusses several topics related to developing Android apps including:
1. Adding markers to maps by setting an onMapClickListener and adding a MarkerOptions to the clicked location.
2. Signing into apps with Google accounts using the Google Identity API.
3. Following Material Design guidelines for visual style and user interfaces.
4. Maintaining multiple APK versions and using OpenGL ES for games.
This document discusses various techniques for styling Android applications including adding styles, overriding styles, using themes, custom backgrounds, nine-patch images, and animations. It provides links to tutorials and documentation on animating views with zoom animations and other motion effects.
This document provides information about various Android development topics including:
- ListAdapters and mapping models to UI using an MVVM-like pattern
- Creating custom lists
- Starting a new activity using an Intent and passing data between activities
- Understanding the Android activity lifecycle and methods like onPause() and onResume()
- Handling configuration changes that recreate the activity
- Working with permissions
The document discusses common patterns for working with lists, launching new screens, and handling activity state changes. It also provides code examples for starting a new activity, passing data between activities, and handling the activity lifecycle callbacks.
This document provides an overview of various topics related to mobile application development including cloud computing, interaction design, Android, iOS, web technologies like HTML5 and JavaScript, programming languages like Java and Objective-C, frameworks, gaming, user experience design, and more. It discusses tools for Android development and covers basics of creating an Android app like setting up the IDE, creating the UI, adding interactivity, debugging, and referencing documentation.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
2. SkyBox
• Advanced SkyBoxes
– With Terrain
• Riemers web site
– http://www.riemers.net/eng/Tutorials/XNA/Csharp/series4.php
• Rbwhitaker web site
– http://rbwhitaker.wikidot.com/skyboxes-1
5. Skydome
• Skydome
– You’ll use a conventional 3D model, previously made in a modeling tool and processed by the Content Pipeline.
– Handled through XNA’s Model class!
7. Skydome
• Whenever the camera moves, the skybox or skydome should move with the camera, so the camera always remains at the center of the volume.
8. Skydome
• Skydome
– The sky is created as a hemisphere using only one texture, and is positioned above the scene
– It is easy to animate its texture!
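One common way to animate a skydome texture is to scroll its UV coordinates a little each frame, wrapping the offset so it stays in [0, 1). A minimal sketch of that bookkeeping, in plain Java outside XNA (the class and field names are illustrative, not from the slides):

```java
// Accumulates a horizontal texture offset for a slowly scrolling sky.
public class SkyScroller {
    private float offsetU = 0f;
    private final float speed; // texture units per second

    public SkyScroller(float speed) {
        this.speed = speed;
    }

    // Advance the scroll by the elapsed frame time, wrapping into [0, 1).
    public float update(float elapsedSeconds) {
        offsetU = (offsetU + speed * elapsedSeconds) % 1f;
        return offsetU;
    }
}
```

The returned offset would then be fed into the sky material's texture transform each frame; because the offset wraps, the texture must be seamless (tileable) horizontally.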
13. Creating skydome
• Loading the skydome “Hemisphere”
public void Load(string modelFileName)
{
model = Content.Load<Model>(GameAssetsPath.MODELS_PATH + modelFileName);
}
14. Creating skydome
• Updating the Sky
public override void Update(GameTime time)
{
BaseCamera camera = cameraManager.ActiveCamera;
// Center the camera in the SkyDome
transformation.Translate = new Vector3(camera.Position.X, 0.0f, camera.Position.Z);
// Rotate the SkyDome slightly
transformation.Rotate += new Vector3(0, (float)time.ElapsedGameTime.TotalSeconds * 0.5f, 0);
base.Update(time);
}
15. Creating skydome
• Drawing the Sky
public override void Draw(GameTime time)
{
// Disable depth testing/writing so the sky never occludes the scene
GraphicsDevice.DepthStencilState = DepthStencilState.None;
foreach (ModelMesh modelMesh in model.Meshes)
{
// We are only rendering models with BasicEffect
foreach (BasicEffect basicEffect in modelMesh.Effects)
SetEffectMaterial(basicEffect);
modelMesh.Draw();
}
// Restore the default depth state for the rest of the scene
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
base.Draw(time);
}
18. Creating skydome
• The same Draw using the XNA 3.x API (RenderState was replaced by state objects such as DepthStencilState in XNA 4.0)
public override void Draw(GameTime time)
{
GraphicsDevice.RenderState.DepthBufferEnable = false;
foreach (ModelMesh modelMesh in model.Meshes)
{
// We are only rendering models with BasicEffect
foreach (BasicEffect basicEffect in modelMesh.Effects)
SetEffectMaterial(basicEffect);
modelMesh.Draw();
}
GraphicsDevice.RenderState.DepthBufferEnable = true;
base.Draw(time);
}
51. Where to look for skyboxes and textures?
• Acquiring Skybox Textures
– A Google image search for "skybox" will usually turn up plenty of good skyboxes
– Skybox textures are commonly distributed in the .dds (DirectDraw Surface) format
– terathon.com
– http://developer.amd.com/archive/gpu/cubemapgen/pages/default.aspx
54. Terrain
• Advanced Terrains
– Rbwhitaker web site
• http://rbwhitaker.wikidot.com/skyboxes-1
– Riemers web site
• http://www.riemers.net/eng/Tutorials/XNA/Csharp/series4.php
• http://www.riemers.net/eng/Tutorials/XNA/Csharp/series1.php
– Innovative Games web site
• http://www.innovativegames.net/blog/blog/2009/05/29/xna-game-engine-tutorial-12-introduction-to-hlsl-and-improved-terrain/
61. Terrain
– Many ways to create a terrain
• From a file
– From an image file (height map)
– From a raw file (.raw)
• With or without shaders
– With shaders
– Without shaders
62. Terrain
• Using Planetside’s Terragen!
– Create your own customized terrain!
– www.planetside.co.uk/terragen/
• Using EarthSculptor
– http://www.earthsculptor.com/
90. Terrain
private int[] GenerateTerrainIndices()
{
    int numIndices = numTriangles * 3;
    int[] indices = new int[numIndices];
    int indicesCount = 0;
    for (int i = 0; i < (vertexCountZ - 1); i++)
    {
        for (int j = 0; j < (vertexCountX - 1); j++)
        {
            // Vertices are laid out row-major, vertexCountX per row,
            // so the row stride is vertexCountX
            int index = j + i * vertexCountX;
            // First triangle
            indices[indicesCount++] = index;
            indices[indicesCount++] = index + 1;
            indices[indicesCount++] = index + vertexCountX + 1;
            // Second triangle
            indices[indicesCount++] = index + vertexCountX + 1;
            indices[indicesCount++] = index + vertexCountX;
            indices[indicesCount++] = index;
        }
    }
    return indices;
}
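The grid-to-triangle logic above is language-agnostic; the following Python sketch (with hypothetical snake_case names mirroring the slide's C# fields) builds the same two triangles per grid cell, assuming a row-major vertex layout with `vertex_count_x` vertices per row:

```python
def generate_terrain_indices(vertex_count_x, vertex_count_z):
    """Build an index list with two triangles per cell of a vertex grid.

    Vertices are assumed row-major, vertex_count_x per row, so the
    stride between rows is vertex_count_x.
    """
    indices = []
    for i in range(vertex_count_z - 1):          # rows of cells
        for j in range(vertex_count_x - 1):      # columns of cells
            index = j + i * vertex_count_x       # top-left corner of the cell
            # First triangle: top-left, top-right, bottom-right
            indices += [index, index + 1, index + vertex_count_x + 1]
            # Second triangle: bottom-right, bottom-left, top-left
            indices += [index + vertex_count_x + 1, index + vertex_count_x, index]
    return indices

# A 2x2 vertex grid is one cell -> two triangles -> six indices
print(generate_terrain_indices(2, 2))  # -> [0, 1, 3, 3, 2, 0]
```

A quick sanity check: an X-by-Z vertex grid has (X-1)(Z-1) cells, hence 2(X-1)(Z-1) triangles and 6(X-1)(Z-1) indices, which matches `numTriangles * 3` in the slide.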
92. Terrain
• Generating the Position and Texture Coordinate of the Vertices
float terrainWidth = (vertexCountX - 1) * blockScale;
float terrainDepth = (vertexCountZ - 1) * blockScale;
float halfTerrainWidth = terrainWidth * 0.5f;
float halfTerrainDepth = terrainDepth * 0.5f;
for (float i = -halfTerrainDepth; i <= halfTerrainDepth; i += blockScale)
    for (float j = -halfTerrainWidth; j <= halfTerrainWidth; j += blockScale)
        vertices[vertexCount].Position =
            new Vector3(j, heightMap[vertexCount].R * heightScale, i);
How do you get the height that corresponds to each color value?
93. Terrain
You simply take the red color component of each pixel's color as the height for the corresponding vertex.
95. Terrain
• Generating the Position and Texture Coordinate of the Vertices
Are we done just yet?
96. Terrain
• Generating the Position and Texture Coordinate of the Vertices
Not yet: texturing!
97. Terrain
• Generating the Position and Texture Coordinate of the Vertices
Each vertex also has a U and V texture coordinate that varies between (0, 0) and (1, 1), where (0, 0) corresponds to the top-left, (1, 0) to the top-right, and (1, 1) to the bottom-right corner of the texture.
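The vertex-generation steps above (positions centered on the origin, Y from the height map's red channel, UVs spanning 0 to 1) can be sketched in Python; the names and layout here are hypothetical, mirroring the slide's C# fragment:

```python
def generate_vertices(height_map, vertex_count_x, vertex_count_z,
                      block_scale=1.0, height_scale=1.0):
    """Place grid vertices centered on the origin.

    height_map is a row-major list of per-pixel red-channel values
    (0-255); each vertex gets (position, uv) with UVs from (0,0)
    at the top-left to (1,1) at the bottom-right.
    """
    terrain_width = (vertex_count_x - 1) * block_scale
    terrain_depth = (vertex_count_z - 1) * block_scale
    half_w, half_d = terrain_width * 0.5, terrain_depth * 0.5

    vertices = []
    for zi in range(vertex_count_z):
        for xi in range(vertex_count_x):
            x = -half_w + xi * block_scale
            z = -half_d + zi * block_scale
            y = height_map[zi * vertex_count_x + xi] * height_scale
            u = xi / (vertex_count_x - 1)   # 0 at the left edge, 1 at the right
            v = zi / (vertex_count_z - 1)   # 0 at the top edge, 1 at the bottom
            vertices.append(((x, y, z), (u, v)))
    return vertices

heights = [0, 255, 0, 255]   # a 2x2 height map (red channel values)
verts = generate_vertices(heights, 2, 2, block_scale=2.0, height_scale=1.0)
print(verts[0])  # -> ((-1.0, 0.0, -1.0), (0.0, 0.0))
```

Note how the first vertex sits at (-halfWidth, ..., -halfDepth) and the last at (+halfWidth, ..., +halfDepth), matching the slide's loop bounds.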
110. Advanced Terrain – Normal Mapping Technique
Using the normal mapping technique, you can add the illusion of small-scale surface detail to the terrain without increasing the complexity of its mesh.
You create this illusion by slightly manipulating the lighting of each pixel of the terrain. The variations in lighting come from the deviated normals.
Remember that the amount of light falling onto a triangle is determined by the normals of its vertices.
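To see why deviated normals change per-pixel lighting, consider the standard Lambert diffuse term (not specific to XNA); this small sketch compares a flat triangle's normal against one perturbed by a hypothetical normal-map texel:

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """Diffuse lighting term: clamp(N . L, 0, 1) with unit vectors."""
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

base_normal = (0.0, 1.0, 0.0)    # flat terrain triangle, pointing straight up
perturbed   = (0.2, 1.0, -0.1)   # deviated by a normal-map sample
light_dir   = (0.0, 1.0, 0.0)    # light directly overhead

print(lambert(base_normal, light_dir))       # -> 1.0
print(lambert(perturbed, light_dir) < 1.0)   # -> True (slightly darker pixel)
```

The geometry is unchanged; only the per-pixel normal differs, yet neighboring pixels receive different amounts of light, which is exactly the illusion of detail the slide describes.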
120. Terrain
• Querying the Terrain’s Height
– You first need to calculate the queried position relative to the terrain’s vertex grid.
– You can do this by subtracting the terrain’s origin position from the queried world position, making sure to take the terrain’s world translation and rotation into account.
121. Terrain
• Querying the Terrain’s Height
• Then you need to know in which quad of the terrain grid the queried position is located, which you can find by dividing the calculated position (relative to the terrain) by the terrain’s block scale.
125. Terrain
• Querying the Terrain’s Height
How do we get our current position?
127. Terrain
• Querying the Terrain’s Height
Creating our own Transformation class:
You can store the transformations currently set on the terrain (translate, rotate, and scale) inside the Terrain class, using the Transformation class created in Chapter 10 of the Apress book.
129. Terrain
• Querying the Terrain’s Height
// Get the position relative to the terrain grid
Vector2 positionInGrid = new Vector2(
    positionX - (StartPosition.X + Transformation.Translate.X),
    positionZ - (StartPosition.Y + Transformation.Translate.Z));
// Calculate the grid position
Vector2 blockPosition = new Vector2(
    (int)(positionInGrid.X / blockScale),
    (int)(positionInGrid.Y / blockScale));
136. Terrain
• A block in the terrain grid. If the x position inside the block is bigger than the z
position, the object is in the top triangle. Otherwise, the object is in the bottom
triangle.
137. Terrain
• After finding which triangle the object is positioned in, you can obtain the height of a position inside that triangle through a bilinear interpolation of the heights of the triangle’s vertices.
• Use the following GetHeight method to calculate the height at a terrain position:
private float GetHeight(float positionX, float positionZ)
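The whole height query (grid-relative position, block lookup, triangle pick, interpolation) can be sketched end to end; this Python version uses hypothetical names, assumes a row-major per-vertex height list, and queries interior points only (edge clamping is omitted):

```python
def get_height(pos_x, pos_z, heights, vertex_count_x,
               block_scale=1.0, start_x=0.0, start_z=0.0):
    """Height of an arbitrary (x, z) point over the terrain grid.

    heights is the row-major per-vertex height list; start_x/start_z is
    the world position of the grid's first vertex (translation applied).
    """
    # Position relative to the grid, then the cell (block) containing it
    gx = (pos_x - start_x) / block_scale
    gz = (pos_z - start_z) / block_scale
    bx, bz = int(gx), int(gz)
    fx, fz = gx - bx, gz - bz          # fractional position inside the cell

    i = bz * vertex_count_x + bx       # top-left vertex of the cell
    h00 = heights[i]
    h10 = heights[i + 1]
    h01 = heights[i + vertex_count_x]
    h11 = heights[i + vertex_count_x + 1]

    if fx > fz:
        # Top triangle (h00, h10, h11), per the x > z rule on slide 136
        return h00 + fx * (h10 - h00) + fz * (h11 - h10)
    # Bottom triangle (h00, h01, h11)
    return h00 + fz * (h01 - h00) + fx * (h11 - h01)

heights = [0.0, 10.0,
           20.0, 30.0]                 # a 2x2 grid, i.e. one cell
print(get_height(0.5, 0.5, heights, 2))  # -> 15.0 (cell center, on the diagonal)
```

At each cell corner the interpolation reproduces that vertex's height exactly, and along the cell's diagonal the two triangle formulas agree, so the surface is continuous.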
141. Advanced Terrain - Ray and Terrain Collision
• Quite straightforward
// A good ray step is half of the blockScale
Vector3 rayStep = ray.Direction * blockScale * 0.5f;
Vector3 rayStartPosition = ray.Position;
// Linear search - loop until you find a point inside and outside the terrain
Vector3 lastRayPosition = ray.Position;
ray.Position += rayStep;
float height = GetHeight(ray.Position);
while (ray.Position.Y > height && height >= 0)
{
    lastRayPosition = ray.Position;
    ray.Position += rayStep;
    height = GetHeight(ray.Position);
}
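The linear search above can be sketched in a few lines of Python. This is a simplified version: it omits the slide's `height >= 0` outside-the-terrain guard and assumes the ray eventually dips below the surface; `get_height` here is any caller-supplied height function:

```python
def linear_search(ray_pos, ray_dir, get_height, block_scale=1.0):
    """Step the ray in increments of half a block until it drops below
    the terrain surface; returns (last_above, first_below) positions,
    which bracket the intersection point."""
    step = tuple(c * block_scale * 0.5 for c in ray_dir)
    last = ray_pos
    pos = tuple(a + b for a, b in zip(ray_pos, step))
    while pos[1] > get_height(pos[0], pos[2]):
        last = pos
        pos = tuple(a + b for a, b in zip(pos, step))
    return last, pos

# Flat terrain at height 0, ray descending at 45 degrees from y = 2
flat = lambda x, z: 0.0
above, below = linear_search((0.0, 2.0, 0.0), (1.0, -1.0, 0.0), flat)
print(above[1], below[1])  # -> 0.5 0.0
```

The two returned positions bracket the hit point; a binary search between them can then refine the intersection to any desired precision.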
151. Terrain
• The demonstration used in this chapter shows how to create and implement a height map using an 8-bit “.raw” grayscale image.
• Each pixel in the .raw image stores an elevation value in the range 0 to 255.
• The height data for each pixel can then be accessed by its row and column number.
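Since a .raw file is just headerless pixel bytes, loading it is a straight read indexed by row and column. A minimal Python sketch (filename and function name are hypothetical):

```python
def load_raw_height_map(path, num_cols, num_rows):
    """Read an 8-bit .raw grayscale file into a row-major 2D list.

    Each byte is one pixel's elevation (0-255); pixel (row, col) sits
    at byte offset row * num_cols + col.
    """
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < num_cols * num_rows:
        raise ValueError("file too small for the requested dimensions")
    return [[data[row * num_cols + col] for col in range(num_cols)]
            for row in range(num_rows)]

# Demo: write a tiny 2x2 height map, then read it back
with open("demo.raw", "wb") as f:
    f.write(bytes([0, 128, 255, 64]))
print(load_raw_height_map("demo.raw", 2, 2))  # -> [[0, 128], [255, 64]]
```

Because there is no header, the reader must already know the image dimensions, which is why the demo hard-codes 257x257 as NUM_COLS and NUM_ROWS.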
152. Terrain
• A texture is mapped over the terrain.
• The original terrain is 257 pixels wide by 257 pixels high
floorTexture = Content.Load<Texture2D>("Images/terrain");
const int NUM_COLS = 257;
const int NUM_ROWS = 257;
153. Terrain
• The vertex buffer used for storing the terrain vertices must now use the height
information from the height map.
InitializeVertexBuffer() // for NUM_COLS, NUM_ROWS