Working with Shaders (GLSL)
Timothy Kim

My project involved using different GLSL shaders to produce different effects on a bunny mesh.
I used the assignment 8 infrastructure as a base for my project.

1. The Lava Bunny

This effect involves animating lava on the bunny mesh.

I learned about animating textures from this tutorial by Jacobo Rodriguez Villar from Typhoon Labs:
http://www.opengl.org/sdk/docs/tutorials/TyphoonLabs/Chapter_3.pdf

First we apply a seamless lava texture to the bunny.
For the lava texture, we use a seamless 512x512 image called "lava2.ppm" in the project folder.
Credit for the image goes to Patrick Hoesly. His website is zooboing.com.
We enable the repeating of texture coordinates (GL_REPEAT wrapping) for the texture.
For the shaders, we use basic-gl3.vshader and bunny-gl3.fshader as bases.

Now, to apply the texture to the bunny, we need some way of defining the texture coordinates a given
pixel of the bunny will use. We could start off by defining an in-attribute variable called aTexCoord.
However, since the texture is seamless and the repeating of texture coordinates is enabled, we don't
need to care about the exact texture coordinates as long as they are consistent. So we can simply use
the vertex positions as the texture coordinates.

Looking at basic-gl3.vshader, if we use vPosition as the texture coordinate in the fragment shader, we
run into trouble because vPosition has been multiplied by the ModelView and Projection matrices.
As a result, the texture cannot "stick" to the bunny; when we transform the bunny, the texture acts
like a fixed projection. To fix this problem, we use the in-variable aPosition, which is the vertex
position before any matrices are applied. We pass the value of aPosition to the fragment shader using
a new out-variable called vPosition2.
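Putting this together, the vertex shader change is small. Here is a minimal sketch; the uniform and attribute names (uProjMatrix, uModelViewMatrix, uNormalMatrix, aNormal, vNormal) are assumptions based on the assignment 8 base code:

```glsl
#version 130

uniform mat4 uProjMatrix;
uniform mat4 uModelViewMatrix;
uniform mat4 uNormalMatrix;

in vec3 aPosition;
in vec3 aNormal;

out vec3 vNormal;
out vec3 vPosition;   // eye-space position, used for lighting
out vec3 vPosition2;  // untransformed position, used as texture coordinates

void main() {
  vNormal = vec3(uNormalMatrix * vec4(aNormal, 0.0));

  vec4 tPosition = uModelViewMatrix * vec4(aPosition, 1.0);
  vPosition = vec3(tPosition);

  // pass along the raw vertex position so the texture "sticks" to the bunny
  vPosition2 = aPosition;

  gl_Position = uProjMatrix * tPosition;
}
```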

To use the texture in the fragment shader, we need to define a new sampler2D uniform variable called
"uTexture." We can get the appropriate texel from the texture by writing:
vec4 texColor = texture(uTexture, vPosition2.xy * .25);
The multiplication by .25 enlarges the texture on the bunny.

Now to animate the lava on the bunny, we need to define a new uniform variable called "uTime" that
will hold a value that increases with time. We use a timer callback function called "lavaAnimate()"
(located in "final_project.h") that increases the value of uTime at a set interval. lavaAnimate can be
turned on by pressing the 'l' (lowercase "L") key.
We now add uTime as an offset to the texture coordinates:
vec4 texColor = texture(uTexture, vec2(vPosition2.x - uTime, vPosition2.y - uTime) * .25);
This has the effect of moving the lava texture diagonally up on the bunny.
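The snippets above combine into a very short fragment shader body. This is only a sketch of the texturing part; the project's actual lava-texture.fshader presumably also keeps the lighting computation from bunny-gl3.fshader:

```glsl
#version 130

uniform sampler2D uTexture;  // the seamless lava texture
uniform float uTime;         // incremented by the lavaAnimate() timer callback

in vec3 vPosition2;  // untransformed vertex position from the vertex shader

out vec4 fragColor;

void main() {
  // subtracting uTime from both coordinates scrolls the texture diagonally;
  // repeat-mode wrapping lets the seamless texture tile forever
  vec2 texCoord = vec2(vPosition2.x - uTime, vPosition2.y - uTime) * 0.25;
  fragColor = texture(uTexture, texCoord);
}
```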
Here are some pictures of the lava bunny:




The shaders for the lava bunny are called "lava-texture.vshader" and "lava-texture.fshader."


2. The Toon Bunny

This effect draws the bunny in a cartoony style.
Note that we do not need a new vertex shader; we keep basic-gl3.vshader.
We use bunny-gl3.fshader as the base for the new fragment shader.

We can draw the bunny in a cartoony style by limiting the colors of the bunny to only 3 shades of its
original color. We can measure the intensity with which the lights in the environment hit a point on the
bunny. If the intensity is strong, we use the lightest of the 3 shades. If the intensity is moderate, we use
the 2nd lightest of the 3 shades. If the intensity is weak, we use the darkest of the 3 shades.

Credits to lighthouse3d.com for this first toon shading method.

We measure the intensity with which the lights hit a point by using the dot product of the normalized
normal of the point and the normalized vector from the point to a light. This dot product computes the
cosine of the angle between the normal of the point and the ray of light hitting the point. A cosine is
used because it decreases as the angle increases from 0 to 90 degrees: a light ray parallel to the
normal should have a stronger effect than a light ray at an angle to it. Luckily, this intensity is
already computed for us in the "diffuse" variable in bunny-gl3.fshader.

Here is how we separate the intensities:
if (diffuse > .91)
  diffuse = 1.0;
else if (diffuse > .6)
  diffuse = .6;
else
  diffuse = .4;
As you can see, only 3 shades of the bunny will be drawn.
This shader is called "toon1.fshader" in the project.
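In context, the quantization happens right after the diffuse term is computed. Here is a sketch of the relevant part of main(); uLight and uColor are assumed names for the light position and surface color uniforms:

```glsl
// inside main() of toon1.fshader (sketch; uLight and uColor are assumed names)
vec3 normal = normalize(vNormal);
vec3 toLight = normalize(uLight - vPosition);

// cosine of the angle between the surface normal and the light direction
float diffuse = max(0.0, dot(normal, toLight));

// quantize into three bands: light, medium, dark
if (diffuse > 0.91)
  diffuse = 1.0;
else if (diffuse > 0.6)
  diffuse = 0.6;
else
  diffuse = 0.4;

vec3 intensity = uColor * diffuse;
```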

Now, we could stop here and have a decent toon shader, but one thing is missing: outlines.
We could compute the outlines by adding a 4th shade to the toon shader: when the light intensity of a
point is very low, we simply color it black.
However, we run into a problem if we use a light intensity as the measure of when to color a point
black. Since the light intensity of a point is measured with respect to the ray of light hitting the point, if
we change our view of the bunny, the light ray will not change. This means the outline on the bunny
will not move with respect to the viewer. So if we change our view to see an area of the bunny that is
not hit strongly by the light, we will see areas of black on the bunny and not an appropriate outline.

What we really need to do is create the outline with respect to the viewer (or the camera).
We can compute the "intensity" with which the camera views a point on the bunny using the dot
product of the normal with the vector from the point to the camera (everything should be normalized of
course). If this intensity is low, we color the point black.
Since in our shaders we are working in eyespace, the position of the camera will always be (0,0,0).
This means that the vector from a point (with eyespace coordinate vector vPosition) to the camera will
be -vPosition.
Here's what this looks like in code:
vec3 toCamera = normalize(-vPosition);
float cameraHit = max(0.0, dot(normal, toCamera));

if (cameraHit < .4)
  intensity = vec3(0, 0, 0);
"intensity" here does not refer to the camera view intensity. It is a variable from bunny-gl3.fshader that
represents the final color output of a point. "cameraHit" represents the camera view intensity.

*This method for computing an outline was original and not taken from another source.

For the other 3 shades, we still want them colored with respect to the light intensity.
Thus we only need to add a few lines of code to toon1.fshader.
The new toon shader is called "toon2.fshader" in the project folder.
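The added lines slot in after the quantized intensity has been computed. A sketch of the end of toon2.fshader's main(); fragColor is an assumed name for the shader's output variable:

```glsl
// after the three-band quantization from toon1.fshader:
vec3 toCamera = normalize(-vPosition);  // the camera sits at the eye-space origin
float cameraHit = max(0.0, dot(normal, toCamera));

// grazing fragments face away from the camera: paint them black as the outline
if (cameraHit < 0.4)
  intensity = vec3(0.0, 0.0, 0.0);

fragColor = vec4(intensity, 1.0);
```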

Here are some pictures of the toon bunny:




The 1st picture shows the effect from our 1st toon shader. The 2nd picture shows the problem we
can get if we use light intensity to compute the outlines. The 3rd and 4th pictures are examples using
the corrected, 2nd toon shader.


3. The Eroding Bunny

This effect makes the bunny gradually crumble and disappear.

Credits to 3Dlabs (3dshaders.com) for the idea for this effect. I looked at the source code for the
shaders they used to make the erode effect. I only used one of their coding ideas; the
implementation of the effect in my project was original. The texture used was also original.
The vertex shader for this effect is "lava-texture.vshader."
The fragment shader for this effect used "lava-texture.fshader" as its base.

To make the bunny gradually disappear, we can use a texture. A texture is not only useful for applying
an image onto another object; it can be used as data as well.
We create a texture that only consists of black, white, and shades of gray in between. We can use these
texture values to represent the time at which a certain point on the bunny will disappear. To make the
disappearing smooth, we need to make sure that there are appropriate transitions of gray between black
and white points in the texture. So basically all we need to do is draw an image that has black dots on a
white background, and use a smudge tool to create the transitions between the black and white.

The texture we use is called "erosion_map.ppm" in the project folder.

With that texture in place, we can have a uniform variable "uTime" in the fragment shader that
increases with time through a timer callback function (this callback function is called
"erosionAnimate()" in "final_project.h").
After getting the color of a point on the texture in the fragment shader, we measure the darkness of that
point and compare it to a value. If the measured darkness is less than the value, we discard the
associated fragment (lighter values are higher than darker values). This comparison value should
increase with time to let lighter fragments get discarded.
Here's what this looks like in code:
vec4 texColor = texture(uTexture, vPosition2.xy);

if (texColor.r < 0 + uTime)
  discard;
As you can see, the uTime variable increases the comparison value.
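Putting the pieces together, the eroding fragment shader is essentially the lava shader plus a discard test. A sketch follows; the shading of surviving fragments is simplified here to just showing the map, whereas the real shader keeps the base shading from lava-texture.fshader:

```glsl
#version 130

uniform sampler2D uTexture;  // erosion_map.ppm: black dots on white, smudged gray
uniform float uTime;         // incremented by the erosionAnimate() timer callback

in vec3 vPosition2;

out vec4 fragColor;

void main() {
  vec4 texColor = texture(uTexture, vPosition2.xy);

  // darker texels erode first: once uTime exceeds a texel's red value,
  // every fragment that samples it is discarded
  if (texColor.r < 0.0 + uTime)
    discard;

  // surviving fragments are shaded normally (simplified to the map color here)
  fragColor = texColor;
}
```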

We can turn on the eroding animation by pressing the 'e' key.
We also disable backface culling so that the eroding parts of the back of the bunny can be seen.

Here are some pictures of the eroding bunny:




You can see the bunny gradually disappearing. The last picture is a picture of the texture we used.

Resources:
1. The GLSL tutorials from lighthouse3d.com, Clockworkcoders
(http://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/ ), and Typhoon Labs
(http://www.opengl.org/sdk/docs/tutorials/TyphoonLabs/Chapter_3.pdf)
2. The shaders demo and source code from 3dshaders.com.

Mais conteúdo relacionado

Mais procurados

Radiation Physics Laboratory – Complementary Exercise Set
Radiation Physics Laboratory – Complementary Exercise SetRadiation Physics Laboratory – Complementary Exercise Set
Radiation Physics Laboratory – Complementary Exercise SetLuís Rita
 
Machine Learning - Neural Networks - Perceptron
Machine Learning - Neural Networks - PerceptronMachine Learning - Neural Networks - Perceptron
Machine Learning - Neural Networks - PerceptronAndrew Ferlitsch
 
K MEANS CLUSTERING
K MEANS CLUSTERINGK MEANS CLUSTERING
K MEANS CLUSTERINGsingh7599
 
Customer Segmentation using Clustering
Customer Segmentation using ClusteringCustomer Segmentation using Clustering
Customer Segmentation using ClusteringDessy Amirudin
 
L5 determination of natural frequency & mode shape
L5 determination of natural frequency & mode shapeL5 determination of natural frequency & mode shape
L5 determination of natural frequency & mode shapeSam Alalimi
 
K-means Clustering Algorithm with Matlab Source code
K-means Clustering Algorithm with Matlab Source codeK-means Clustering Algorithm with Matlab Source code
K-means Clustering Algorithm with Matlab Source codegokulprasath06
 
Statistica theromodynamics
Statistica theromodynamicsStatistica theromodynamics
Statistica theromodynamicsRaguM6
 
New chm 151_unit_4_power_points
New chm 151_unit_4_power_pointsNew chm 151_unit_4_power_points
New chm 151_unit_4_power_pointscaneman1
 
K mean-clustering
K mean-clusteringK mean-clustering
K mean-clusteringPVP College
 
Convolution
ConvolutionConvolution
Convolutionmuzuf
 
One particle to_onepartlce_scattering_18052020
One particle to_onepartlce_scattering_18052020One particle to_onepartlce_scattering_18052020
One particle to_onepartlce_scattering_18052020foxtrot jp R
 

Mais procurados (19)

Radiation Physics Laboratory – Complementary Exercise Set
Radiation Physics Laboratory – Complementary Exercise SetRadiation Physics Laboratory – Complementary Exercise Set
Radiation Physics Laboratory – Complementary Exercise Set
 
07. disjoint set
07. disjoint set07. disjoint set
07. disjoint set
 
linkd
linkdlinkd
linkd
 
Data miningpresentation
Data miningpresentationData miningpresentation
Data miningpresentation
 
Machine Learning - Neural Networks - Perceptron
Machine Learning - Neural Networks - PerceptronMachine Learning - Neural Networks - Perceptron
Machine Learning - Neural Networks - Perceptron
 
K MEANS CLUSTERING
K MEANS CLUSTERINGK MEANS CLUSTERING
K MEANS CLUSTERING
 
Customer Segmentation using Clustering
Customer Segmentation using ClusteringCustomer Segmentation using Clustering
Customer Segmentation using Clustering
 
K means
K meansK means
K means
 
L5 determination of natural frequency & mode shape
L5 determination of natural frequency & mode shapeL5 determination of natural frequency & mode shape
L5 determination of natural frequency & mode shape
 
K-means Clustering Algorithm with Matlab Source code
K-means Clustering Algorithm with Matlab Source codeK-means Clustering Algorithm with Matlab Source code
K-means Clustering Algorithm with Matlab Source code
 
K means Clustering Algorithm
K means Clustering AlgorithmK means Clustering Algorithm
K means Clustering Algorithm
 
Statistica theromodynamics
Statistica theromodynamicsStatistica theromodynamics
Statistica theromodynamics
 
New chm 151_unit_4_power_points
New chm 151_unit_4_power_pointsNew chm 151_unit_4_power_points
New chm 151_unit_4_power_points
 
Atomic Physics Homework Help
Atomic Physics Homework HelpAtomic Physics Homework Help
Atomic Physics Homework Help
 
K mean-clustering
K mean-clusteringK mean-clustering
K mean-clustering
 
Convolution
ConvolutionConvolution
Convolution
 
Neural nw k means
Neural nw k meansNeural nw k means
Neural nw k means
 
Back propderiv
Back propderivBack propderiv
Back propderiv
 
One particle to_onepartlce_scattering_18052020
One particle to_onepartlce_scattering_18052020One particle to_onepartlce_scattering_18052020
One particle to_onepartlce_scattering_18052020
 

Destaque

Crime And Mental Illness
Crime And Mental IllnessCrime And Mental Illness
Crime And Mental IllnessJeannieWeltzin
 
Mental illness and crime
Mental illness and crimeMental illness and crime
Mental illness and crimemzjay7
 
Crime & Mental Disorders
Crime & Mental DisordersCrime & Mental Disorders
Crime & Mental DisordersAleem Ashraf
 
Criminal psychology
Criminal psychologyCriminal psychology
Criminal psychologySusanorixter
 
Biological Theories of Crime
Biological Theories of CrimeBiological Theories of Crime
Biological Theories of Crimeknoxmodernstudies
 

Destaque (7)

Crime And Mental Illness
Crime And Mental IllnessCrime And Mental Illness
Crime And Mental Illness
 
Mental illness
Mental illnessMental illness
Mental illness
 
Crime
CrimeCrime
Crime
 
Mental illness and crime
Mental illness and crimeMental illness and crime
Mental illness and crime
 
Crime & Mental Disorders
Crime & Mental DisordersCrime & Mental Disorders
Crime & Mental Disorders
 
Criminal psychology
Criminal psychologyCriminal psychology
Criminal psychology
 
Biological Theories of Crime
Biological Theories of CrimeBiological Theories of Crime
Biological Theories of Crime
 

Semelhante a Shaders project

Volume Rendering in Unity3D
Volume Rendering in Unity3DVolume Rendering in Unity3D
Volume Rendering in Unity3DMatias Lavik
 
image-deblurring
image-deblurringimage-deblurring
image-deblurringErik Mayer
 
Data Science - Part XVII - Deep Learning & Image Processing
Data Science - Part XVII - Deep Learning & Image ProcessingData Science - Part XVII - Deep Learning & Image Processing
Data Science - Part XVII - Deep Learning & Image ProcessingDerek Kane
 
Estrazione automatica delle linee in un'immagine digitale
Estrazione automatica delle linee in un'immagine digitaleEstrazione automatica delle linee in un'immagine digitale
Estrazione automatica delle linee in un'immagine digitalefrancescapadoin
 
SinGAN for Image Denoising
SinGAN for Image DenoisingSinGAN for Image Denoising
SinGAN for Image DenoisingKhalilBergaoui
 
Digital image processing Tool presentation
Digital image processing Tool presentationDigital image processing Tool presentation
Digital image processing Tool presentationdikshabehl5392
 
Edge detection iOS application
Edge detection iOS applicationEdge detection iOS application
Edge detection iOS applicationKetan Raval
 
Modeling and texturing in 3 ds max
Modeling and texturing in 3 ds maxModeling and texturing in 3 ds max
Modeling and texturing in 3 ds maxsribalaji0007
 
Modelo y textrurizado 3 ds max
Modelo y textrurizado 3 ds maxModelo y textrurizado 3 ds max
Modelo y textrurizado 3 ds maxLuchIn Avalos
 
Direct Volume Rendering (DVR): Ray-casting
Direct Volume Rendering (DVR): Ray-castingDirect Volume Rendering (DVR): Ray-casting
Direct Volume Rendering (DVR): Ray-castingCaferYaarKarabulut
 
A computer vision approach to speech enhancement
A computer vision approach to speech enhancementA computer vision approach to speech enhancement
A computer vision approach to speech enhancementRamin Anushiravani
 
Order Independent Transparency
Order Independent TransparencyOrder Independent Transparency
Order Independent Transparencyacbess
 
Image Reconstruction Using Sparse Approximation
Image Reconstruction Using Sparse ApproximationImage Reconstruction Using Sparse Approximation
Image Reconstruction Using Sparse ApproximationChristopher Neighbor
 
Parametric Equations with Mathcad Prime
Parametric Equations with Mathcad PrimeParametric Equations with Mathcad Prime
Parametric Equations with Mathcad PrimeCaroline de Villèle
 

Semelhante a Shaders project (20)

Volume Rendering in Unity3D
Volume Rendering in Unity3DVolume Rendering in Unity3D
Volume Rendering in Unity3D
 
image-deblurring
image-deblurringimage-deblurring
image-deblurring
 
Data Science - Part XVII - Deep Learning & Image Processing
Data Science - Part XVII - Deep Learning & Image ProcessingData Science - Part XVII - Deep Learning & Image Processing
Data Science - Part XVII - Deep Learning & Image Processing
 
Estrazione automatica delle linee in un'immagine digitale
Estrazione automatica delle linee in un'immagine digitaleEstrazione automatica delle linee in un'immagine digitale
Estrazione automatica delle linee in un'immagine digitale
 
SinGAN for Image Denoising
SinGAN for Image DenoisingSinGAN for Image Denoising
SinGAN for Image Denoising
 
Vray lighting
Vray lightingVray lighting
Vray lighting
 
Digital image processing Tool presentation
Digital image processing Tool presentationDigital image processing Tool presentation
Digital image processing Tool presentation
 
Edge detection iOS application
Edge detection iOS applicationEdge detection iOS application
Edge detection iOS application
 
Poster cs543
Poster cs543Poster cs543
Poster cs543
 
Modeling and texturing in 3 ds max
Modeling and texturing in 3 ds maxModeling and texturing in 3 ds max
Modeling and texturing in 3 ds max
 
Modelo y textrurizado 3 ds max
Modelo y textrurizado 3 ds maxModelo y textrurizado 3 ds max
Modelo y textrurizado 3 ds max
 
Direct Volume Rendering (DVR): Ray-casting
Direct Volume Rendering (DVR): Ray-castingDirect Volume Rendering (DVR): Ray-casting
Direct Volume Rendering (DVR): Ray-casting
 
A computer vision approach to speech enhancement
A computer vision approach to speech enhancementA computer vision approach to speech enhancement
A computer vision approach to speech enhancement
 
ResearchPaper_Final
ResearchPaper_FinalResearchPaper_Final
ResearchPaper_Final
 
Final
FinalFinal
Final
 
Order Independent Transparency
Order Independent TransparencyOrder Independent Transparency
Order Independent Transparency
 
Image Reconstruction Using Sparse Approximation
Image Reconstruction Using Sparse ApproximationImage Reconstruction Using Sparse Approximation
Image Reconstruction Using Sparse Approximation
 
Report
ReportReport
Report
 
Parametric Equations with Mathcad Prime
Parametric Equations with Mathcad PrimeParametric Equations with Mathcad Prime
Parametric Equations with Mathcad Prime
 
Algorithm
AlgorithmAlgorithm
Algorithm
 

Último

Secure your environment with UiPath and CyberArk technologies - Session 1
Secure your environment with UiPath and CyberArk technologies - Session 1Secure your environment with UiPath and CyberArk technologies - Session 1
Secure your environment with UiPath and CyberArk technologies - Session 1DianaGray10
 
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...DianaGray10
 
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDEADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDELiveplex
 
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online CollaborationCOMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online Collaborationbruanjhuli
 
Computer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsComputer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsSeth Reyes
 
COMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a WebsiteCOMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a Websitedgelyza
 
UiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPathCommunity
 
Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024SkyPlanner
 
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCost
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCostKubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCost
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCostMatt Ray
 
Designing A Time bound resource download URL
Designing A Time bound resource download URLDesigning A Time bound resource download URL
Designing A Time bound resource download URLRuncy Oommen
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8DianaGray10
 
NIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopNIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopBachir Benyammi
 
Linked Data in Production: Moving Beyond Ontologies
Linked Data in Production: Moving Beyond OntologiesLinked Data in Production: Moving Beyond Ontologies
Linked Data in Production: Moving Beyond OntologiesDavid Newbury
 
Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Commit University
 
UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7DianaGray10
 
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UbiTrack UK
 
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdf
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdfUiPath Solutions Management Preview - Northern CA Chapter - March 22.pdf
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdfDianaGray10
 
Building AI-Driven Apps Using Semantic Kernel.pptx
Building AI-Driven Apps Using Semantic Kernel.pptxBuilding AI-Driven Apps Using Semantic Kernel.pptx
Building AI-Driven Apps Using Semantic Kernel.pptxUdaiappa Ramachandran
 
Igniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsIgniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsSafe Software
 
Empowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintEmpowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintMahmoud Rabie
 

Último (20)

Secure your environment with UiPath and CyberArk technologies - Session 1
Secure your environment with UiPath and CyberArk technologies - Session 1Secure your environment with UiPath and CyberArk technologies - Session 1
Secure your environment with UiPath and CyberArk technologies - Session 1
 
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
 
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDEADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
 
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online CollaborationCOMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
 
Computer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsComputer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and Hazards
 
COMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a WebsiteCOMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a Website
 
UiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation Developers
 
Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024
 
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCost
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCostKubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCost
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCost
 
Designing A Time bound resource download URL
Designing A Time bound resource download URLDesigning A Time bound resource download URL
Designing A Time bound resource download URL
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8
 
NIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopNIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 Workshop
 
Linked Data in Production: Moving Beyond Ontologies
Linked Data in Production: Moving Beyond OntologiesLinked Data in Production: Moving Beyond Ontologies
Linked Data in Production: Moving Beyond Ontologies
 
Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)
 
UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7
 
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
 
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdf
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdfUiPath Solutions Management Preview - Northern CA Chapter - March 22.pdf
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdf
 
Building AI-Driven Apps Using Semantic Kernel.pptx
Building AI-Driven Apps Using Semantic Kernel.pptxBuilding AI-Driven Apps Using Semantic Kernel.pptx
Building AI-Driven Apps Using Semantic Kernel.pptx
 
Igniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsIgniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration Workflows
 
Empowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintEmpowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership Blueprint
 

Shaders project

  • 1. Working with Shaders (GLSL) Timothy Kim My project involved using different shaders to produce different effects in a bunny mesh. I used the assignment 8 infrastructure as a base for my project. 1. The Lava Bunny This effect involves animating lava on the bunny mesh. I learned about animating textures in this tutorial by Jacobo Rodriguez Villar from Typhoon Labs: http://www.opengl.org/sdk/docs/tutorials/TyphoonLabs/Chapter_3.pdf First we apply a seamless lava texture to the bunny. For the lava texture, we use a seamless 512x512 image called "lava2.ppm" in the project folder. Credit for the image goes to Patrick Hoesly. His website is zooboing.com. We enable the repeating of texture coordinates in the texture. For the shaders, we use basic-gl3.vshader and bunny-gl3.fshader as bases. Now to apply the texture to the bunny, we need some way of defining the texture coordinates a given pixel of the bunny will use. We could started off by defining an in-attribute variable called aTexCoord. However, since the texture is seamless and the repeating of texture coordinates is enabled, we don't need to care about the texture coordinates as long as they are consistent. So we can use the vertex positions as the texture coordinates. Looking at basic-gl3.vshader, if we use vPosition as the texture coordinate in the fragment shader, we will run into trouble because vPosition is multiplied by the ModelView matrix and the Projection matrix. This results in the texture not being able to "stick on" to the bunny, and when we transform the bunny, the texture acts like a fixed projection. To fix this problem, we use the in-variable aPosition, which is vPosition without the matrices multiplied to it. We pass in the value of aPosition to the fragment shader using a new out-variable called vPosition2. To use the texture in the fragment shader, we need to define a new sampler2D uniform variable called "uTexture." 
We can get the appropriate texel from the texture by writing: vec4 texColor = texture(uTexture, vPosition2.xy * .25); The multiplication of .25 helps enlarge the texture on the bunny. Now to animate the lava on the bunny, we need to define a new uniform variable called "uTime" that will hold a value that increases with time. We use a timer callback function called "lavaAnimate()" (located in "final_project.h") that increases the value of uTime at a set interval. lavaAnimate can be turned on by pressing the 'l' (lowercase "L") key. We now add uTime as an offset to the texture coordinates: vec4 texColor = texture(uTexture, vec2(vPosition2.x - uTime, vPosition2.y - uTime) * .25); This has the effect of moving the lava texture diagonally up on the bunny.
  • 2. Here are some pictures of the lava bunny: The shaders for the lava bunny are called "lava-texture.vshader" and "lava-texture.fshader." 2. The Toon Bunny This effect draws the bunny in a cartoony style. Note that we will do not need to use a new vertex shader. It will remain as basic-gl3.vshader. We use bunny-gl3.fshader as the base for the new fragment shader. We can draw the bunny in a cartoony style by limiting the colors of the bunny to only 3 shades of its original color. We can measure the intensity with which the lights in the environment hit a point on the bunny. If the intensity is strong, we use the lightest of the 3 shades. If the intensity is moderate, we use the 2nd lightest of the 3 shades. If the intensity is weak, we use the darkest of the 3 shades. Credits to lighthouse3d.com for this first toon shading method. We measure the intensity the lights hit a point by using the dot product of the normalized normal of the point and the normalized vector from the point to a light. This dot products computes the cosine of the angle between the normal of the point and the ray of light hitting the point. A cosine function is used because it decreases as the angle increases from 0 to 90 degrees. A light ray parallel to the normal should have a stronger effect than a light ray off from the normal. Luckily, this intensity is already computed for us in the "diffuse" variable in bunny-gl3.fshader. Here are the ways we separated the intensities: if (diffuse > .91) diffuse = 1.0; else if (diffuse > .6) diffuse = .6; else diffuse = .4; As you can see, only 3 shades of the bunny will be drawn. This shader is called "toon1.fshader" in the project. Now, we can stop here and we would have a decent toon shader, but one thing is missing: outlines. We could compute the outlines by making a 4th shade in the toon shader, where when the light intensity of a point is really low, we just color it black. 
However, we run into a problem if we use light intensity as the measure of when to color a point black. Since the light intensity at a point is measured with respect to the ray of light hitting it, the light ray does not change when we change our view of the bunny. This means the outline will not move with respect to the viewer, so if we change our view to an area of the bunny that is not hit strongly by the light, we see patches of black on the bunny rather than a proper outline.

What we really need is to create the outline with respect to the viewer (the camera). We can compute the "intensity" with which the camera views a point on the bunny as the dot product of the normal with the vector from the point to the camera (everything normalized, of course). If this intensity is low, we color the point black.

Since our shaders work in eye space, the position of the camera is always (0,0,0). This means the vector from a point (with eye-space coordinate vector vPosition) to the camera is -vPosition. Here's what this looks like in code:

vec3 toCamera = normalize(-vPosition);
float cameraHit = max(0.0, dot(normal, toCamera));
if (cameraHit < .4)
  intensity = vec3(0, 0, 0);

"intensity" here does not refer to the camera view intensity; it is a variable from bunny-gl3.fshader that represents the final color output of a point. "cameraHit" represents the camera view intensity.

*This method for computing an outline was original and not taken from another source.

For the other 3 shades, we still want them colored with respect to the light intensity, so we only need to add a few lines of code to toon1.fshader. The new toon shader is called "toon2.fshader" in the project folder.

Here are some pictures of the toon bunny:

The 1st picture shows the effect from our 1st toon shader. The 2nd picture shows the problem we get if we use light intensity to compute the outlines. The 3rd and 4th pictures are examples using the corrected, 2nd toon shader.

3. The Eroding Bunny

This effect makes the bunny gradually crumble and disappear. Credits to 3Dlabs (3dshaders.com) for the idea for this effect.
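Before walking through the eroding bunny, the toon outline test from toon2.fshader can be checked on the CPU with ordinary vector math. A Python sketch (the 0.4 threshold is the one quoted earlier; normalize and dot mirror the GLSL built-ins):

```python
import math

def is_outline(v_position, normal, threshold=0.4):
    """Return True if this fragment should be painted black.

    In eye space the camera sits at the origin, so the view vector
    is just -vPosition, normalized. A small dot product with the
    normal means the surface is nearly edge-on to the viewer.
    """
    to_camera = [-c for c in v_position]
    length = math.sqrt(sum(c * c for c in to_camera))
    to_camera = [c / length for c in to_camera]
    camera_hit = max(0.0, sum(n * c for n, c in zip(normal, to_camera)))
    return camera_hit < threshold

# A surface facing the camera head-on is not part of the outline...
print(is_outline((0.0, 0.0, -5.0), (0.0, 0.0, 1.0)))  # False
# ...but one whose normal is perpendicular to the view direction is.
print(is_outline((0.0, 0.0, -5.0), (1.0, 0.0, 0.0)))  # True
```

Unlike the light-based test, this quantity changes as the camera moves, which is why the silhouette stays attached to the edge of the bunny from any viewpoint.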
I looked at the source code for the shaders they used to create the erode effect. I only borrowed one of their coding ideas; the implementation of the effect in my project was original, as was the texture used.
The vertex shader for this effect is "lava-texture.vshader." The fragment shader uses "lava-texture.fshader" as its base.

To make the bunny gradually disappear, we can use a texture. A texture is not only useful for applying an image onto another object; it can serve as data as well. We create a texture consisting only of black, white, and shades of gray in between, and use these texture values to represent the time at which a given point on the bunny will disappear. To make the disappearing smooth, we need appropriate transitions of gray between the black and white points in the texture. So all we need to do is draw an image with black dots on a white background and use a smudge tool to create the transitions between the black and white. The texture we use is called "erosion_map.ppm" in the project folder.

With that texture in place, we can have a uniform variable "uTime" in the fragment shader that increases with time through a timer callback function (called "erosionAnimate()" in "final_project.h"). After sampling the texture at a point in the fragment shader, we measure the darkness of that texel and compare it to a threshold. If the measured value is less than the threshold, we discard the associated fragment (lighter texels have higher values than darker ones). The threshold increases with time so that progressively lighter fragments get discarded. Here's what this looks like in code:

vec4 texColor = texture(uTexture, vPosition2.xy);
if (texColor.r < 0 + uTime)
  discard;

As you can see, the uTime variable raises the comparison value. The eroding animation can be turned on by pressing the 'e' key. We also disable backface culling so that the eroding parts of the back of the bunny can be seen.

Here are some pictures of the eroding bunny: you can see the bunny gradually disappearing. The last picture shows the texture we used.

Resources:
1. The GLSL tutorials from lighthouse3d.com, Clockworkcoders (http://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/), and Typhoon Labs (http://www.opengl.org/sdk/docs/tutorials/TyphoonLabs/Chapter_3.pdf)
2. The shaders demo and source code from 3dshaders.com
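As a closing sketch, the eroding bunny's discard rule can be simulated with plain numbers. This Python snippet mirrors the comparison from the fragment shader, assuming the erosion map's red channel is normalized to [0, 1] (black = 0, white = 1):

```python
def is_eroded(texel_red, u_time):
    """True if the fragment would be discarded, mirroring
    'if (texColor.r < 0 + uTime) discard;' from the shader.

    Dark texels (small red values) erode first; as uTime grows,
    progressively lighter texels fall below the threshold too.
    """
    return texel_red < u_time

# At uTime = 0, nothing has eroded yet.
print(is_eroded(0.1, 0.0))  # False
# Midway through, the dark spot (0.1) is gone but the light area
# (0.9) still survives.
print(is_eroded(0.1, 0.5))  # True
print(is_eroded(0.9, 0.5))  # False
```

Once uTime exceeds 1.0, every texel value falls below the threshold and the whole bunny has been discarded.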