Introduction to Probability
- 2. Random variables
A random variable x denotes a quantity that is uncertain
It may be the result of an experiment (flipping a coin) or of a real-world measurement (measuring temperature)
If we observe several instances of x we get different values
Some values occur more than others, and this information is captured by a probability distribution
Computer vision: models, learning and inference. ©2011
Simon J.D. Prince
- 3. Discrete Random Variables
Ordered – histogram
Unordered – Hinton diagram
- 4. Continuous Random Variable
- The probability density function (PDF) of a continuous distribution integrates to 1
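As a quick numerical sanity check on this property, the sketch below integrates the PDF of a standard normal distribution (chosen here purely as an example) with the trapezoid rule; the chosen interval and step count are arbitrary.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Density of a normal distribution at x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Trapezoid-rule approximation of the integral of the PDF over [-8, 8];
# the tails outside this range contribute a negligible amount.
a, b, n = -8.0, 8.0, 10_000
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]
area = h * (sum(normal_pdf(x) for x in xs) - 0.5 * (normal_pdf(a) + normal_pdf(b)))
print(area)  # very close to 1
```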
- 5. Joint Probability
Consider two random variables x and y
If we observe multiple paired instances, then some combinations of outcomes are more likely than others
This is captured in the joint probability distribution
Written as Pr(x,y)
Can read Pr(x,y) as “probability of x and y”
- 6. Joint Probability
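A joint distribution over two discrete variables can be sketched as a table keyed by outcome pairs; the numbers below are made up purely for illustration.

```python
# A toy joint distribution Pr(x, y) over two discrete variables, stored as a
# table keyed by outcome pairs. The probabilities are invented for illustration.
joint = {
    ("x1", "y1"): 0.10, ("x1", "y2"): 0.30,
    ("x2", "y1"): 0.25, ("x2", "y2"): 0.35,
}

# Like any probability distribution, the joint sums to 1 over all outcomes.
total = sum(joint.values())
assert abs(total - 1.0) < 1e-12

# Some combinations of outcomes are more likely than others:
assert joint[("x2", "y2")] > joint[("x1", "y1")]
```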
- 7. Marginalization
We can recover the probability distribution of any variable in a joint
distribution by integrating (or summing) over the other variables:
Pr(x) = ∫ Pr(x,y) dy for continuous variables, Pr(x) = Σy Pr(x,y) for discrete ones
- 10. Marginalization
This works in higher dimensions as well – marginalizing leaves the joint
distribution over whatever variables are left
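In the discrete case, marginalization is just summation over the unwanted variable. A minimal sketch using the toy joint table from the earlier example:

```python
# Summing the toy joint Pr(x, y) over y recovers Pr(x); summing over x
# recovers Pr(y). The table values are invented for illustration.
joint = {
    ("x1", "y1"): 0.10, ("x1", "y2"): 0.30,
    ("x2", "y1"): 0.25, ("x2", "y2"): 0.35,
}

def marginal(joint, keep):
    """Sum over the other variable; keep=0 gives Pr(x), keep=1 gives Pr(y)."""
    out = {}
    for pair, p in joint.items():
        out[pair[keep]] = out.get(pair[keep], 0.0) + p
    return out

px = marginal(joint, 0)  # Pr(x): x1 ≈ 0.40, x2 ≈ 0.60
py = marginal(joint, 1)  # Pr(y): y1 ≈ 0.35, y2 ≈ 0.65
assert abs(sum(px.values()) - 1.0) < 1e-12  # each marginal sums to 1
```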
- 11. Conditional Probability
The conditional probability of x given that y = y1 is the relative
propensity of variable x to take different outcomes given
that y is fixed to be equal to y1
Written as Pr(x|y=y1)
- 12. Conditional Probability
Conditional probability can be extracted from the joint probability
Extract the appropriate slice and normalize it (because the slice does not
sum to 1)
- 13. Conditional Probability
More usually written in compact form: Pr(x|y) = Pr(x,y) / Pr(y)
• Can be re-arranged to give: Pr(x,y) = Pr(x|y) Pr(y)
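The slice-and-normalize recipe can be sketched on the toy joint table from before; the table values are invented for illustration.

```python
# Extracting a conditional from a joint table: slice at y = y1, then
# normalize by the marginal Pr(y = y1) so the result sums to 1.
joint = {
    ("x1", "y1"): 0.10, ("x1", "y2"): 0.30,
    ("x2", "y1"): 0.25, ("x2", "y2"): 0.35,
}

def conditional(joint, y_value):
    """Pr(x | y = y_value) = Pr(x, y = y_value) / Pr(y = y_value)."""
    slice_ = {x: p for (x, y), p in joint.items() if y == y_value}
    z = sum(slice_.values())  # the marginal Pr(y = y_value)
    return {x: p / z for x, p in slice_.items()}

cond = conditional(joint, "y1")
assert abs(sum(cond.values()) - 1.0) < 1e-12  # a valid distribution again
```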
- 14. Conditional Probability
This idea can be extended to more than two variables, e.g.
Pr(w,x,y,z) = Pr(w|x,y,z) Pr(x|y,z) Pr(y|z) Pr(z)
- 15. Bayes’ Rule
From before: Pr(x,y) = Pr(x|y) Pr(y) and Pr(x,y) = Pr(y|x) Pr(x)
Combining: Pr(y|x) Pr(x) = Pr(x|y) Pr(y)
Re-arranging: Pr(y|x) = Pr(x|y) Pr(y) / Pr(x)
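Bayes' rule can be checked numerically on the toy joint table used earlier (values invented for illustration): the posterior computed via the rule must match the one read off the joint directly.

```python
# Checking Bayes' rule Pr(y|x) = Pr(x|y) Pr(y) / Pr(x) on a toy joint table.
joint = {
    ("x1", "y1"): 0.10, ("x1", "y2"): 0.30,
    ("x2", "y1"): 0.25, ("x2", "y2"): 0.35,
}

pr_x1 = joint[("x1", "y1")] + joint[("x1", "y2")]  # evidence Pr(x = x1)
pr_y1 = joint[("x1", "y1")] + joint[("x2", "y1")]  # prior Pr(y = y1)
likelihood = joint[("x1", "y1")] / pr_y1           # Pr(x = x1 | y = y1)

posterior = likelihood * pr_y1 / pr_x1             # Bayes' rule
direct = joint[("x1", "y1")] / pr_x1               # Pr(y = y1 | x = x1) from the joint
assert abs(posterior - direct) < 1e-12
```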
- 16. Bayes’ Rule Terminology
Likelihood – propensity for observing a certain value of x given a certain value of y
Prior – what we know about y before seeing x
Posterior – what we know about y after seeing x
Evidence – a constant to ensure that the left-hand side is a valid distribution
- 17. Independence
If two variables x and y are independent then variable x
tells us nothing about variable y (and vice-versa)
- 19. Independence
When variables are independent, the joint factorizes into
a product of the marginals: Pr(x,y) = Pr(x) Pr(y)
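A minimal sketch of both statements at once, using made-up marginals: build a joint as the product of marginals, then confirm that conditioning on y leaves Pr(x) unchanged, i.e. y tells us nothing about x.

```python
# When x and y are independent, the joint is the product of the marginals,
# and conditioning on y changes nothing: Pr(x | y) equals Pr(x) for every y.
px = {"x1": 0.4, "x2": 0.6}     # invented marginal Pr(x)
py = {"y1": 0.35, "y2": 0.65}   # invented marginal Pr(y)
joint = {(x, y): px[x] * py[y] for x in px for y in py}

for y in py:
    norm = sum(joint[(x, y)] for x in px)  # marginal Pr(y = y)
    for x in px:
        # slice-and-normalize conditional equals the marginal Pr(x)
        assert abs(joint[(x, y)] / norm - px[x]) < 1e-12
```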
- 20. Expectation
Expectation tells us the expected or average value of
some function f[x], taking into account the distribution of x
Definition: E[f[x]] = Σx f[x] Pr(x) for discrete x, or ∫ f[x] Pr(x) dx for continuous x
- 21. Expectation
Expectation tells us the expected or average value of
some function f[x], taking into account the distribution of x
Definition in two dimensions: E[f[x,y]] = Σx Σy f[x,y] Pr(x,y)
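For a discrete variable the definition is a weighted sum, which a few lines of code make concrete; the distribution below is made up for illustration.

```python
# Expectation of a function f[x] under a discrete distribution:
# E[f[x]] = sum over x of f(x) * Pr(x).
pr = {0: 0.2, 1: 0.5, 2: 0.3}  # an invented distribution over {0, 1, 2}

def expectation(f, pr):
    return sum(f(x) * p for x, p in pr.items())

mean = expectation(lambda x: x, pr)        # E[x]   ≈ 1.1
second = expectation(lambda x: x * x, pr)  # E[x^2] ≈ 1.7
variance = second - mean ** 2              # E[x^2] - E[x]^2 ≈ 0.49
```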
- 23. Expectation: Rules
Rule 1: E[k] = k
The expected value of a constant is the constant
- 24. Expectation: Rules
Rule 2: E[k f[x]] = k E[f[x]]
The expected value of a constant times a function is the constant
times the expected value of the function
- 25. Expectation: Rules
Rule 3: E[f[x] + g[x]] = E[f[x]] + E[g[x]]
The expectation of a sum of functions is the sum of the expectations
of the functions
- 26. Expectation: Rules
Rule 4: E[f[x] g[y]] = E[f[x]] E[g[y]] if x and y are independent
The expectation of a product of functions of variables x and y
is the product of the expectations of the functions if x and y are independent
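Rules 1–3 can be verified numerically on a small made-up distribution (Rule 4 would additionally need a joint over independent x and y, as in the independence example):

```python
# Numerically checking expectation rules 1-3 on an invented discrete distribution.
pr = {0: 0.2, 1: 0.5, 2: 0.3}

def E(f):
    return sum(f(x) * p for x, p in pr.items())

k = 3.0
f = lambda x: x * x
g = lambda x: x + 1.0

assert abs(E(lambda x: k) - k) < 1e-12                       # Rule 1: E[k] = k
assert abs(E(lambda x: k * f(x)) - k * E(f)) < 1e-9          # Rule 2: E[k f] = k E[f]
assert abs(E(lambda x: f(x) + g(x)) - (E(f) + E(g))) < 1e-9  # Rule 3: E[f+g] = E[f]+E[g]
```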
- 27. Conclusions
• Rules of probability are compact and simple
• Concepts of marginalization, joint and conditional
probability, Bayes' rule and expectation underpin all of
the models in this book
• One remaining concept – conditional expectation – is
discussed later
- 28. Thank You
for your attention
Based on:
Computer vision: models, learning and inference. ©2011 Simon J.D. Prince
http://www.computervisionmodels.com/