Contrastive Divergence
Learning
Geoffrey E. Hinton
A discussion led by Oliver Woodford
Contents
• Maximum Likelihood learning
• Gradient descent-based approach
• Markov Chain Monte Carlo sampling
• Contrastive Divergence
• Further topics for discussion:
– Result biasing of Contrastive Divergence
– Product of Experts
– High-dimensional data considerations
Maximum Likelihood learning

• Given:
  – Probability model: $p(x; \Theta) = \frac{1}{Z(\Theta)} f(x; \Theta)$
    • $\Theta$ - model parameters
    • $Z(\Theta)$ - the partition function, defined as $Z(\Theta) = \int f(x; \Theta)\, dx$
  – Training data: $X = \{x_k\}_{k=1}^K$
• Aim:
  – Find $\Theta$ that maximizes the likelihood of the training data:
    $$p(X; \Theta) = \prod_{k=1}^K \frac{1}{Z(\Theta)} f(x_k; \Theta)$$
  – Or, that minimizes the negative log of the likelihood:
    $$E(X; \Theta) = K \log Z(\Theta) - \sum_{k=1}^K \log f(x_k; \Theta)$$

Toy example (known result):
$$f(x; \Theta) = \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \quad \Theta = \{\mu, \sigma\}, \quad Z(\Theta) = \sigma\sqrt{2\pi}$$
Maximum Likelihood learning

• Method:
  – $\frac{\partial E(X;\Theta)}{\partial \Theta} = 0$ at the minimum
  – Let's assume that there is no closed-form solution…

$$\frac{\partial E(X; \Theta)}{\partial \Theta} = \frac{\partial \log Z(\Theta)}{\partial \Theta} - \frac{1}{K} \sum_{i=1}^K \frac{\partial \log f(x_i; \Theta)}{\partial \Theta} = \frac{\partial \log Z(\Theta)}{\partial \Theta} - \left\langle \frac{\partial \log f(x; \Theta)}{\partial \Theta} \right\rangle_X$$

$\langle \cdot \rangle_X$ is the expectation of $\cdot$ given the data distribution $X$.

Toy example:
$$\frac{\partial E(X;\Theta)}{\partial \Theta} = \frac{\partial \log(\sigma\sqrt{2\pi})}{\partial \Theta} + \left\langle \frac{\partial}{\partial \Theta} \frac{(x-\mu)^2}{2\sigma^2} \right\rangle_X$$
$$\frac{\partial E(X;\Theta)}{\partial \mu} = -\left\langle \frac{x-\mu}{\sigma^2} \right\rangle_X = 0 \;\Rightarrow\; \mu = \langle x \rangle_X$$
$$\frac{\partial E(X;\Theta)}{\partial \sigma} = \frac{1}{\sigma} - \left\langle \frac{(x-\mu)^2}{\sigma^3} \right\rangle_X = 0 \;\Rightarrow\; \sigma = \sqrt{\langle (x-\mu)^2 \rangle_X}$$
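The closed-form ML estimates for the toy Gaussian, $\mu = \langle x \rangle_X$ and $\sigma = \sqrt{\langle (x-\mu)^2 \rangle_X}$, are easy to check numerically. The following is an illustrative sketch (not part of the slides), using NumPy and synthetic training data with known parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=1.5, size=10_000)  # synthetic training data

# Closed-form ML estimates derived above
mu_ml = X.mean()                               # mu = <x>_X
sigma_ml = np.sqrt(np.mean((X - mu_ml) ** 2))  # sigma = sqrt(<(x - mu)^2>_X)
```

Both estimates land close to the generating parameters (2.0 and 1.5), up to sampling noise.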
Gradient descent-based approach

– Move a fixed step size, $\eta$, in the direction of steepest gradient. (Not line search – see why later.)
– This gives the following parameter update equation:

$$\Theta_{t+1} = \Theta_t - \eta \frac{\partial E(X; \Theta_t)}{\partial \Theta_t} = \Theta_t - \eta \left( \frac{\partial \log Z(\Theta_t)}{\partial \Theta_t} - \left\langle \frac{\partial \log f(x; \Theta_t)}{\partial \Theta_t} \right\rangle_X \right)$$
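For the toy Gaussian the gradient of $E$ is tractable, so this update equation can be run as-is. A minimal sketch (illustrative only; the synthetic data, initial parameters, step size and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(2.0, 1.5, size=5_000)  # synthetic training data

mu, sigma = 0.0, 1.0  # initial parameters Theta_0
eta = 0.1             # fixed step size

for _ in range(500):
    # dE/dmu = -<(x - mu)/sigma^2>_X,  dE/dsigma = 1/sigma - <(x - mu)^2/sigma^3>_X
    d_mu = -np.mean((X - mu) / sigma**2)
    d_sigma = 1.0 / sigma - np.mean((X - mu) ** 2 / sigma**3)
    mu, sigma = mu - eta * d_mu, sigma - eta * d_sigma
```

The iterates approach the closed-form answers $\mu = \langle x \rangle_X$ and $\sigma = \sqrt{\langle (x-\mu)^2 \rangle_X}$.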
Gradient descent-based approach

– Recall $Z(\Theta) = \int f(x; \Theta)\, dx$. Sometimes this integral will be algebraically intractable.
– This means we can calculate neither $E(X; \Theta)$ nor $\frac{\partial \log Z(\Theta)}{\partial \Theta}$ (hence no line search).
– However, with some clever substitution…

$$\frac{\partial \log Z(\Theta)}{\partial \Theta} = \frac{1}{Z(\Theta)} \frac{\partial Z(\Theta)}{\partial \Theta} = \frac{1}{Z(\Theta)} \frac{\partial}{\partial \Theta} \int f(x; \Theta)\, dx = \frac{1}{Z(\Theta)} \int \frac{\partial f(x;\Theta)}{\partial \Theta}\, dx$$
$$= \frac{1}{Z(\Theta)} \int f(x; \Theta) \frac{\partial \log f(x;\Theta)}{\partial \Theta}\, dx = \int p(x; \Theta) \frac{\partial \log f(x;\Theta)}{\partial \Theta}\, dx = \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{p(x;\Theta)}$$

– so
$$\Theta_{t+1} = \Theta_t - \eta \left( \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial \Theta_t} \right\rangle_{p(x;\Theta_t)} - \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial \Theta_t} \right\rangle_X \right)$$
where $\left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{p(x;\Theta)}$ can be estimated numerically.
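For the toy Gaussian this substitution can be sanity-checked by Monte Carlo: sample from $p(x;\Theta)$ directly (possible here, unlike in the intractable case the slides are concerned with) and compare $\langle \partial \log f / \partial \sigma \rangle_{p(x;\Theta)}$ against the known $\partial \log Z / \partial \sigma = 1/\sigma$. An illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.5, 2.0  # arbitrary parameter values

# Here we can sample p(x; Theta) exactly, because the toy model is a Gaussian
xs = rng.normal(mu, sigma, size=200_000)

# Monte Carlo estimate of <d log f / d sigma>_p, with d log f / d sigma = (x - mu)^2 / sigma^3
estimate = np.mean((xs - mu) ** 2 / sigma**3)

# For this model, d log Z / d sigma = d log(sigma * sqrt(2 pi)) / d sigma = 1 / sigma
exact = 1.0 / sigma
```

The two quantities agree to Monte Carlo accuracy, as the derivation above predicts.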
Markov Chain Monte Carlo sampling

– To estimate $\left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{p(x;\Theta)}$ we must draw samples from $p(x; \Theta)$.
– Since $Z(\Theta)$ is unknown, we cannot draw samples randomly from a cumulative distribution curve.
– Markov Chain Monte Carlo (MCMC) methods turn random samples into samples from a proposed distribution, without knowing $Z(\Theta)$.
– Metropolis algorithm:
  • Perturb samples, e.g. $x'_k = x_k + \text{randn}(\text{size}(x_k))$
  • Reject $x'_k$ if $\frac{p(x'_k;\Theta)}{p(x_k;\Theta)} < \text{rand}(1)$ (this ratio is computable, since $Z(\Theta)$ cancels)
  • Repeat cycle for all samples until stabilization of the distribution.
– Stabilization takes many cycles, and there is no accurate criterion for determining when it has occurred.
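The Metropolis cycle above can be sketched as follows (an illustrative NumPy translation of the slide's pseudocode; the standard-normal target and cycle count are arbitrary choices):

```python
import numpy as np

def f(x, mu=0.0, sigma=1.0):
    # Unnormalised model f(x; Theta); Z(Theta) is never needed,
    # because it cancels in the acceptance ratio below
    return np.exp(-(x - mu) ** 2 / (2 * sigma**2))

rng = np.random.default_rng(3)
x = rng.uniform(-5.0, 5.0, size=5_000)  # arbitrary starting samples

for _ in range(500):  # MCMC cycles
    x_new = x + rng.standard_normal(x.shape)         # perturb: x' = x + randn(size(x))
    accept = f(x_new) / f(x) >= rng.random(x.shape)  # reject x' if p(x')/p(x) < rand(1)
    x = np.where(accept, x_new, x)
```

After stabilization the samples are approximately distributed as the target $N(0, 1)$, even though the code never computed $Z(\Theta)$.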
Markov Chain Monte Carlo sampling

– Let us use the training data, $X$, as the starting point for our MCMC sampling.
– Our parameter update equation becomes:

$$\Theta_{t+1} = \Theta_t - \eta \left( \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial \Theta_t} \right\rangle_{X^\infty_{\Theta_t}} - \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial \Theta_t} \right\rangle_{X^0_{\Theta_t}} \right)$$

Notation: $X^0_\Theta$ - training data, $X^n_\Theta$ - training data after $n$ cycles of MCMC, $X^\infty_\Theta$ - samples from the proposed distribution with parameters $\Theta$.
Contrastive divergence

– Let us make the number of MCMC cycles per iteration small, say even 1.
– Our parameter update equation is now:

$$\Theta_{t+1} = \Theta_t - \eta \left( \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial \Theta_t} \right\rangle_{X^1_{\Theta_t}} - \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial \Theta_t} \right\rangle_{X^0_{\Theta_t}} \right)$$

– Intuition: 1 MCMC cycle is enough to move the data from the target distribution towards the proposed distribution, and so suggest which direction the proposed distribution should move to better model the training data.
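Putting the pieces together for the toy Gaussian: each iteration starts the chain at the training data $X^0$, runs one Metropolis cycle to obtain $X^1$, and applies the CD-1 update above. This is an illustrative sketch only; the synthetic data, step size, proposal width, and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
X0 = rng.normal(2.0, 1.5, size=5_000)  # synthetic training data
mu, sigma = 0.0, 1.0                   # initial parameters
eta = 0.05                             # step size

def f(x, mu, sigma):
    # Unnormalised model f(x; Theta)
    return np.exp(-(x - mu) ** 2 / (2 * sigma**2))

def dlogf(x, mu, sigma):
    # Batch averages <d log f / d Theta> for Theta = (mu, sigma)
    return np.array([np.mean((x - mu) / sigma**2),
                     np.mean((x - mu) ** 2 / sigma**3)])

for _ in range(2000):
    # One Metropolis cycle started from the data: X^0 -> X^1
    prop = X0 + rng.standard_normal(X0.shape)
    accept = f(prop, mu, sigma) / f(X0, mu, sigma) >= rng.random(X0.shape)
    X1 = np.where(accept, prop, X0)
    # CD-1 update: Theta <- Theta - eta * (<dlogf>_{X^1} - <dlogf>_{X^0})
    step = eta * (dlogf(X1, mu, sigma) - dlogf(X0, mu, sigma))
    mu, sigma = mu - step[0], sigma - step[1]
```

The parameters settle near the data-generating values, despite each iteration using only a single MCMC cycle (up to the bias discussed on the next slide).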
Contrastive divergence bias

– We assume:
$$\frac{\partial E(X;\Theta)}{\partial \Theta} \approx \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{X^1_\Theta} - \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{X^0_\Theta}$$
– ML learning is equivalent to minimizing $X^0_\Theta \,\|\, X^\infty_\Theta$, where $P \| Q = \int p(x) \log \frac{p(x)}{q(x)}\, dx$ (Kullback-Leibler divergence).
– CD attempts to minimize $X^0_\Theta \,\|\, X^\infty_\Theta - X^1_\Theta \,\|\, X^\infty_\Theta$:
$$\frac{\partial}{\partial \Theta} \left( X^0_\Theta \,\|\, X^\infty_\Theta - X^1_\Theta \,\|\, X^\infty_\Theta \right) = \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{X^1_\Theta} - \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{X^0_\Theta} - \frac{\partial X^1_\Theta}{\partial \Theta} \frac{\partial \left( X^1_\Theta \,\|\, X^\infty_\Theta \right)}{\partial X^1_\Theta}$$
– Usually $\frac{\partial X^1_\Theta}{\partial \Theta} \frac{\partial \left( X^1_\Theta \,\|\, X^\infty_\Theta \right)}{\partial X^1_\Theta} \approx 0$, but it can sometimes bias results.
– See "On Contrastive Divergence Learning", Carreira-Perpinan & Hinton, AISTATS 2005, for more details.
Product of Experts
Dimensionality issues
  • 10. Contrastive divergence bias – We assume: – ML learning equivalent to minimizing , where (Kullback-Leibler divergence). – CD attempts to minimize – Usually , but can sometimes bias results. – See “On Contrastive Divergence Learning”, Carreira-Perpinan & Hinton, AIStats 2005, for more details. PjjQ = R p(x) log p(x) q(x) dx @E(X;£) @£ ¼ D @ log f(x;£) @£ E X1 £ ¡ D @ log f(x;£) @£ E X0 £ X0 £ jjX1 £ X0 £ jjX1 £ ¡ X1 £ jjX1 £ @ @£ (X0 £ jjX1 £ ¡X1 £ jjX1 £ ) = D @ log f(x;£) @£ E X1 £ ¡ D @ log f(x;£) @£ E X0 £ ¡@X1 £ @£ @X1 £ jjX1 £ @X1 £ @X1 £ @£ @X1 £ jjX1 £ @X1 £ ¼ 0